Archive for the ‘Sonian’ Category

A 2007 Multi-Cloud Fantasy Becomes a 2012 Reality

Five years ago I wrote a business plan describing an archiving SaaS project built on cloud computing. In 2007 it was an uphill battle to convince prospective investors that “the cloud was the future.” And at that time there was really only one cloud, from the e-commerce giant Amazon. Amazon Web Services really started the modern cloud movement. No existing IT provider (IBM, HP, Microsoft, Dell, etc.) would have had the nerve to upset its current business model with a “disruptively priced” cloud option. For the past four years those IT giants fought the cloud momentum until they had a credible cloud themselves. But for a lean start-up getting funded five years ago, it wasn’t a stretch to assume other clouds would appear to take on Amazon.

The graphic above was my crude way to visualize how a cloud-powered digital archive, anticipating someday living on multiple clouds, could in essence become a “cloud of clouds.” A lot of positive breakthroughs would need to occur before a single reference-architecture software stack could successfully operate across more than one cloud. There was no terminology to describe this desire. We weren’t using terms like “Big Data” or “DevOps,” nor many of the acronyms that are common lingo in today’s cloud-enabled world. The business plan depicted a system designed to manage lots of data, and being an enterprise document archive, the documents themselves were both large and numerous. We probably started one of the world’s first cloud big-data projects.

In the beginning the multi-cloud goal was a fantasy, a placeholder for a future that seemed possible, but the actual crawl-walk-run steps were not precisely defined because we didn’t yet know “what we didn’t know.”

So why in 2008 were we thinking about “multi-cloud”? The answer is we wanted to avoid single-vendor lock-in and maintain a modicum of control over our infrastructure costs. An evolving multi-cloud strategy meant the ability to seek the lowest cost of goods from multiple cloud vendors. In the pre-cloud IT world, when services were built on actual hardware, pricing flexibility came from negotiating better deals with hardware vendors. Customers didn’t know or care that their SaaS app might be powered by an HP server one day or a Dell 1U box the next. Those decisions were left to the discretion of the SaaS provider, which got the best infrastructure value by shopping vendors. But in a single cloud, when there is only one choice, there’s no ability to negotiate between multiple vendors, unless you have multi-cloud dexterity.

Being multi-cloud capable means having the necessary infrastructure and abstraction layer to run a single common reference architecture on different clouds at the same time, with one master operator console. Multi-cloud is almost like, but not exactly, the old concept of running a common program across IBM, DEC, and Control Data mainframes. Today’s clouds somewhat resemble the massive time-sharing mainframes of previous decades.
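
To make the idea concrete, below is a minimal sketch of what such an abstraction layer might look like. The provider classes, method names, and roles are hypothetical illustrations, not Sonian’s actual stack: the application codes against one interface, and each cloud vendor is just another implementation behind it.

    # Hypothetical sketch of a multi-cloud abstraction layer; names and
    # signatures are illustrative, not Sonian's actual reference architecture.
    from abc import ABC, abstractmethod


    class CloudProvider(ABC):
        """The single interface the reference architecture codes against."""

        @abstractmethod
        def launch_instance(self, role: str) -> str:
            """Boot a VM for an application role; return its identifier."""

        @abstractmethod
        def put_object(self, bucket: str, key: str, data: bytes) -> None:
            """Persist a blob in the provider's object store."""


    class AmazonCloud(CloudProvider):
        def launch_instance(self, role: str) -> str:
            return f"aws-{role}"            # placeholder, not a real API call

        def put_object(self, bucket: str, key: str, data: bytes) -> None:
            pass                            # would call the AWS object-store API


    class OtherCloud(CloudProvider):
        def launch_instance(self, role: str) -> str:
            return f"other-{role}"          # placeholder for a second vendor

        def put_object(self, bucket: str, key: str, data: bytes) -> None:
            pass


    def deploy_everywhere(providers: list[CloudProvider]) -> None:
        """The 'one master operator console' idea: drive every cloud the same way."""
        for cloud in providers:
            instance_id = cloud.launch_instance(role="archive-indexer")
            print(f"{type(cloud).__name__}: launched {instance_id}")


    if __name__ == "__main__":
        deploy_everywhere([AmazonCloud(), OtherCloud()])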

Our early start five years ago, and all the hard lessons learned since, allow us to assume a commanding position in multi-cloud deployments. Engineering teams just now starting their “cloud journeys” will learn from us pioneers, but there is an old saying: “until you’ve walked a mile in my shoes, don’t claim to know otherwise.”


AWS EC2 Fleet Upgrade Tests our “Cloud Abilities”

This is an essay that was originally published on the Sonian cloud computing blog; cross-posting here for this audience.

In the past I have written about the secret to successful cloud deployments and how to architect for the cloud. Being successful requires a “designed-for-the-cloud” architecture, best operational practices and DevOps on steroids.

A couple of weeks ago Amazon notified a majority of their customers about an upcoming event that we early-to-the-cloud pioneers hadn’t seen before: a forced reboot of the host operating system, on a massive scale. For Sonian, 72% of our currently running EC2 instances will need to be restarted before Amazon’s deadline. There is no reprieve. There is no deferment. Welcome to Infrastructure as a Service!

Our AWS business development contact gave us an early heads-up, and Twitter lit up when the first email notices started to arrive for the US-West region. Something big was afoot, and there were a lot of groans from the EC2 user community. First let me state flat out that Amazon did a pretty good job getting the word out and provided several methods to know which EC2 instances would need to be restarted: an email was sent with the list, the EC2 Management Console displays the information, and the EC2 API’s ec2-describe-instance-status output has the information. Fortunately Joe Kinsella (@joekinsella) enhanced our Cloud Control Viewer and provided a report showing the exact instances and their reboot schedule.

Of the various reboot types, the most invasive is the one that moves the virtual host to new hardware. That will force a change in IP address, and ephemeral storage is lost. This activity will certainly shake out any bugs in automated deployments, hard-coded settings, and sloppy shortcuts.

We had to scramble to assess the impact. All we learned from the email notice was that a portion of our EC2 instances would need to be restarted. Actually, there were two types of restarts: an operating system reboot, which would preserve the non-persistent ephemeral storage, and a more invasive full instance restart (meaning the hardware under the hypervisor would power-cycle), which would not.
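
As an illustration, here is a rough sketch of how the same scheduled-event data could be pulled programmatically. It uses the boto3 Python SDK (a more recent SDK than the command-line tool mentioned above, but it queries the same underlying DescribeInstanceStatus API) and assumes credentials and region are configured in the environment.

    # Sketch: list scheduled maintenance events per EC2 instance (boto3).
    import boto3

    ec2 = boto3.client("ec2")

    resp = ec2.describe_instance_status(IncludeAllInstances=True)
    for status in resp["InstanceStatuses"]:
        for event in status.get("Events", []):
            # Event codes distinguish the two cases described above:
            # 'system-reboot' / 'instance-reboot' preserve ephemeral storage,
            # while codes like 'instance-retirement' mean the instance must
            # move off its current hardware, losing ephemeral storage.
            print(
                status["InstanceId"],
                event["Code"],
                event.get("NotBefore"),
                event.get("Description"),
            )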

One of the major mistakes cloud customers can make is to get complacent and treat the cloud like traditional co-located hosting. The cloud has different operating characteristics, what one could call the “cloud laws of physics,” and this forced restart is a good example of that principle in action. It’s also a wake-up call not to get lazy. A large-scale forced restart is like an earthquake drill: practice makes perfect, and if this were an actual unscheduled emergency, we would be scrambling.

Despite the headache, this event has some positive spins. First, it’s encouraging that there is an “EC2 fleet upgrade.” This means newer underlying hardware, perhaps faster NICs in the hosts. For companies like Sonian that started in the cloud circa 2007, some of the original instances had been running for more than a year and needed a “freshening.” This event reminds us there is a “hardware” center to every amorphous cloud; Amazon just does a great job of letting us not think about that too often, except for times like these. A stale part of the cloud gets a refresh.

The second “benefit” is the forced fire drill. I know, there’s never a good time for a fire drill. But this type of event has similar qualities to an unexpected outage. There is some luxury in pre-planning, but the shake-out will be the same. Something will be discovered in your architecture or deployment practices that will get improved by this reboot activity. Clusters may be too hard-coded. Config settings may be too restrictive. Reboot scripts may not work as you think.

Sonian will come through unscathed due to our maniacal focus on 100% automated deployments, a 100% commitment to “infrastructure as code,” and an investment in cloud-control tools that allowed us to triage the situation and develop an action plan relatively quickly. We also employ the best darn DevOps team the cloud has seen.

Cloud Innovation Acceleration Effect: Now Releasing 100 Stories

Cross-posting here a two-part essay I wrote for the Sonian blog on how Sonian is benefiting from, and contributing to (by amplification), the innovation cadence in cloud computing.

I’ve been working in enterprise software since the late 1980s, and what I am witnessing as a participant in “the cloud” is that the pace of cloud technology innovation over the past five years blows away the previous two decades.

There is a real, noticeable trend here. We didn’t see this in SaaS powered by co-location hosting. What we are seeing with the cloud, and with the ISVs that adopted the cloud five years ago, is truly amazing. Sonian is entering a release cadence of updating production systems with substantial new features every month.

Cloud Innovation – Part 1

  • Innovation history of Amazon Web Services 2005-2007
  • How Sonian amplifies cloud innovation
  • Sonian as an example of the “perfect” cloud ISV

Cloud Innovation – Part 2

  • Innovation history of Amazon Web Services 2008-2011
  • Comments about Gov Cloud

Designed for Now

This Tweet came across one of my listening posts:

>> RT Sheth: average business software user is looking at something designed 7-8 years ago

Got me to thinking “that statement is so true!”

With the world seemingly “falling apart” around us, the essence of Cloud Computing, and the applications created specifically for the cloud, are the breath of fresh air that enterprise IT needs to succeed for the next decade.

Doing more with less. “Zero Friction” service enablement. Pay as you go. Really good reliability and security.

These are the primary themes behind cloud computing. Sure, there are a lot of haters; they just haven’t seen the light, or they are consumed with fear, uncertainty, and doubt as the cloud threatens their comfort zone.

Peace out.

Pay it Forward: Sonian Releases Sensu – Open Source Monitoring for Cloud

In the spirit of “pay it forward,” we’re giving back to the open source community. Today Sonian is releasing our Sensu monitoring framework (via GitHub). Read Sean Porter’s Sensu post for more details on this exciting and timely project. It’s timely because the nature of distributed, cloud-based SaaS systems requires new ideas for application monitoring and reporting. Sensu is a new approach, designed for cloud computing environments, distributed systems, and dynamic applications.
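
To give a flavor of the model: a Sensu check is simply an executable that follows the familiar Nagios plugin convention, exiting 0 for OK, 1 for WARNING, and 2 for CRITICAL with a one-line status message. Below is an illustrative sketch in Python (Sensu’s own plugins are typically written in Ruby, and the disk thresholds here are made up, not a recommendation):

    #!/usr/bin/env python3
    # Illustrative Sensu-style check: the exit code signals the status.
    import shutil
    import sys

    WARN_PCT = 80   # made-up warning threshold
    CRIT_PCT = 90   # made-up critical threshold

    usage = shutil.disk_usage("/")
    pct_used = usage.used / usage.total * 100

    if pct_used >= CRIT_PCT:
        print(f"CheckDisk CRITICAL: {pct_used:.0f}% of / used")
        sys.exit(2)
    elif pct_used >= WARN_PCT:
        print(f"CheckDisk WARNING: {pct_used:.0f}% of / used")
        sys.exit(1)
    else:
        print(f"CheckDisk OK: {pct_used:.0f}% of / used")
        sys.exit(0)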

Follow the Sensu IRC chat here: irc.freenode.net #sensu

Cloud-powered “Time Machine” Creates Corporate Timelines

With much fanfare, Facebook announced a new “Timeline” feature at its F8 developer conference this week. The feature takes advantage of the enormous amount of information (photos, status updates, location) we all store in Facebook. The Timeline is accessible to Facebook application developers as well as the half a billion folks who use the social network. With increased competitive pressure from Google+ and Twitter, Timeline will be an important differentiator between Facebook and its competition. Timeline also shows there is an interest in melding the past with the present. Timeline wouldn’t have been possible or relevant until Facebook achieved significant adoption and large amounts of data under management. The “network effect” of big data stored in one cloud computing environment gives Facebook unique, unparalleled access to information never before possible with any other online system. Perhaps only AOL or CompuServe had this opportunity, but they didn’t have “the cloud” or sophisticated tools like Hadoop or NoSQL to make their data useful.

(As an aside, I expect some Facebook users might be startled with a “creepiness” factor when they see their Timeline presented back to them. Facebook will have the unique ability to remind us with a visceral visual recollection of past people, places and events.)

What’s interesting about Timeline is the way events, photos, postings, and news feeds are visually presented. There is a lot of machine-learning computational effort required behind the scenes to create relevant and compelling timelines for five hundred million accounts. This is an example of cloud computing, big data, and analytics combining to create a pleasing consumer experience.


Cloud Eyes Wide Open

The oft-used axiom “hindsight is twenty-twenty” is proven once again. With 20/20 vision looking back over the previous four years, 2008 through 2011, perspectives on cloud computing come into sharper focus. There is no question “cloud computing” is a revolutionary advance in the way businesses and consumers utilize computing resources. And all the technology “revolutions” that preceded the cloud were required to make cloud computing a reality. By previous revolutions I mean the Internet/Web, open source, cheap and reliable bandwidth, and commodity-priced hardware. If back in 2004 one had an astute crystal ball peering forward into 2012, leading-edge thinking would have seen cloud computing as the inevitable conclusion of all the aforementioned “computing revolutions.” Plus, the crystal ball would have given us a glimpse of Amazon’s Jeff Bezos and Werner Vogels plotting their disruptive cloud mission.

In 2008 SaaS startups were “glamoured” by the cloud, especially startups whose founders had previously created SaaS applications that required building dedicated hosting infrastructure at great expense and distraction. The cloud looked to be a panacea for all the hassles of operating a data center. Need more compute? Just make an API call and a new virtual computer comes to life. Need more storage? Make an API call and terabytes of quality file systems are yours for as long as you need them. But what was unknown about the cloud, and not clearly visible in the 2008 crystal ball peering into 2011, is that the easy parts of the cloud started to work against our collective best interests. With dedicated hosting, the simple act of “adding more infrastructure” had an established purchasing approval workflow: budget with the CFO, negotiate price with the vendors, track UPS shipments, and pay an invoice 30 days later. That’s a lot of friction in a fast-paced environment, but the purchasing controls (in hindsight) created an accountability layer the cloud lacked.

For the Sonian project, we needed to create purpose-built tools to help manage costs and reduce complexity. Some of these tools will be open sourced in a “pay it forward” contribution to the community. Adding to this, the industry is starting to see startups emerge that focus on cloud management systems. The cloud solves many pain points, but the cloud itself has pain points too. The ecosystem around cloud computing is innovating quickly, and it will be exciting to see what comes next.

Despite the learning curve, mastering the cloud for the right use case is worthy of any and all efforts. The cloud, combined with the right use case, and now the right tools, puts all the right incentives in place to deliver customers “more for less.” Who doesn’t want that in this current economic climate?

Happy Fifth Birthday AWS EC2

A Haiku for Amazon’s Elastic Compute Cloud Fifth Birthday

Happy Fifth Birthday
Cloud Computing Now For Real
Began with You, EC2

Wow… Amazon Web Services’ cloud computing platform EC2 is five years old today. That’s a whole lot of innovation in just half a decade. Where did the time go? It seems like just yesterday the founding engineering team at Sonian was dipping a toe in the cloud computing waters (or should that be sticking a “finger in the air”?), proving out the concept of building an enterprise-focused SaaS solution powered solely by Amazon S3 and EC2.

The idea of virtually unlimited compute processing for ten cents an hour with no up-front capital expenditures was too alluring to ignore. Plus, Amazon S3 was the most reliable storage available at fifteen cents a gigabyte per month. Amazon’s cloud offering created a level playing field, allowing young, small start-up teams to compete with the likes of Google in solving a big problem for a big audience without needing the bank account of a big company. (The big bank accounts can come later.)
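
For a sense of scale, here is a quick back-of-the-envelope calculation using those list prices; the workload sizes are illustrative, not Sonian’s actual footprint.

    # Back-of-the-envelope math with the list prices quoted above:
    # $0.10 per instance-hour and $0.15 per GB-month.
    INSTANCE_HOUR_USD = 0.10
    GB_MONTH_USD = 0.15
    HOURS_PER_MONTH = 730          # roughly 24 * 365 / 12

    instances = 10                 # illustrative fleet size
    storage_gb = 1_000             # illustrative archive size

    compute_cost = instances * HOURS_PER_MONTH * INSTANCE_HOUR_USD
    storage_cost = storage_gb * GB_MONTH_USD

    print(f"compute: ${compute_cost:,.2f}/month")   # $730.00
    print(f"storage: ${storage_cost:,.2f}/month")   # $150.00
    # ...and all of it with zero up-front capital expenditure.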

In late-summer-2006-era thinking it certainly was a leap to believe a business could trust its compute and storage infrastructure to an online e-commerce vendor (heard on the street: “What does Amazon know about hosting compute and storage?” Turns out quite a bit, thank you). I recall circulating the original Sonian business plan to local investors and getting a lot of negative feedback about using Amazon. Fast forward to 2011: investors now demand that their start-ups use AWS to be capital efficient during the formative technology-building stages.

Attitudes toward cloud computing in general, and Amazon Web Services specifically, have made a full 180-degree shift from the skepticism of five years ago. As Jeff Bezos said recently, “you have to be prepared to be misunderstood for a long time” in order to prevail. As is the case in times of rapid change (like the advent of cloud computing), eventually the best way forward emerges and gains the respect of the masses.

One can only imagine the innovations Amazon EC2 will show us on its tenth birthday.
This post is also available on the Sonian blog.

Abundant Innovation – Sonian Summer 2011 CodeFest Delivers Impressive Results

The first quarterly all-engineering CodeFest concluded Tuesday evening (Aug 16, 2011) with three winning teams, one dramatic performance, and many laughs.

This post is linked to the Sonian Blog. Joe Kinsella, Sonian VP Engineering, wrote about the CodeFest here.

The entire company was invited to view the presentations and vote for their favorites. The only voting rule was that you couldn’t vote for your own team. The judging was based on three criteria: 1. impact on solving a Sonian or customer pain point (50%), 2. “cool-ness” factor (25%), and 3. presentation style and effectiveness in conveying the idea (25%).

Thirteen teams competed, representing the four functional units in the Sonian Engineering organization: SAFE (back-end), Website (front-end), DevOps (systems management), and QA. There were several teams from each group. The themes each team chose ranged from automation and performance measurement to UI beautification and speed. Each team gravitated toward its “natural” inclinations. The DevOps teams focused on automating manual tasks and removing friction from deployments. The SAFE team (back-end) showcased applying “math” to measuring performance and data classification. The Website team looked at speed and a better user experience, and the QA team showed us new ways to think about cost testing alongside bug testing.

Six teams had a metrics or analytics theme, two teams focused on user interface improvements, and four teams came up with solutions for automation and deployment problems.

Instead of Ernst and Young tallying the votes, our Harvard MBA trained ROI analyst Chris H. stepped in to ensure a fair and accurate accounting.

And thanks to all the non-technical folks who sat patiently through presentations where terms like “latency,” “lazy loading,” “grepping logs” and “foreground queues” were discussed.

Teams chose their presentation order, and the QA team volunteered to go first. Below is an accounting of each presentation with some context on how the idea fits into Sonian’s needs and long-term vision.

Congratulations to all the teams who competed! The next CodeFest is sure to be another interesting event.

Team 1: “You paid what for that …. Export job, Object list request, or ES cluster?”

Andrea, Gopal, Bryan, and Jitesh from the quality assurance team got together around an idea to extend testing methodologies into infrastructure cost analysis. To maximize the cloud’s economic advantage, the engineering team is always thinking about the cost of software operating at “big data” levels of activity. From architecture to implementation, the goal is to infuse cost consciousness at every level. The QA team came up with a novel idea on this theme.

The proposed idea is to extend the testing framework to establish a baseline infrastructure cost for each feature, and then measure successive releases against that baseline. A significant cost deviation from the baseline could be considered a design flaw, an implementation error, or a SEV1 bug. Some sample features with measurable costs would be an import job, an export request, or a re-index. Over time the entire app suite could have an expense profile established.
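
A rough sketch of what such a cost-regression check might look like is below. The baseline figures, tolerance, and the measure_feature_cost() helper are hypothetical placeholders; a real implementation would total the metered usage a feature generates in a test environment.

    # Hypothetical sketch of a cost-regression test; numbers are illustrative.
    COST_BASELINES_USD = {          # established from a known-good release
        "export_job": 0.42,
        "object_list_request": 0.03,
        "reindex": 7.50,
    }
    ALLOWED_DEVIATION = 0.15        # flag anything more than 15% over baseline


    def measure_feature_cost(feature: str) -> float:
        """Placeholder: exercise the feature and total the metered
        compute/storage/request charges it generated."""
        raise NotImplementedError


    def check_cost_regression(feature: str) -> None:
        observed = measure_feature_cost(feature)
        baseline = COST_BASELINES_USD[feature]
        overrun = (observed - baseline) / baseline
        # A significant overrun is treated like any other SEV1 regression.
        assert overrun <= ALLOWED_DEVIATION, (
            f"{feature} cost ${observed:.2f}, "
            f"{overrun:.0%} over the ${baseline:.2f} baseline"
        )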

Having QA be an additional “cost analysis layer” in the full development cycle will only help make the Sonian software as efficient as possible.

Bonus points to the team for the most elaborate props and “dramatic performance” used in their presentation.

Read on for details on the twelve other teams.


Security in the Big Data Cloud

(ed. A version of this post appears at the Sonian Big Data Cloud blog)

A cloud software company’s worst nightmare came true for Dropbox this past weekend when a software bug allowed anyone to log in to an account (over a four-hour period) using any password. It’s unknown whether, or how many, accounts were accessed inappropriately. So far there are no reports of data breaches.

This recent occurrence, coupled with other non-cloud but similarly themed data breaches involving Citibank, Sony, and LulzSec, has moved the “can the cloud be secure” conversation into the spotlight. The short answer is yes, the cloud is secure, and here is why.

Defining Cloud Security

Data security in the cloud is a set of “inherited responsibilities” shared among the cloud infrastructure provider (Amazon, Rackspace, SoftLayer, etc.), the independent software vendor (an ISV, e.g. Dropbox), and the customer.

Data security in the cloud really has two components: resiliency and privacy. Resiliency means that when a customer stores data in the cloud, the cloud vendor should not lose that data. Privacy means nobody but the customer should be able to “see” the data stored in the cloud.

The cloud vendor is responsible for data resiliency. Cloud vendors provide Service Level Agreements (SLAs) that quantify resiliency so that customers can compare one cloud against another. For example, Amazon Web Services provides “eleven nines” of cloud storage resiliency, while SoftLayer offers “five nines.” These SLAs are far better than what a typical enterprise can achieve in its own data center.
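
To put those “nines” in perspective, here is a back-of-the-envelope sketch. It assumes the figures refer to annual object durability (the way Amazon states its S3 number), and the object count is purely illustrative.

    # What "eleven nines" versus "five nines" of annual durability implies.
    def expected_annual_loss(objects_stored: int, durability: float) -> float:
        """Expected number of objects lost per year."""
        return objects_stored * (1.0 - durability)

    ELEVEN_NINES = 0.99999999999   # 99.999999999%
    FIVE_NINES = 0.99999           # 99.999%

    for label, durability in [("eleven nines", ELEVEN_NINES), ("five nines", FIVE_NINES)]:
        print(label, expected_annual_loss(10_000_000, durability))
    # For 10 million stored objects: ~0.0001 losses/year at eleven nines
    # (roughly one object every 10,000 years) versus ~100 losses/year at five nines.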
