Archive for the ‘Amazon Web Services’ Category

Why I am excited about AWS Lambda

For me, learning about AWS Lambda was the most exciting AWS re:Invent announcement in 2014. Lambda is the back-end complement to the front-end AWS JavaScript API SDK released a year ago. During the 2013 AWS re:Invent keynote, Werner Vogels said onstage that the AWS JS API SDK was one of the more exciting announcements of 2013 because it made it possible for single-page web apps to run directly against S3 (and DynamoDB, etc.) without the need for EC2. Lambda completes the big picture.

What is AWS Lambda?

AWS Lambda is a new cloud service from AWS that completely abstracts away the infrastructure needed to run your code. It’s a further manifestation of “let someone else handle the undifferentiated heavy lifting.” You write your code (JavaScript for now) and AWS Lambda runs it in a dedicated virtual language runtime. You do not have to think about provisioning, sizing, and monitoring EC2 nodes. Seven years ago the introduction of EC2 was revolutionary because it abstracted application stacks from data center provisioning and operations, and now Lambda leaps forward to offer “infrastructure-less” environments (sure… there are servers running “somewhere,” it’s just that you don’t have to think about them!)

You only pay for execution time. Beyond the monthly free tier, pricing is currently $0.00001667 for every GB-second used and $0.20 per 1 million requests ($0.0000002 per request). In typical AWS fashion the pricing model is multi-dimensional: you pay for the requests that invoke your Lambda function and for duration, which is priced on execution time multiplied by the memory allocated (in GB). Memory allocations range from 128 MB to 1 GB.
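
To make the multi-dimensional pricing concrete, here is a back-of-the-envelope estimate for a hypothetical function: 512 MB of memory, a 200 ms average run time, and one million invocations per month, ignoring the free tier.

```javascript
// Rough monthly Lambda cost using the rates quoted above.
// Assumptions: 512 MB memory, 200 ms average duration, 1,000,000 invocations,
// free tier ignored.
var invocations = 1000000;
var memoryGB = 0.5;           // 512 MB expressed in GB
var durationSeconds = 0.2;    // 200 ms average execution time

var gbSeconds = invocations * memoryGB * durationSeconds;  // 100,000 GB-seconds
var durationCost = gbSeconds * 0.00001667;                 // ≈ $1.67
var requestCost = invocations * 0.0000002;                 // $0.20

console.log('Estimated monthly cost: $' + (durationCost + requestCost).toFixed(2)); // ≈ $1.87
```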

Why is AWS Lambda so exciting?

Lambda is in essence a return to massive shared compute environments, but with a modern twist. I learned to program on a Control Data (CDC) mainframe and an IBM System/36. Each of these environments supported the concept of multiple jobs running at the same time, with a “human” scheduler coordinating start, stop, error recovery, duration and resource allocation. As a student, I had to “schedule” my homework assignments with the CDC controller, which often meant late nights in the computer lab. AWS Lambda is all of this, but via API, and you get to be the controller.

AWS Lambda turns EC2, S3, SNS, DynamoDB and Kinesis (and soon all AWS services) into a collection of shared resources you harness either to trigger your application or to receive your application’s computational output. A classic example is an image file uploaded to S3 that needs to be resized. With Lambda, S3 can respond to the upload (the POST) by triggering your JavaScript resize function, as in the sketch below. The input is the original image upload and the outputs are the resized image objects, stored back to S3. The output can also be further triggers to other AWS services, like updating DynamoDB or sending an SNS alert.
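
Here is a minimal sketch of what that resize function could look like on Lambda’s Node.js runtime. The `resize` helper, the thumbnail bucket name and the width are placeholders, not part of any AWS API; a real implementation would swap in an image library such as ImageMagick.

```javascript
// Sketch of an S3-triggered Lambda function (Node.js runtime).
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// Stand-in for a real image library; it just passes the bytes through so the
// sketch stays self-contained.
function resize(imageBuffer, maxWidth, callback) {
  callback(null, imageBuffer);
}

exports.handler = function (event, context) {
  // Each S3 event record names the bucket and object that triggered this run.
  var record = event.Records[0];
  var bucket = record.s3.bucket.name;
  var key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

  s3.getObject({ Bucket: bucket, Key: key }, function (err, data) {
    if (err) { return context.fail(err); }

    resize(data.Body, 200, function (err, thumbnail) {
      if (err) { return context.fail(err); }

      // Write the output to a different bucket so the upload event does not
      // re-trigger this same function.
      s3.putObject({
        Bucket: bucket + '-thumbnails',   // placeholder output bucket
        Key: key,
        Body: thumbnail,
        ContentType: data.ContentType
      }, function (err) {
        if (err) { return context.fail(err); }
        context.succeed('Resized ' + key);
      });
    });
  });
};
```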

Virtual machines leapfrogged bare hardware
EC2 leapfrogged virtual machines
Containers leapfrogged EC2
Lambda leapfrogs VM, EC2 and containers

Each progression moved toward ever more efficient utilization of the underlying physical resource (i.e., a host computer). At the end of the day, all these virtual services consume physical resources (metal, electricity, real estate), and Lambda is the next step toward extreme efficiency on the back end.

Read more…

AWS re:Invent Keynote Summaries

Keynote 1 – Andy Jassy

TL;DR summary:

Andy Jassy announced 8 new services in the opening keynote. The AWS SVP said “the new normal is that security and compliance are becoming reasons customers are moving to the cloud.” It used to be that security and compliance were reasons to stay out of the cloud. A complete 180 in the prior five years.

Theme: AWS has already won the hearts and minds of startups and ISVs

New focus… What does the “enterprise” need to embrace the cloud?

The Day 1 keynote revealed these 8 new enterprise-focused features:

  • Amazon Aurora

    • Super fast, resilient SQL as a service at $0.29 an hour. This is an attack on Oracle. Claimed eleven nines of reliability and 5x the performance of existing MySQL on AWS.

  • AWS Key Management Service

    • Multi-faceted fully managed encryption key service with automatic key rotation, auditing, and compliance certifications.

  • AWS Config

    • Track and “scenarioize” config changes before implementing them. Very important for enterprises that are accustomed to on-premises ITIL tools and need cloud equivalents.

  • AWS CodeDeploy, CodeCommit, CodePipeline & Apollo

    • A suite of internal tools Amazon has been using for 19 years, now available for free to help enterprise developers more efficiently adopt new innovation practices and increase agility. “Agility” was a theme mentioned over and over: the cloud is more than low cost, it’s an agility amplifier.

  • AWS Service Catalog

    • Enterprises want to publish internal catalogs of approved IT cloud services and want internal groups to be self-sufficient. This service helps in that area.

    • Think of this as the “corporate portal to cloud.”

  • Amazon/AWS Core Values (a couple of the dozen) cited in the keynote:

    • Work backward from the customer… and really mean it… don’t pay lip service to this mantra

      • This also basically means ignoring competitors, unless customers tell you they need something a competitor is already doing.

    • Pioneer & Invent new technologies for the long-term

      • Legacy IT vendors have lost their innovation gene… AWS fills the void.

Read more…

Is it time for “Installed” Software as a Service?


A 3-minute read.

“Installed Software as a Service” sounds like an oxymoron. But it’s actually starting to happen and will accelerate even more.

Most enterprises embrace Software as a Service as their preferred method to solve an IT problem. Whether it’s archiving (Sonian), CRM (sf.com), marketing (Hubspot), customer service (Zendesk) or accounting (Netsuite), there is a SaaS offering for nearly every need. It seems only the largest or most security-sensitive organizations are not using SaaS. “Installed” SaaS is a delivery method that will make everyone happy about SaaS.

SaaS architectures are designed around massive multi-tenant services with appropriate per-tenant (i.e., per-customer) security. Multi-tenancy delivers economies of scale (we all love SaaS’s low pricing, right?), but it is less desirable for the customer because data is commingled. Customers desire the managed aspect of SaaS, but also want to control their own data. Having the software managed by others while keeping control of the data was a feat too hard or too expensive to accomplish, until now. Technology and market demand are aligning to give customers what they want.

The next wave of SaaS will provide the cost efficiencies of multi-tenancy with the security posture of single tenancy. SaaS vendors will offer customers the option to “install” the service into the customer’s own cloud account. The SaaS vendor will still manage the software, but the customer will have ultimate control over the data structures. Costs will be higher for this type of offering, but customers are willing to pay more for their own control, and it will still cost less than traditional self-managed, on-premises software.

How is “Installed SaaS” possible?

Three emerging technology trends make installed SaaS possible.

The first is the significant amount of devops automation that has matured over the prior seven years. Small teams are using mature tools and processes to fully automate cloud provisioning and software installation to manage massive multi-tenant stacks. The same tooling can manage many single-tenant stacks with similar efficiency; not as efficient as fully multi-tenant, but pretty close.

The second is technologies such as Docker (and containers in general), as well as new cloud capabilities from Amazon Web Services (and others following quickly) such as VPC, encryption key management, Identity and Access Management (IAM) and cloud-native directory services. These are all the ingredients SaaS vendors need to “install” into a customer’s cloud account, now with well-documented information security boundaries. With this configuration customers can have a “master” kill switch to cut off external access to their data files. CIOs love this idea.
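
As an illustration of what that kill switch might look like, assume the vendor reaches the customer’s account through a cross-account IAM role (the role name and account ID below are placeholders). The customer can rewrite the role’s trust policy so the vendor can no longer assume it, while the data never leaves the customer’s account.

```javascript
// Hypothetical customer-side "kill switch" using the AWS SDK for JavaScript:
// replace the trust policy on the role the SaaS vendor assumes so that only
// the customer's own account can use it.
var AWS = require('aws-sdk');
var iam = new AWS.IAM();

var customerOnlyTrustPolicy = JSON.stringify({
  Version: '2012-10-17',
  Statement: [{
    Effect: 'Allow',
    Principal: { AWS: 'arn:aws:iam::111122223333:root' }, // the customer's own account (placeholder)
    Action: 'sts:AssumeRole'
  }]
});

iam.updateAssumeRolePolicy({
  RoleName: 'saas-vendor-access',          // placeholder name of the role granted to the vendor
  PolicyDocument: customerOnlyTrustPolicy
}, function (err) {
  if (err) { return console.error('Failed to revoke vendor access:', err); }
  console.log('Vendor can no longer assume the role; the data stays in the customer account.');
});
```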

The third is a new breed of third-party services that can independently “audit” a cloud environment for compliance, security and access. Projects such as Conjur are working on this. Another innovative project is CloudHealth, which can monitor cost efficiencies across many single-tenant installations and provide automatic cloud infrastructure optimization.

SaaS vendors will need to modify their stack architectures to deliver “installed” SaaS, so there needs to be customer demand to justify the expense. Customers are just now starting to ask for this operating mode.

Read more…

5 Key Takeaways about Amazon Zocalo

“Zocalo,” what a strange name for a document and file sharing service targeted toward enterprises. My first thought upon hearing the name (while viewing the AWS NYC Summit live stream) was that Amazon had acquired Zoho.com, a competent but not well-known document and collaboration service. But Zocalo looks like organic AWS development and I’m excited to test-drive the service.

The name Zocalo sounds exotic compared to the standard AWS naming scheme… We’re used to services with three-letter acronyms like “SDS – Simple Document Sharing,” but more recently Amazon’s naming scheme embraces whole words to define a business service, as opposed to a three-letter acronym for a developer-focused service. “EC2” is for techies, “Redshift” is for data analysts, and now “Zocalo” is for business knowledge workers.

A Google search reveals “Zocalo” is the name of the big public square in Mexico City. I guess “public square” and document sharing are kindred themes in a holistic way.

1. The Big Picture about Zocalo and who gets disrupted

Zocalo shows AWS is interested in expanding into general “bread and butter” IT services. It’s a natural progression from the original IaaS building blocks, and many pundits have speculated AWS would eventually move into the application space. OK… so now they have, in a big way, and they are solving a very horizontal problem: file sharing, sync and collaboration for the masses.

Zocalo requires an AWS account and is managed from the AWS Console. An IT person deploying Zocalo will be exposed to all the other AWS services, and this will drive growth in Amazon’s other cloud offerings. “What’s Workspaces? Take the remote desktop for a test drive. Easy.”

It’s not a stretch of the imagination to think Zocalo will be marketed on the main Amazon.com e-commerce site, alongside boxed Microsoft Office and other information management software.

2. Pricing model

Pricing is simple and predictable, two things enterprise IT is demanding from its vendors. The base fee is $5 per employee per month, which includes 200 GB of storage per employee. The customer can add additional storage for a very reasonable fee, starting at 3 cents per gigabyte for up to 1 TB; as more storage is consumed the unit price decreases, just like with S3.
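
A quick back-of-the-envelope bill using those rates, with a made-up headcount and storage overage:

```javascript
// Hypothetical Zocalo bill: 50 employees at $5 each (200 GB included per
// employee) plus 500 GB of additional storage at $0.03/GB (under the 1 TB tier).
var employees = 50;
var extraStorageGB = 500;

var baseFee = employees * 5;             // $250
var storageFee = extraStorageGB * 0.03;  // $15

console.log('Estimated monthly bill: $' + (baseFee + storageFee).toFixed(2)); // $265.00
```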

Customers using AWS Workspaces receive a discount on Zocalo. Workspaces and Zocalo look to be great complementary offerings. Each alone is a great value, and I can see how Workspaces will be easier to use with Zocalo integration.

Read more…

It’s an Amazon Web Services World After All

A 2-minute read.

“It’s an AWS world, and we’re just living in it.”

Reflecting upon the first full day of AWS re:Invent 2013, my mind is spinning with conflicting thoughts. I’m trying to grasp the significance of what AWS is today and, more importantly, what it will be in the coming years. I’m also trying to anticipate where to focus innovation in the areas AWS will not venture into.

My DNA is systems management and creating solutions to fix IT pain points. But in AWS the opportunity to “solve the cloud’s” management problems is getting smaller with every new AWS release. I cheer the pace of innovation, until it commoditizes my offering.

My colleague Jenn McAuliffe used this analogy to describe her feelings about AWS. She said, “We’re all in the Truman Show”: a completely enclosed world where all our needs are met in nearly nirvana-ish fashion, while the controlling forces outside this perfect world study the inhabitants’ behavior. This may sound far-fetched… but the sentiment registered with me. AWS is so far ahead of every other cloud infrastructure that, at some point, startups and enterprises will need to decide whether they double down on AWS and lock in, or choose another path of least-common-denominator subsistence on an IaaS-only cloud.

So back to “where to innovate” in an AWS world? Focus on the application layer and on serving audiences where you have a command of their domain expertise. Maybe you want to scratch your own itch, just as long as the itch isn’t a core cloud building-block deficiency. AWS will chip away at those, filing down the rough edges, more quickly than we witnessed in the Microsoft dominance era, or the Novell era before that.

It’s not your father’s enterprise IT anymore.

Cloud Cost Savings In Action

This morning Amazon Web Services notified its cloud customers that a new instance configuration is available in all regions. The new instance type, hi1.4xlarge, is significant in a number of ways. Amazon heard from customers that a high-I/O, low-latency configuration would be ideal for applications like relational and NoSQL databases. It’s also the first EC2 instance type to use SSD storage. Netflix (like Sonian, a beacon of cloud success) has already shared a great benchmark study showing how this new instance type will improve performance and lower costs.

Wow… more performance… and lower costs. This trend tracks back to a previous post I wrote about active and passive cloud cost savings. The introduction of this new instance type creates an “optimization opportunity.” If we cloud customers are willing to invest engineering resources to optimize our software around a new instance type, that is an example of “active savings”: we have to apply effort to realize a cost reduction. On the other hand, if AWS simply lowers the price of an existing instance type, that is “passive savings”: it just happens automatically.
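
The distinction in back-of-the-envelope form, using hypothetical hourly prices and throughput figures rather than published AWS numbers:

```javascript
// Active vs. passive savings, illustrated with made-up numbers.
function costPerDocument(hourlyPrice, docsPerHour) {
  return hourlyPrice / docsPerHour;
}

// Active savings: engineering work moves the workload to the SSD-backed
// instance type and re-tunes it for the faster I/O.
var current   = costPerDocument(0.80, 10000);  // existing instance type (hypothetical)
var optimized = costPerDocument(3.10, 60000);  // hi1.4xlarge after re-engineering (hypothetical)

console.log('Current:   $' + current.toFixed(6) + ' per document');
console.log('Optimized: $' + optimized.toFixed(6) + ' per document');

// Passive savings would be the same calculation with only hourlyPrice reduced;
// no engineering effort required.
```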

This is the cloud’s grand bargain. Cost efficiencies flow from infrastructure provider, through the application layer, to the end customer.

The Cheap Cloud versus The Reliable Cloud

5 Lessons Learned from June 29 2012 AWS Outage

Discussing a difficult situation is never fun, and I have been wrestling with how to start this post. It’s about revealing unpleasant cloud truths, and not necessarily the truths you might be expecting to hear. I am not here to preach, but my message to you is important. For the past five years I have been working on a project that uses the cloud to its fullest potential, celebrating the victories and learning from the defeats.

I’m speaking to my fellow Amazon cloud citizens. My co-tenants, if you will, in the “Big House of Amazon.” We’re all living together in this man-made universe with its own version of “Newtonian laws” and “Adam Smith” economics. 99.99% of the time all is well… until out of the blue it’s not, and chaos upends polite cloud society.

If you lost data or sustained painful hours of application downtime during Amazon’s June 29 US-East outage, then you can only wag your finger in blame while looking in the mirror.

I know, I know, the cloud is supposed to be cheap AND reliable. We’ve been telling ourselves that since 2007. But this latest outage is an important wake up call: we’re living in a false cloud reality.

Lesson 1: Follow the Cloud Rules

Up front, you were told the “rules of the cloud”:

  • Expect failure on every transaction
  • Backup or replicate your data to other intra-cloud locations
  • Buy an “insurance policy” for worst case scenarios

These rules fly in the face of the popular notion that the cloud is “cheaper” than do-it-yourself hosting.
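
The first rule, “expect failure on every transaction,” translates directly into code. Here is a minimal sketch of a retry with exponential backoff around an S3 read; the bucket and object names are examples only.

```javascript
// Retry an S3 GET with exponential backoff before giving up.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

function getWithRetry(params, attempt, maxAttempts, callback) {
  s3.getObject(params, function (err, data) {
    if (!err) { return callback(null, data); }
    if (attempt >= maxAttempts) { return callback(err); }

    // Back off 200 ms, 400 ms, 800 ms, ... so a transient failure is not fatal.
    var delay = 200 * Math.pow(2, attempt - 1);
    setTimeout(function () {
      getWithRetry(params, attempt + 1, maxAttempts, callback);
    }, delay);
  });
}

getWithRetry({ Bucket: 'example-bucket', Key: 'example-object' }, 1, 5, function (err, data) {
  if (err) { console.error('All retries failed; fall back to a replica in another region.'); }
  else { console.log('Fetched', data.ContentLength, 'bytes'); }
});
```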

There is a silver lining to this dark cloud event. Everyone in the cloud will learn and improve so we don’t have to repeat this episode ever again.

Read more…

Reflecting on One Year of Cloud Cost Optimization

For the past year I held the unelected position of “Cloud Cost Czar.” I have written about the duties such a role entails in A Day in the Life of a Cloud Cost Czar. Recently I handed over the cost czar responsibility to a colleague who will carry on the daily routines and continue to improve our cloud cost management endeavors. In the handoff process, almost a year to the day after assuming the czar’s responsibilities, I reflected on the previous twelve months and all the accomplishments the company made as a united team to “tame the cloud.”

I created a graph to visualize the dramatic change over one calendar year. To the right is an area graph that shows subscriber seats (in green) overlaid on subscriber costs (blue, orange and red; our principal costs are cloud compute and two types of cloud storage). As subscriber growth increased, costs went up, peaked, and then went down over the course of one year. The rise, peak, and subsequent decline all map to various cost-cutting efforts initiated by Sonian engineering and support groups.

Throughout the year we got smarter about how to “purchase” compute time for less than retail, how to store more customer data while consuming less cloud storage, and how to process more customer data using fewer CPU hours. With each improvement we reaffirmed, high-fives all around, that in the cloud we were in control of our own cost destiny. This is when the phrase “infrastructure as code” really means something.
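
One common way to buy compute for less than retail is the Spot market (whether Sonian leaned on Spot, Reserved Instances, or both isn’t spelled out here). A hypothetical Spot request with the AWS SDK for JavaScript; the AMI ID, bid price and instance type are placeholders:

```javascript
// Request Spot capacity instead of paying the on-demand rate.
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.requestSpotInstances({
  SpotPrice: '0.10',            // maximum hourly bid, typically well below on-demand (placeholder)
  InstanceCount: 4,
  LaunchSpecification: {
    ImageId: 'ami-12345678',    // placeholder worker AMI
    InstanceType: 'm1.xlarge'   // placeholder instance type
  }
}, function (err, data) {
  if (err) { return console.error('Spot request failed:', err); }
  console.log('Spot request IDs:',
    data.SpotInstanceRequests.map(function (r) { return r.SpotInstanceRequestId; }));
});
```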

Read more…

Comparing 6 Cloud App Marketplaces

Enterprise application marketplaces are sprouting up like springtime daffodils. The latest entrant is Amazon Web Services’ AWS Marketplace. Amazon the e-tailer is no stranger to broad e-commerce initiatives, having conquered books, home goods, electronics, digital media and, most recently, mobile. (Aside: all indications show Amazon’s Android app store is off to a great start after a somewhat lukewarm industry reception.)

Many of the newest cloud apps are launched in the AWS cloud. AWS has done a great job courting startups onto their cloud platform. With the AWS Marketplace, Amazon is helping its customers be more successful by giving visibility to both small and large companies that choose AWS for their cloud infrastructure. The AWS Marketplace will also further cement customers into the AWS cloud, since Marketplace participation requires an AWS account. You can’t sell a non-AWS-hosted application in the AWS Marketplace. Recently AWS has been publicly advocating the idea of “take your data/app with you,” but in reality moving a complicated SaaS application with a large data footprint from one cloud to another is no small feat. The AWS Marketplace is one more glue point between ISVs and AWS.

Apple’s extremely successful iOS App Store, along with iTunes, paved the way for the current marketplaces targeting enterprise customers. Salesforce.com is the poster child for business application marketplace success.

I found six cloud-themed, business-oriented marketplaces, which are described below in alphabetical order. Across these six marketplaces we see a recurring theme: marketplaces are tied to their underlying technical platforms, and none supports a “cross-platform” environment. Google, Box and Salesforce each allow the others to sell into their customer base, but all require a technical hook into an API or account.

  • AWS Marketplace
  • Box.com OneCloud
  • Chrome Web Store
  • Google Apps Marketplace
  • Salesforce.com AppExchange
  • Zoho

1. AWS Marketplace

What is it?

The AWS Marketplace aggregates and curates thousands of applications powered by the AWS cloud.

Amazon has powerful e-commerce tools for subscription management, billing, shopping carts and customer ratings, which AWS customers can use to get more third-party customer traction. The AWS Marketplace complements DevPay and paid AMIs with a robust e-tailer user experience.

Requirements?

  • AWS Account
  • Application must be running within the AWS cloud

Pricing Model?

Application publishers choose their own price. Currently ISVs can sell a paid AMI, in which case Amazon generates revenue from the EC2 charges while the application runs on an EC2 instance. For turnkey SaaS applications, the AWS Marketplace acts like a referral business, in which case the revenue to AWS is indirect.

Interesting note:

The AWS Marketplace and Amazon Partner Network both launched within days of each other. Amazon is accelerating innovation on multiple fronts for its juggernaut cloud platform. The startup community is pretty much locked in. Now the goal is to expand to the enterprise, and the Partner Network and Marketplace are two steps toward that goal.

Read more…

A Tale of Two Cloud Search Engines

Sonian Cloud Search and Amazon CloudSearch: their names may sound the same, but they couldn’t be further apart in how much they cost to operate and their intended use cases.

Sonian is a veteran “Cloud Search” pioneer. In 2008 we launched the first version of search in the cloud, and today the service operates simultaneously across multiple public clouds using a single reference architecture.

Over the past 4 years we have perfected cloud search scaling and cost efficiencies. It’s been a steep learning curve, but well worth the effort. Today there are over seven billion documents indexed, with fifteen million new documents added each day. Daily index and retrieval volumes rise as new customers sign up for the service.

The secret to Sonian Cloud Search mastery is a combination of open source, IP developed in-house, and detailed metrics that show us cost and performance information. Every few months improvements are deployed to lower costs and increase reliability. We’ve driven per-document unit costs down to fractions of a cent.

Read more…