Archive for the ‘Cloud Compute’ Category

5 Enterprise Tech Trends to Watch in 2016

Also posted to the Sonian Blog.

Sonian’s archive, search and analytics platform exists at the intersection of cloud, big data and machine learning. Over the past 8 years we have pioneered many initiatives in order to harness the cloud to solve a hard problem: fast and reliable full-text search and analytics for tens of billions of emails and attachments.

Our greater mission is to help enterprise IT migrate their on-premises systems to the cloud, and we know that archiving and information governance workloads are the first to “move on up to the top.”

We have a knack for identifying enterprise tech trends at the beginning of the adoption curve, and we want to share five trends we’re monitoring in 2016.

1. IT Becomes the Department of “Yes”

IT’s influence has waned in many organizations because IT, of its own volition, became the “department of no.” That’s ironic, since IT was previously the group that brought innovation, and with it competitive advantage, to the business. But even IT couldn’t keep up with the pace of innovation in the cloud era; it fell victim to legacy thinking, and line-of-business managers found IT obstructing progress. That’s why Salesforce.com, Workday, Hubspot, Intacct and others took off. In fact, there are over 1,400 enterprise SaaS apps that can be procured without IT involvement. Business managers “pushed” IT aside and implemented their own solutions. But IT is poised to come roaring back to relevance.

This situation is about to change. IT will reinvent itself in 2016 and become the department of yes.

In fact, the CIO role will be redefined dramatically. CIO historically means Chief Information Officer, but in 2016 that will shift to Chief Innovation Officer. And a new role is emerging: Chief Data Officer. CIO and CDO responsibilities will merge as part of IT’s resurgence.

IT departments will be smaller and more efficient. They will focus on higher-value services and let the “cloud” manage the undifferentiated heavy lifting. Managing an on-premises Microsoft Exchange server is no longer a value add; the cloud can deliver commodity-priced email cheaper than self-managed infrastructure. This means fewer people are needed, and their skills must be upgraded to focus more on business needs and less on mundane technical tasks.

It’s the end of “average IT.”

Read more…

Why I am excited about AWS Lambda

For me, learning about AWS Lambda was the most exciting AWS re:Invent announcement in 2014. Lambda is the back-end complement to the front-end AWS JavaScript API SDK released a year earlier. During the 2013 AWS re:Invent keynote, Werner Vogels said onstage that the AWS JS API SDK was one of the more exciting announcements of 2013 because it made it possible for single-page web apps to run directly on S3 (and DynamoDB, etc.) without the need for EC2. Lambda completes the big picture.

What is AWS Lambda?

AWS Lambda is a new cloud service from AWS that completely abstracts away the infrastructure needed to run your code. It’s the further manifestation of “let someone else handle the undifferentiated heavy lifting.” You write your code (JavaScript for now) and AWS Lambda runs it in a dedicated virtual language runtime. You do not have to think about provisioning, sizing and monitoring EC2 nodes. Seven years ago the introduction of EC2 was revolutionary because it abstracted application stacks from data center provisioning and operations; now Lambda leaps forward to offer “infrastructure-less” environments. (Sure, there are servers running “somewhere,” it’s just that you don’t have to think about them!)

You only pay for execution time. Right now that’s $0.20 per 1 million requests ($0.0000002 per request) plus $0.00001667 for every GB-second used. In typical AWS fashion the pricing model is multi-dimensional: you pay for the requests that invoke your Lambda function, and for duration. Duration is priced on execution time and the amount of memory allocated. Memory allocations range from 128 MB to 1 GB.
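To see how the two pricing dimensions combine, here is a rough cost calculator using the rates quoted above. (The figures come from this post and ignore the free tier; current AWS pricing may differ.)

```javascript
// Rough AWS Lambda monthly cost estimate from the rates quoted above.
// Ignores the free tier; current AWS pricing may differ.
function lambdaMonthlyCost(requests, avgDurationMs, memoryMb) {
  var requestCost = requests * 0.0000002; // $0.20 per 1M requests
  var gbSeconds = requests * (avgDurationMs / 1000) * (memoryMb / 1024);
  var durationCost = gbSeconds * 0.00001667; // price per GB-second
  return requestCost + durationCost;
}

// Example: 3 million invocations a month, 200 ms each, 512 MB allocated
console.log(lambdaMonthlyCost(3000000, 200, 512).toFixed(2)); // "5.60"
```

Note how duration dominates the bill: the 3 million requests cost only $0.60, while the 300,000 GB-seconds of compute cost about $5.00.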

Why is AWS Lambda so exciting?

Lambda is in essence a return to massive shared compute environments, but with a modern twist. I learned to program on a Control Data (CDC) mainframe and an IBM System/36. Each of these environments supported multiple jobs running at the same time, with a “human” scheduler coordinating start, stop, error recovery, duration and resource allocation. As a student, I had to “schedule” my homework assignments with the CDC controller, which often meant late nights in the computer lab. AWS Lambda is all this, but via API, and you get to be the controller.

AWS Lambda turns EC2, S3, SNS, DynamoDB and Kinesis (and soon all AWS services) into a collection of shared resources you harness to either trigger your application or receive your application’s computational output. A classic example: an image file uploaded to S3 needs to be resized. With Lambda, upon the POST, you can have S3 trigger Lambda to run your JavaScript resize function. The input is the original uploaded image and the outputs are the resized image objects, stored back to S3. The output can also trigger further actions in other AWS services, like updating DynamoDB or sending an SNS alert.

Virtual machines leapfrogged bare hardware
EC2 leapfrogged virtual machines
Containers leapfrogged EC2
Lambda leapfrogs VM, EC2 and containers

Each progression was toward ever more efficient utilization of the underlying physical resource (i.e. a host computer). At the end of the day, all these virtual services consume physical resources (metal, electricity, real estate), and Lambda is the next step toward extreme efficiency on the back end.

Read more…

AWS re:Invent Keynote Summaries

Keynote 1 – Andy Jassy

TL;DR summary:

Andy Jassy announced 8 new services in the opening keynote. The AWS SVP said “the new normal is that security and compliance are becoming reasons customers are moving to the cloud.” It used to be that security and compliance were reasons to stay out of the cloud. That’s a complete 180 in the past five years.

Theme: AWS has already won the hearts and minds of startups and ISVs

New focus… What does the “enterprise” need to embrace the cloud?

The Day 1 keynote revealed these new enterprise-focused features:

  • Amazon Aurora

    • Super fast, resilient SQL as a service at $0.29 an hour. This is an attack on Oracle. 11 9’s of durability, 5x faster than existing MySQL on AWS.

  • AWS Key Management Service

    • Multi-faceted fully managed encryption key service with automatic key rotation, auditing, and compliance certifications.

  • AWS Config

    • Track and “scenarioize” config changes before implementing. Very important for enterprises which are accustomed to on-premises ITIL tools and need cloud equivalents.

  • AWS CodeDeploy, CodeCommit and CodePipeline (descended from the internal Apollo tooling)

    • A suite of internal tools AWS has been using for 19 years now available for free to help enterprise developers more efficiently adopt new innovation practices and increase agility. “Agility” was a theme mentioned over and over. Cloud is more than low cost, it’s an agility amplifier.

  • AWS ServiceCatalog

    • Enterprises want to publish internal catalogs of approved IT cloud services and want internal groups to be self-sufficient. This service helps in that area.

    • Think of this as the “corporate portal to cloud.”

  • Amazon / AWS  Core Values (a couple of the dozen) cited in keynote:

    • Work backward from customer… and really mean it… don’t pay lip service to this mantra

      • This also means basically ignore competitors, unless customers tell you they need something a competitor is already doing.

    • Pioneer & Invent new technologies for the long-term

      • Legacy IT vendors have lost their innovation gene… AWS fills the void.

Read more…

Is it time for “Installed” Software as a Service?


A 3 minute read.

“Installed Software as a Service” sounds like an oxymoron. But it’s actually starting to happen and will accelerate even more.

Most enterprises embrace Software as a Service as their preferred way to solve an IT problem. Whether it’s archiving (Sonian), CRM (sf.com), marketing (Hubspot), customer service (Zendesk) or accounting (Netsuite), there is a SaaS offering for nearly every need. It seems only the largest or most security-sensitive organizations are not using SaaS. “Installed” SaaS is a delivery method that will make everyone happy about SaaS.

SaaS architectures are designed around massive multi-tenant services with appropriate per-tenant (i.e. per-customer) security. Multi-tenancy delivers economies of scale (we all love SaaS’ low pricing, right?), but is less desirable for the customer since data is commingled. Customers want the managed aspect of SaaS, but also want to control their own data. Managed-by-others with customer data control was a feat too hard or too expensive to accomplish, until now. Technology and market demand are aligning to give customers what they want.

The next wave of SaaS will provide the cost efficiencies of multi-tenancy with the security posture of single tenancy. SaaS vendors will offer customers the option to “install” the service into the customer’s own cloud account. The SaaS vendor will still manage the software, but the customer will have ultimate control over the data. Costs will be higher for this type of offering, but customers are willing to pay more for control, and it will still cost less than traditional on-premises self-managed software.

How is “Installed SaaS” possible?

Three emerging technology trends make installed SaaS possible.

The first is the significant amount of devops automation that has matured over the past seven years. Small teams are using mature tools and processes to fully automate cloud provisioning and software installation for massive multi-tenant stacks. This same tooling can manage many single-tenant stacks with similar efficiency: not quite as efficient as fully multi-tenant, but pretty close.

The second is technologies such as Docker (and containers in general), as well as new cloud capabilities from Amazon Web Services (with others following quickly) such as VPC, Key Management Service, Identity and Access Management, and cloud-native directory services. These are all the ingredients SaaS vendors need to “install” into a customer’s cloud account, now with well-documented information security boundaries. With this configuration customers can have a “master” kill switch to cut off external access to their data files. CIOs love this idea.

The third is a new breed of third-party services that can independently “audit” a cloud environment for compliance, security and access. Projects such as Conjur are working on this. Another innovative project is CloudHealth which can monitor cost efficiencies for many single tenant installations and provide automatic cloud infrastructure optimization.

SaaS vendors will need to modify their stack architectures to deliver “installed” SaaS, so there needs to be customer demand to justify the expense. Customers are just now starting to ask for this operating mode.

Read more…

It’s an Amazon Web Services World After All

A 2 minute read.

“It’s an AWS world, and we’re just living in it.”

Reflecting upon the first full day of AWS re:Invent 2013 my mind is spinning with conflicting thoughts. I’m trying to grasp the significance of what AWS is today and more importantly what it will be in the coming years. And also trying to anticipate where to focus innovation for the areas AWS will not venture into.

My DNA is systems management and creating solutions to fix IT pain points. But in AWS the opportunity to “solve the cloud’s” management problems is getting smaller with every new AWS release. I cheer the pace of innovation, until it commoditizes my offering.

My colleague Jenn McAuliffe used this analogy to describe her feelings about AWS: “We’re all in the Truman Show.” A completely enclosed world where all our needs are met in near-nirvana fashion, while the controlling forces outside this perfect world study the inhabitants’ behavior. This may sound far-fetched, but the sentiment registered with me. AWS is so far ahead of every other cloud infrastructure that at some point startups and enterprises will need to decide whether to double down on AWS and accept lock-in, or choose another path of least-common-denominator subsistence on an IaaS-only cloud.

So back to “where to innovate” in an AWS world. Focus on the application layer and on serving audiences where you have command of the domain expertise. Maybe you want to scratch your own itch, as long as it’s not core cloud building-block deficiencies. AWS will chip away at those, filing down the rough edges, more quickly than we witnessed in the Microsoft dominance era, or the Novell era before that.

It’s not your father’s enterprise IT anymore.

My Friction-Free Life Courtesy of Google Services

Over the weekend it struck me how different (i.e. frictionless and efficient) my information workflow has become because of all the Google services I use. It’s part of my “cloud-first” mindset when thinking about creating and sharing content. And I use the term “content” in the broadest sense; email is content, a document is content, this blog post is content, even a “tweet” I consider content.

Here is how I got started with “cloud-first” thinking:

1. Gmail

On April Fools’ Day 2004, almost nine years ago, I made a dramatic email paradigm shift: I left Outlook and jumped wholeheartedly into Gmail. With Outlook I obsessively organized incoming email into byzantine folder structures: projects, customers, personal, business. For some reason whiling away the hours organizing my email made me feel good, but in reality that was a “false high.” And to top it off, it was wasted effort; the folder structure became stale over time.

Gmail, with its folder-free, conversation-centric, fast-search approach to email management, was the complete opposite user experience, and it just “clicked” for me.

“How could I have not seen this before?” It took thinking outside the (in)box to transform email. No more dragging to folders. Simple tagging works better. Conversations threaded automatically. Woot!

2. Google Apps

In 2007 I started using Google Apps for content creation. A similar eureka moment occurred. Just like moving from Outlook to Gmail, moving from Word + Excel to GApps Docs + Spreadsheets was a fresh, modern approach to collaborative content creation. There was so much friction in the old world. Working on a shared document required emailing the file around or keeping track of versions on a file share. With GDocs the editing was in place, versions maintained, and collaboration speed increased. Now I get hives when someone sends me a Word file looking for comments and edits.

We’re fast approaching the era where the “file,” residing on a file system, will not be the default work product unit. It will be a shared document in a collaboration space designed for multi-user editing.

It took some patience as Google incrementally improved GApps. But today it’s pretty good, and getting better fast.

Read more…

8 Cloud Predictions for 2013

A few publications asked for “2013 Cloud Computing Predictions.” Sonian has been at the center of “cloud” since 2007, so I have a unique perspective to share. So, despite the obvious prediction (there will be “clouds” in 2013), below are eight realistic expectations for the state of cloud computing throughout 2013.

1. The definition of “Cloud” will become clear
The years 2008 through 2012 started the “cloud computing” conversation, but there is quite a bit of “cloudiness” (pun intended) about what the term really means. Commodity-priced public clouds like Amazon Web Services and Rackspace compete for mindshare with hybrid and private cloud wares from Citrix, VMware and others. Each camp uses the same terms interchangeably, which confuses the IT decision maker. The truth is, most businesses will use a combination of public and private cloud services. There are some use cases where the public cloud is simply the best value per IT budget dollar, and others where a unique requirement calls for a private cloud solution.

Throughout 2013 the public cloud providers will do a better job to differentiate their offerings from private cloud vendors. Public cloud vendors will showcase economics and security postures that will be very appealing to mid-size businesses. As more medium-sized organizations find cloud success, even enterprises will start to investigate their cloud options.

2. Enterprise IT will embrace cloud computing with at least three production or research and development projects using a public cloud
The past five years of physical server migration to server room virtualization pave the way for the next big wave, which is to use “cloud” for some IT workloads. Many businesses have identified a few projects where testing public cloud is budgeted and planned for 2013. Applications that consume large quantities of storage or have dynamic (elastic) compute needs are the first ideal candidates.

However, many IT decision makers understand we are at the beginning of a decade-long migration, and there will be a lot of experimentation before wholesale cloud adoption is mainstream in the Global 2000.

3. The “Virtuous Cycle of Cloud Computing” will become obvious
Cloud computing brings new thinking to the “economies of scale” behind very large infrastructure purchasing. For example, as more customers use cloud compute and storage, the cloud vendors in essence make larger purchases. Buying more lowers their costs, which in turn allows the cloud vendors to drop prices. Lower prices encourage more customers to buy into the cloud, and the cycle repeats itself.

The IT industry has never before witnessed the positive effect of large bulk purchases shared across hundreds of thousands of IT consumers. This will commoditize services for a very large buying audience. The closest analogy might be when government sponsors research (examples: the Internet, NASA) and the private sector then continues the innovation after the research phase.

Read more…

Cloud Cost Savings In Action

This morning Amazon Web Services notified its cloud customers that a new instance configuration is available in all regions. The new instance type is hi1.4xlarge, and it is significant in a number of ways. Amazon heard from customers that a high-I/O, low-latency configuration would be ideal for applications like relational and NoSQL databases. It’s also the first EC2 instance type to use SSD storage. Netflix, like Sonian a beacon of cloud success, has already shared a great benchmark study showing how this new instance type will improve performance and lower costs.

Wow… more performance and lower costs. This trend tracks back to a previous post I wrote about active and passive cloud cost savings. The introduction of this new instance type creates an “optimization opportunity.” If we cloud customers are willing to invest engineering resources to optimize our software around a new instance type, that is an example of “active savings”: we have to apply effort to realize a cost reduction. On the other hand, if AWS simply lowers the price of an existing instance type, that is “passive savings”: it just happens automatically.

This is the cloud’s grand bargain. Cost efficiencies flow from infrastructure provider, through the application layer, to the end customer.

The Cheap Cloud versus The Reliable Cloud

5 Lessons Learned from June 29 2012 AWS Outage

Discussing a difficult situation is never fun, and I have been wrestling with how to start this post. It’s about revealing unpleasant cloud truths, and not necessarily the truths you might be expecting to hear. I am not here to preach, but my message is important. For the past five years I have been working on a project that uses the cloud to its fullest potential, celebrating the victories and learning from the defeats.

I’m speaking to my fellow Amazon cloud citizens. My co-tenants, if you will, in the “Big House of Amazon.” We’re all living together in this man-made universe with its own version of Newtonian laws and Adam Smith economics. 99.99% of the time all is well… until out of the blue it’s not, and chaos upends polite cloud society.

If you lost data or sustained painful hours of application downtime during Amazon’s June 29 US-East outage, then you can only wag your finger in blame while looking in the mirror.

I know, I know, the cloud is supposed to be cheap AND reliable. We’ve been telling ourselves that since 2007. But this latest outage is an important wake up call: we’re living in a false cloud reality.

Lesson 1: Follow the Cloud Rules

Up front, you were told the “rules of the cloud”:

  • Expect failure on every transaction
  • Backup or replicate your data to other intra-cloud locations
  • Buy an “insurance policy” for worst case scenarios

These rules fly in the face of the popular notion that the cloud is “cheaper” than do-it-yourself hosting.

There is a silver lining to this dark cloud event. Everyone in the cloud will learn and improve so we don’t have to repeat this episode ever again.

Read more…

Synchronize Your Open Chrome Tabs

I have been using Chrome’s new “Open Tab on Remote Device” capability ever since it was introduced months ago. It’s a great productivity complement to “pinning a tab.” From any device I use regularly (MacBook Air, Mac Mini, iPad or Android) my open Chrome tabs are synchronized and available. This is different from synchronized bookmarks; it provides a whole new level of fluidity across the devices where I access the Internet. And pretty much all my information processing and content creation happens in a web browser.

For example, if I leave a web site/app open on the shared Mac Mini in the kitchen, I can continue to access the same site on my personal MacBook Air, or any other device that supports the Chrome browser. (Today that includes all iOS devices.) This functionality is not some third-party add-in, but a fully supported built-in feature.

Here is what the UI experience looks like from my MacBook Air:

This is the “Open a new tab” screen showing my frequent sites. Notice the “Other Devices” at the bottom?

Here is the Chrome settings screen:

Tick the “Open Tabs” option to enable this great new feature.