5 Benefits of Adopting a Daily Meditation Practice


A 5 minute read.

I recently read Dan Harris’ new book 10% Happier and came away with a new understanding of how a daily meditation practice doesn’t have to be hard, isn’t just the pastime of monks, buddhas, and New Age-y hippies, and can improve mental and physical health and overall well-being. After reading about his experience, a daily meditation practice became very accessible to me.

Dan is an ABC News reporter, and he shares his personal journey to realizing the positive benefits of regular meditation and “mindful” thinking. The backdrop is his return to New York after covering the Iraq War, dealing with post-traumatic stress while pursuing a high-pressure network television career. My backstory isn’t as stressful or interesting as a war correspondent’s, but I found inspiration in his writing.

In the high-tech startup world in which my colleagues and I are immersed, email, texts, tweets, and blog posts blare at us across all our devices. We’re overloading our brains with a constant stream of sensory input, and the problem of disconnecting is becoming epidemic. True epiphanies emerge only when we disconnect and clear our minds. And the world needs more epiphanies.

It used to be that a six-hour cross-country flight without WiFi was agony, but now I kinda enjoy the opportunity for deep thinking when WiFi isn’t available and I’m not tempted to “check in.” Being constantly connected and available to the team is a problem of our own making, so the collective “we” needs to find a happy medium between always being available and purposeful disconnection. For me, meditation is a way to achieve a healthy balance.

I approached meditation the same way I integrated a regular exercise routine over the past few years: I started with simple goals, didn’t beat myself up if I missed a day, and gradually eased into a cadence.

Here are my five benefits of a daily 30-minute meditation practice:

Read more…

Chromebox for Meetings Increases Distributed Team Happiness


A 5 minute read.

On Feb 6, 2014, Google announced Chromebox for Meetings, a bundle of hardware, software, and cloud services packaged to work with the Google Apps platform, specifically Google Calendar and Google Hangouts. Google sourced best-of-breed A/V components from Jabra, Logitech, and Asus/Samsung and combined them with a special ChromeOS build optimized just for Hangouts. A generous supply of HDMI, USB, and ethernet cables is also included. Setup took literally five minutes. You need to supply your own display, which requires an HDMI input.

Launch pricing is $999, including one year of managed meeting service. (For the first month there was a price special with an older Samsung Chromebox instead of the more powerful Asus now sold.)

I have written previously about how to configure persistent Google Hangouts for virtual conference rooms. Chromebox for Meetings takes this concept a big step forward with seamless calendar integration.

Why Chromebox for Meetings?

Sonian is mostly a distributed development team so it’s vitally important for team cohesion to enable “frictionless” video conferences with really good audio. Scheduling, inviting and managing a team meeting with a mixture of local and remote folks should require zero ceremony. Chromebox for Meetings, while not perfect, helps our team communicate and collaborate better.

Previously I tried to use Chromebooks (the laptop) connected to a Logitech webcam and a large flat-panel display, mounted to A/V carts, to create a pseudo meeting experience. This worked for a while, except the Chromebook settings would occasionally revert to the built-in webcam, or the speaker and mic inputs would change. I realized this was too finicky right about the time Chromebox for Meetings was announced.

My goal was to create an experience where a non-technical person can walk up to the A/V cart and start a meeting by pressing one button.

Unboxing and Setup

In addition to the Chromebox for Meetings kit you will need an HDMI display. I chose an expensive Samsung LED TV. I also mounted the display to a rolling A/V cart. Velcro strips will come in handy to neatly secure the cables.

Below is the unboxing for the Samsung Chromebox version.

Read more…

3 Rules for Perfect Email

Your career success hinges on writing easy-to-understand emails. Now more than ever, teams co-create via the written word rather than in-person meetings. Mastering the art of writing great emails will improve personal and group communication, spurring professional growth.

There have been many posts on the subject of “getting to inbox zero,” covering various organizing strategies and productivity tools for managing an overflowing inbox.

This post is about a different angle to email management: shining a spotlight on the sender’s responsibility, not the recipient’s, for effective email communication.

Let’s face it: an email becomes a TODO for the recipient. They have to read the message and then, in many cases, respond to questions or clarify assertions. The sender needs the message to be remarkable in some way to get a faster reply, which means putting some thought into how the message will be read and acted upon.

3 Common Sense Rules for Perfect Email Communication

  1. Subjects Matter
  2. Use Generous White Space
  3. Format the Message for Easy Replies

Rule 1. Subjects Matter

Start with a specific, compelling subject line. Use three to five descriptive words to help your reader understand the context without having to read the message. This will help them prioritize your update. Sometimes it takes more time to craft the subject than the actual content.

Read more…

The Cheap Cloud versus The Reliable Cloud

5 Lessons Learned from June 29 2012 AWS Outage

Discussing a difficult situation is never fun, and I have been wrestling with how to start this post. It’s about revealing unpleasant cloud truths, and not necessarily the truths you might be expecting to hear. I am not here to preach, but my message is important. For the past five years I have been working on a project that uses the cloud to its fullest potential, celebrating the victories and learning from the defeats.

I’m speaking to my fellow Amazon cloud citizens: my co-tenants, if you will, in the “Big House of Amazon.” We’re all living together in this man-made universe with its own version of Newtonian laws and Adam Smith economics. 99.99% of the time all is well… until out of the blue it’s not, and chaos upends polite cloud society.

If you lost data or sustained painful hours of application downtime during Amazon’s June 29 US-East outage, then you can only wag your finger in blame while looking in the mirror.

I know, I know, the cloud is supposed to be cheap AND reliable. We’ve been telling ourselves that since 2007. But this latest outage is an important wake up call: we’re living in a false cloud reality.

Lesson 1: Follow the Cloud Rules

Up front, you were told the “rules of the cloud”:

  • Expect failure on every transaction
  • Backup or replicate your data to other intra-cloud locations
  • Buy an “insurance policy” for worst case scenarios

These rules fly in the face of the popular notion that the cloud is “cheaper” than do-it-yourself hosting.
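The first rule, “expect failure on every transaction,” translates directly into code. Here is a minimal sketch of retrying a flaky cloud API call with exponential backoff; the function names and retry parameters are illustrative, not from any actual Sonian codebase:

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.1):
    """Retry a flaky cloud API call with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except IOError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Back off exponentially, with jitter to avoid thundering herds.
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            time.sleep(delay)

# Example: a simulated operation that fails twice before succeeding.
state = {"calls": 0}
def flaky_put():
    state["calls"] += 1
    if state["calls"] < 3:
        raise IOError("transient S3 error")
    return "ok"

print(call_with_retries(flaky_put))  # → ok
```

The jitter matters at scale: if thousands of clients retry on the same schedule after an outage, the synchronized retries become their own denial-of-service event.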

There is a silver lining to this dark cloud event. Everyone in the cloud will learn and improve so we don’t have to repeat this episode ever again.

Read more…

Reflecting on One Year of Cloud Cost Optimization

For the past year I held the unelected position of “Cloud Cost Czar.” I have written about the duties the role entails in A Day in the Life of a Cloud Cost Czar. Recently I handed the responsibility over to a colleague who will carry on the daily routines and continue to improve our cloud cost management efforts. In the handoff process, almost a year to the day after assuming the czar’s responsibilities, I reflected on the previous twelve months and everything the company accomplished as a united team to “tame the cloud.”

I created a graph to visualize the dramatic change over one calendar year: an area graph showing subscriber seats (in green) overlaid on subscriber costs (blue, orange, and red; our principal costs are cloud compute and two types of cloud storage). As subscriber seats grew, costs went up, peaked, and then declined over the course of the year. The rise, peak, and subsequent decline all map to cost-cutting efforts initiated by Sonian engineering and support groups.
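The trend in that graph boils down to a single unit metric: cost per subscriber seat. A quick sketch with made-up numbers (illustrative only, not Sonian’s actual figures) shows how the metric behaves when spend peaks and then declines while seats keep growing:

```python
# Hypothetical monthly figures (illustrative, not Sonian's actual data):
# subscriber seats and total cloud spend (compute plus two storage tiers).
seats = [10_000, 14_000, 18_000, 22_000, 26_000, 30_000]
spend = [20_000, 30_000, 38_000, 36_000, 33_000, 30_000]  # rises, peaks, declines

# Unit cost per seat, month over month.
cost_per_seat = [round(c / s, 2) for c, s in zip(spend, seats)]
print(cost_per_seat)  # → [2.0, 2.14, 2.11, 1.64, 1.27, 1.0]
```

Watching this single ratio, rather than the raw monthly bill, is what makes “growth with falling costs” visible at a glance.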

Throughout the year we got smarter about how to “purchase” compute time for less than retail, how to store more customer data while consuming less cloud storage, and how to process more customer data using fewer CPU hours. With each improvement we re-affirmed, with a high-five, that in the cloud we were in control of our cost destiny. This is when the phrase “infrastructure as code” really means something.

Read more…

A Tale of Two Cloud Search Engines

Sonian Cloud Search and Amazon Cloud Search. Their names may sound the same, but they couldn’t be further apart in how much they cost to operate and their intended use cases.

Sonian is a veteran “Cloud Search” pioneer. In 2008 we launched the first version of search in the cloud, and today the service operates simultaneously across multiple public clouds using a single reference architecture.

Over the past four years we have refined cloud search scaling and cost efficiency. It’s been a steep learning curve, but well worth the effort. Today there are over seven billion documents indexed, with fifteen million new documents added each day. Daily index and retrieval volumes rise as new customers sign up for the service.

The secret to Sonian Cloud Search is a combination of open source software, IP developed in-house, and detailed metrics that show us cost and performance. Every few months improvements are deployed to lower costs and increase reliability. We’ve driven per-document unit costs down to fractions of a cent.
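As a back-of-envelope check on those unit economics, here is a quick sketch using the volumes from the post and an assumed fractional-cent unit cost (the exact per-document figure is mine, for illustration only):

```python
# Back-of-envelope indexing cost, using volumes cited in the post.
docs_per_day = 15_000_000   # ~15 million new documents indexed daily
cost_per_doc = 0.0005       # assumed: 1/20 of a cent per document (illustrative)

daily_cost = docs_per_day * cost_per_doc
print(f"${daily_cost:,.0f}/day to index new documents")  # → $7,500/day
```

At these volumes, shaving even a hundredth of a cent off the per-document cost is worth thousands of dollars a month, which is why the periodic efficiency deployments matter.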

Read more…

FISMA Chronicles: FedRAMP, Inheritance and Key Controls

Part 2: FedRAMP, Inheritance and Key Controls

I am leading the FISMA project at Sonian, and we’re getting closer to achieving our first FISMA Moderate accreditation. For background on FISMA, read my first blog post on this subject.

With FISMA Moderate accreditation, Sonian will be able to manage non-defense government data. Accreditation is granted in the form of an “Authority to Operate” (ATO) bestowed upon a project by the government agency that will implement and use the product or service. A cybersecurity team within the agency evaluates each project’s security documentation and gives a thumbs up or thumbs down. It’s an iterative process that starts with extensive documentation, followed by an audit and government review and oversight. FISMA applies to third-party services purchased by the government as well as internally developed and managed IT projects.

FedRAMP… Briefly

Currently, if a vendor wants to sell the same IT service to more than one government agency, FISMA requires an ATO from each agency, which adds time, complexity, and cost to the procurement process. Historically, each agency has implemented and interpreted FISMA standards differently. The National Institute of Standards and Technology (NIST) devised the “FISMA Reference Architecture” for all agencies to follow, but in practice local interpretation has varied.

A “new and improved” accreditation standard is supposed to fix some of these issues. FedRAMP is a single umbrella guideline encompassing current FISMA rules as well as updated rules that better align FISMA with technologies such as Software as a Service (SaaS) and cloud computing. When the legislation that created FISMA was drafted in 2002, SaaS and cloud computing were not on government technologists’ radar. FedRAMP modernizes FISMA and also strives to streamline government IT purchasing, lower costs, and expedite project timelines.

FedRAMP will benefit from FISMA’s first decade, so I am hopeful for an improved certification process when FedRAMP is officially ratified in about a year. Quite a bit is already known about FedRAMP, and Sonian is working on a dual strategy: get FISMA Moderate for one agency, then focus on FedRAMP for all other agencies.

Read more…

Amazon “Partnering” for Enterprise Cloud Success

GigaOM‘s Om Malik is reporting on a new business development partnership between Amazon Web Services and Eucalyptus Systems. Eucalyptus is the startup providing an open source implementation of the AWS cloud APIs. Eucalyptus allows customers to build their own “private” clouds with AWS API compatibility.

Smart move on Amazon’s part. Amazon’s amazing cloud success puts them in a unique position to maintain a commanding lead in public cloud infrastructure, and now with this partnership they have a great story to tell that bridges the gap between large-enterprise private clouds and their market-leading public cloud.

Enterprise cloud adoption needs two crucial ingredients combined at the right inflection point of market uptick. The first is applications; the second is a credible story for how a “private cloud” can evolve to use public cloud resources.

Since Eucalyptus is the open source equivalent of the core AWS APIs, it seems natural and expected for Amazon to partner with the five-year-old California firm. It’s also noteworthy that neither Amazon nor Eucalyptus wants to characterize the partnership as a “hybrid cloud” play. Amazon probably feels that its ability to drive down costs will eventually attract every business to its cloud over time, so partnering with the company that created the open source AWS API implementation is a great cloud on-ramp strategy.

As for applications, companies like Sonian are already proving that a public cloud is the best infrastructure to support an enterprise-focused SaaS service. Like Eucalyptus, Sonian is also a five-year-old cloud start-up. The cloud makes it possible for Sonian to exist, while at the same time the cloud needs services like Sonian to solve a business pain point with an application built from the ground up to use a public cloud.

It’s amazing what the “new” cloud-industry has accomplished in the past five years. Growth, innovation, and nothing less than a complete paradigm shift in Enterprise IT.

FISMA Chronicles: Prologue – Quick Immersion into a New World

In December 2002 the US Congress passed the Federal Information Security Management Act (FISMA).  FISMA requires each government agency to implement policies, procedures, and documentation for information security. This includes internal and external government-run systems, and systems provided by third-party providers.

A flourishing information security practices industry has developed in FISMA’s wake to help guide the government and vendors through the numerous, byzantine certification activities.

The FISMA mission statement is to:

Protect the Nation’s Critical Information Infrastructure

FISMA has three assessment levels and risk profiles.

  • Low – Procedures to manage public-facing government websites, such as data.gov
  • Moderate – Best practices for managing sensitive data and personally identifiable information such as credit card numbers, social security numbers, etc.
  • High – Strict policies for managing military, intelligence and classified information.

The majority of internal applications require FISMA Moderate. The moderate certification process is the focus of this series.

The moderate risk profile means addressing over three hundred controls, ranging from information handling and physical media management to threat assessments. The controls are grouped into the following “control families”:

  1. Access Control
  2. Awareness and Training
  3. Audit and Accountability
  4. Security Assessment and Authorization
  5. Configuration Management
  6. Contingency Planning
  7. Identification & Authentication
  8. Incident Response
  9. Maintenance
  10. Planning
  11. Personnel Security
  12. Risk Assessment
  13. System and Communication Protection
  14. System and Information Integrity

Many start-ups address the above with varying levels of completeness, but may not have all the supporting documentation to prove compliance. For SaaS systems operating in a cloud environment, the challenge is to describe the control boundaries between the cloud provider and the application layer. For example, FISMA requires a policy for physical media disposal. The app layer (i.e., the cloud customer) doesn’t have access to physical media in a cloud environment, so that control is the responsibility of the cloud provider and the app layer inherits it. Conversely, the cloud infrastructure has no control over the app layer, so the FISMA requirement to support two-factor web-app authentication is the responsibility of the app layer, not the cloud provider.
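The boundary-of-responsibility idea can be sketched as a simple mapping. The control identifiers below follow the NIST SP 800-53 naming scheme, but which layer owns each one is an assumption for illustration, following the two examples in the text (media disposal versus two-factor authentication):

```python
# Illustrative sketch of FISMA control inheritance in a cloud deployment.
# Ownership assignments are assumptions for illustration, mirroring the
# examples in the text (media disposal vs. two-factor authentication).
RESPONSIBILITY = {
    "MP-6 Media Sanitization (physical media disposal)": "cloud provider",
    "PE-3 Physical Access Control (data centers)":       "cloud provider",
    "IA-2 Two-factor authentication for the web app":    "application layer",
    "AC-2 Application user account management":          "application layer",
}

def inherited_controls(layer):
    """Controls a layer inherits, i.e., the *other* party implements them."""
    return [c for c, owner in RESPONSIBILITY.items() if owner != layer]

for control in inherited_controls("application layer"):
    print("inherited:", control)
```

In practice this mapping becomes the “control boundary” section of the security documentation: every one of the three-hundred-plus controls gets assigned an owner, and the inherited ones point at the cloud provider’s own accreditation evidence.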

FISMA wasn’t designed for a world with cloud computing. Its heritage dates back to 2002, a world of hardware-centric design principles and best practices. Sonian and others are pioneering FISMA Moderate certification in a cloud environment.

Topics I will cover in upcoming issues of the FISMA Chronicles:

  • The impact cloud computing has on FISMA
  • How “agile” start-ups manage ongoing FISMA compliance requirements
  • FedRAMP as the next step toward consistent FISMA-like accreditation


Image Credit: fismacenter.com

Cloud Success Requires Cost-aware Engineering

This is a true story from the “Cloud Cost Czar Chronicles.”

Our S3 “penny per one thousand” API costs started to rise rapidly in the second half of the cloud infrastructure billing period. We have seen this behavior before and knew it could be attributed to increased usage, a new defect, or a design flaw rearing its head at a scaling tipping point. My job as “cost czar” is to raise the alarm and work with the team to figure out what is going wrong. At the observed rate of increase, the excess charges would push the monthly bill beyond the budget. One thing we have learned in the cloud is that costs can rise quickly but take a while to go down: the deceleration can be out of proportion to the acceleration when you are trying to manage expense within a single billing period.

When we started using Amazon Web Services S3 (a PaaS object store) back in 2007, we were acutely aware of the three pricing vectors in effect: storage consumed, the price of API calls to store and list data, and the price of API calls to read and delete data. We’ve been using S3 heavily for five years, and we tried to model the “all-in” costs as accurately as possible. But “guesstimating” costs beyond the raw storage was a stretch.

PaaS services have an intrinsic “social engineering” element. If you color outside the lines, the financial penalty can be significant. But if you master the pricing game, the rewards are equally significant. So five years ago we figured that as long as we pointed in the right general direction, we’d figure it out as we went along. Some assumptions proved a positive surprise: raw storage costs went down. Some surprises were not so pleasant: continually wrangling the API usage fees, especially the transactions that cost a penny per thousand, proved a constant challenge.

But I still like my options with S3 compared to buying storage from a hardware vendor and incurring the administrative overhead. With S3 we can lower our costs through smarter engineering. With storage hardware, the only way to lower costs is to wrangle a better deal from an EMC salesperson.

As one of the original “cloud pioneers,” Sonian is not alone in this effort, and it’s been a real eye-opener for software designers to have to think about how their code consumes cloud resources (and expense) at scale. Whether a penny per thousand or a penny per ten thousand, when you’re processing hundreds of millions of transactions a month, any miscalculation suddenly brings a dark cloud raining over your project.
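The three pricing vectors can be modeled in a few lines. This is a rough sketch: the rates below are illustrative placeholders, not actual AWS prices, though the “penny per thousand” and “penny per ten thousand” request tiers echo the figures mentioned above:

```python
# Rough S3 monthly-cost model over the three pricing vectors named above.
# All rates are illustrative placeholders, not actual AWS prices.
def s3_monthly_cost(gb_stored, put_requests, get_requests,
                    storage_rate=0.10,        # $/GB-month (assumed)
                    put_rate=0.01 / 1_000,    # "a penny per thousand" PUT/LIST
                    get_rate=0.01 / 10_000):  # a penny per ten thousand GET
    return (gb_stored * storage_rate
            + put_requests * put_rate
            + get_requests * get_rate)

# At hundreds of millions of requests a month, request fees rival storage:
cost = s3_monthly_cost(gb_stored=50_000,
                       put_requests=300_000_000,
                       get_requests=500_000_000)
print(f"${cost:,.0f}")  # → $8,500 (storage $5,000 + PUTs $3,000 + GETs $500)
```

Running a model like this against real usage metrics is what turns a surprise mid-month spike into an engineering signal: you can see which vector (storage, writes, or reads) is driving the bill before the invoice arrives.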

Read more…