Archive for the ‘Sonian’ Category

Chromebox for Meetings Increases Distributed Team Happiness

A 5-minute read.

On February 6, 2014, Google announced Chromebox for Meetings, a bundle of hardware, software, and cloud services packaged to work with the Google Apps platform, specifically Google Calendar and Google Hangouts. Google sourced best-of-breed A/V components from Jabra, Logitech, and Asus/Samsung and combined them with a special ChromeOS build optimized just for Hangouts. A generous supply of HDMI, USB, and Ethernet cables is also included. Setup took literally five minutes. You need to supply your own display with an HDMI input.

Launch pricing is $999, including one year of managed meeting service. (For the first month there was a price special featuring an older Samsung Chromebox instead of the more powerful Asus now sold.)

I have written previously about how to configure persistent Google Hangouts for virtual conference rooms. Chromebox for Meetings takes this concept a big step forward with seamless calendar integration.

Why Chromebox for Meetings?

Sonian is mostly a distributed development team, so it’s vitally important for team cohesion to enable “frictionless” video conferences with really good audio. Scheduling, inviting, and managing a team meeting with a mixture of local and remote folks should require zero ceremony. Chromebox for Meetings, while not perfect, helps our team communicate and collaborate better.

Previously I tried to use Chromebooks (the laptop) connected to a Logitech webcam and a large flat-panel display to create a pseudo meeting experience. These were mounted on A/V carts. This worked for a while, except the Chromebook settings would occasionally revert to the built-in webcam, or the speaker/mic inputs would change. I realized this was too finicky right around the time Chromebox for Meetings was announced.

My goal was to create an experience where a non-technical person can walk up to the A/V cart and start a meeting by pressing one button.

Unboxing and Setup

In addition to the Chromebox for Meetings kit you will need an HDMI display. I chose an expensive Samsung LED TV. I also mounted the display to a rolling A/V cart. Velcro strips will come in handy to neatly secure the cables.

Below is the unboxing for the Samsung Chromebox version.

Read more…

How To: Simple Persistent Google Hangout Virtual Conference Rooms

Create Friendly-Named Persistent Google Hangout URLs

A 7-minute read.

This post describes using Google Hangouts for easy virtual meetings by associating a domain or DNS name with a persistent, long-lived Google Hangout URL.


Upon completing these simple instructions you will be able to access and share an easy-to-remember DNS name for your persistent Google Hangout.

Example persistent friendly URLs:

http://acme-hangout.com

http://team-meeting.acme.com

Either one launches a consistent Google Hangout, and everyone ends up in the same “virtual” room.

Persistent Google Hangouts Use Cases

  • Recurring team meetings
  • “Drop-in” published office hours
  • Create a persistent Hangout for each physical conference room
  • Mix physical and virtual meetings

Google Hangouts is a great distributed-team productivity tool for virtual meetings. Since the original launch in fall 2011, Google has incrementally improved the service and filed down the rough edges. But while steadily improving, Hangouts is still not perfect for organizing recurring or ad hoc virtual meetings.

We’re accustomed to services like GoToMeeting and WebEx, which offer admin control panels for scheduling recurring events. In contrast, Google Hangouts has no central meeting control panel. Also, when a group meeting is over and the last person exits, the meeting disappears into the ether. Resuming a meeting requires a new invitation and is cumbersome.

The online meeting metaphor should be that of a virtual conference room that mirrors how a physical conference room works. In this way, the meeting organizer only needs to publish the conference location (in our case, a DNS name) and allow the attendees to “walk in” (join the Hangout) at their own pace. This is in contrast to how Hangouts works now, requiring the meeting owner to pull each attendee into the meeting.

Our goal is to remove the unnecessary ceremony of starting a virtual meeting.

We have found a way to connect a DNS name to a “long-lived” persistent Google Hangout URL. A DNS name is easily remembered and simulates a virtual conference room.

2 Easy Steps

Step 1: Create a persistent Google Hangout URL.

Step 2a: Redirect a dedicated domain to the persistent URL.

Or, if you are comfortable with DNS management, save money by using a subdomain instead of buying a new domain:

Step 2b: Add a record to an existing domain’s DNS zone and use your DNS host’s “HTTP Redirect” (URL forwarding) feature to connect the DNS name to the URL. A minimal self-hosted redirect sketch follows below.
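If you would rather host the redirect yourself than rely on a registrar’s URL-forwarding feature, a few lines of Python are enough. This is a minimal sketch, not a production setup; the Hangout URL below is a placeholder you would swap for your own persistent URL.

```python
# Minimal self-hosted HTTP redirect: points a friendly DNS name at a
# persistent Hangout URL. The target URL below is a placeholder.
from http.server import BaseHTTPRequestHandler, HTTPServer

HANGOUT_URL = "https://plus.google.com/hangouts/_/your-persistent-id"  # placeholder

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 302 (temporary) so browsers don't cache the hop forever,
        # leaving you free to repoint the name later.
        self.send_response(302)
        self.send_header("Location", HANGOUT_URL)
        self.end_headers()

if __name__ == "__main__":
    # Point the domain's A record at the host running this script.
    # Port 80 requires root; use 8080 behind a forwarder otherwise.
    HTTPServer(("", 80), RedirectHandler).serve_forever()
```

Point acme-hangout.com (or team-meeting.acme.com) at the host running this script, and anyone who visits the friendly name lands in the same Hangout.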

Read more…

Abundant Innovation @Sonian: June 2013

Abundant Innovation, Sonian Style – June 2013

I was honored to witness the creativity of the @sonian product development team at our June 2013 All-Team Meet-up. Our tradition for the past five years has been to hold a 24-hour Codefest where teams self-organize to work on a project they feel passionate about. Nearly all of these passion pursuits benefit Sonian in some way. I’m a lucky CTO / Founder!

For this meet-up, 31 people across 9 teams created projects that will help Sonian save money or wow customers with new features. Awards were issued in two categories: Sizzle and Steak. Sizzle recognizes a new customer-facing feature, and Steak recognizes an improvement to the behind-the-scenes platform. Each project is highlighted below in presentation order.


Team: Big Brother (Winner: Sizzle Category)
Project: All Your IM Are Belong to Us

Vikram, Joel, and Arnaud teamed up on a project demonstrating the capability to capture IM conversations using a proxy between the IM client and the server. Their implementation captures all conversations, encrypted or not, for compliance and data-mining purposes. This is an important requirement for organizations subject to regulatory information-retention obligations.

The data collection is invisible to the end user because there is no software to install. A few issues still need to be resolved such as SSL certificate management and how to capture mobile device IM conversations. Archiving and analyzing IM data is especially important for the financial sector and other regulated industries.
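The post doesn’t include the team’s code, but the proxy pattern it describes is easy to sketch. Below is a hypothetical, minimal Python version: a TCP relay that sits between client and server and logs everything that passes through. The port and upstream host are assumptions, and a real deployment would terminate TLS here, which is the certificate-management issue noted above.

```python
# Hypothetical sketch of the proxy pattern described above: a TCP relay
# that observes traffic between an IM client and its server. A real
# capture proxy would terminate TLS here, which is the certificate
# problem noted in the post.
import socket, threading

LISTEN_PORT = 5222                    # assumption: XMPP-style IM port
UPSTREAM = ("im.example.com", 5222)   # placeholder IM server

def pipe(src, dst, label):
    while True:
        data = src.recv(4096)
        if not data:
            break
        print(f"[{label}] {len(data)} bytes")  # archiving hook goes here
        dst.sendall(data)

def handle(client):
    upstream = socket.create_connection(UPSTREAM)
    threading.Thread(target=pipe, args=(client, upstream, "c->s"),
                     daemon=True).start()
    pipe(upstream, client, "s->c")
    client.close()
    upstream.close()

listener = socket.socket()
listener.bind(("", LISTEN_PORT))
listener.listen()
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```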


Team: House of Targaryen
Project: Making Developers Happy

Alan, Josh, Scott, and Ubiratan worked on a project showing how Continuous Integration (CI) techniques can improve quality by finding defects faster. The project demonstrated running tests automatically as code is checked into source control. CI processes identify problems before the QA team begins its work, alerting the developer quickly, before too much time passes.

This project can be rolled into production with just a few more days of work.
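The post doesn’t say which CI stack the team used. As an illustration only, here is a hypothetical, minimal version of the core idea: a git post-receive hook that runs the test suite against every pushed revision. The repository path and test command are placeholders, not Sonian’s actual setup.

```python
#!/usr/bin/env python3
# Hypothetical sketch of test-on-checkin CI: a git post-receive hook
# that exports each pushed revision to a scratch directory and runs
# the test suite. Paths and the test command are placeholders.
import subprocess, sys, tempfile

REPO = "/srv/git/project.git"           # placeholder bare-repo path
TEST_CMD = ["python", "-m", "pytest"]   # placeholder test command

def run_tests(revision):
    with tempfile.TemporaryDirectory() as workdir:
        # Export the pushed revision into a clean working tree.
        tarball = subprocess.run(
            ["git", "--git-dir", REPO, "archive", revision],
            check=True, stdout=subprocess.PIPE).stdout
        subprocess.run(["tar", "-x", "-C", workdir], input=tarball, check=True)
        return subprocess.run(TEST_CMD, cwd=workdir).returncode

if __name__ == "__main__":
    # post-receive reads "<old-rev> <new-rev> <ref>" lines from stdin.
    for line in sys.stdin:
        old_rev, new_rev, ref = line.split()
        status = run_tests(new_rev)
        print(f"{ref} @ {new_rev[:8]}: {'PASS' if status == 0 else 'FAIL'}")
```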


Team: Facts and Fun
Project: Data Analysis for Fun and Profit (and Fun)

David, Raj, Steve, and Robert demonstrated a new data-management technology that specializes in managing information that slowly changes over time. This is especially important for new features Sonian plans to roll out later this year.

The practical application is useful for snapshotting a corporate directory, coordinating email folders, and many other use cases. It also provides general support for data visualizations, metadata, and the like, and complements the use of ElasticSearch. An added bonus is lower cloud infrastructure operating costs.
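The post doesn’t name the technology, but the classic pattern for slowly changing data is to keep validity intervals rather than overwrite records, so any past state (say, the corporate directory on a given day) can be reconstructed. A minimal, hypothetical sketch:

```python
# Hypothetical sketch of the slowly-changing-data pattern described
# above: instead of overwriting a record, close out its validity
# interval and append a new version, so any past snapshot can be
# reconstructed.
from datetime import datetime, timezone

history = []  # list of dicts: key, value, valid_from, valid_to

def upsert(key, value, now=None):
    now = now or datetime.now(timezone.utc)
    for row in history:
        if row["key"] == key and row["valid_to"] is None:
            if row["value"] == value:
                return             # no change, nothing to record
            row["valid_to"] = now  # close the old version
    history.append({"key": key, "value": value,
                    "valid_from": now, "valid_to": None})

def as_of(key, when):
    """Return the value of `key` as it stood at time `when`."""
    for row in history:
        if (row["key"] == key and row["valid_from"] <= when
                and (row["valid_to"] is None or when < row["valid_to"])):
            return row["value"]
    return None
```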

Read more…

Sonian Series C Round Media Coverage Summary

When all else fails, start-ups at least get mandatory media attention for their financing stages. Sonian closed a Series C round, and below are a few articles from the mainstream tech media coverage:


TechCrunch - Sonian Picks Up $13.6M For Cloud Archiving And Search, OpenView New Investor And Strategic Partner

Scott Kirsner’s Friday 5 - The Friday Five, with Rudina Seseri of Fairhaven Capital - Sonian discussed from minute 4:43 through 7:15

GigaOM - Sonian gets new funding — but no more from Amazon


Reflecting on One Year of Cloud Cost Optimization

For the past year I held the unelected position of “Cloud Cost Czar.” I have written about the duties such a role entails in A Day in the Life of a Cloud Cost Czar. Recently I handed the cost czar responsibility over to a colleague who will carry on the daily routines and continue to improve our cloud cost management. In the handoff process, almost a year to the day after assuming the czar’s responsibilities, I reflected on the previous twelve months and everything the company accomplished as a united team to “tame the cloud.”

I created a graph to visualize the dramatic change over one calendar year: an area graph showing subscriber seats (in green) overlaid on subscriber costs (blue, orange, and red; our principal costs are cloud compute and two types of cloud storage). As subscriber growth increased, costs went up, peaked, and then went down over the course of one year. The rise, peak, and subsequent decline all map to various cost-cutting efforts initiated by Sonian engineering and support groups.

Throughout the year we got smarter about how to “purchase” compute time for less than retail, how to store more customer data while consuming less cloud storage, and how to process more customer data using fewer CPU hours. With each improvement we re-affirmed, with a high-five, that in the cloud we were in control of our cost destiny. This is when the phrase “infrastructure as code” really means something.
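The post doesn’t show the arithmetic behind buying compute “for less than retail,” but the classic example is the reserved-versus-on-demand break-even calculation. A sketch with placeholder prices (not Sonian’s or AWS’s actual rates):

```python
# Illustrative reserved-vs-on-demand break-even calculation, the kind
# of arithmetic behind buying compute "for less than retail." All
# prices are placeholders, not actual AWS rates.
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.10       # $/hour, placeholder
reserved_upfront = 300.00   # $ one-time fee, placeholder
reserved_rate = 0.04        # $/hour, placeholder

def annual_cost(utilization):
    """Cost of one instance slot for a year at the given utilization."""
    hours = HOURS_PER_YEAR * utilization
    on_demand = on_demand_rate * hours
    reserved = reserved_upfront + reserved_rate * hours
    return on_demand, reserved

for utilization in (0.25, 0.50, 0.75, 1.00):
    od, rsv = annual_cost(utilization)
    winner = "reserved" if rsv < od else "on-demand"
    print(f"{utilization:4.0%} utilization: on-demand ${od:7.2f}  "
          f"reserved ${rsv:7.2f}  -> {winner}")
```

The break-even point depends on utilization: a machine that runs around the clock clearly favors the reserved purchase, while a lightly used one does not.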

Read more…

A Tale of Two Cloud Search Engines

Sonian Cloud Search and Amazon Cloud Search. Their names may sound the same, but they couldn’t be further apart in terms of operating cost and intended use cases.

Sonian is a veteran “Cloud Search” pioneer. In 2008 we launched the first version of search in the cloud, and today the service operates simultaneously across multiple public clouds using a single reference architecture.

Over the past four years we have perfected cloud search scaling and cost efficiency. It’s been a steep learning curve, but well worth the effort. Today there are over seven billion documents indexed, with fifteen million new documents added each day. Daily index and retrieval volumes rise as new customers sign up for the service.

The secret to Sonian Cloud Search mastery is a combination of open source software, IP developed in-house, and detailed metrics that show us cost and performance. Every few months improvements are deployed to lower costs and increase reliability. We’ve driven per-document unit costs down to fractions of a cent.
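To see what “fractions of a cent” means at this scale, here is a back-of-the-envelope unit-cost calculation. The document volumes come from the post; the monthly cost figure is purely illustrative, not a real number.

```python
# Back-of-the-envelope per-document unit cost. Document volumes come
# from the post; the monthly bill is an illustrative placeholder.
docs_per_day = 15_000_000        # "fifteen million new documents a day"
monthly_infra_cost = 50_000.00   # $ placeholder, NOT a real figure

docs_per_month = docs_per_day * 30
cost_per_doc = monthly_infra_cost / docs_per_month
print(f"{cost_per_doc * 100:.4f} cents per document")  # ~0.0111 cents
```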

Read more…

Only in the Cloud… Active and Passive Savings

File this one under “amazing but true.”

Today Amazon Web Services customers awoke to find their prices lowered for EC2, RDS, and ElastiCache.

All standard EC2 customers get a 10% discount, for doing absolutely nothing. They didn’t have to write more code, didn’t have to plead with or strong-arm a sales rep, didn’t have to threaten to change vendors. This is the promise of the cloud: a system running on AWS yesterday costs 10% less to run today.

For AWS customers who “meet Amazon in the middle,” i.e. “you do some work, Amazon does some work,” the savings are more dramatic: reserved-purchase reductions range from 37% to 41%. This is the other positive aspect of the cloud: as a customer, if you are willing and able to make changes in small increments, the savings add up. The cloud has a continuous history of price reductions in the form of new features and service derivatives, but to take advantage of them you have to write code. S3 Reduced Redundancy is a good example: a flavor of S3 with a lower price and lower durability, perfectly fine for storing less-important objects. But you need to write code to use this storage class.
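As a concrete sketch of what “write code to take advantage” looks like: with today’s boto3 SDK (which postdates this post; the idea is the same with any S3 SDK), opting an object into Reduced Redundancy is a one-parameter change. The bucket, key, and payload below are placeholders.

```python
# Sketch: opting an S3 object into the cheaper, lower-durability
# Reduced Redundancy storage class with boto3. Names are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-archive-bucket",    # placeholder bucket name
    Key="derived/thumbnail-0001.png",   # a re-creatable, less-important object
    Body=b"...object bytes here...",    # placeholder payload
    StorageClass="REDUCED_REDUNDANCY",  # the one-parameter cost decision
)
```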

The cloud has the dual concepts of “passive savings” and “active savings.”

“Cloud Killed the (SaaS) Rock Star”

“Cloud Killed the (SaaS) Rock Star”…

… well, not literally, but definitely in a figurative sense.

The press release below is the all-points bulletin heralding that the cloud has “won.” Why do I say this? Because LiveOffice, a non-cloud SaaS start-up, couldn’t compete against the new generation of SaaS start-ups powered by true public cloud computing, like Sonian.


LiveOffice was the rock star of SaaS archiving. Ten years in business, they deserve credit as one of the pioneers who legitimized the SaaS market. When LiveOffice launched a decade ago, they had to operate their own data centers. (This is called “co-located powered SaaS.”) But during the past five years, the world changed underneath them. Usually market dynamics cause this kind of disruption, but the SaaS archiving market didn’t shrink; rather, it’s bigger than ever. What changed, starting in 2007? The advent of the public cloud. Suddenly, any SaaS company running its own data center became vulnerable to competitors able to harness the cloud. This is the beginning of the cloud-powered SaaS era.

Seriously, I wish the LiveOffice team all the best. Sonian and LiveOffice competed vigorously from 2008 to 2011. Symantec acquired a great team; the fit between LiveOffice and Symantec makes a ton of sense, and it’s understandable why Symantec made the acquisition.

Although LiveOffice called themselves a “cloud archiving” company, that was stretching the truth. The cloud moniker is so overused at this point that the public is deceived into believing they are using a cloud service when, in fact, it’s really just the same old SaaS re-packaged with a new label.

Why did this Happen?

Operating a SaaS infrastructure in a pure cloud environment is vastly different from operating a co-located system; that is why we’re going to see more old-world SaaS companies change control or fade away. It will be exceedingly difficult to re-tool a co-located SaaS business to use the cloud. Not impossible, but very difficult: the whole architecture would need to change. I say this having lived in both worlds, with the cloud battle scars to prove it.

Read more…

FISMA Chronicles: Prologue – Quick Immersion into a New World

In December 2002 the US Congress passed the Federal Information Security Management Act (FISMA). FISMA requires each government agency to implement policies, procedures, and documentation for information security. This covers internal and external government-run systems as well as systems provided by third-party providers.

A flourishing information security practices industry has developed in FISMA’s wake to help guide the government and vendors through the numerous, byzantine certification activities.

The FISMA mission statement is to:

Protect the Nation’s Critical Information Infrastructure

FISMA defines three assessment levels, each with its own risk profile:

  • Low – Procedures to manage public-facing government websites, such as data.gov
  • Moderate – Best practices for managing sensitive data and personally identifiable information, such as credit card numbers, social security numbers, etc.
  • High – Strict policies for managing military, intelligence and classified information.

The majority of internal government applications require FISMA Moderate. The Moderate certification process is the focus of this series.

The moderate risk profile means addressing over three hundred controls, ranging from information handling to physical media management to threat assessments. The hundreds of controls are categorized into the following “control families”:

  1. Access Control
  2. Awareness and Training
  3. Audit and Accountability
  4. Security Assessment and Authorization
  5. Configuration Management
  6. Contingency Planning
  7. Identification & Authentication
  8. Incident Response
  9. Maintenance
  10. Planning
  11. Personnel Security
  12. Risk Assessment
  13. System and Communication Protection
  14. System and Information Integrity

Many start-ups address the above with varying levels of completeness, but may not have all the supporting documentation to prove compliance. For SaaS systems operating in a cloud environment, the challenge is to describe the control boundaries between the cloud provider and the application layer. For example, FISMA requires a policy for physical media disposal. The app layer (i.e. the cloud customer) has no access to physical media in a cloud environment, so that control is the responsibility of the cloud provider, and the app layer inherits the control. Conversely, the cloud infrastructure has no control over the app layer, so the FISMA requirement to support two-factor web-app authentication is the responsibility of the app layer, not the cloud provider.
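To make the boundary idea concrete, here is a toy sketch of tracking which party owns each control. The labels are simplified descriptions, not actual NIST 800-53 control numbers, and the assignments simply mirror the two examples above.

```python
# Toy illustration of the control-boundary idea: each control is owned
# by the cloud provider or the application layer, and the app layer
# "inherits" provider-owned controls. Labels are simplified, not real
# NIST 800-53 control numbers.
controls = {
    "physical-media-disposal": "cloud-provider",   # app layer inherits
    "datacenter-physical-access": "cloud-provider",
    "two-factor-web-authentication": "app-layer",
    "audit-log-retention": "app-layer",
}

def responsibilities(party):
    return sorted(c for c, owner in controls.items() if owner == party)

print("Inherited from provider:", responsibilities("cloud-provider"))
print("App layer must implement:", responsibilities("app-layer"))
```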

FISMA wasn’t designed for a world with cloud computing. Its heritage dates back to 2002, a world of hardware-centric design principles and best practices. Sonian and others are pioneering FISMA Moderate certification in a cloud environment.

Topics I will cover in upcoming issues of the FISMA Chronicles:

  • The impact cloud computing has on FISMA
  • How “agile” start-ups manage ongoing FISMA compliance requirements
  • FedRAMP is the next step toward consistent, FISMA-like accreditation


Image Credit: fismacenter.com

Cloud Success Requires Cost-aware Engineering

This is a true story from the “Cloud Cost Czar Chronicles.”

Our S3 “penny per one thousand” API costs started to rise rapidly in the second half of the cloud infrastructure billing period. We had seen this behavior before and knew it could be attributed to increased usage, a new defect, or a design flaw that rears its head at a scaling tipping point. My job as “cost czar” was to raise the alarm and work with the team to figure out what was going wrong. At the observed rate of increase, the excess charges would push the monthly bill beyond the budget. One thing we have learned in the cloud is that costs can rise quickly but take a while to come down; the deceleration can be out of proportion to the acceleration when you try to manage expense within a single billing period.

When we started using Amazon Web Services S3 (a PaaS object store) back in 2007, we were acutely aware of the three pricing vectors in effect: storage consumed, the price of API calls to store and list data, and the price of API calls to read and delete data. We have been using S3 heavily for five years, and we tried to model the “all-in” costs as accurately as possible. But “guesstimating” costs beyond the raw storage was a stretch.

PaaS services have an intrinsic “social engineering” element. If you color outside the lines, the financial penalty can be significant; if you master the pricing game, the rewards are equally significant. So five years ago we figured that as long as we pointed in the right general direction, we would figure it out as we went along. Some assumptions proved a positive surprise: raw storage costs went down. Some surprises were not so pleasant: continually wrangling the API usage fees, especially the transactions that cost a penny per thousand, proved a constant challenge.

But I still like my options with S3 compared to buying storage from a hardware vendor and incurring the administrative overhead. With S3 we can lower our costs through smarter engineering; with storage hardware, the only way to lower costs is to wrangle a better deal from an EMC salesperson. As one of the original “cloud pioneers,” Sonian is not alone in this effort, and it has been a real eye-opener for software designers to have to think about how their code consumes cloud resources (and expense) at scale. Whether a penny per thousand or a penny per ten thousand, when you process hundreds of millions of transactions a month, any miscalculation suddenly brings a dark cloud raining over your project.
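The three pricing vectors are easy to model. Here is an illustrative sketch; the rates are placeholders in the spirit of the “penny per thousand” figure above, not current AWS prices, and the usage numbers are invented to show the scale effect.

```python
# Illustrative model of the three S3 pricing vectors described above.
# Rates are placeholders in the spirit of "a penny per thousand," not
# current AWS prices.
def monthly_s3_cost(gb_stored, put_list_requests, get_delete_requests):
    storage = gb_stored * 0.10                    # $/GB-month, placeholder
    put_list = put_list_requests / 1_000 * 0.01   # "penny per thousand"
    get_delete = get_delete_requests / 10_000 * 0.01
    return storage, put_list, get_delete

# Hundreds of millions of requests a month make the API vectors matter:
s, p, g = monthly_s3_cost(
    gb_stored=500_000,
    put_list_requests=300_000_000,
    get_delete_requests=200_000_000)
print(f"storage ${s:,.2f}  put/list ${p:,.2f}  get/delete ${g:,.2f}")
```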

Read more…