October 21, 2019
Part 4 of 5: Seeing Clearly Through the “Clouds”
Europeans visiting San Francisco in the summer often come prepared for cloudy weather but are instead greeted by a dense fog that blurs their expected postcard view of the Golden Gate Bridge. Similarly, a fog tends to settle in when businesses embrace cloud-based service providers expecting an immediate one-stop solution for avoiding the "red tape" of procurement, racking servers, and acquiring IP addresses and namespaces. Moving your infrastructure to the cloud can solve many of these problems and help minimize staffing, but there is much to consider before making such a transition.
Back in 1999, when Eric Mathewson approached me to join WideOrbit, cloud-based SaaS such as Salesforce had just started to roll out. At the time, the vision Professor Ramnath Chellapa of Emory University laid out in 1997, of "Cloud Computing" as a new "computing paradigm, where the boundaries of computing will be determined by economic rationale, rather than technical limits alone," was not yet achievable, mainly because of the high cost of hardware, bandwidth, and other limiting factors. What Salesforce and WideOrbit delivered in 1999 was closer to the massive hardware found in "co-location" facilities: we had servers in data centers and we had virtualization, but we still owned a large amount of pricey hardware.
Amazon Elastic Compute Cloud and Amazon Machine Images arrived in 2006, Microsoft Azure became generally available in 2010, and Google Cloud Platform followed with App Engine and, eventually, a true Infrastructure as a Service (IaaS) offering in Compute Engine. But for most platforms, short of building new architecture to leverage things like dynamic load balancing, containerization, microservices, and elasticity with Kubernetes, this was no better than what we had a decade earlier with co-located facilities and virtualization. Leveraging the newest technology gave our engineering team a lot to work on, and we had come a long way since those early days, but we weren't close to optimized. Today, we are actively enhancing the platform to take advantage of these technologies. In addition, as we help our clients navigate digital convergence, we are finding more advantages to a central platform for the monetization of media. In our years of streaming audio and video, both live and on-demand, we have seen firsthand the advantage of cloud elasticity: events that scale from thousands of viewers to millions with little notice.
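To give a flavor of what that elasticity looks like in practice, here is a minimal, purely illustrative sketch using the official Kubernetes Python client; the Deployment name ("stream-edge"), the "media" namespace, and the replica bounds are assumptions for the example, not our production configuration. It attaches a Horizontal Pod Autoscaler so a streaming service can grow from a handful of pods to hundreds as viewer load climbs:

```python
# Minimal sketch (assumptions: a kubeconfig is available and a Deployment
# named "stream-edge" already exists in the "media" namespace; both names
# are hypothetical).
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="stream-edge-hpa", namespace="media"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="stream-edge"
        ),
        min_replicas=4,                      # steady-state audience
        max_replicas=400,                    # a surprise live event
        target_cpu_utilization_percentage=60,
    ),
)

# Register the autoscaler; Kubernetes then adds or removes pods on its own.
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="media", body=hpa
)
```

The point of the sketch is the design choice, not the numbers: the platform has to be built so that capacity is a parameter the orchestrator can change on its own, rather than a rack of hardware someone has to order.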
So, what exactly do public cloud vendors such as AWS, Azure, and GCP offer? For starters, their infrastructure runs in "Tier IV" data centers with fully redundant subsystems, compartmentalized security zones with biometric access controls, independently dual-powered cooling equipment, and fault-tolerant electrical power distribution. On top of that, you may get additional protection in the form of geographically dispersed data centers.
While the cloud provides a lot of convenience and the ability to scale quickly, it can also result in data loss, data breaches, or worse. Many customers assume that simply by using a cloud service, backups and disaster recovery will be taken care of without any input on their part. IT managers typically weigh the cost and convenience of owning hardware and centralized infrastructure against the cost of offerings from public cloud providers. But that is where things get difficult. As we all know, convenience and collaboration often lead to unintended security issues.
Bottom line: what most public cloud providers offer is only part of an overall solution. You may get an operating system, but you're still responsible for patches. You may get the "three nines" of 99.9% availability, but that covers the infrastructure, not the hosted services built on it, and it still allows nearly nine hours of downtime a year. You may get a database instance with redundancy, but you will need your own backups and encryption. You may get a firewall, a VPC, and other security measures, but you will still need to monitor tripwires and protect against phishing and ransomware attacks. You may get automatic encryption between data centers, but a disaster recovery plan is still necessary.
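To make that division of responsibility concrete, here is a minimal sketch in Python using boto3; it assumes AWS credentials are configured and uses a hypothetical RDS instance name ("orders-db"). It verifies that the managed database is encrypted at rest and takes an on-demand snapshot, the kind of check the provider will not do for you:

```python
# Minimal sketch (assumptions: AWS credentials are configured and an RDS
# instance named "orders-db" exists; the instance name is hypothetical).
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds")

# The provider gives you the managed instance; verifying encryption at rest
# and keeping your own snapshots is still the customer's responsibility.
instance = rds.describe_db_instances(DBInstanceIdentifier="orders-db")["DBInstances"][0]
if not instance["StorageEncrypted"]:
    raise RuntimeError("orders-db is not encrypted at rest; enable encryption before go-live")

# Take an on-demand snapshot in addition to whatever the vendor automates.
stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
rds.create_db_snapshot(
    DBSnapshotIdentifier=f"orders-db-manual-{stamp}",
    DBInstanceIdentifier="orders-db",
)
```

In a real deployment something like this (or the provider's native backup tooling) would run on a schedule, and the snapshots would be copied to a second region as part of the disaster recovery plan.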
In the end, the public cloud may be a small commodity portion of the offering, but your SaaS vendor needs to leverage it properly and work closely with your IT staff. At WideOrbit, we are constantly working with these same cloud providers to optimize our offering (this often means leveraging two vendors, which has come in handy a few times when AWS had a global outage). Part of the offering is indeed a "managed service" that takes care of optimizing the middle and database tiers, clustering, application performance, and security. Cloud solutions still have to be configured by IT to include regular snapshots, backups, and disaster recovery for all tiers of service. This is potentially problematic for employers, especially if regulatory compliance is part of the picture: unauthorized changes could make the company non-compliant and expose it to fines and other penalties. The natural response of many employers is to tighten the reins, reassert control over employees, and become even stricter with IT policy.
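As a purely illustrative example of catching unauthorized changes before an auditor does, the sketch below compares an AWS security group's open inbound ports against an approved baseline and flags any drift; the group ID and the port baseline are placeholder values, not a real policy:

```python
# Minimal sketch (assumptions: AWS credentials are configured; the security
# group ID and the approved-port baseline below are hypothetical).
import boto3

APPROVED_PORTS = {443}             # ports the change-control process has signed off on
GROUP_ID = "sg-0123456789abcdef0"  # placeholder ID

ec2 = boto3.client("ec2")
group = ec2.describe_security_groups(GroupIds=[GROUP_ID])["SecurityGroups"][0]

# Collect every inbound port currently open on the group.
open_ports = set()
for rule in group["IpPermissions"]:
    if "FromPort" in rule:  # "all traffic" rules omit FromPort/ToPort
        open_ports.update(range(rule["FromPort"], rule["ToPort"] + 1))

unapproved = open_ports - APPROVED_PORTS
if unapproved:
    # In practice this would open a ticket or page someone, not just print.
    print(f"Compliance drift on {GROUP_ID}: unapproved inbound ports {sorted(unapproved)}")
```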
So, whether you embrace all the good and bad of a cloud-computing infrastructure or opt for a blend of both (WideOrbit often integrates secure, low-latency on-premises systems and local hyper-converged infrastructure with AWS and Google Cloud Platform), remember that the goal is always to lower risk and improve the customer experience while reducing IT overhead.
Eric Moe is our Chief Technology Officer. He has been by Eric Mathewson’s side since 2001 and owns all hardware integration and advanced systems development. Prior to joining WideOrbit, “Moe” co-founded OpenTable.com and consulted for brands such as Silicon Graphics, Safeway and Kraft. Visit https://www.wideorbit.com/hostingservices/ to learn more about our managed hosting services.
Coming up next
Part 5 of 5: There is more to success than just good timing and industry-leading software. Vision without execution is simply hallucination.