
Hybrid cloud is on target for 2019

Ned Bellavance
7 min read


If you were going to build a brand new application today, your approach would probably be fundamentally different than it would have been five or ten years ago. And I do mean fundamentally, as in the fundaments of the architecture would be different. In the last ten years we have moved rapidly from traditional three-tier applications to 12-factor apps using microservices, and now things are shifting again to serverless. That’s all well and good for any business looking to build a new application, but what about organizations that have traditional applications? I’ve also heard them called legacy or heritage applications. These applications are deeply ingrained in the business and are often what actually generates the bulk of a company’s revenue. The company cannot survive without these applications, and modernizing them would be costly and fraught with risk. Due to that inherent risk, most companies opt to either keep these applications running on-premises or move them as-is to the public cloud, aka lift and shift. That’s the reality we’re living with today, but tomorrow is knocking on the door and promising hybrid cloud to fix all this. What’s the reality and what’s the hype? And what is the most likely journey for most companies?

First we have to define some terms. The original meaning of hybrid cloud referred to running an application with some components running in the public cloud, and some running in a private cloud. What does that look like? Well, let’s say that you have a company called OnTarget that makes paper targets for archery and firing ranges. You have a B2C website that runs in your on-premises datacenter with a web front-end, an application middle tier, and a database backend. Your busiest time is the ramp-up to summer camp in the late spring, and your website gets slammed with requests. Rather than buy the necessary capacity to handle your peak load, you’ve decided to try to burst to AWS with EC2 instances running your website. That was the original vision behind hybrid cloud: stretching the application across public and private cloud. However, you run some load testing on your website with the EC2 instances in place, and discover that the latency between the EC2 instances in AWS and the application and DB tiers on-prem is just too high. Looks like bursting to the cloud isn’t going to work. And you’re not alone in this discovery. The idea of stretching an application may look attractive on a balance sheet, but in practice it usually doesn’t work well.
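To make the bursting idea a little more concrete, here is a minimal sketch of what flipping the cloud-side web tier into peak mode might look like with boto3. The Auto Scaling group name and the capacity numbers are hypothetical, and it assumes the EC2 instances are already wired into the on-prem load balancer.

```python
# Sketch: "burst" the web tier into AWS for peak season by scaling an
# existing Auto Scaling group up, then back down after the rush.
# The group name ("ontarget-web") and capacities are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

def burst_web_tier(peak: bool) -> None:
    """Scale the cloud-side web tier for peak season, or back to baseline."""
    desired = 20 if peak else 2
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="ontarget-web",
        DesiredCapacity=desired,
        HonorCooldown=False,
    )

if __name__ == "__main__":
    burst_web_tier(peak=True)
```

The scaling part is easy; as the load testing showed, it’s the chatter back to the on-prem application and DB tiers that sinks the idea.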

The newer version of hybrid cloud doesn’t require this stretching of an application. Instead, the idea is to have a consistent deployment methodology and set of services being utilized in both the public and private cloud. Running with the OnTarget example, let’s say you’ve decided to move your Production systems up to AWS to take advantage of the scalability and elasticity of the public cloud. You can run your website at a nominal capacity most of the year, and then ramp it up for peak season without having to buy enough hardware to cover the maximum demand. The other environments - development, QA, staging - will continue to run on-premises using your existing hardware investment. Your lead software developer wants to start using some of the cloud-native services in AWS, such as Elastic Beanstalk and RDS, but you’re hesitant to do so. Those services don’t exist in your on-prem datacenter, and you want a consistent deployment model across all your application environments. After all, QA is validating the application based on what is running on-prem. If you have to make code changes to deploy the Production version of the application in AWS, then you are invalidating the work that QA has done.
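One way to picture the “no code changes between environments” goal: keep every environment difference in configuration, so the artifact that QA validated on-prem is exactly the artifact that ships to AWS. A minimal sketch, with hypothetical environment variable names:

```python
# Sketch: environment differences live in configuration, not code, so the
# exact build QA validated on-prem can be promoted to Production in AWS.
# The ONTARGET_* variable names are hypothetical.
import os

def database_url() -> str:
    """Resolve the database endpoint from whichever environment the app runs in."""
    host = os.environ["ONTARGET_DB_HOST"]       # on-prem DB host in QA, RDS endpoint in Production
    name = os.environ.get("ONTARGET_DB_NAME", "ontarget")
    return f"postgresql://{host}/{name}"

if __name__ == "__main__":
    print(database_url())
```

Swapping a connection string is easy to externalize like this; swapping the deployment model itself, say from on-prem VMs to Elastic Beanstalk, is exactly the kind of change that breaks the consistency you’re after.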

Now you’re faced with a choice. You wanted to keep using the hardware you had on-premises, at least until it has depreciated off your balance sheet. But you also want your application to evolve and improve, by using the native services in AWS. What to do? In the recent past, most companies would end up moving all the application environments to the cloud, and begin the work of transforming their application. There is another choice on the horizon, and it takes the form of the new hybrid cloud. I think there are three options emerging:

  1. Azure Stack and AWS Outposts offer an on-premises version of the public cloud in your datacenter
  2. Managed Kubernetes offerings are creating a container-centric hybrid deployment model
  3. VMware on AWS creates the ability to change nothing and still have hybrid applications

The first option doesn’t allow you to keep using your existing hardware. Both Azure Stack and AWS Outposts require pre-configured hardware: from an OEM in the case of Azure Stack, or directly from AWS in the case of Outposts. Either way, you cannot just deploy one of these solutions on your existing hardware. If that is your goal, then option one is not really an option. What both Azure Stack and Outposts bring is the ability to start re-architecting your application to use cloud-native constructs in Azure or AWS, and still be able to run that application on local hardware as needed. You can also use the same deployment and management toolchains for public or private cloud. This option paves the way for the use of serverless in your application. Azure Stack already supports Azure Functions today, and I am 100% certain that Outposts will support Lambda very soon after launch. If you take the broader definition of serverless - any service that you can allocate on demand without managing the underlying servers - then other “serverless” offerings, like DynamoDB and Cosmos DB, won’t be far behind either.
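As an illustration, here is a minimal Azure Functions HTTP trigger in Python. The point is that the same function code could target public Azure or an Azure Stack that exposes Functions; treat the runtime choice and the storefront logic as hypothetical, this is a sketch rather than anything OnTarget would actually deploy.

```python
# Minimal Azure Functions HTTP trigger (Python programming model).
# The same function code targets public Azure or Azure Stack where
# Functions is available; the "sku" query parameter is hypothetical.
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    """Return a price quote for a target SKU passed on the query string."""
    sku = req.params.get("sku", "standard-archery-target")
    return func.HttpResponse(f"Price quote for {sku}: $4.99", status_code=200)
```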

The second option would allow you to keep using your existing hardware, assuming it meets the requirements of the managed Kubernetes solution. You could manage Kubernetes yourself, but that is probably a bad idea. I’m not going to even try to get into Geoffrey Moore’s guidance around Core vs. Context, but suffice it to say that OnTarget’s key market differentiator is not managing its own Kubernetes. The great thing about using managed K8s is that you can re-architect your application in a container-centric way. Once you have packaged up your application with something like Helm, you can deploy it to any Kubernetes solution, in the public or private cloud. In the case of Red Hat’s OpenShift, you can deploy the management components in the public clouds as well (OpenShift on Azure), and now you’ve got the same management and deployment toolkit across all your environments. That certainly hits the mark with hybrid cloud. This option won’t let you use native cloud services from AWS and Azure in your on-premises environment, though. There are several projects that use Kubernetes as a base layer for serverless - functions as a service, really - so this may become a viable option if you are trying to add FaaS to your application.
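As a rough sketch of the “package once, deploy anywhere” idea, here is the same Deployment pushed to an on-prem cluster and a managed cloud cluster with the Kubernetes Python client. The context names and image are hypothetical, and in practice Helm or a CI pipeline would drive this rather than a one-off script.

```python
# Sketch: apply the same Deployment spec to an on-prem cluster and a managed
# cloud cluster. The kubeconfig context names ("onprem", "eks-prod") and the
# container image are hypothetical.
from kubernetes import client, config

def web_deployment() -> client.V1Deployment:
    """Build a simple three-replica Deployment for the OnTarget web tier."""
    container = client.V1Container(name="ontarget-web", image="ontarget/web:1.4.2")
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "ontarget-web"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "ontarget-web"}),
        template=template,
    )
    return client.V1Deployment(metadata=client.V1ObjectMeta(name="ontarget-web"), spec=spec)

# Deploy the identical spec to both clusters.
for context in ("onprem", "eks-prod"):
    api = client.AppsV1Api(api_client=config.new_client_from_config(context=context))
    api.create_namespaced_deployment(namespace="default", body=web_deployment())
```

The deployment artifact stays the same everywhere; only the cluster you point at changes, which is the whole appeal of this option.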

The third option allows you to keep the exact same deployment model you have today, assuming you are a VMware shop. You can replicate and migrate existing virtual machines to VMware Cloud on AWS (VMC). As your older hardware ages out, you can move workloads to VMC. AWS Outposts is also offering a flavor of VMC on the Outposts hardware, so you could replace your on-prem hardware with Outposts as it ages out. The bad news is that you won’t be taking advantage of any cloud-native services or serverless technologies. VMware does have plans for a managed version of Kubernetes in the form of PKS and PKS Cloud, which raises the question: why wouldn’t you just go with managed Kubernetes and skip VMC altogether?

Which option would you choose at OnTarget? That all depends on your business drivers and IT strategy. Let’s say that OnTarget is looking to move to a container-based deployment model for its application, and you’d like to keep using your on-prem hardware for the lower environments. Then managed Kubernetes is probably the option that makes the most sense. Since you would also like a consistent deployment model, you might consider deploying Red Hat OpenShift both on-prem and in AWS.

That is just one possible scenario, and I think all three of the above options will continue to be viable for the next five years. Beyond that, I suspect we’re going to see Azure Stack and AWS Outposts slowly begin to edge out most other on-premises offerings. Organizations that performed an early lift and shift to the cloud will choose to repatriate some of their applications on-prem using one of these hybrid offerings. I also expect some kind of partnership to happen between Nutanix and Google Cloud. Nutanix is blowing up, and right now GCP doesn’t have a dance partner at the hybrid mixer.

The second generation of hybrid cloud is well underway, and I think 2019 is going to bring an acceleration of this trend.