What happens after public cloud?

Ned Bellavance
11 min read

AWS and Azure have won the public cloud race. Some might think it’s too early to call, but some might also be wrong. The fact is, AWS is the 10,000 lb. gorilla in the market, and Azure is the alternative for those who don’t want to use AWS. We can argue about the potential technological superiority of Oracle Cloud’s claimed L2 networking, or the machine learning capabilities of Google Cloud. But this is not an argument about technological superiority. We all know that the best product doesn’t always win. There are a lot of other factors involved, including getting to market first, removing friction for adoption, and being good enough. My point being: before you start telling me that Cloud Spanner is better than Cosmos DB, just know that while you may be right, it doesn’t actually matter that you’re right.

Now that AWS and Azure have claimed the top seats in public cloud, I’ve started thinking about what comes next. Is it possible for a competitor to come in and supplant one of these behemoths? Could someone just graduating from MIT be preparing to create a company that will displace AWS as king of the public cloud? Or will it be some other, hitherto unknown, technology that brings a sea change to the tech industry - in the same way that public cloud is slowly replacing traditional datacenter architectures and approaches? I think we can look to the past to get a sense of what is lurking in our future.

A Little History

I’ve been doing this tech thing for a little while now. Not nearly as long as some, but long enough to see some massive shifts over the last 20-30 years. The first major shift was the move from mainframes to x86 servers. My first job had me supporting Windows NT boxes and an AS/400. The company was slowly shifting all of its software from the AS/400 to Windows NT 4.0, and the main drivers were obvious. The hardware was cheaper, the operating system was simpler to administer, and it made applications more modular. It was a process of decoupling and disaggregating the hardware from the software. Instead of specialized hardware running specialized operating systems and highly customized applications, there was now a way to purchase commodity hardware from one vendor, an operating system from a second vendor, and an off-the-shelf application from a third. This was revolutionary.

Six years later I watched the same process begin with VMware and virtualization. The driver here was twofold: a massive improvement in hardware utilization and a further disaggregation of hardware from software. Now an operating system was no longer tightly coupled to the underlying hardware. The hardware had been abstracted by a hypervisor, so the operating system could be shifted around the datacenter, deployed from a template, and protected with specialized tools like snapshots. VMware’s introduction of vMotion changed the way datacenters would function forever. This was revolutionary as well.

While VMware was strutting its stuff all around town, AWS released its first public cloud offering, the Simple Queue Service (SQS). And… most people didn’t really care. Some developers did, though, and if you were looking for a public service to provide queuing for your applications, SQS was there to take your money. Things really started moving with the introduction of the Simple Storage Service (S3), which provided almost infinite object storage in the cloud and the ability to host a static website. Some of the final pieces of the puzzle were the Elastic Compute Cloud (EC2) and the Virtual Private Cloud (VPC). It was now possible to use someone else’s datacenter to host your workloads without drastic modifications to your applications. The management and monitoring of the underlying resources was no longer your problem, and the capital expenditure required to start a new tech company became effectively zero.

And that leads us to where we are today. Public cloud is all the rage. AWS was first to market with viable services that were easy to consume. Microsoft eventually came around to the idea with Azure, and then went ALL IN on being a cloud company. They are the clear number one and number two, and at this point everyone else is an also-ran.

King of the Hill

Being the biggest and baddest in the industry only works so long as the industry stays where you are. Ask IBM about AIX and its big iron. They still have revenue coming in from those business units, but it’s a far cry from when they were King of the Hill. They ceded their crown to the x86 platform and the Windows and Linux operating systems. An abstraction was created between the hardware and the operating system, and while IBM continued to produce hardware, it no longer owned the whole stack. For a long time Microsoft was high on its horse, but eventually VMware came along and carved out a unique position in the market. Instead of trying to fight Microsoft for dominance of the operating system, they created a new abstraction layer in the stack and staked their claim on it. There’s also KVM, and Xen, and Hyper-V, but let’s face it: VMware is the king of the virtualization hill. And then along came public cloud, which did exactly what VMware had done. Instead of fighting VMware on its turf, AWS created a new battleground and staked its claim.

The next company to displace AWS as King of the Hill will need to find a way to switch up the game once again. They will need to change the value proposition to favor their solution, either as a replacement for public cloud - in the same way that x86 replaced big iron - or as a new layer of abstraction riding on top of the public cloud - in the way that VMware created a new layer with virtualization. What could that look like?

Kubernetes?

When containers started gaining prominence with Docker, I thought to myself that we were witnessing VMware all over again. Here was another abstraction to further divorce applications from the underlying hardware and operating systems they run on. Still, while containers were great for developers, they were - and are - a nightmare for operations folks. Enter Kubernetes and a host of other technologies aimed at operationalizing the deployment and management of containers. These technologies create a new layer in the stack, rather than replacing a component of the stack with a new technology. Containers and Kubernetes are available in all the major public clouds and in your datacenter. They aren’t going to replace operating systems, virtualization, or physical hardware platforms. It’s another layer, and in that regard it will not be what displaces AWS and the public cloud.

The Value

What is the value that public cloud brings to an organization? There are several, but I think it mostly comes back to the idea that running a datacenter and the services to support your software is usually not a key differentiator or core competency for most companies. That’s not to say they cannot do these things; it simply means that doing them does not create a significant competitive advantage or provide some essential distinction that makes them more attractive than other companies in their market. Let’s say you are running a real estate agency, and your office workers and clients rely on a web application to list and review properties. Does it create significant value if you host that application in your own private datacenter rather than in AWS? Chances are the answer is no. Is your IT staff more efficient than a public cloud datacenter’s? Again, likely no. Is your datacenter infrastructure more reliable or secure than the public cloud? No. I’m just going to flat out say it: it is not.

So what is the key differentiator for your web application? The application itself. It makes more sense to focus your time and money on creating the best possible application for your clients and employees than to spend it running a datacenter. Build what makes you different, buy what doesn’t. That’s some solid advice from Geoffrey Moore, and it’s the main reason that AWS has become such a Goliath in the industry.

How can a new company change the value proposition so that public cloud isn’t the answer? By creating a platform that makes it even easier to buy the things that don’t provide a competitive advantage, and by solving a problem that public clouds are struggling with today. What is that problem? One candidate is data locality.

A Thought Experiment

I think I’ve spent enough words framing the situation. If a company wants to displace AWS, it is going to need to provide a better value proposition, make adoption seamless, and solve an issue companies are facing today that AWS cannot solve. What could such a solution be? Well, here’s one.

Imagine that there’s a company called Quantum Entanglement Data (QED) that was just started by some grad students from China. They have been working on a technology that creates quantum entangled particles that stay in synchronization and transmit data instantaneously over large distances. Let’s suspend disbelief for a moment - real entanglement famously can’t carry information faster than light, but this is a thought experiment. The group has found a way not only to get the technology working, but also to produce it reliably at scale for a relatively low cost. The bandwidth of each QED-bit is only 10Mb/s, but they are working on running multiple QED-bits in parallel - 1,000 of them would get you to 10Gb/s - and expect to reach that speed within two years. This is a technology that is going to revolutionize everything.

Why would this be such a massive breakthrough? Because of data gravity. Moving data is hard, and moving lots of data is harder. As a result, applications tend to move toward where the data is stored, as if the data had a gravitational pull. Why is it hard to move data? Mostly because the lanes in which data travels slow down significantly as the data changes mediums. The fastest exchange happens between the processor and the L1 cache, where transfers can exceed 1TB/s with latency measured in nanoseconds. Zoom out to the SAS bus where many SSDs live and now you’re looking at something closer to 12Gb/s, with latency measured in microseconds. A network link in a datacenter is probably hitting 100Gb/s (roughly 12GB/s), with latency creeping up toward a millisecond. Move outside of the datacenter and now you’re looking at lines running at 10Gb/s if you’re lucky, and more likely in the 100Mb/s range. Latency now becomes a function of distance - you can’t make light travel faster - and of how many devices have to process the signal along the way. Over a hundred milliseconds wouldn’t be unreasonable.
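To make the data gravity point concrete, here’s a quick back-of-the-envelope sketch in Python using the ballpark figures above. The bandwidth and latency numbers are illustrative assumptions, not benchmarks:

```python
# Rough time to move a 1TB dataset over each "lane" described above.
# All figures are ballpark assumptions for illustration, not benchmarks.

LINKS = {
    # name: (bandwidth in bits/sec, one-way latency in seconds)
    "L1 cache":           (8e12,  1e-9),    # ~1TB/s, nanoseconds
    "SAS bus (SSD)":      (12e9,  50e-6),   # 12Gb/s, tens of microseconds
    "Datacenter network": (100e9, 1e-3),    # 100Gb/s, ~1ms
    "WAN link":           (100e6, 100e-3),  # 100Mb/s, ~100ms
}

DATASET_BITS = 1e12 * 8  # 1TB expressed in bits

for name, (bandwidth, latency) in LINKS.items():
    seconds = DATASET_BITS / bandwidth + latency
    print(f"{name:20s} {seconds:>12,.1f} s to move 1TB")
```

The same terabyte that moves in about a second inside the chip takes the better part of a day over a WAN link. That’s the gravitational pull.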

The technology that QED is talking about would remove the latency - remember, these are quantum entangled particles. And the more QED-bits that run in parallel, the more bandwidth is available to move data. This would lead to a situation where distance is no longer a major barrier to moving and accessing data. An application could be massively distributed with no concern about data access latency to a back-end database. Likewise, a back-end database could replicate itself synchronously between multiple locations, which, if you didn’t already know, is a helluva challenge. If your computer had a QED-bit in its storage subsystem, you could have a corresponding bit in multiple datacenters and instant access to any of those data sources with no latency.
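To see why synchronous replication is such a challenge with today’s physics, consider the floor that the speed of light puts under every synchronous commit - the floor a QED-bit would erase. A small sketch, using approximate distances:

```python
# Lower bound on synchronous-commit latency between two sites: the commit
# can't complete until at least one round trip finishes, and light in
# fiber travels at roughly 200,000 km/s (about two-thirds of c).

FIBER_KM_PER_SEC = 200_000

def min_commit_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds; real paths are worse."""
    return 2 * distance_km / FIBER_KM_PER_SEC * 1000

# Approximate straight-line distances for illustration.
for pair, km in [("same metro", 50), ("NY to Chicago", 1_150),
                 ("NY to London", 5_600), ("NY to Sydney", 16_000)]:
    print(f"{pair:14s} >= {min_commit_ms(km):6.1f} ms per synchronous commit")
```

Every write to a database synchronously replicated from New York to Sydney waits at least 160ms before it can acknowledge, no matter how clever the software is. That’s the barrier QED would remove.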

QED would have invented a technology that completely changes the technology landscape. They could patent it, build datacenters, and start providing service. The whole paradigm of public cloud would shift toward the edge. Decentralized applications - already an important movement today - would explode along with the integration of QED-bits into devices. The need for reliable connectivity through cellular lines, cable internet, fiber optics, and the like would disappear. Massive datacenters full of storage and compute would start to look like putting all your eggs in one basket, creating unnecessary risk. The change wouldn’t happen overnight, but it would completely transform the tech industry over the course of just a few years.

All About the Benjamins

That’s just a quick example. I have no special crystal ball that tells me what major tech innovation is coming next. If I did, I would be a rich investor and not a humble blogger. What I do know is that whatever the next sea change is, it will be driven by changing the value proposition for technology. If I were trying to move the needle, I would look for problems that the public cloud is struggling to solve today, and develop an innovative solution that creates more value for public cloud consumers than what they are doing today.

I don’t think that change is too far away either. The public cloud has risen to prominence in the last few years, and in the next five years there will be some type of technology that changes the game. It will probably start slow - like SQS or VMware’s GSX did - and then, like the proverbial snowball, grow exponentially until it displaces the public cloud or subsumes it. I kind of hope it will be something like the QED-bit, but I doubt we’ll be that lucky.