If you were going to build a brand new application today, your approach would probably be fundamentally different from five or ten years ago. And I do mean fundamentally, as in the fundaments of the architecture would be different. In the last ten years we have moved rapidly from traditional three-tier applications to 12-factor apps using microservices, and now things are shifting again to serverless. That’s all well and good for any business looking to build a new application, but what about organizations that have traditional applications? I’ve also heard them called legacy or heritage applications. These applications are deeply ingrained in the business and are often what is actually generating the bulk of a company’s revenue. The company cannot survive without these applications, and modernizing them will be costly and fraught with risk. Due to the inherent risk, most companies opt to either keep these applications running on-premises or move them as-is to the public cloud, aka lift and shift. That’s the reality we’re living with today, but tomorrow is knocking on the door and promising hybrid cloud to fix all this. What’s the reality and what’s the hype? And what is the most likely journey for most companies?
There was a recent article on CNBC talking about how AWS is creating new solutions that could potentially put existing companies out of business. That prompted a tweet from Matthew Prince about how if you are running your company on AWS, you are feeding them data about how to beat you. Prince is certainly not the first to make this leap of logic, and I am certain he won’t be the last. During the Microsoft Ignite Keynote this year, Satya Nadella said something similar about choosing a public cloud partner. Here’s the direct quote:
“If you’re dependent on a provider, who through some game theory construct, is providing you a commodity on one end, only to compete with you on another end. Then you could be making another strategic mistake.”
Is any of this true? Does it matter if you run your company on AWS or Azure? Is AWS or Amazon going to destroy your business? I have some thoughts.
This week was AWS re:Invent, and I watched the keynote live on Wednesday.
The three. hour. keynote.
During which Andy Jassy announced new features at a pace that is frankly astounding. Three hours should be too long for a keynote, and if I wasn’t watching from the comfort of my office, it would have been. Not only did the announcements keep unfolding for the full three hours, but some didn’t even make it into the keynote. Running a three-hour keynote is tough; creating enough new services and features to overflow a three-hour keynote is amazing. My hat goes off to the engineering teams at AWS. It is truly staggering what you manage to accomplish each year.
AWS and Azure have won the public cloud race. Some might think that it’s too early to call it. But some might also be wrong. The fact is, AWS is the 10,000 lb. gorilla in the market, and Azure is the alternative for those who don’t want to use AWS. We can argue about the potential technological superiority of Oracle Cloud’s claimed L2 networking, or the Machine Learning capabilities of Google Cloud. This is not an argument about technological superiority. We all know that the best product doesn’t always win. There are a lot of other factors involved, including things like getting to market first, removing friction for adoption, and being good enough. My point being, before you start telling me that Cloud Spanner is better than Cosmos DB – just know that while you may be right, it doesn’t actually matter that you’re right.
Now that AWS and Azure have won the top seats in public cloud, I’ve started thinking about what is next. Is it possible for a competitor to come in and supplant one of these behemoths? Could someone just graduating from MIT be preparing to create a company that will displace AWS as king of the public cloud? Or will it be some other, hitherto unknown, technology that will bring a sea change to the tech industry – in the same way that public cloud is slowly replacing traditional datacenter architecture and approaches? I think we might be able to look back at the past to get a sense of what is lurking in our future.
A Little History
I’ve been doing this tech thing for a little while now. Not nearly as long as some, but long enough to see some massive shifts over the last 20-30 years. The first major shift was the move from mainframes to x86 servers. My first job had me supporting Windows NT boxes and an AS/400. The company was slowly shifting all of the software from the AS/400 to Windows NT 4.0, and the main drivers were obvious. The hardware was cheaper, the operating system was simpler to administer, and it made applications more modular. It was a process of decoupling and disaggregating the hardware from the software. Instead of specialized hardware, running specialized operating systems, and highly customized applications – there was now a way to purchase commodity hardware from one vendor, an operating system from another vendor, and an off-the-shelf application from a third. This was revolutionary.
Six years later I watched the same process begin with VMware and virtualization. The main driver here was two-fold: a massive improvement in hardware utilization and a further disaggregation of hardware from software. Now an operating system was no longer tightly coupled to the underlying hardware. The hardware had been abstracted by a hypervisor, and so the operating system could be shifted around in the datacenter, deployed as a template, and backed up with specialized tools like snapshots. VMware’s introduction of VMotion changed the way that datacenters would function forever. This was revolutionary as well.
While VMware was strutting its stuff all around town, AWS released their first public cloud offering, the Simple Queue Service. And… most people didn’t really care. Some developers did though, and if you were looking for a public service to provide queuing for your applications, SQS was there to take your money. Things really started moving with the introduction of the Simple Storage Service (S3), which provided almost infinite storage of items in the cloud and the ability to host a static website. Some of the final pieces of the puzzle were Elastic Compute Cloud (EC2) and the Virtual Private Cloud (VPC). It was now possible to use someone else’s datacenter to host your workloads, without drastic modifications to your application. The management and monitoring of the underlying resources was no longer your problem, and the capital expenditure required to start a new tech company became effectively zero.
And that leads us to where we are today. Public cloud is all the rage. AWS was first to market with viable services that were easy to consume. Microsoft eventually came around to the idea with Azure, and then went ALL IN on being a cloud company. They are the one and two, and at this point everyone else is an also-ran.
King of the Hill
Being the biggest and baddest in the industry only works so long as the industry stays where you are. Ask IBM about AIX and their big iron. They still have revenue coming in from those business units, but it’s a far cry from when they were the King of the Hill. They ceded their crown to the x86 platform and the Windows and Linux operating systems. An abstraction was created between the hardware and the operating system, and while IBM continued to produce hardware, they no longer owned the whole stack. For a long time Microsoft was high on its horse, but eventually VMware came along and was able to create a unique position in the market. Instead of trying to fight Microsoft for dominance of the operating system, they created a new abstraction layer in the stack and staked their indelible claim on it. There’s also KVM, and Xen, and Hyper-V, but let’s face it: VMware is the king of the virtualization hill. And then along came the public cloud providers, and they did exactly what VMware had done. Instead of fighting VMware on their turf, they created a new battleground and staked their claim.
The next company to displace AWS as King of the Hill will need to find a way to switch up the game once again. They will need to find a way to change the value proposition to favor their solution either as a replacement to public cloud – in the same way that x86 replaced big iron – or as a new layer of abstraction to ride the public cloud – in the way that VMware created a new layer with virtualization. What could that look like?
When containers started gaining prominence with Docker, I thought to myself that we were witnessing VMware all over again. This was another abstraction to further divorce applications from the underlying hardware and operating systems they run on. Still, while containers were great for developers, they were and are a nightmare for operations folks. Enter Kubernetes and a host of other technologies aimed at operationalizing the deployment and management of containers. These technologies create a new layer in the stack, rather than replacing a component of the stack with a new technology. Containers and Kubernetes are available in all the major public clouds and in your datacenter. They aren’t going to replace operating systems, virtualization, or physical hardware platforms. It’s another layer, and in that regard they will not be what displaces AWS and public cloud.
What is the value that public cloud brings to an organization? There are several, but I think it mostly comes back to the idea that running a datacenter and services to support your software is usually not a key differentiator or core competency for most companies. That’s not to say that they cannot do these things, it simply means that it does not create significant competitive advantages for the company or provide some essential distinction that makes them more attractive than other companies in their market. Let’s say you are running a real estate agency, and your office workers and clients rely on a web application to list and review properties. Does it create significant value if you host that application in your own private datacenter or in AWS? Chances are the answer is no. Is your IT staff more efficient than a public cloud datacenter? Again, the answer is likely no. Is your datacenter infrastructure more reliable or secure than the public cloud? No. I’m just going to flat out say, it is not.
So what is the key differentiator about your web application? It’s the application itself. It makes more sense to focus your time and money on creating the best possible application for your clients and employees than on running a datacenter. Build what makes you different, buy what doesn’t. That’s some solid advice from Geoffrey Moore, and it’s the main reason that AWS has become such a Goliath in the industry.
How can a new company change the value proposition so that public cloud isn’t the answer? By creating a platform that makes it even easier to buy those things that don’t provide a competitive advantage for a company. And by solving a problem that public clouds are struggling with today. What is that problem? One is data locality.
A Thought Experiment
I think I’ve spent enough words framing the situation. If a company wants to displace AWS, they are going to need to provide a better value proposition, make adoption seamless, and solve an issue companies are facing today that AWS cannot solve. What could such a solution be? Well, here’s one.
Imagine that there’s a company called Quantum Entanglement Data (QED) that was just started by some grad students from China. They have been working on a technology to create quantum entangled particles that can stay in synchronization and transmit data instantaneously over large distances. Let’s suspend disbelief for a moment – the no-communication theorem tells us entanglement can’t actually transmit information, but this is a thought experiment. The group has found a way to not only get the technology working, but also a way to produce it reliably at scale for a relatively low cost. The bandwidth of each QED-bit is only 10Mb/s, but they are working on running multiple QED-bits in parallel to reach bandwidth speeds of up to 10Gb/s in two years. This is a technology that is going to revolutionize everything.
Why would this be such a massive breakthrough? Because of data gravity. Moving data is hard, and moving lots of data is harder. As a result, applications tend to move towards where the data is being stored, as if the data had a gravitational pull. Why is it hard to move data? Mostly because the lanes in which data can travel slow down significantly as they change mediums. The fastest data exchange happens between the processor and the L1 cache, where things can hit over 1TB/s with nanosecond latency. Zoom out to the SAS bus where most SSDs live and now you’re looking at something closer to 12Gb/s, with latency measured in microseconds rather than nanoseconds. A network link in a datacenter is probably hitting 100Gb/s (which is roughly 12GB/s) with latency in the high microseconds to low milliseconds. Move outside of the datacenter and now you’re looking at lines running at 10Gb/s if you’re lucky, and more likely in the 100Mb/s range. Latency now becomes a function of distance – you can’t make light travel faster – and how many devices have to process the signal. Over a hundred milliseconds wouldn’t be unreasonable.
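To make data gravity concrete, here’s a quick back-of-the-envelope sketch in Python using the ballpark link speeds above. The 10TB dataset size is an arbitrary assumption for illustration, not anything from a real workload:

```python
# Rough transfer-time arithmetic for the link speeds discussed above.
# These are ballpark figures from the text, not benchmarks.
DATASET_TB = 10  # hypothetical dataset size

links_gbits_per_s = {
    "L1 cache (~1 TB/s)": 8000,        # ~1 TB/s expressed in Gbit/s
    "SAS-3 bus (12 Gb/s)": 12,
    "datacenter network (100 Gb/s)": 100,
    "WAN link (100 Mb/s)": 0.1,
}

# Decimal terabytes -> gigabits (10 TB = 80,000 Gbit)
dataset_gbits = DATASET_TB * 1000 * 8

for name, gbps in links_gbits_per_s.items():
    seconds = dataset_gbits / gbps
    print(f"{name:32s} {seconds:>12,.0f} s  (~{seconds / 3600:,.1f} h)")
```

The same 10TB that crosses the datacenter network in about 13 minutes takes over nine days on the 100Mb/s WAN link. That asymmetry is why applications gravitate toward their data rather than the other way around.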
The technology that QED is talking about would remove the latency – remember, these are quantum entangled particles. And the more QED-bits that run in parallel, the more bandwidth is available to move data. This would lead to a situation where distance is no longer a major barrier to moving or accessing data. An application could be massively distributed, with no concern about data access latency to a back-end database. Likewise, a back-end database could replicate itself synchronously between multiple locations, which, if you didn’t already know, is a helluva challenge. If your computer had a QED-bit in its storage sub-system, you could have a corresponding bit in multiple datacenters and have instant access to any of those data sources with no latency.
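Why is synchronous replication such a challenge today? Because a write can’t be acknowledged until the remote replica confirms it, so every commit costs at least one round trip. A small sketch puts numbers on that physics floor (the distances and the two-thirds-of-c speed of light in fiber are illustrative rules of thumb, not measurements):

```python
# Lower bound on synchronous replication commit latency: one round
# trip to the remote replica at the speed of light in fiber.
SPEED_OF_LIGHT_KM_S = 300_000
FIBER_FRACTION = 2 / 3  # light in fiber travels at roughly 2/3 c

def min_commit_latency_ms(distance_km: float) -> float:
    """Best-case round trip in milliseconds, ignoring all processing."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FRACTION)
    return 2 * one_way_s * 1000

for km in (10, 500, 4000):  # metro, regional, cross-continent
    print(f"{km:>5} km  ->  >= {min_commit_latency_ms(km):.2f} ms per commit")
```

A cross-continent replica pair is physics-bound to tens of milliseconds per commit before any switch, NIC, or database overhead is added. That floor, which no amount of engineering can remove, is exactly what the hypothetical QED-bit would erase.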
QED would have invented a technology that completely changes the technology landscape. They could patent their technology, build datacenters, and start providing service. The whole paradigm of public cloud would shift to the edge. Decentralized applications – already an important movement today – would explode along with the integration of QED-bits into devices. The need for reliable connectivity through cellular lines, cable internet, fiber optics, etc. would be removed. Massive datacenters full of storage and compute would start to look like putting all of one’s eggs in a single basket, creating unnecessary risk. The change wouldn’t happen overnight, but it would completely change the tech industry over the course of just a few years.
All About the Benjamins
That’s just a quick example. I have no special crystal ball that tells me what major tech innovation is coming next. If I did, I would be a rich investor and not a humble blogger. What I do know is that whatever the next sea change is, it will be driven by changing the value proposition for technology. If I were trying to move the needle, I would look for problems that the public cloud is struggling to solve today, and develop an innovative solution that creates more value for public cloud consumers than what they are doing today.
I don’t think that change is too far away either. The public cloud has risen to prominence in the last two or three years, and in the next five years there will be some type of technology that changes the game. It will probably start slow, like SQS or GSX, and then – like the proverbial snowball – grow exponentially until it displaces public cloud or subsumes it. I kind of hope it will be something like the QED-bit, but I doubt we’ll be that lucky.
Recently at two separate client meetings, I heard two people say that they hate Microsoft. As you may have guessed, both people were happily using some type of Apple device, and were complaining about having to use some form of Microsoft technology. In one case it was Office 365, and the other was Active Directory. And listen everyone, I get it. Sometimes I hate Microsoft too. Except not really.