Hybrid Cloud is On Target for 2019

If you were going to build a brand new application today, your approach would probably be fundamentally different from what it would have been five or ten years ago. And I do mean fundamentally, as in the fundaments of the architecture would be different. In the last ten years we have moved rapidly from traditional three-tier applications to 12-factor apps using microservices, and now things are shifting again to serverless. That's all well and good for any business looking to build a new application, but what about organizations that have traditional applications? I've also heard them called legacy or heritage applications. These applications are deeply ingrained in the business and are often what actually generates the bulk of a company's revenue. The company cannot survive without these applications, and modernizing them would be costly and fraught with risk. Because of that risk, most companies opt either to keep these applications running on-premises or to move them as-is to the public cloud, a.k.a. lift and shift. That's the reality we're living with today, but tomorrow is knocking on the door and promising that hybrid cloud will fix all of this. What's the reality and what's the hype? And what is the most likely journey for most companies?


Is AWS going to destroy your business?

There was a recent article on CNBC about how AWS is creating new solutions that could potentially put existing companies out of business. That prompted a tweet from Matthew Prince arguing that if you run your company on AWS, you are feeding Amazon data about how to beat you. Prince is certainly not the first to make this leap of logic, and I am certain he won't be the last. During the Microsoft Ignite keynote this year, Satya Nadella said something similar about choosing a public cloud partner. Here's the direct quote:

“If you’re dependent on a provider, who through some game theory construct, is providing you a commodity on one end, only to compete with you on another end. Then you could be making another strategic mistake.”

Is any of this true? Does it matter if you run your company on AWS or Azure? Is AWS or Amazon going to destroy your business? I have some thoughts.


AWS Outposts and Azure Stack

This week was AWS re:Invent, and I watched the keynote live on Wednesday.

The three. hour. keynote.

During which Andy Jassy announced new features at a pace that is frankly astounding. Three hours should be too long for a keynote, and if I hadn't been watching from the comfort of my office, it would have been. Not only did the announcements keep unfolding for the full three hours, but some didn't even make it into the keynote. Running a three-hour keynote is tough; creating enough new services and features to overflow a three-hour keynote is amazing. My hat goes off to the engineering teams at AWS. It is truly staggering what you manage to accomplish each year.


What happens after public cloud?

AWS and Azure have won the public cloud race. Some might think that it's too early to call it. But some might also be wrong. The fact is, AWS is the 10,000 lb. gorilla in the market, and Azure is the alternative for those who don't want to use AWS. We can argue about the potential technological superiority of Oracle Cloud's claimed L2 networking, or the machine learning capabilities of Google Cloud. But this is not an argument about technological superiority. We all know that the best product doesn't always win. There are a lot of other factors involved, including getting to market first, removing friction from adoption, and being good enough. My point being: before you start telling me that Cloud Spanner is better than Cosmos DB, just know that while you may be right, it doesn't actually matter that you're right.

Now that AWS and Azure have claimed the top seats in public cloud, I've started thinking about what is next. Is it possible for a competitor to come in and supplant one of these behemoths? Could someone just graduating from MIT be preparing to create a company that will displace AWS as king of the public cloud? Or will it be some other, hitherto unknown, technology that brings a sea change to the tech industry – in the same way that public cloud is slowly replacing traditional datacenter architecture and approaches? I think we might be able to look back at the past to get a sense of what is lurking in our future.

A Little History

I've been doing this tech thing for a little while now. Not nearly as long as some, but long enough to see some massive shifts over the last 20-30 years. The first major shift was the move from mainframes to x86 servers. My first job had me supporting Windows NT boxes and an AS/400. The company was slowly shifting all of its software from the AS/400 to Windows NT 4.0, and the main drivers were obvious. The hardware was cheaper, the operating system was simpler to administer, and it made applications more modular. It was a process of decoupling and disaggregating the hardware from the software. Instead of specialized hardware running specialized operating systems and highly customized applications, there was now a way to purchase commodity hardware from one vendor, an operating system from another vendor, and an off-the-shelf application from a third. This was revolutionary.

Six years later I watched the same process begin with VMware and virtualization. The main driver here was twofold: a massive improvement in hardware utilization and a further disaggregation of hardware from software. Now an operating system was no longer tightly coupled to the underlying hardware. The hardware had been abstracted by a hypervisor, and so the operating system could be shifted around the datacenter, deployed from a template, and protected with specialized backup tools like snapshots. VMware's introduction of VMotion changed the way that datacenters would function forever. This was revolutionary as well.

While VMware was strutting its stuff all around town, AWS released its first public cloud offering, the Simple Queue Service. And… most people didn't really care. Some developers did though, and if you were looking for a public service to provide queuing for your applications, SQS was there to take your money. Things really started moving with the introduction of the Simple Storage Service (S3), which provided almost infinite storage of objects in the cloud and the ability to host a static website. Some of the final pieces of the puzzle were Elastic Compute Cloud (EC2) and the Virtual Private Cloud (VPC). It was now possible to use someone else's datacenter to host your workloads, without drastic modifications to your applications. The management and monitoring of the underlying resources was no longer your problem, and the capital expenditure required to start a new tech company became effectively zero.

And that leads us to where we are today. Public cloud is all the rage. AWS was first to market with viable services that were easy to consume. Microsoft eventually came around to the idea with Azure, and then went ALL IN on being a cloud company. They are number one and number two, and at this point everyone else is an also-ran.

King of the Hill

Being the biggest and baddest in the industry only works so long as the industry stays where you are. Ask IBM about AIX and their big iron. They still have revenue coming in from those business units, but it's a far cry from when they were the King of the Hill. They ceded their crown to the x86 platform and the Windows and Linux operating systems. An abstraction was created between the hardware and the operating system, and while IBM continued to produce hardware, they no longer owned the whole stack. For a long time Microsoft was high on its horse, but eventually VMware came along and carved out a unique position in the market. Instead of trying to fight Microsoft for dominance of the operating system, VMware created a new abstraction layer in the stack and staked its claim on it. There's also KVM, and Xen, and Hyper-V, but let's face it: VMware is the king of the virtualization hill. And then along came the public cloud providers, who did exactly what VMware had done. Instead of fighting VMware on its turf, they created a new battleground and staked their claim.

The next company to displace AWS as King of the Hill will need to find a way to switch up the game once again. They will need to change the value proposition to favor their solution either as a replacement for public cloud – in the same way that x86 replaced big iron – or as a new layer of abstraction riding on top of the public cloud – in the way that VMware created a new layer with virtualization. What could that look like?

Kubernetes?

When containers started gaining prominence with Docker, I thought to myself that we were witnessing VMware all over again. This was another abstraction to further divorce applications from the underlying hardware and operating systems they run on. Still, while containers were great for developers, they were and are a nightmare for operations folks. Enter Kubernetes and a host of other technologies aimed at operationalizing the deployment and management of containers. These technologies create a new layer in the stack, rather than replacing a component of the stack with a new technology. Containers and Kubernetes are available in all the major public clouds and in your datacenter. They aren't going to replace operating systems, virtualization, or physical hardware platforms. It's another layer, and for that reason it will not be what displaces AWS and the public cloud.

The Value

What is the value that public cloud brings to an organization? There are several answers, but I think it mostly comes back to this: running a datacenter, and the services needed to support your software, is usually not a key differentiator or core competency for most companies. That's not to say that they cannot do these things; it simply means that doing them does not create a significant competitive advantage or provide some essential distinction that makes them more attractive than other companies in their market. Let's say you are running a real estate agency, and your office workers and clients rely on a web application to list and review properties. Does it create significant value if you host that application in your own private datacenter rather than in AWS? Chances are the answer is no. Is your IT staff more efficient than a public cloud datacenter? Again, the answer is likely no. Is your datacenter infrastructure more reliable or secure than the public cloud? No. I'm just going to flat out say it: it is not.

So what is the key differentiator for your web application? It's the application itself. It makes more sense to focus your time and money on creating the best possible application for your clients and employees than to spend it running a datacenter. Build what makes you different, buy what doesn't. That's some solid advice from Geoffrey Moore, and it's the main reason that AWS has become such a Goliath in the industry.

How can a new company change the value proposition so that public cloud isn’t the answer? By creating a platform that makes it even easier to buy those things that don’t provide a competitive advantage for a company. And by solving a problem that public clouds are struggling with today. What is that problem? One is data locality.

A Thought Experiment

I think I've spent enough words framing the situation. If a company wants to displace AWS, they are going to need to provide a better value proposition, make adoption seamless, and solve an issue companies are facing today that AWS cannot solve. What could such a solution be? Well, here's one.

Imagine that there's a company called Quantum Entanglement Data (QED) that was just started by some grad students from China. They have been working on a technology to create quantum-entangled particles that stay in synchronization and transmit data instantaneously over large distances. Let's suspend disbelief for a moment (real-world physics says entanglement can't actually be used to transmit information) and play along. The group has found a way not only to get the technology working, but also to produce it reliably at scale for a relatively low cost. The bandwidth of each QED-bit is only 10Mb/s, but they are working on running multiple QED-bits in parallel to reach speeds of up to 10Gb/s within two years. This is a technology that is going to revolutionize everything.

Why would this be such a massive breakthrough? Because of data gravity. Moving data is hard, and moving lots of data is harder. As a result, applications tend to move towards where the data is being stored, as if the data had a gravitational pull. Why is it hard to move data? Mostly because the lanes in which data can travel slow down significantly as they change mediums. The fastest data exchange happens between the processor and the L1 cache, where things can hit over 1TB/s with nanosecond latency. Zoom out to the SAS bus where most SSDs live and now you're looking at something closer to 12Gb/s (roughly 1.5GB/s), with latency measured in microseconds rather than nanoseconds. A network link in a datacenter is probably hitting 100Gb/s (which is roughly 12GB/s) with latency in the sub-millisecond range. Move outside of the datacenter and now you're looking at lines running at 10Gb/s if you're lucky, and more likely in the 100Mb/s range. To make that concrete: moving 10TB over a 100Mb/s WAN link takes more than nine days, while the same transfer over a 100Gb/s datacenter link takes under 15 minutes. Latency now becomes a function of distance – you can't make light travel faster – and how many devices have to process the signal. Over a hundred milliseconds wouldn't be unreasonable.

The technology that QED is talking about would remove the latency – remember, these are quantum-entangled particles. And the more QED-bits that run in parallel, the more bandwidth is available to move data. This would lead to a situation where distance is no longer a major barrier to moving and accessing data. An application could be massively distributed, with no concern about data-access latency to a back-end database. Likewise, a back-end database could replicate itself synchronously between multiple locations, which, if you didn't already know, is a hell of a challenge. If your computer had a QED-bit in its storage sub-system, you could have a corresponding bit in multiple datacenters and have instant access to any of those data sources with no latency.

QED would have invented a technology that completely changes the technology landscape. They could patent their technology, build datacenters, and start providing service. The whole paradigm of public cloud would shift toward the edge. Decentralized applications – already an important movement today – would explode along with the integration of QED-bits into devices. The need for reliable connectivity through cellular lines, cable internet, fiber optics, etc. would disappear. Massive datacenters full of storage and compute would start to look like putting all of one's eggs in a single basket, creating unnecessary risk. The change wouldn't happen overnight, but it would completely change the tech industry over the course of just a few years.

All About the Benjamins

That's just a quick example. I have no special crystal ball that tells me what major tech innovation is coming next. If I did, I would be a rich investor and not a humble blogger. What I do know is that whatever the next sea change is, it will be driven by changing the value proposition for technology. If I were trying to move the needle, I would look for problems that the public cloud is struggling to solve today, and develop an innovative solution that creates more value for public cloud consumers than what they have today.

I don’t think that change is too far away either. The public cloud has risen to prominence in the last two or three years, and in the next five years there will be some type of technology that changes the game. It will probably start slow, like SQS or GSX, and then – like the proverbial snowball – grow exponentially until it displaces public cloud or subsumes it. I kind of hope it will be something like the QED-bit, but I doubt we’ll be that lucky.

Terraform – FotD – Wrap Up

This is the final post in my series documenting the built-in interpolation functions in Terraform. For more information, check out the beginning post. In this post I would like to take a moment to review the project as a whole, mung together some semi-coherent thoughts about the functions, and maybe even make a plan for the future.

Thoughts on the project

In June of this year, I undertook a project to examine each built-in interpolation function within Terraform. My plan was to examine a function each day – weekends excluded – and document how the function works, why it might be useful, and any interesting tidbits I discovered along the way. There were a few drivers behind the project:

  • I was not consistently blogging, and taking on a daily blogging project would force me to do so.
  • There were several functions that I did not understand in Terraform, and this seemed like a good way to learn.
  • I’ve got a couple courses on Pluralsight dealing with Terraform and I thought this would help when I revise them.

I'd like to say that I had thought this whole process through and started my first post with the whole series and format laid out. I'd like to say that, but it wouldn't be the truth. The first few posts had me working through the format of each post and the way in which I wanted to create examples. It also took me a few iterations to figure out how I wanted to structure the GitHub repo, and how to effectively create new examples with minimum effort. After the first few posts I got into a groove.

I also realized that this was going to be a significant undertaking in terms of time. When I started the posts, there were about 63 functions, and now there are 65. Working at five functions a week, the whole thing should have taken about 13 weeks to complete. That's three months of posting, five days a week. Considering I started in June and finished in September, three months was exactly right. It wasn't until the end of the first week that I realized I had made a three-month commitment, and once I did, I started to look for ways to make the process sustainable. Using a basic template for posts and examples helped a lot. I also didn't get too ambitious with the number of examples for each function. I wouldn't say that the process was easy, and I don't know if I would do it again. But I will say that I learned a lot about Terraform and blogging.

Thoughts on Terraform functions

If I could start the whole project over again, I would have grouped the functions by what they do, and not alphabetically. It makes sense to put all the functions dealing with strings together, and the same with lists and maps. Then there are the functions that take no arguments. Basically, if you are looking for a function to assist with string manipulation, it is much more helpful if all of those functions are grouped together. So here's my attempt at doing exactly that, with a small usage sketch after the list:

Terraform Interpolation Function Cheat Sheet

String functions

Number and date functions

List functions

Map functions

File and path functions

Network functions

Security and web functions
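
To give a flavor of what lands in each bucket, here is a minimal sketch that exercises one real built-in from several of the groups above. The variable names and the output are made up for illustration; the functions themselves are all part of Terraform's 0.11-era interpolation syntax.

    # Hypothetical variables used only to feed the function examples below.
    variable "env" {
      default = "Dev"
    }

    variable "zones" {
      default = ["us-east-1a", "us-east-1b"]
    }

    variable "tags" {
      default = {
        owner = "ned"
      }
    }

    output "function_samples" {
      value = {
        string_fn   = "${upper(var.env)}"                   # string function: "DEV"
        number_fn   = "${max(1, 5, 3)}"                     # number function: "5"
        list_fn     = "${element(var.zones, 1)}"            # list function: "us-east-1b"
        map_fn      = "${lookup(var.tags, "owner", "n/a")}" # map function: "ned"
        network_fn  = "${cidrhost("10.0.0.0/24", 5)}"       # network function: "10.0.0.5"
        security_fn = "${base64encode("hello")}"            # security/web function: "aGVsbG8="
      }
    }

Grouped this way, it is much easier to scan for the right tool than it is in a flat alphabetical list.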

Thoughts on Terraform and the future

While writing all of these posts, I did discover some things I would like to see changed and improved. For instance, the documentation page for the built-in functions lists them alphabetically, for the most part. There are a few spots where it isn't entirely alphabetical. That's a minor thing to most people, but as the son of a librarian, it's the type of thing that really sticks in my craw. More importantly, I think the functions should be grouped by loose affiliation, the way I just listed them out. There were also a few functions that I thought were either missing functionality or could be improved in some way. I don't want to be the person who just complains about things; I want to be part of the solution as well. The whole project is open source, including the docs. The website is written in Markdown, and I can easily update the page with the built-in functions and submit a pull request. In fact, I plan to do that. The functions are written in Golang, and that makes things a little trickier since I am not a developer by trade. To that end, I have started learning Golang in order to help improve the functions. My first attempt was adding a function called dateadd that would add the ability to manipulate dates in the same way that timeadd deals with time. After submitting my pull request, I had some good conversations with one of the maintainers about whether that should be a separate function or added to the timeadd function. We agreed on adding it to timeadd, and I am working on learning enough Go to make that a reality.
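
As a point of reference, here is a minimal sketch of the existing timeadd behavior that the discussion centered on. The output name is hypothetical, and the date-oriented behavior the pull request proposed is noted only in a comment, since it has not been merged.

    # timeadd takes an RFC 3339 timestamp and a duration string (e.g. "10m", "24h")
    # and returns a new RFC 3339 timestamp. This hypothetical output computes a
    # time 24 hours from now.
    output "expires_at" {
      value = "${timeadd(timestamp(), "24h")}"
    }

    # The proposal discussed above would extend timeadd to handle date-style
    # arithmetic (days, months, years) rather than adding a separate dateadd function.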

Terraform v0.12 will be dropping in the near future. It is going to bring a lot of enhancements to the HashiCorp Configuration Language and to the way that Terraform works overall. There will be breaking changes, but it will all be in the name of creating a better product. Terraform is becoming more widely adopted by the day, and it's probably best to make the necessary changes now, before this juggernaut picks up any more steam. I'm sure once the new version drops I will need to revise my courses on Pluralsight and maybe review some new functions here. I'm excited for the future and what it might bring!