Innovate or Die? Three Ways to Cloud Enablement

Last week I participated in Cloud Field Day 3. If you’re not familiar with Cloud Field Day, then I would highly recommend checking out the post from my fellow delegate Nick Janetakis detailing his experience. It’s a thorough and well-thought-out post describing what Cloud Field Day is, and why you might be interested.

As I watched each vendor present, I kept coming back to the same set of questions, and one of them is the focus of this post: how can companies use the cloud to innovate their current product portfolio? There was a stark difference between the organizations that had embraced the cloud as an enabler of new solutions and those that had instead approached cloud like a check box on a list of things organizations should be doing. I think the innovative companies – or the innovative branch of the company – fell into three broad categories.

  1. Those that were “born in the cloud”
  2. Those that purchased an innovative startup
  3. Those that created a center of excellence to embrace innovation

In the first category there were companies like Druva and Morpheus. These vendors got their start in tandem with cloud technologies, and so they naturally glommed onto cloud native approaches to technology. They might not be in a greenfield situation where they can pick the best-of-breed cloud tech for everything, but they also aren’t saddled with years of accumulated technical debt anchoring them to a legacy mindset and approach. Druva went to great lengths to explain how their SaaS data protection solution was a cloud native offering using AWS technologies to achieve their goals. That presentation actually led to a heated discussion among the delegates over whether you should care about where your SaaS solution is actually running – hint: it depends, but I would say not really. They were trying to show how their solution was architected to be massively scalable, provide unparalleled security, and drive cost optimizations that they pass on to the client.

The second category included a vendor that was actually quite a surprise to me, NetApp. Nothing against storage vendors, but I don’t look to them for any kind of real innovation in the cloud. Too many of the storage behemoths are busy burning marketing cash to explain why hyperconverged and cloud are not the solution, and how their monolithic storage arrays are the one true and right way to storage nirvana. Simultaneously, they are developing virtualized versions of their software and branding everything as Software Defined – a term which at this point carries the same weight as “new and improved” on paper towel packaging. So when I arrived at NetApp, I assumed that the presentation would be a conga line of “new and improved” storage products that were “Software Defined” and “Built for the Cloud” whilst all evidence spoke to the contrary. I was happily incorrect! The presentation was from Eiki Hrafnsson, former CEO of GreenQloud, an Icelandic company purchased by NetApp last year. GreenQloud was focused on developing public cloud platforms, and NetApp purchased them in order to create a whole new solution. That solution leverages the ONTAP file system from NetApp, but that is where the similarities end. Eiki treated us to demos showing how their Cloud Volumes were able to outperform EBS volumes in AWS, and a fully orchestrated container deployment using Kubernetes with Cloud Volumes providing native persistent container storage.

NetApp purchased an innovative company and left them alone to keep doing awesome things. If they can continue with that strategy and let it infect some of the more established portions of the organization, then I see a bright future for NetApp. Whether or not Eiki is still there in 12 months should be a solid indicator of whether NetApp’s old guard has allowed this project to thrive or crushed it with internal political machinations.

The third category is exemplified by another unexpected contender, Veritas. Yes, that Veritas. The presentation got off to a rocky start, with Veritas doing exactly what I had expected NetApp to do. They were talking about how their backup products could ship information up to the cloud, which is okay, I guess. Seriously, your backup product being able to use cloud storage is table stakes at best, along the same lines as being able to use VM snapshots. They even started bragging about their new backup appliances and how many petabytes they can hold. The delegates were getting restless, and finally Tim Crawford stopped the presenter and asked him to start over with some additional guidance. It was a tough-love moment, but it paid off! Out of nowhere, Veritas brought out two new presenters who jumped almost immediately into a demo of a cloud native backup solution that they had developed from scratch. This solution, CloudPoint, was capable of backing up cloud workloads like Azure VMs, AWS RDS databases, and more. It was lightweight and leveraged the built-in backup capabilities of cloud services, while also providing indexing, metadata management, and scheduling. The application ran in a container and had a lightweight UI. The product was clearly unpolished and rough around the edges, but it was a breath of fresh air in what had been a stifling atmosphere of superiority emanating from the Veritas folks. Be humble, stay humble, would be my advice.

What was particularly noteworthy about the Veritas offering is that the team responsible for developing it was pretty new to the organization. It felt as if someone at Veritas had hired the team and told them to go off and make something cool with the cloud. And they did! I’ve heard of centers of excellence or centers of innovation at companies, and while no one said that’s what they were doing at Veritas, that’s exactly how it felt.

So there you go, three paths to cloud innovation. Be it, buy it, or add it. Just don’t let the haters squash your dreams.

My Life as a Tech Impostor

Before I even knew there was a term, I thought I was an impostor. Not in tech mind you, but in life. I was 11 years old. Over the summer I had visited my first Head Shop, ushered in by my older and ostensibly wiser cousin. I didn’t know what a bong was, or what all these dancing bears were about. The whole place stank of some unknown odor, which I would later be able to identify as a mélange of patchouli, sandalwood, and pot. Mostly pot. What I could dimly sense – as a budding, rebellious teenager – was that this place was cool, and I wanted to be cool. My cousin explained that the dancing bear was in fact a totem of The Grateful Dead, and I vaguely recognized the name from MTV. That would, of course, be the Touch of Grey single that served as an introduction of The Dead to many of my peer group.

Continue reading “My Life as a Tech Impostor”

AzureStack on Azure – Part 3

In the last two parts we deployed an Azure Stack Development Kit on an Azure VM and got it registered with Azure. Then we created an Offer and Plan for the default user and started the download of marketplace items for use on Azure Stack. Now that those items have completed their download, we can move on to the process of installing the Resource Providers (RPs) for Microsoft SQL Server (MSSQL), MySQL Server, and the App Service. In this post I will cover the process and scripts you can use to get the MSSQL and MySQL RPs running. The App Service will be a separate post, due to the additional complexity involved.

Continue reading “AzureStack on Azure – Part 3”

AzureStack on Azure – Part 2

Let the registering begin! So you’ve got a working Azure Stack on Azure, which is like some kind of crazy inception thing. But let’s be honest with each other, there’s not a whole lot to do on Azure Stack at this point. You could, like, provision a VNet, maybe set up some sweet Resource Groups, but if you want to do something useful – like create some VMs or deploy services – you’re going to want to register your Azure Stack with Azure and syndicate with the marketplace. So let’s go ahead and do that using the Register-AzureStackLAB.ps1 script.
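For reference, the heart of the registration process is the RegisterWithAzure module that ships in Microsoft’s AzureStack-Tools repo. A minimal sketch of what that looks like – the module path is wherever you cloned the repo, AzS-ERCS01 is the default ASDK privileged endpoint name, and it assumes you’ve already logged in to the Azure subscription you’re registering against:

```powershell
# Assumes AzureStack-Tools is cloned locally and you have already
# signed in to Azure (e.g. with Login-AzureRmAccount)
Import-Module .\AzureStack-Tools\Registration\RegisterWithAzure.psm1

# CloudAdmin credentials for the privileged endpoint inside the ASDK
$cloudAdminCred = Get-Credential "AzureStack\CloudAdmin"

# Register the stamp and enable marketplace syndication
Add-AzsRegistration -PrivilegedEndpointCredential $cloudAdminCred `
    -PrivilegedEndpoint "AzS-ERCS01" `
    -BillingModel Development
```

Treat this as a sketch rather than gospel – parameter names have shifted between releases of the tools, so check the version of RegisterWithAzure.psm1 you actually pull down.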

Continue reading “AzureStack on Azure – Part 2”

Docker Swarm on Azure for Docker Deep Dive

I’ve been working my way through the very excellent Docker Deep Dive book by Nigel Poulton. If you’ve been meaning to get into the whole Docker scene, this is the perfect book to get you started. Nigel doesn’t assume prior container knowledge, and he makes sure that the examples are cross-platform and easily executed on your local laptop/desktop/whatever. That is, until you get to the section on Docker Swarm. Instead of using a single Docker host, a la your local system, you now need six systems – three managers and three worker nodes. It’s entirely possible to spin those up on your local system – provided you have sufficient RAM – but I prefer to use the power of the cloud to get me there. See, I might be working through the exercises on my laptop over lunch and then on my desktop at night. I’d like to be able to access the cluster from whatever system I am working on, without deploying the cluster two or three times.
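Once the six VMs are up, stitching them into a swarm is only a few commands. A minimal sketch – the 10.0.0.4 address is a placeholder for whatever private IP your first manager actually has on the VNet:

```powershell
# On the first manager, initialize the swarm
docker swarm init --advertise-addr 10.0.0.4

# Print the join command (with token) for each role
docker swarm join-token manager
docker swarm join-token worker

# On the other two managers and three workers, paste the matching
# join command, which looks like:
docker swarm join --token <token-from-above> 10.0.0.4:2377
```

Just make sure your NSG rules allow the swarm ports (2377/tcp for cluster management, 7946/tcp+udp for node gossip, 4789/udp for overlay networking) between the nodes.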

Continue reading “Docker Swarm on Azure for Docker Deep Dive”