Innovate or Die? Three Ways to Cloud Enablement

Last week I participated in Cloud Field Day 3. If you’re not familiar with Cloud Field Day, then I would highly recommend checking out the post from my fellow delegate Nick Janetakis detailing his experience. It’s a thorough and well thought-out post describing what Cloud Field Day is, and why you might be interested.

As I watched each vendor present, I kept coming back to the same set of questions, and the focus of this post is one of them: how can companies use the cloud to innovate on their current product portfolio? There was a stark difference between the organizations that had embraced the cloud as an enabler of new solutions and those that had instead approached cloud like a check box on a list of things organizations should be doing. I think the innovative companies – or the innovative branch of the company – fell into three categories.

  1. Those that were “born in the cloud”
  2. Those that purchased an innovative startup
  3. Those that created a center of excellence to embrace innovation

In the first category there were companies like Druva and Morpheus. These vendors got their start in tandem with cloud technologies, and so they naturally glommed onto cloud native approaches to technology. They might not be in a greenfield situation where they can pick the best-of-breed cloud tech for everything, but they also aren’t saddled with years of accumulated technical debt anchoring them to a legacy mindset and approach. Druva went to great lengths to explain how their SaaS data protection solution was a cloud native offering using AWS technologies to achieve their goals. That presentation actually led to a heated discussion among the delegates over whether you should care about where your SaaS solution is actually running – hint: it depends, but I would say not really. They were trying to show how their solution was architected to be massively scalable, provide unparalleled security, and drive cost optimizations that they pass on to the client.

The second category included a vendor that was actually quite a surprise to me: NetApp. Nothing against storage vendors, but I don’t look to them for any kind of real innovation in the cloud. Too many of the storage behemoths are busy burning marketing cash to explain why hyperconverged and cloud are not the solution, and how their monolithic storage arrays are the one true and right way to storage nirvana. Simultaneously, they are developing virtualized versions of their software and branding everything as Software Defined – a term which at this point carries the same weight as “new and improved” on paper towel packaging. So when I arrived at NetApp, I assumed that the presentation would be a conga line of “new and improved” storage products that were “Software Defined” and “Built for the Cloud” whilst all evidence spoke to the contrary. I was happily incorrect! The presentation was from Eiki Hrafnsson, who was the CEO of GreenQloud, an Icelandic company purchased by NetApp last year. GreenQloud was focused on developing public cloud platforms, and NetApp purchased them in order to create a whole new solution. It leverages the ONTAP file system from NetApp, but that is where the similarities end. Eiki treated us to demos showing how their Cloud Volumes were able to outperform EBS volumes in AWS, and a fully orchestrated container deployment using Kubernetes with Cloud Volumes providing native persistent container storage.
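To make the persistent-storage half of that demo concrete, here is a minimal sketch of the Kubernetes object involved. The storage class name `cloud-volumes-nfs` is my placeholder, not NetApp’s actual class name, and the manifest is built as a plain Python dict rather than applied to a real cluster.

```python
# Sketch of a PersistentVolumeClaim like the one a Cloud Volumes demo
# would use. "cloud-volumes-nfs" is a hypothetical storage class name.

def persistent_volume_claim(name, storage_class, size_gi):
    """Build a PersistentVolumeClaim manifest as a plain dict."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            # NFS-backed volumes can be mounted by many pods at once
            "accessModes": ["ReadWriteMany"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

claim = persistent_volume_claim("demo-data", "cloud-volumes-nfs", 100)
```

Applying a manifest like this with kubectl is what triggers the dynamic provisioning shown in the demo; pods then reference the claim by name to mount the volume.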

NetApp purchased an innovative company and left them alone to keep doing awesome things. If they can continue with that strategy and let it infect some of the more established portions of the organization, then I see a bright future for NetApp. Whether or not Eiki is still there in 12 months should be a solid indicator of whether NetApp’s old guard has allowed this project to thrive or crushed it with internal political machinations.

The third category is exemplified by another unexpected contender, Veritas. Yes, that Veritas. The presentation got off to a rocky start, with Veritas doing exactly what I had expected NetApp to do. They were talking about how their backup products could ship information up to the cloud, which is okay, I guess. Seriously, your backup product being able to use cloud storage is table stakes at best, along the same lines as being able to use VM snapshots. They even started bragging about their new backup appliances and how many petabytes they can hold. The delegates were getting restless, and finally Tim Crawford stopped the presenter and asked him to start over with some additional guidance. It was a tough love moment, but it paid off! Out of nowhere, Veritas brought out two new presenters who jumped almost immediately into a demo of a cloud native backup solution that they developed from scratch. This solution, CloudPoint, was capable of backing up cloud workloads like Azure VMs, AWS RDS databases, and more. It was lightweight, and leveraged the built-in backup capabilities of cloud services, while also providing indexing, metadata management, and scheduling. The application ran in a container and had a lightweight UI. The product was clearly unpolished and rough around the edges, but it was a breath of fresh air in what had been a stifling atmosphere of superiority emanating from the Veritas folks. Be humble, stay humble, would be my advice.
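The scheduling side of a snapshot-based product like CloudPoint largely boils down to retention logic: the cloud service takes the snapshots, and the tool decides which ones to keep. Here is a minimal sketch of a keep-the-last-N policy; the function and variable names are mine, not Veritas’s.

```python
from datetime import datetime, timedelta

def snapshots_to_delete(snapshot_times, keep_last):
    """Given snapshot timestamps, return those that fall outside a
    keep-the-last-N retention window, oldest first."""
    ordered = sorted(snapshot_times, reverse=True)  # newest first
    return sorted(ordered[keep_last:])

now = datetime(2018, 4, 1)
daily = [now - timedelta(days=d) for d in range(7)]  # a week of dailies
expired = snapshots_to_delete(daily, keep_last=3)
# The four oldest snapshots fall outside the retention window.
```

A real product layers this over the cloud APIs (e.g., deleting the expired EBS or RDS snapshots by ID) and adds the indexing and metadata on top, but the core policy decision is this simple.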

What was particularly noteworthy about the Veritas offering is that the team responsible for developing it was pretty new to the organization. It felt as if someone at Veritas had hired the team and told them to go off and make something cool with the cloud. And they did! I’ve heard of centers of excellence or centers of innovation at companies, and while no one said that’s what they were doing at Veritas, that’s exactly how it felt.

So there you go, three paths to cloud innovation. Be it, buy it, or add it. Just don’t let the haters squash your dreams.

Network Disaggregation Tsunamis

AT&T, a company that I generally unleash scorn upon for their cell phone service, has actually done something fairly interesting. On Jan 29th they announced that they would be releasing their dNOS (disaggregated network operating system) to the Linux Foundation. Now before you roll your eyes and quote Jessie Frazelle – who you should be following on Twitter, and not one of the garbage Kardashians – I am aware that sometimes orgs donate their project to the Linux Foundation and leave it to languish and die in the hot and unforgiving light of the desert sun. But I don’t think dNOS falls into this particular category. AT&T has not only developed dNOS internally, they have a working prototype of it on production hardware, possibly in actual production. I mean, that’s the way the whitepaper reads.

So what is dNOS, and why is AT&T so psyched about it? The concept behind dNOS is the development of an open source operating system for network hardware that can run on commodity gear, so-called whiteboxes – though why’s it gotta be white? What about pink boxes, or taupe? The reason AT&T is so jazzed about this idea is the rather high cost of the switches and routers they use to run their carrier-grade networks. These boxes are vertically integrated using custom hardware, custom software, and proprietary everything. This is not only a large cost to AT&T, but it also slows their innovation cycle, as they are at the mercy of the vendor when asking for new features.

I’ve mentioned network disaggregation before, going so far as to predict that we would see significant progress in 2017. That may have been a little too aggressive, but there were a lot of key components leading up to this. dNOS was announced in November of 2017. The P4 open source programming language also started gaining momentum in 2017. Barefoot Networks released their Tofino programmable ASIC, and Broadcom released their Tomahawk chipset, which is more than capable of handling the speeds and feeds of a carrier. Now in 2018 we have the introduction of the Linux Foundation Networking Fund, the release of an open-source SDK for the Broadcom Tomahawk chipset, and this announcement of dNOS being given to the Linux Foundation. Things may have gotten off to a slow start, but I feel confident that we are reaching critical mass. And I’m not even going to get into the new open-source, reduced-cost optics that Facebook is pushing.

Basically, the world of networking is in for a major shakeup, and the tide of open source and disaggregation is going to spur some incredible innovation. The major cloud players and the carriers will see the first fruits of their labor, but all that innovation is definitely going to trickle down to the enterprise and SMB markets. With the coming tsunami of IoT devices that will be thirsty for bandwidth and advanced networking solutions, this renaissance of networking cannot come soon enough.

VMware on AWS – You’re doing it wrong

This is going to be a controversial post, I am almost certain. Basically, I am going to argue that the whole premise behind running VMware on AWS is fundamentally flawed and not a viable strategy for those who are currently running VMware, or for VMware itself as a company. Get your angry comments ready; here we go!

Continue reading “VMware on AWS – You’re doing it wrong”

Windows Hosts with Kubernetes – The Beginning

Well, it wasn’t even close. As mentioned in my previous post, I am moving to a less hands-on role, and I want to keep close to the technology. The concept of running Windows container hosts in a Kubernetes cluster fascinates me, and it appears that I wasn’t alone. With 82% of the votes in my Twitter poll, it was the clear winner. Now I guess I actually need to start diving in – and by diving in, I mean reading docs.

Continue reading “Windows Hosts with Kubernetes – The Beginning”