
Trends for 2017

Ned Bellavance
8 min read


The end of 2016 is here, and I think many of us are breathing a sigh of relief.  The year has not been kind to some, and has been described as a “dumpster fire” by others.  On the whole, I actually think that 2016 was a pretty decent year, or at least no worse than most previous years.  But I am a bit biased since my second daughter was born in June, and she is awesome!  That’ll tip the scales regardless of what else happened.  Anyhow, I digress.  The tech industry has seen a lot of change, with new technologies emerging and companies innovating at a rapid pace.  I’d like to use this post to take a look at a few of those trends, and which ones I will be keeping an eye on in the coming year.

Machine Learning and Artificial Intelligence

In 2016 there were a lot of exciting developments in the Machine Learning and Artificial Intelligence spaces.  The big ones were the announcements from the public cloud vendors Azure and AWS, which have released their ML and AI APIs.  Azure now has its Cognitive Services suite, which includes APIs for Speech, Vision, Search, and Knowledge.  AWS has released Lex, Polly, Rekognition, and its Machine Learning API.  Both public cloud vendors have also added VMs to their IaaS offerings that include FPGAs for those who need hardware acceleration for high performance computing.  In the past, AI and ML were only available to the largest companies and universities that could afford the hardware and software required to run them.  Innovation was slow because the barrier to entry was high.  Now that the barrier has tumbled down to pennies on the dollar, anyone can dive into the world of ML and AI without necessarily knowing how to code or how to run the hardware.
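To illustrate just how low that barrier has become, here is a minimal sketch in Python using the boto3 SDK that asks Amazon Rekognition to label an image stored in S3.  The bucket and object names are placeholders, and the call assumes AWS credentials are already configured locally.

```python
# Minimal sketch: image label detection with Amazon Rekognition via boto3.
# The bucket and object key below are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "photos/dog.jpg"}},
    MaxLabels=10,
    MinConfidence=75,
)

# Print each detected label with its confidence score.
for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```

That is the whole thing: no GPUs to rack, no models to train, just an API call and a per-request bill.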

Given the rate at which data is being produced, there is no way for a human being to comb through it and glean relevant insight, at least not in a reasonable amount of time.  AI and ML will be instrumental in consuming and processing the mountains of data and helping mere humans make sense of it all.  I fully expect the market for AI and ML to explode in 2017, and if I were a young programmer I would focus my efforts on learning those public cloud APIs and applying them to massive datasets.

Hybrid Cloud and Hyperconverged Infrastructure

There was a mythical time when all enterprises were going to abandon their datacenters and migrate everything to the public cloud.  There was no doubt in some analysts’ minds that the datacenter was dead, and public cloud was the only path to a bright future.  And then reality intervened.  Public cloud is still very much part of the modern enterprise, and some organizations have even taken a cloud-first approach.  That doesn’t mean there isn’t still a need for on-premises datacenters and applications.  There are numerous reasons for this requirement; here are just a few:

  1. Regulatory compliance
  2. Security concerns
  3. Legacy applications (especially mainframe)
  4. Data sovereignty
  5. Proximity to endpoint

And thus the hybrid cloud was born.  The big trend in 2016 was how to build a hybrid cloud, and while some big players like VMware have taken a crack at it, no one has a solid solution that ticks all the boxes.  The public cloud has two amazing things that just don’t exist on premises: automation and orchestration.  When an EC2 instance is launched in AWS, there is a lot happening in the background to complete that request, but you don’t care about that and you don’t really need to worry about it.  AWS takes care of the whole process for you, automating the steps and orchestrating them together with its control plane.  That sort of automation barely exists on the private cloud side.  Sure, you could run vCloud Director or Microsoft CPS, but both of those are difficult to set up and maintain.  Enterprises want a turnkey solution that gives them the convenience of the public cloud in their on-premises datacenter.  Such a solution will need to run on net-new hardware, so there needs to be a scalable hardware construct that can be expanded without a lot of work from internal IT.  That is where Hyperconverged Infrastructure comes in.  Having networking, compute, and storage packed into a single node allows for that kind of simple expansion and delivery.
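To make that “lot happening in the background” concrete, here is a minimal sketch in Python using the boto3 SDK that launches an EC2 instance.  The single API call kicks off the capacity placement, networking, and storage work that AWS’s control plane orchestrates for you; the AMI ID, key pair, and security group are placeholders.

```python
# Minimal sketch: launching an EC2 instance with boto3.
# AWS automates placement, networking, and storage behind this one call.
# The AMI ID, key pair, and security group below are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",
    SecurityGroupIds=["sg-0123456789abcdef0"],
)

# Block until the instance is up, then report where it landed.
instance = instances[0]
instance.wait_until_running()
print("Launched", instance.id, "in", instance.placement["AvailabilityZone"])
```

The on-premises equivalent of that one call is usually a ticket queue and a week of waiting, and that gap is exactly what the hybrid cloud and HCI vendors are trying to close.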

All of the hyperconverged vendors are trying to implement the automation and orchestration pieces with varying degrees of success.  Companies like Nutanix that are heavily focused on software are likely to lead the charge for a better hybrid cloud experience.  Or it could be a third-party tool like Terraform or Kubernetes that provides the overlay for hybrid cloud.  Finally, there is Azure Stack from Microsoft, which takes the code from Azure and shrinks it down for the datacenter.  This is still an emerging field, and I think we’ll see some interesting competition in 2017.  I also foresee several acquisitions as the HCI market starts to mature and consolidate.

Disaggregation of the Networking Stack

Networking is about 20 years behind its storage and compute brethren.  There was a time when you purchased your compute and storage from a single vendor and they provided every component of the stack.  The mainframe included the hardware, firmware, operating system, and applications, all from one vendor.  You couldn’t run one vendor’s application on another’s hardware.  You were locked in.  Then general-purpose CPUs came along, followed by general-purpose operating systems, which freed the consumer to purchase hardware from one company (or several), the operating system from another, and applications from still other third parties.  Look at what happened after the introduction of Linux and Windows server operating systems: an explosion of innovation from hardware and application vendors.

Storage has followed a similar trajectory, albeit a decade behind compute.  Lots of people still buy their big storage frames from a single vendor and use the bundled software stack, but you don’t need to do that.  Products like Ceph and Storage Spaces Direct remove the need to purchase the frame and the software from a single vendor.  Just like in the world of compute, you can choose the hardware, operating system, and applications.  Networking has yet to embrace the idea of disaggregation, though the rise of SDN has pushed the issue somewhat.  Today you are still mostly buying networking gear with the OS baked in and limited support for third-party applications.  That is changing, though: with vendors like Cumulus Networks, Big Switch Networks, and even Dell’s OS10, the customer can choose the hardware and operating system independently.  Even the lower layers of the networking gear can be manipulated using FPGA chips or the coming products from Barefoot Networks.

In addition to the trend of disaggregation, the mega-scale operators like AWS, Facebook, and Microsoft have all started to design their own hardware and software to meet needs that traditional networking vendors have not.  Facebook created FBOSS to run its switching environment, along with its own switching gear, the first product being the Wedge top-of-rack switch.  In a similar vein, Arista Networks has been quickly replacing the incumbents in the data center space by offering switches with fewer features at a much lower price point.  Their EOS (Extensible Operating System) takes a modular approach to the network operating system, which makes it both more reliable and more extensible for third parties.

Essentially, networking has been fairly stagnant in terms of innovation over the last 20 years.  Get ready for that innovation to rev up in 2017 as disaggregated networking blends with SDN to create an explosion of possibilities.

Everything to the Edge

For too long the focus of IT was concentrating everything in data centers and forcing the client to come and get the information.  That worked fairly well when most clients were a few floors or a decent WAN link away from the data center holding the information, but that’s no longer the case.  Clients now operate in a mobile fashion from laptops and mobile devices, across cellular data networks.  They expect apps to be fast and responsive, and they can’t wait for a signal to travel up to the cell tower, down to the NOC, across the ocean to a data center, and back.  By the time a page has loaded, the user has already moved on.  Data access needs to be almost instantaneous, and content needs to live as close to the edge as possible.  At the same time, technologies like HTTP/2 and QUIC (HTTP over UDP) are designed specifically to speed up page loads and cache content closer to the edge.  AWS has introduced Lambda@Edge to let dynamic web applications run at a CDN endpoint instead of going back to the data center.  Anything that reduces round trips is a benefit to the consumer.
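As a rough sketch of the idea, here is what a Lambda@Edge-style viewer-request handler might look like, written in Python for illustration (the actual supported runtimes and event shape should be checked against AWS’s documentation).  It answers a trivial request right at the CDN edge instead of making the round trip back to the origin.

```python
# Sketch of a CloudFront/Lambda@Edge-style handler that answers a health-check
# request directly at the edge instead of forwarding it to the origin.
# The event structure follows the CloudFront event format; treat it as illustrative.

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    # Answer trivial requests at the edge to avoid a round trip to the origin.
    if request["uri"] == "/ping":
        return {
            "status": "200",
            "statusDescription": "OK",
            "headers": {
                "content-type": [{"key": "Content-Type", "value": "text/plain"}],
            },
            "body": "pong",
        }

    # Otherwise let the request continue on to the CDN cache or the origin.
    return request
```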

It’s not just content consumption, though; data generation is happening at the edge as well.  IoT devices are generating heaps of data that need to be collected and analyzed at the edge, especially for real-time applications.  Products like Azure’s IoT Gateway and AWS Greengrass are intended to deal with the glut of data being created by IoT devices.  I would expect innovation in this area to keep accelerating, working in tandem with the hybrid cloud approach to put things in the public cloud when you can and keep them local when you must.  So-called “edge” cases, if you will.
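The same filter-at-the-edge pattern can be sketched without any vendor-specific SDK.  The Python example below uses the paho-mqtt library to keep routine sensor readings local and forward only anomalies to a cloud MQTT endpoint; the broker hostname, topic, and threshold are placeholders, not actual Greengrass or Azure IoT Gateway configuration.

```python
# Sketch of edge filtering: process sensor readings locally and forward only
# anomalies to the cloud. Broker, topic, and threshold are placeholders.
import json
import random
import time

import paho.mqtt.client as mqtt

CLOUD_BROKER = "mqtt.example.com"   # placeholder cloud endpoint
TOPIC = "factory/line1/anomalies"
THRESHOLD = 80.0                    # e.g. degrees Celsius

client = mqtt.Client()
client.connect(CLOUD_BROKER, 1883)
client.loop_start()

def read_sensor():
    # Stand-in for a real local sensor read.
    return random.uniform(60.0, 100.0)

while True:
    temperature = read_sensor()
    # Keep normal readings local; only send the interesting ones upstream.
    if temperature > THRESHOLD:
        payload = json.dumps({"temperature": temperature, "ts": time.time()})
        client.publish(TOPIC, payload)
    time.sleep(1)
```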

Those are the trends I will be tracking through 2017.  Of course, who knows what the new year will bring?  I can’t wait to review this list in a year and see what I got right and what I missed.