Cloud Field Day 6 Prep – Hammerspace and Solo.io

This is part of a series of posts I’m writing as I prepare to attend Cloud Field Day 6. There are a total of eight presenters planned for CFD6, and I am going to cover two vendors per post. My goal is to have a basic understanding of each vendor’s product portfolio, with a focus on cloud-related products. Some of these vendors I am already familiar with, and others are new to me. In this post we are going to look at Hammerspace and Solo.io.

Hammerspace

Hammerspace is not a company I had heard of before CFD6. Fortunately, they held a pre-event briefing for CFD delegates to give an overview of the company. Holding the overview ahead of time frees up their presentation slot to get down into the weeds, instead of spending it on company background. Time is limited, and while that stuff is interesting, it doesn’t necessarily enhance the presentation.

One thing they didn’t address was the name of the company, which I thought they just grabbed out of thin air. Turns out I was kind of right. There is this concept in cartoons called Hammerspace, and it refers to a “fan-envisioned extradimensional, instantly accessible storage area in fiction” where characters can find whatever items they need for a given situation. In early cartoons, a character might pull a hammer or mallet out of thin air to whack another character, hence the name. How does this line up with the company Hammerspace? Good question!

Hammerspace is a storage company that specializes in providing a data-as-a-service middle tier on top of your existing storage backends. The DaaS component creates global namespaces for your storage that are accessible from anywhere. The naming therefore is quite apropos. In many ways I would compare what Hammerspace is doing to Lucidlink, another company that is presenting at CFD6. Both of these companies are trying to find a way to make the consumption of storage simpler and more ubiquitous across multiple datacenters and clouds. Their approaches differ in several ways.

The Hammerspace solution is composed of a metadata service and a data connection service, respectively called Anvil and DSX. Neither of these components actually hosts the storage; instead they function as an abstraction layer for the various pools of storage you may have. That includes existing storage in your datacenter and cloud-based services. They didn’t get very specific on what your backend needs to look like to hook into their solution, so I don’t know if they are looking for raw pools of disk, NFS shares, SMB shares, or something else. Regardless, the idea is that you don’t have to buy a whole bunch of new storage; you can migrate your existing storage over to this service.

Anvil, the metadata service and global namespace provider, keeps track of all the metadata and can be deployed in multiple datacenters with bi-directional replication. When it comes to storage, metadata is king: it provides front-end systems with a list of files, their status, and their location. This allows Hammerspace to point an application at the closest copy of a file or object, and decouples the replication requirements of data from metadata. It’s an approach that is becoming more common as storage systems are re-imagined.
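To make the idea concrete, here is a toy model of a metadata-driven catalog that points clients at the nearest replica. This is purely illustrative and not Hammerspace’s actual design; the paths, sites, and latency numbers are all made up.

```python
# Toy model of a metadata catalog (illustrative only -- not Hammerspace's
# actual implementation). The catalog tracks where replicas of each file
# live, so a client can be pointed at the nearest copy while the data
# itself replicates on its own schedule.

# Hypothetical file paths mapped to the storage sites holding a replica.
REPLICAS = {
    "/projects/render/frame-001.exr": ["us-east", "eu-west"],
    "/projects/render/frame-002.exr": ["eu-west"],
}

# Hypothetical latency (ms) between a client site and a storage site.
LATENCY_MS = {
    ("us-east", "us-east"): 1, ("us-east", "eu-west"): 80,
    ("eu-west", "eu-west"): 1, ("eu-west", "us-east"): 80,
}

def closest_replica(path: str, client_site: str) -> str:
    """Return the storage site holding the lowest-latency copy of path."""
    sites = REPLICAS[path]
    return min(sites, key=lambda site: LATENCY_MS[(client_site, site)])

print(closest_replica("/projects/render/frame-001.exr", "eu-west"))  # eu-west
```

The point of the sketch is the decoupling: the catalog (metadata) must be consistent everywhere, but the file bodies only need to be wherever the catalog says they are.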

Even though storage systems are evolving to glom onto new concepts and structures, most applications are still reliant on traditional file system protocols like SMB and NFS. The DSX component of the Hammerspace product provides those services to Windows and Linux servers alike, although operating systems that support pNFS can connect directly to the backend storage. There is also a Kubernetes CSI driver for applications running in a K8s cluster. Object-based storage services are on the roadmap, but will not be available until later this year.

Cloud Field Day Presentation

The way the product presentation is written, it appears that they are primarily going after storage presentation to the application layer. Unlike Lucidlink, which has an agent that can run on workstations and mobile devices, Hammerspace still uses a traditional network file system approach for delivering the data. I’d say the main strength here is their universal global namespace and integration with existing storage. During their CFD presentation, I would like to focus on the use cases for the technology and why I would choose to use their solution over something like Azure Files or Amazon S3, both of which have a global namespace concept.


Solo.io

I had seen some signage about Solo.io prior to CFD6 and I knew they had something to do with Kubernetes, but I wasn’t sure exactly what. Their website makes it pretty easy to understand where they sit in the K8s stack: they are all about the network, including being an API gateway and a service mesh manager. They have a bunch of open-source projects that provide different types of functionality, and two paid offerings called Gloo and Service Mesh Hub.

Gloo

Gloo is an open-source project from Solo.io that provides an Envoy-based API gateway for Kubernetes. There’s a free version that contains a lot of bells and whistles, and the paid-for enterprise version adds some more goodness on top. Most of those additional features are around security functions like authentication, WAF, and policy. However, the free version supports Let’s Encrypt and HashiCorp Vault, which is pretty cool. I also noticed that the project supports Kubernetes, Consul, and serverless functions, which really covers the gamut when it comes to networking for microservice architectures and cloud-native applications. It’s really cool that the solution is not Kubernetes-only, since we are not yet in a K8s-only world and probably won’t be any time in the near future.
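At its core, an API gateway like this matches incoming requests to routes and forwards them to an upstream, wherever that upstream happens to live. The sketch below shows that idea in miniature; the route paths and upstream names are hypothetical, and in practice Gloo configures this declaratively rather than in code.

```python
# Minimal sketch of the core job of an Envoy-style API gateway:
# match a request path to a route and pick the upstream to forward to.
# The upstream "schemes" below illustrate that backends can live in
# Kubernetes, Consul, or a serverless platform (names are made up).

ROUTES = [
    ("/api/orders", "kubernetes://orders-svc.default:8080"),
    ("/api/legacy", "consul://legacy-billing"),
    ("/api/thumbnail", "lambda://make-thumbnail"),
]

def route(path: str) -> str:
    """Return the upstream for the longest matching route prefix."""
    matches = [(prefix, up) for prefix, up in ROUTES if path.startswith(prefix)]
    if not matches:
        return "404"
    # Longest-prefix match wins, as in most gateway/router implementations.
    return max(matches, key=lambda m: len(m[0]))[1]

print(route("/api/orders/42"))  # kubernetes://orders-svc.default:8080
```

The interesting part for the Solo.io story is the right-hand column: the gateway doesn’t care whether the backend is a K8s service, a Consul-registered VM, or a function.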

I would like to take the tech for a test drive against a few different managed K8s solutions – AKS, EKS, GKE – and see how it stacks up to other API gateway solutions. I’d also really like to see some authentication mechanisms available on the free tier, or an additional tier that is not enterprise level pricing. HashiCorp just did something similar for Terraform, and I think it is going to generate a lot of sales for HashiCorp from smaller teams that couldn’t afford the price tag on Terraform Enterprise.

Service Mesh Hub

Service Mesh Hub appears to be an aggregation point for multiple service meshes that can be managed through a central console. I’m not sure how many organizations have reached the required scale to need such a management solution, though I am sure they are out there. But for the majority of SMBs, the need for even a single service mesh solution, let alone multiple solutions requiring a management overlay, is incredibly small. I expect that might change, and the degree of the change will depend on whether multiple service mesh solutions are necessary. If I may be permitted an analogy that might resonate with IT Ops folks.

A cluster of ESXi servers without vCenter would be akin to a cluster of linked services without a service mesh. With a small number of ESXi servers, you don’t really NEED vCenter. It’s nice to have, but it also introduces additional administrative overhead and licensing costs. Adding in vCenter Server is a bit like adding a service mesh. Now you can manage all your ESXi servers through a centralized console and perform orchestrated operations that weren’t possible before. A service mesh hub would be a bit like adding vCloud Director to the mix. If you have to manage a fleet of vCenter servers, provide multitenancy, and add additional visibility, then you might need vCloud Director. But you need a LOT of vCenters and a lot of clusters to require this level of management, and it also comes with a hefty price tag and a bunch of administrative overhead. Likewise, I have to imagine a Service Mesh Hub deployment requires a proliferation of services and service meshes, more than most people have, to justify standing the solution up and administering it.

The SMH complies with the Service Mesh Interface specification, so I imagine it can work with any service mesh technology that is also SMI-compliant. For those that are not SMI-compliant today, I suspect that their days are numbered. As more service mesh solutions throw their weight behind SMI it is going to become a requirement and not a nice-to-have.
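One of the primitives SMI standardizes is weighted traffic splitting between backend versions, which is the kind of operation a management layer would drive across meshes. Here is a toy sketch of that primitive; the service names and weights are hypothetical, and a real mesh does this in its data-plane proxies, not in application code.

```python
import random

# Toy illustration of the traffic-splitting primitive that SMI's
# TrafficSplit resource standardizes: send a weighted fraction of
# requests to each backend version. Service names are made up.

def pick_backend(weights, rng):
    """Choose a backend in proportion to its weight."""
    total = sum(weights.values())
    roll = rng.uniform(0, total)
    for backend, weight in weights.items():
        roll -= weight
        if roll <= 0:
            return backend
    return backend  # floating-point edge case: fall back to last backend

rng = random.Random(0)  # seeded for reproducibility
split = {"reviews-v1": 90, "reviews-v2": 10}
picks = [pick_backend(split, rng) for _ in range(1000)]
print(picks.count("reviews-v2"))  # roughly 100 of the 1000 picks
```

Because the split is expressed as a standard resource rather than a mesh-specific one, a hub-style manager can apply the same intent to any SMI-compliant mesh underneath.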

Cloud Field Day Presentation

At their CFD6 presentation I would like Solo.io to concentrate on the value proposition of their enterprise Gloo solution and whether they have plans to introduce a lower-priced paid tier of the product for small teams. I’d also like to know how they integrate with other solutions out in the market like Sysdig. And I would like to know if they have plans to open-source the Service Mesh Hub. It didn’t seem like that was available on the website.


Cloud Field Day 6 is happening September 25-27. There will be live-streamed presentations from both of these vendors. If you’d like to join in with the conversation just use the #CFD6 hashtag on Twitter. All of the delegates will be watching that tag and asking questions on your behalf!
