
Cloud Field Day – Kemp

Ned Bellavance
4 min read


I will be a delegate for Cloud Field Day 5 on April 10-12. During the event we will be attending presentations from several vendors, which will be livestreamed. Before I leave on this grand adventure, I wanted to familiarize myself with each of the presenters and consider how their product/solution integrates with cloud computing. I’m also interested to hear from you about what questions you might have for each vendor, or topics you’d like me to bring up. As a delegate, I am meant to represent the larger IT community, so I want to know what you think! In this post I am going to consider Kemp and what a load balancer company can do in the cloud better than the native tooling.

The first time I ever encountered Kemp was at a client site. They were using the virtual LoadMaster to load balance their Exchange 2010 environment. If you were unlucky enough to have lived through Exchange 2010 load balancing, then you know it wasn’t exactly an optimal experience. You needed to enable sticky sessions and add a whole host of different listeners, so the setup could get rather complex. But I digress; the point is that they were using Kemp, and I had to jump in as well to update their configuration to support Exchange 2013. Kemp’s UI was simple. They had templates for Exchange 2013 that worked almost out of the box. And I was told that the virtual appliance was very affordable. In my mind, I put Kemp in the category of low cost, simple to use, and probably not enterprise grade. Maybe that’s unfair, but it was my first impression.
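If sticky sessions are new to you, here is a minimal sketch of the idea (my own illustration, not Kemp's implementation): plain round-robin scatters a client's requests across the pool, while source-IP persistence pins each client to one back end, which is what some Exchange 2010 workloads required. The server names are made up.

```python
from itertools import cycle

# Hypothetical pool of Exchange client-access servers behind the balancer.
servers = ["cas1", "cas2", "cas3"]

# Plain round-robin: each request goes to the next server in turn,
# so a single client's requests scatter across the pool.
_rr = cycle(servers)

def round_robin(client_ip: str) -> str:
    return next(_rr)

# Source-IP persistence ("sticky sessions"): hash the client address so
# the same client always lands on the same server for its whole session.
def sticky(client_ip: str) -> str:
    return servers[hash(client_ip) % len(servers)]

# The same client stays pinned to one server, request after request.
assert sticky("10.0.0.5") == sticky("10.0.0.5")
```

Real load balancers also offer cookie-based persistence and health-aware failover, but the source-IP hash captures the basic contrast.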

The next time I saw Kemp was in the context of Azure Stack. Around the time Azure Stack reached general availability - I can’t remember if it was at GA or just after - Microsoft added the Azure Stack Syndicated Marketplace. Third-party vendors could make their Azure Marketplace solutions available for Azure Stack, and you could download the marketplace items and make them available to tenants of your Azure Stack deployment. One of the first vendors I saw on the list was Kemp! That was surprising. I had assumed I would find Citrix’s NetScaler or F5’s BIG-IP. F5 is there now; Citrix is notably absent. But at launch, Kemp was the only option, which I found impressive.

Kemp clearly has an investment in the cloud, even hybrid cloud. A quick look at their solution areas lists the following:

  • Load balancing (LoadMaster)
  • Multi-cloud (Kemp 360 Central)
  • App optimization (LoadMaster)
  • Security (Kemp 360 Vision)

Basically, they have their flagship product, the LoadMaster, which has a bunch of additional features that help it support multiple clouds, application optimization, and security. This is common across all the major load balancer vendors, most of whom now call themselves application delivery controllers. Load balancer just isn’t fancy enough.

Since they are embracing the cloud heavily, I have some questions about how they are handling the next generation of applications.

  • Supporting cloud native applications - Much of the documentation appears to be aimed at traditional IaaS. Does the LoadMaster handle cloud native constructs like Azure VM Scale Sets or AWS Auto Scaling groups?
  • Acting as an API gateway - Can the LoadMaster also act as an API gateway, providing protection, control, and throttling for external facing APIs?
  • Integration with Kubernetes - Can the LoadMaster automatically front-end a service from Kubernetes?
  • Container based deployment - Can the LoadMaster be deployed in a container? Could it work as a sidecar?
  • Feature richness over native load balancers - Why would someone choose the LoadMaster over the AWS Application Load Balancer or the Azure Standard Load Balancer?
  • Metrics collection - Do the metrics, logging, and telemetry stream to native cloud services like AWS CloudWatch or Azure Monitor?

Those are the questions I have for Kemp right now. I am sure there will be plenty more as they take us on a journey down their cloud roadmap.

Do you have questions for Kemp? Let me know and I’ll be happy to ask them too.