
Scaling the Kubernetes Cluster in Azure Stack

Ned Bellavance
3 min read


This is a follow-up to my post about the Kubernetes Cluster running on Azure Stack. In that post, I asked myself how to scale a deployed cluster and how to update it. Since that post went live, I've done some experimentation of my own and learned a few things about the deployment toolset used by the Kubernetes Cluster Template.

If you remember from the previous post, the deployment process creates a virtual machine and clones the azsmaster branch of the Azure Container Service engine (acs-engine) from GitHub. The repo is a fork of the main Azure/acs-engine repo. But as Kenny Lowe helpfully pointed out to me, acs-engine is being deprecated in favor of the aks-engine.
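
If you want to poke around in the same code the deployment pulls down, a clone along these lines should work. I'm assuming the fork lives under the msazurestackworkloads org, the same org that owns the aks-engine fork mentioned below:

```bash
# Clone the Azure Stack fork of acs-engine at the branch the deployment VM uses.
# The org name is an assumption based on the aks-engine fork mentioned later in this post.
git clone --branch azsmaster https://github.com/msazurestackworkloads/acs-engine.git
cd acs-engine
git log --oneline -5   # see how far the branch has drifted from upstream
```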

The deprecation is also clearly called out in the GitHub repo's readme.md.

If you read through the notes on the repo, it becomes readily apparent that the code for the Kubernetes component of acs-engine was moved over as-is, and the rest of the codebase has been left to languish. In other words, the acs-engine that the Kubernetes Cluster on Azure Stack uses comes from a deprecated source repo. One could easily infer that the next version of the K8s Cluster template will use the aks-engine instead. Sure enough, if you look at the forks of the Azure/aks-engine repo, you'll find one made by msazurestackworkloads.

Before I learned about all of this, I did some reading on the commands available in acs-engine and found that there is a scale command. It seemed to me that you could log into the dvm virtual machine created by the K8s Cluster template and run the commands manually. The scale command needs the following information:

  • Subscription ID of the existing cluster
  • Resource Group of the existing cluster
  • Name of the node pool
  • Type of Azure environment (Azure Stack in this case)
  • Authentication method type and credentials
  • Deployment directory from original cluster creation
  • FQDN of the master node(s)
  • New number of worker nodes

All of the necessary information is stored in the files from the original deployment; you just need to pull it out of apimodel.json, azuredeploy.parameters.json, and acsengine-kubernetes-dvm.log.
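
For example, a few of those values can be pulled straight out of the generated apimodel.json with jq. The paths below are my best guess at where the acs-engine output lands on the dvm, so confirm them against your own files:

```bash
# Pull a handful of the values the scale command needs out of apimodel.json.
# Paths are illustrative -- acs-engine writes its output under _output/<dnsPrefix>/
# inside whatever directory it was cloned to on the dvm.
APIMODEL=$(ls ~/acs-engine/_output/*/apimodel.json | head -n 1)

MASTER_DNS_PREFIX=$(jq -r '.properties.masterProfile.dnsPrefix' "$APIMODEL")
NODE_POOL_NAME=$(jq -r '.properties.agentPoolProfiles[0].name' "$APIMODEL")
SP_CLIENT_ID=$(jq -r '.properties.servicePrincipalProfile.clientId' "$APIMODEL")

echo "$MASTER_DNS_PREFIX $NODE_POOL_NAME $SP_CLIENT_ID"
```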

Here’s the full script:
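
Every value below is a placeholder pulled from those files, and the flag names follow the acs-engine scale command as documented at the time, so verify them against acs-engine scale --help on the dvm before running anything:

```bash
#!/usr/bin/env bash
# Scale the worker node pool of an existing Kubernetes cluster on Azure Stack.
# Run from the dvm, in the directory where acs-engine was cloned and built.
# All values are placeholders -- pull the real ones from apimodel.json,
# azuredeploy.parameters.json, and acsengine-kubernetes-dvm.log.

SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="k8s-cluster-rg"
LOCATION="local"                                  # Azure Stack region name
NODE_POOL="linuxpool"                             # agent pool name from apimodel.json
MASTER_FQDN="k8smaster.local.cloudapp.azurestack.external"
DEPLOYMENT_DIR="_output/k8scluster"               # created by the original deployment
CLIENT_ID="00000000-0000-0000-0000-000000000000"  # service principal used at deploy time
CLIENT_SECRET="not-a-real-secret"
NEW_NODE_COUNT=5

./bin/acs-engine scale \
  --azure-env AzureStackCloud \
  --auth-method client_secret \
  --client-id "$CLIENT_ID" \
  --client-secret "$CLIENT_SECRET" \
  --subscription-id "$SUBSCRIPTION_ID" \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --deployment-dir "$DEPLOYMENT_DIR" \
  --node-pool "$NODE_POOL" \
  --master-FQDN "$MASTER_FQDN" \
  --new-node-count "$NEW_NODE_COUNT"
```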

Unfortunately, it turns out that the scale and upgrade commands are not yet supported on Azure Stack. You can see more about the error in the closed issue on the GitHub repo. Since the repo is being deprecated in favor of the aks-engine, the issue was closed with a note that they are working on supporting these commands in the aks-engine.

For the time being, the answer to scaling and updating the Kubernetes Cluster on Azure Stack is that you can’t. At least not with the toolset used to deploy it. You could go through the process of adding more nodes manually. You might even be able to clone one of the existing nodes and run some config scripts to add it as a new node.

My next project is to try out the aks-engine fork on Azure Stack and see if it deploys properly. I'll let everyone know how that goes in a future post.