
Docker Swarm on Azure for Docker Deep Dive

Ned Bellavance
6 min read


I’ve been working my way through the very excellent Docker Deep Dive book by Nigel Poulton. If you’ve been meaning to get into the whole Docker scene, this is the perfect book to get you started. Nigel doesn’t assume prior container knowledge, and he makes sure that the examples are cross-platform and easily executed on your local laptop/desktop/whatever. That is, until you get to the section on Docker Swarm. Instead of using a single Docker host, a la your local system, you now need six systems: three managers and three workers. It’s entirely possible to spin those up on your local system, provided you have sufficient RAM, but I prefer to use the power of the cloud to get me there. See, I might be working through the exercises on my laptop over lunch and then on my desktop at night. I’d like to be able to access the cluster from whatever system I’m working on, without deploying the cluster two or three times.

I decided to use Microsoft Azure and the Azure Cloud Shell to deploy the setup. I have an MSDN subscription with Azure credits, and these tiny VMs aren’t going to overrun my monthly allocation. I chose the Azure Cloud Shell so I could deploy the cluster using the Azure CLI and interact with the cluster from any system that has a browser. If all that sounds pretty good to you, then here’s what you’ll need to do. First, you’ll need an Azure subscription; I’ll leave that as an exercise for the reader. Then you’ll need to log into the portal and launch the Azure Cloud Shell using the Cloud Shell icon in the top navigation bar.

That will open a window in the bottom half of the screen. If you’ve never used the Azure Cloud Shell before, you will be prompted to select Bash or PowerShell and to create a storage account that houses your user home environment for the shell. The shell itself is a container spun up on demand, running either Bash (Linux) or PowerShell (Windows) and attached to the storage account to load your user directory. That storage account provides persistence across Cloud Shell sessions. The Cloud Shell is also already logged into Azure with the credentials you used to log into the portal, and it has Azure PowerShell and the Azure CLI pre-installed. Lastly, it has the Docker client bits installed, including docker-machine, which is pretty critical to the next few items. First, you’re going to want to select a Bash shell for this exercise. Then you need to select a subscription, if you have more than one.

az account list --query "[].name"

That will give you the names of all your subscriptions. If you only have one, it should be selected by default. If you have a few, then pick the one you want to work with by running the following.

az account set -s "[Subscription Name or ID]"

Now we’re ready to create some Docker hosts! First, we have to store the subscription ID in a variable:

sub=$(az account show --query "id" -o tsv)
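
If you want to make sure the variable actually captured your subscription’s GUID, a quick echo will show it:

echo $sub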

Then we create some Docker hosts by using the docker-machine utility:

docker-machine create -d azure \
     --azure-subscription-id $sub \
     --azure-ssh-user azureuser \
     --azure-open-port 80 \
     --azure-size "Standard_A1_v2" \
     --azure-availability-set mgr_aset \
     --azure-location eastus \
     mgr1

In the above command we are asking docker-machine to create an Azure VM using the supplied subscription. The username for SSH will be azureuser, and docker-machine will automatically generate the necessary certificate key-pair. We’re opening port 80, which is not strictly necessary. The VM size will be A1 v2 because it’s cheap and uses Standard LRS storage. We’re placing the VM in an availability set, since that is just good practice, and putting it in the East US region. What about the resource group and networking, you might ask? If you don’t give docker-machine a resource group, it will use docker-machine by default, either creating that resource group or using an existing one. If you don’t supply a vnet or subnet, it will default to docker-machine-vnet and the docker-machine subnet. Again, it will create those networks if they don’t exist, or use the existing ones if they do. The source image is Canonical Ubuntu 16.04 by default. If you’d rather control those defaults yourself, the Azure driver has flags for each of them, as in the sketch below.
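
Here’s a minimal sketch of overriding the resource group and networking defaults. The swarm-rg, swarm-vnet, and swarm-subnet names and the 10.10.0.0/24 prefix are just examples I made up, not anything the book requires:

docker-machine create -d azure \
     --azure-subscription-id $sub \
     --azure-resource-group swarm-rg \
     --azure-vnet swarm-vnet \
     --azure-subnet swarm-subnet \
     --azure-subnet-prefix 10.10.0.0/24 \
     --azure-ssh-user azureuser \
     --azure-size "Standard_A1_v2" \
     --azure-location eastus \
     mgr1

Back to the default command. The provisioning will look something like this: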

Running pre-create checks...
(mgr1) Completed machine pre-create checks.
Creating machine...
(mgr1) Querying existing resource group.  name="docker-machine"
(mgr1) Resource group "docker-machine" already exists.
(mgr1) Configuring availability set.  name="mgr_aset"
(mgr1) Configuring network security group.  name="mgr1-firewall" location="eastus"
(mgr1) Querying if virtual network already exists.  name="docker-machine-vnet" rg="docker-machine" location="eastus"
(mgr1) Virtual network already exists.  rg="docker-machine" location="eastus" name="docker-machine-vnet"
(mgr1) Configuring subnet.  name="docker-machine" vnet="docker-machine-vnet" cidr="192.168.0.0/16"
(mgr1) Creating public IP address.  name="mgr1-ip" static=false
(mgr1) Creating network interface.  name="mgr1-nic"
(mgr1) Using existing storage account.  name="vhds81zdtyaskjdhkad4l74bj" sku=Standard_LRS
(mgr1) Creating virtual machine.  name="mgr1" location="eastus" size="Standard_A1_v2" username="azureuser" osImage="canonical:UbuntuServer:16.04.0-LTS:latest"
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env mgr1

Now rinse and repeat the command for mgr2 and mgr3. For the worker nodes, run the same command again, changing the availability set to wrk_aset and the names to wrk1, wrk2, and wrk3. If you’d rather not type that out six times, a simple loop like the sketch below does the job.
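
This is just a convenience wrapper around the same docker-machine command from above, picking the availability set based on the node name:

for node in mgr1 mgr2 mgr3 wrk1 wrk2 wrk3; do
     # mgr nodes land in mgr_aset, wrk nodes in wrk_aset
     aset=$([[ $node == mgr* ]] && echo mgr_aset || echo wrk_aset)
     docker-machine create -d azure \
          --azure-subscription-id $sub \
          --azure-ssh-user azureuser \
          --azure-open-port 80 \
          --azure-size "Standard_A1_v2" \
          --azure-availability-set $aset \
          --azure-location eastus \
          $node
done

Once you’re finished, you can access the mgr1 node by running: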

docker-machine ssh mgr1

Or you can run:

eval $(docker-machine env mgr1 --shell bash)

And that will configure the local Docker client to use mgr1 for docker commands. From here, you can follow along with Nigel’s Docker Swarm chapter, or mess around with Docker Swarm however it suits you.
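If you want a quick smoke test before diving into the chapter, the swarm bootstrap looks roughly like this. The 192.168.0.4 address is an assumption based on the default docker-machine subnet; substitute whatever private IP mgr1 actually received:

# Initialize the swarm on mgr1, advertising its private IP (assumed here)
docker-machine ssh mgr1 "docker swarm init --advertise-addr 192.168.0.4"

# Grab the worker join token and join wrk1 (repeat for wrk2 and wrk3)
token=$(docker-machine ssh mgr1 "docker swarm join-token -q worker")
docker-machine ssh wrk1 "docker swarm join --token $token 192.168.0.4:2377"

The other two managers join the same way, using the token from docker swarm join-token -q manager instead. If you want to stop the VMs, go ahead and run: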

docker-machine stop wrk1 wrk2 wrk3
docker-machine stop mgr1 mgr2 mgr3
vms_ids=$(az vm list -g docker-machine --query "[].id" -o tsv)
az vm deallocate --ids $vms_ids

docker-machine will stop the VMs, but not deallocate them, so you’re still getting charged for the compute. The last two lines grab all the VMs in the docker-machine resource group and deallocate them.
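
To confirm the deallocation actually took, you can query the power state of each VM. The -d (--show-details) flag adds powerState to the output, and every VM should report "VM deallocated":

az vm list -d -g docker-machine --query "[].{name:name, power:powerState}" -o table

If you’re done with this experiment and want to get rid of everything, you can run the following: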

docker-machine rm wrk1 wrk2 wrk3
docker-machine rm mgr1 mgr2 mgr3
az group delete --name docker-machine --yes

And you’re all cleaned up! Let me know if you find this helpful. And seriously, go buy Nigel’s book on Leanpub. And listen to a podcast I did with him if you’re feeling extra saucy.