CICD Pipeline with Azure Stack – Part 2

Published On: January 22, 2017 | Categories: Tech Tutorials

This is part 2 of an ongoing series on building a CICD pipeline in Azure Stack.  You can find part 1 here, part 3 here, and part 4 here.

When I last left things, I had successfully installed TFS on a virtual machine in Azure.  And I wrote the template in such a way that it could be deployed to Azure Stack as well.  After completing that process, I started working through deploying an ARM template through TFS using an automated build process.  It turns out that the server running the build agent needs to have Visual Studio installed in order to deploy resources to Azure.  I have since updated my ARM template and PowerShell script to automate the installation of Visual Studio Community 2015 and the TFS build agent.  I also updated the template to take two new parameters: FileContainerURL and FileContainerSASToken.  The former points to the blob container that holds the necessary installation files.  The latter passes a SAS Token for read and list access to the blob container. 
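For reference, the two new parameters might be declared in the template's parameters section along these lines. This is a sketch, not the exact template: the parameter names match the post, but the types and metadata descriptions are my own (I've used securestring for the token, which is a sensible choice for a secret, though the original may simply use string):

```json
{
  "FileContainerURL": {
    "type": "string",
    "metadata": {
      "description": "URL of the blob container that holds the installation files"
    }
  },
  "FileContainerSASToken": {
    "type": "securestring",
    "metadata": {
      "description": "SAS token granting read and list access to the container"
    }
  }
}
```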

If you want to play along, you will need to create a SAS Token like this:



Select-AzureSubscription -SubscriptionName "YourSubscriptionName"

$ctx = New-AzureStorageContext -StorageAccountName yourstorageaccountname -StorageAccountKey "YourStorageAccountKey"

New-AzureStorageContainerSASToken -Name "ContainerName" -Permission rl -FullUri -Context $ctx -ExpiryTime (Get-Date).AddDays(30)

That will return the full URI, including the SAS token.

Here’s an example output:

There are a number of fields in the URI, which you can read about in more detail here.  The se field defines the expiration date and time; in the case of the token above, it has already expired.  The full URI returned by the command can be used to list the contents of the container.  If you want to grab a specific item from the container, insert the file name into the path and append the token afterwards, like this:
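As an illustration, a request for a single file takes this general form.  The storage account name, dates, and signature below are placeholders, not real values:

```
https://yourstorageaccount.blob.core.windows.net/tfs/AdminDeployment.xml?sv=2015-04-05&sr=c&sig=<signature>&se=2017-02-21T00%3A00%3A00Z&sp=rl
```

The container name (tfs) and file name (AdminDeployment.xml) sit in the path, and everything after the ? is the SAS token.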

I am grabbing the AdminDeployment.xml file from the tfs container using the SAS token.  The SAS token has special characters in it, so when it is passed to the PowerShell script in the ARM template, double quotes must be added like this:

"commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File ', variables('InstallTFSScriptScriptFolder'), '/', variables('InstallTFSScriptScriptFileName'), ' -FileContainerURL ', parameters('FileContainerURL'), ' -FileContainerSASToken \"', parameters('FileContainerSASToken'), '\"')]"

I used \" to escape the double quotes in the concatenated string.
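Once the template functions are evaluated, the resulting command line looks something like this.  The script path, storage account name, and token values are placeholders I've filled in for illustration:

```
powershell -ExecutionPolicy Unrestricted -File CustomScripts/InstallTFS.ps1 -FileContainerURL https://yourstorageaccount.blob.core.windows.net/tfs -FileContainerSASToken "?sv=2015-04-05&sr=c&sig=<signature>&se=2017-02-21T00%3A00%3A00Z&sp=rl"
```

The quotes around the token keep its ampersands from being interpreted by the command processor.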

I am currently using the Visual Studio Community Edition DVD to install VSC 2015, which weighs in at a heavy 7GB.  Pulling that down to install VSC is time-consuming and inefficient.  I'd like to come back and revise the install process to use the Web Platform Installer, which has a command-line component.  That way I can install the lightweight WPI and stream down only the components of VSC that I need.  If this becomes a presentation, or something others are truly interested in, then I will make that a priority.  In the meantime I consider it a nice-to-have, but I'd like to actually move on to deploying IaC rather than tweaking the deployment platform.  Speaking of which…
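For the curious, the Web Platform Installer's command-line component can drive unattended installs along these lines.  The product ID below is a placeholder; the actual ID for a given component would need to be looked up with the list option first:

```
WebpiCmd.exe /List /ListOption:Available
WebpiCmd.exe /Install /Products:<ProductId> /AcceptEula /SuppressReboot
```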

Since the universe laughs at our grand designs, it was inevitable that my Azure Stack lab box would be broken when I needed it for the next phase of the project.  When I logged in, only the MAS-DC01 VM was running.  I soon discovered that the Cluster Shared Volume had gone offline in Failover Cluster Manager due to failed writes to the disk.  Since there is no redundancy in the Azure Stack POC (it's a RAID 0 striped set), one drive having issues is all it takes to bring down the pool.  In a real-life deployment, you would have multiple nodes using Storage Spaces Direct (S2D) to replicate the volumes locally and across nodes.

Of course, my hardware wasn't reporting any issues with the drive, so I thought it might have been a fluke.  I tried reinstalling the Azure Stack POC on the same hardware.  I got as far as the deployment of the MAS-BGPNAT01 box before it failed.  Looking in the Failover Cluster Manager, I could see that the CSV was offline again.  The drives all looked okay individually, but the cache battery was dead on the RAID controller.  I moved the drives to another lab server and tried deploying again.  Sadly, I got the same result.  I can only conclude that one of the drives is bad, but not reporting as such.

In the meantime, I started installing the Azure Stack POC on the three nodes that make up the HPE 250-HC cluster.  Each node has 256GB RAM, 2x 12-core processors, 4x 1.2TB HDDs, and 2x 400GB SSDs.  More than enough horsepower, and I figured at least one of them would install properly.  They all ended up installing, and now I have three Azure Stack boxes to share with the crew at work.


So that’s where I am at now.  Azure Stack is working again.  My template is ready to deploy.  And I have tested the build and deploy process using Azure.  Now it’s time to deploy in Azure Stack and see what happens.  In my next post I will show how to set up a project in Visual Studio using the deployed TFS server and how to create an automated build process to deploy the project to Azure Stack.  At least, that’s what I plan to do.  I can hear the universe chuckling as I type.

