First off, let me quell your anticipation: I got it working! It was not as straightforward as I would have liked, but it works. If you haven’t already read post three, I would recommend doing so. The long and short of it is that the Azure Resource Group Deployment build task in TFS doesn’t understand the Azure Stack environment. It doesn’t know how to talk to it, so any build using that task is going to fail. One of the engineers at Microsoft suggested I use a PowerShell task to deploy instead, which I did. That was not as simple as I would have liked either, but here is what I had to do. Continue reading “CICD Pipeline with Azure Stack – Part 4”
If you’ve started playing with Azure Stack, you might notice that the Windows Server 2012R2 image is a little behind on its Windows patches. Before you do any heavy duty testing, you’re going to want to update the image with the latest patches. This is a multi-step process:
- Deploy an image to update
- Install all available Windows Updates (I’ve got a script for that!)
- Sysprep the machine to be a new image
- Locate the VHD file
- Update the image using the portal or PowerShell
I’m not going to walk you through deploying a VM in Azure Stack, but I will recommend that you use the A2 size. Installing the update should go a little faster on a system with more horsepower.
Using PowerShell to run Windows Update
You could manually update the VM with all the Windows Updates, but why do that when there’s PowerShell? I’m making use of the Windows Update PowerShell module available on the TechNet gallery. All you have to do is copy this script from my Gist to the target VM. Then run it. The script will download and import the module, install the available Windows Updates, and then create a scheduled task to run again on startup. It should keep running until there are no updates left. Fire away and come back in a few hours depending on your internet connection. It took an A2 VM about four hours to patch when I last ran this. Glad I didn’t have to babysit it!
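The Gist itself isn’t embedded here, but the loop it describes can be sketched out. This is only an approximation of the approach, assuming the PSWindowsUpdate module from the TechNet gallery has been extracted to a local folder; the paths and task name are placeholders:

```powershell
# Sketch of the update loop described above -- assumes the PSWindowsUpdate
# module has been downloaded and extracted to C:\Scripts\PSWindowsUpdate.
Import-Module C:\Scripts\PSWindowsUpdate\PSWindowsUpdate.psd1

# Install everything that's available; -AcceptAll suppresses the prompts,
# -AutoReboot lets the VM restart between update batches.
Get-WUInstall -AcceptAll -AutoReboot

# Re-run this same script at startup until no updates remain.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
           -Argument '-ExecutionPolicy Bypass -File C:\Scripts\Update.ps1'
$trigger = New-ScheduledTaskTrigger -AtStartup
Register-ScheduledTask -TaskName 'WindowsUpdateLoop' -Action $action `
    -Trigger $trigger -User 'SYSTEM' -RunLevel Highest -Force
```

Running the task as SYSTEM means the updates keep installing across reboots without anyone logged in.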
Now that your VM is properly patched up, it needs to be prepared for use as an image. Fortunately, all the necessary settings and VM agent are already installed. From an administrative command prompt run sysprep:
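The invocation you want is the standard generalize-and-shutdown form that Azure expects for image capture:

```powershell
# Generalize the OS, present the out-of-box experience on next boot,
# and shut the VM down when finished.
%WINDIR%\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
```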
The VM will shut down when sysprep is complete. Make sure that you then go into the portal and stop it from there, so that it is deallocated properly.
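If you’d rather not click through the portal, stopping the VM through the resource manager cmdlets deallocates it as well. A minimal sketch, with the resource group and VM names as placeholders:

```powershell
# Stopping via AzureRM deallocates the VM, unlike shutting down from
# inside the guest. Names below are placeholders for your environment.
Stop-AzureRmVM -ResourceGroupName 'ImageRG' -Name 'ImageVM' -Force
```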
The VHD location for the VM will vary depending on the storage account you used. From within the portal, go to the VM’s Disks blade.
Select the OS disk and then copy the blob URI by clicking on the neat little clipboard icon. Paste that value into notepad or something similar.
Go into the storage account that holds the VHD. The blob container housing the VHD needs to allow anonymous access. Select the Blobs portion of the storage account, then select the container that houses the VHD (usually vhds), and change the Access policy so that the Access type is Blob.
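The same container ACL change can be made with the Azure storage cmdlets. This is a sketch only; the storage account name, key, and container name are placeholders for your own values:

```powershell
# Equivalent container ACL change via the Azure.Storage cmdlets.
# Account name, key, and container name are placeholders.
$ctx = New-AzureStorageContext -StorageAccountName 'mystorageacct' `
       -StorageAccountKey '<storage-account-key>'

# Permission 'Blob' allows anonymous read access to blobs (but not
# container listing), which is what the image import needs.
Set-AzureStorageContainerAcl -Name 'vhds' -Permission Blob -Context $ctx
```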
Now we’re going to add a new version of the Windows 2012R2 image. In the portal, select Resource Providers.
Select the Compute RP and then click on VM Images on the far right.
Click on the 2012-R2-Datacenter image and copy all of its values to notepad.
Now click on the Add button and use the previous values to fill out the fields. Be sure to increment the Version number; in my case I went from 1.0.0 to 1.1.0.
Now click the Create button and wait. Once the creation process is complete, you will have a fully patched Windows Server 2012R2 image to use for your Azure Stack deployments. My creation time was about an hour, so don’t be surprised if it doesn’t finish immediately.
You might wonder what happens with the existing Gallery Item that was using the version 1.0.0 template. Good question! The templates for the marketplace are unsurprisingly stored in a storage account in the System.Gallery Resource Group. If you dig down into the storage account you will find the blob container with the marketplace item here: dev20151001-microsoft-windowsazure-gallery/MicrosoftWindowsServer.WindowsServer-2012-R2-Datacenter.1.0.0
The template that controls deployment is called CreateUIDefinition.json, and that file doesn’t reference a specific version of the image. So despite the fact that the Gallery Item description claims that it uses the 1.0.0 version, it should use the latest version (1.1.0 in my case). I created a new VM to test, and sure enough, no Windows Updates were available.
PS – You can also add an image using PowerShell. If you’d like to know more about the process, then check here and use the same values you would have in the portal.
Here’s the full PowerShell script if you’re interested:
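The original embedded script isn’t shown here, but a minimal sketch of the PowerShell route looks something like the following. This assumes the ComputeAdmin module from the AzureStack-Tools repository; the cmdlet and parameter names are from that era of Azure Stack and may differ between releases, and the blob URI is a placeholder:

```powershell
# Sketch: register the patched VHD as a new platform image version.
# Assumes the AzureStack-Tools ComputeAdmin module is available; the
# values should match what you would have entered in the portal.
Import-Module AzureStack.ComputeAdmin

Add-VMImage -publisher 'MicrosoftWindowsServer' `
    -offer 'WindowsServer' `
    -sku '2012-R2-Datacenter' `
    -version '1.1.0' `
    -osType Windows `
    -osDiskBlobURI 'https://<account>.blob.<region>.<fqdn>/vhds/<disk>.vhd'
```

Note how the publisher, offer, SKU, and version line up exactly with the fields you copied out of the portal; only the version is incremented.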
Everything is Broken…
There’s more than one occasion where I have uttered the phrase, “Why can’t this just work?” Usually after battling it out with some piece of software that the marketing fluff described as “simple” and “easy-to-use” but turned out to be incredibly complex and completely undocumented. I want my technology to just work, but I also want it to be cutting-edge, infinitely configurable, and fully documented. Those who are familiar with the Project Management Triangle may realize that having all three is impossible. To which I say, what about in n-th dimensions?
Seriously though, I have noticed that with the speed of innovation, especially in the cloud, most things that are released are at least partly broken. And that’s not just beta or preview features; generally available features and functionality are buggy and partly undocumented. Major releases of software have always had some bugs, which is why “It ain’t done till SP1” was a mantra among the Microsoft cognoscenti.
There are two deployment models in Azure, the older being the Service Management model, aka classic mode. The newer model is Azure Resource Manager (ARM). For reasons that extend beyond this post, Microsoft is moving away from the classic mode and adopting ARM wherever possible. Up until a few months ago, the two models had not yet reached feature parity and so classic was still required for some deployments. At this point the two models are at feature parity, and in fact ARM has pulled ahead. That gap is only going to widen as Microsoft continues to pour investment into ARM and leave classic to die on the vine.
If you are looking into migrating your Azure classic virtual machines to ARM, you might be wondering what your options are. There are several potential solutions, Microsoft supported and otherwise. Each has a set of limitations and gotchas, and in this post I intend to review them and provide a guide for using Azure Site Recovery to get the job done. Continue reading “Options for Azure Migrations”