The Azure Advent Calendar is a fantastic idea from Gregor Suttie and Richard Hooper. Every day, starting on December 1st and going until Christmas, they are posting three new Azure videos on their YouTube channel. They were originally planning to post a single video each day, but the response was so overwhelming they were able to schedule 75 total videos! I guess they shouldn't be surprised; this is the Microsoft MVP community we are talking about. When I asked for a few guests on my fledgling Day Two Cloud podcast, the response was similarly overwhelming.
My humble entry is being published today, December 3rd, and the topic is running the Pod Identity solution on Azure Kubernetes Service.
The code for the demo can be found on my GitHub in this repository. As I mentioned in the demo, there is a bug in the azure-identity library for Python that is preventing my Flask application from working properly. I’m submitting a bug report, and if it gets fixed I’ll post an update here.
I’m excited about all the great content that will be published in the run up to Christmas. You can see the complete calendar at their website. Thanks again to Gregor and Richard for inviting me to be a part of this event.
In a previous post, I performed a storage performance benchmark of Azure Managed Disks and Azure Files for Azure Kubernetes Service. The testing included the now generally available Ultra SSD class of Managed Disk. The process for using Ultra SSD with AKS was fraught with peril and caveats, and required an assist from the AKS product group to get it all working. I thought I would detail how I went about enabling Ultra SSDs with AKS in case someone else is struggling with the same process.
This is a follow-up post to my analysis of using Azure NetApp Files for AKS storage versus the native solutions. After I wrote the post, with some surprising findings about Azure File performance, a number of people from Microsoft reached out to bring up a few key facts. In this post I will review the points that they brought up and include an updated analysis of the native Azure storage solutions for the Azure Kubernetes Service. Hold on to yer butts everyone!
In April of 2018, I was a delegate for Cloud Field Day 3. One of the presenters was NetApp, and they showed off a few different services they had under development in the cloud space. In a previous post I went over the services in some detail, so I won't regurgitate all that now. One of the services that was still in private preview at the time was NetApp Files for Azure. The idea was relatively simple: NetApp would place their hardware in Azure datacenters and configure the hardware to support multi-tenancy and provisioning through the Azure Resource Manager. That solution is now generally available, and I was curious how it would perform in comparison with the other storage options for the Azure Kubernetes Service (AKS). In this post I will detail my testing methodology, the performance results, and some thoughts on which storage makes the most sense for different workload types.
This is a follow-up to my post about the Kubernetes Cluster running on Azure Stack. In that post, I asked myself how to scale a deployed cluster and how to update the cluster. Since that post went live, I've done some experimentation on my own, and I've also learned a few things about the deployment toolset being used for the Kubernetes Cluster Template.