Docker Swarm on Azure for Docker Deep Dive

I’ve been working my way through the very excellent Docker Deep Dive book by Nigel Poulton. If you’ve been meaning to get into the whole Docker scene, this is the perfect book to get you started. Nigel doesn’t assume prior container knowledge, and he makes sure that the examples are cross-platform and easily executed on your local laptop/desktop/whatever. That is, until you get to the section on Docker Swarm. Instead of using a single Docker host, a la your local system, you now need six systems: three managers and three workers. It’s entirely possible to spin those up on your local system, provided you have sufficient RAM, but I prefer to use the power of the cloud to get me there. See, I might be working through the exercises on my laptop over lunch and then on my desktop at night. I’d like to be able to access the cluster from whatever system I’m working on, without deploying the cluster two or three times. Once the six VMs exist, the swarm bootstrap itself is only a handful of commands, as sketched below.
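A minimal sketch of that bootstrap, assuming the first manager’s address is 10.0.0.4 (the IP and the token placeholders are mine for illustration, not from the book):

```
# On the first manager: initialize the swarm.
docker swarm init --advertise-addr 10.0.0.4

# Still on the first manager: print the join tokens for the other nodes.
docker swarm join-token manager
docker swarm join-token worker

# On the other two managers, paste in the manager token printed above.
docker swarm join --token <manager-token> 10.0.0.4:2377

# On the three workers, paste in the worker token.
docker swarm join --token <worker-token> 10.0.0.4:2377

# Back on any manager: confirm all six nodes have joined.
docker node ls
```

The tokens and the advertise address are the only per-cluster pieces; everything else is identical whether the nodes live on your laptop or in Azure.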


Tortoise and Hare, “Software will devour us all!”

Is Technology Moving Too Fast? Yes.  Will It Slow Down?  No.

As the Spectre and Meltdown drama continues to play itself out in the increasingly disdainful and uninterested public eye, a few things have come to my notice.  The first is a post from The Math Citadel taking Intel and the IT field at large to task for a fundamental processor design flaw that is 20 years old.  They posit that the ideas of “fail fast and fail often, the perfect is the enemy of the good, and just get something delivered” should be summarily rejected.

They rightly point out the following:

“Rushed thinking and a desperation to be seen as ‘first done’ with the most hype has led to complexity born of brute force solutions, with patches to fix holes discovered after release. When those patches inevitably break something else, more patches are applied to fix the first patches.”

They claim that in our rush to ship a minimum viable product and adopt Agile software development practices, we have set ourselves on a path of continual and ever-increasing failure.  Being mathematicians, their prescription is of course that IT needs to think more like them.  Because as we all now know, mathematicians are rigorous and correct all of the time.  Yes, I linked a Cracked article; no, I don’t see why that should matter.

The writer of the post clearly has an agenda, and a lens of perception coloring how they view the world.  If I asked a baker to tell me how to fix what is wrong with IT, they might make allusions to careful measurement and a recipe that is followed rigorously.  It’s difficult to escape the trappings of your own occupation.  But that doesn’t make the mathematicians wrong, just insufferably smug.

The other article is from Danny Crichton at TechCrunch, and he points out a similar trend with an alarming number of examples of technology going wrong.  I don’t really need to consult his list, since in the last week alone my laptop has green-screened on reboot, my phone had to be factory reset for “reasons”, Hulu live streaming was down for 3 hours during Friday primetime, and Skype’s login has been dodgier than usual for the last five days.  Everything is broken, or in various states of broken, ever since Google started their forever-beta program. *Amen*

Crichton cites some interesting research about the need to maintain existing software rather than just speeding ahead to the next thing.  He also points out that the massively complex systems we have put together are beyond our own ken, and small changes or failures can have a massive and totally unpredictable impact.  Apparently Charles Perrow, a professor at Yale, calls these “normal accidents.”  I prefer the more upbeat, Bob Ross-inspired term: happy accidents.

What are we to make of the steaming pile of technology that has been pooped out over the last ten years?  Do we hold our nose and smile?  Crichton brings up the fact that it is possible to make a highly available and resilient complex system, such as modern US aviation.  That makes sense when people’s lives are literally on the line, and not so much when the stakes are whether or not I can view pictures of tacocat whenever I want on my phone.  What is the motivation for a company to create a product that is incredibly stable, at triple the cost, for none of the return?  Consumers have become completely numb to the constant broken state of their technology.  The only time it becomes patently obvious is when you try to teach a loved one how to use some new piece of gadgetry, only to discover how broken and non-intuitive the product is.  Apple’s no bastion of light these days, but I will hand it to the Apple of five years ago.  They made a rock-solid product, and ruled the ecosystem with an iron fist, much to the appreciation of their stockholders.  That level of quality is, erm, somewhat lacking these days.

I don’t know of a simple way out of it, but I think it will probably end up being AI.  Our systems have become so ludicrously complex and unfathomably labyrinthine that mere mortals stand a snowball’s chance in hell of taming the beast.  I suspect that the only way our foundations get shored up is if a neutral third party with infinite patience, computational power, and memory is able to start fixing things for us.  The proverbial mother cleaning up the children’s mess so that we may make another mess tomorrow.

Or we’ll just accidentally create our own oblivion. I feel like the odds are pretty even.

AzureStack on Azure – Part 1

With the introduction of Dv3 and Ev3 VMs in Microsoft Azure, it became possible to run nested virtualization on Azure. Since I’ve got Azure Stack on the brain these days, my immediate thought was, “I wonder if I can run Azure Stack on Azure?” (cue Inception music). Not only was the answer yes, but others had already started the process for me. Following in the footsteps of Daniel Neumann and Florent Appointaire, I was able to get the process running. One of the engineers at Microsoft took some of that work, added their special sauce, and rolled out a GitHub repo that helps you through the process. I have forked that repo and started adding some automation myself. The first step is simply standing up a host VM that supports nested virtualization, as sketched below.
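A rough sketch of that first step with the Azure CLI; the resource group, VM name, and size are placeholders I’ve made up (check the repo for the size the ASDK actually needs):

```
# Create a resource group to hold the lab. All names are illustrative.
az group create --name azurestack-lab --location eastus

# Create a v3-series VM; the Dv3/Ev3 sizes expose the Intel
# virtualization extensions required for nested Hyper-V.
az vm create \
  --resource-group azurestack-lab \
  --name azurestack-host \
  --image Win2016Datacenter \
  --size Standard_E32s_v3 \
  --admin-username labadmin
```

From there you would enable the Hyper-V role inside the guest and hand things off to the scripts in the repo.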


Network Disaggregation Tsunamis

AT&T, a company that I generally unleash scorn upon for their cell phone service, has actually done something fairly interesting.  On Jan 29th they announced that they would be releasing their dNOS (disaggregated network operating system) to the Linux Foundation.  Now before you roll your eyes and quote Jessie Frazelle, who you should be following on Twitter instead of one of the garbage Kardashians, I am aware that sometimes orgs donate their project to the Linux Foundation and leave it to languish and die in the hot and unforgiving light of the desert sun.  But I don’t think dNOS falls under this particular category.  AT&T has not only developed dNOS internally, they have a working prototype running on production hardware, possibly in actual production.  At least, that’s the way the whitepaper reads.

So what is dNOS, and why is AT&T so psyched about it?  The concept behind dNOS is an open-source operating system for network hardware that can run on commodity gear, so-called white boxes (though why’s it gotta be white? What about pink boxes, or taupe?).  The reason AT&T is so jazzed about this idea is the rather high cost of the switches and routers they use to run their carrier-grade networks.  These boxes are vertically integrated using custom hardware, custom software, and proprietary everything.  This is not only a large cost to AT&T, but it also slows their innovation cycle, as they are at the mercy of the vendor when asking for new features.

I’ve mentioned network disaggregation before, going so far as to predict that we would see significant progress in 2017.  That may have been a little too aggressive, but a lot of the key components did fall into place.  dNOS was announced in November of 2017.  The P4 open-source programming language also started gaining momentum in 2017.  Barefoot Networks released their Tofino programmable ASIC, and Broadcom released their Tomahawk processor, which is more than capable of handling the speeds and feeds of a carrier.  Now in 2018 we have the introduction of the Linux Foundation Networking Fund, the release of an open-source SDK for the Broadcom Tomahawk chipset, and this announcement of dNOS being given to the Linux Foundation.  Things may have gotten off to a slow start, but I feel confident that we are reaching critical mass.  And I’m not even going to get into the new open-source, reduced-cost optics that Facebook is pushing.

Basically, the world of networking is in for a major shakeup, and the tide of open source and disaggregation is going to spur some incredible innovation.  The major cloud players and the carriers will see the first fruits of that labor, but all of that innovation is definitely going to trickle down to the enterprise and SMB markets.  With the coming tsunami of IoT devices thirsty for bandwidth and advanced networking solutions, this renaissance in networking cannot come soon enough.

VMware on Azure – You’re still doing it wrong

Sigh.  There’s an old adage that I always come back to: just because you can do something doesn’t mean that you should.  In this case I am thinking about the recent announcement from Microsoft that Azure will support bare-metal deployments of VMware on Azure hardware.  In case you’ve been living under a rock, AWS went GA with a very similar offering back in late August.  Of course there are some specifics that differ, but the overall theme is the same: you can run your VMware workloads in the public cloud on bare metal, while still keeping close proximity to the respective public cloud services.  Alas, just because it’s on Azure now doesn’t make the idea any better, and I stand by my previous post.
