Tortoise and Hare, “Software will devour us all!”

Is Technology Moving Too Fast? Yes.  Will It Slow Down?  No.

As the Spectre and Meltdown drama continues to play itself out in an increasingly disdainful and disinterested public eye, a few things have come to my notice.  The first is a post from The Math Citadel taking Intel and the IT field at large to task for a fundamental design flaw in processors that is 20 years old.  They posit that the ideas of “fail fast and fail often,” “the perfect is the enemy of the good,” and “just get something delivered” should be summarily rejected.

They rightly point out the following:

“Rushed thinking and a desperation to be seen as ‘first done’ with the most hype has led to complexity born of brute force solutions, with patches to fix holes discovered after release. When those patches inevitably break something else, more patches are applied to fix the first patches.”

They claim that in our rush to ship minimally viable products and adopt Agile software development practices, we have set ourselves on a path of continual and ever-increasing failure.  Being mathematicians, their prescription is of course that IT needs to think more like them.  Because as we all now know, mathematicians are rigorous and correct all of the time.  Yes, I linked a Cracked article; no, I don’t see why that should matter.

The writer of the post clearly has an agenda, and a lens of perception coloring how they view the world.  If I asked a baker to tell me how to fix what is wrong with IT, they might make allusions to careful measurement and a recipe that is followed rigorously.  It’s difficult to escape the trappings of your own occupation.  But that doesn’t make the mathematicians wrong, just insufferably smug.

The other article is from Danny Crichton care of TechCrunch, and he points out a similar trend with an alarming number of examples of technology going wrong.  I don’t really need to consult that, since in the last week alone my laptop has Green Screened on reboot, my phone had to be factory reset for “reasons,” Hulu live streaming was down for 3 hours during Friday primetime, and Skype’s login has been dodgier than usual for the last five days.  Everything is broken, or in various states of broken, ever since Google started their forever Beta program. *Amen*

Crichton cites some interesting research about the need to maintain existing software rather than just speeding ahead to the next thing.  He also points out that the massively complex systems we have put together are beyond our own ken, and small changes or failures can have a massive and totally unpredictable impact.  Apparently Charles Perrow, a professor at Yale, calls these normal accidents.  I prefer the more upbeat and Bob Ross inspired happy accidents. 

What are we to make of the steaming pile of technology that has been pooped out over the last ten years?  Do we hold our nose and smile?  Crichton brings up the fact that it is possible to make a highly available and resilient complex system, such as modern US aviation.  That makes sense when people’s lives are literally on the line, and not so much when the stakes are whether or not I can view pictures of tacocat whenever I want on my phone.  What is the motivation for a company to create a product that is incredibly stable, at triple the cost, for none of the return?  Consumers have become completely numb to the constant broken state of their technology.  The only time it becomes patently obvious is when you try to teach a loved one how to use some new piece of gadgetry, only to discover how broken and non-intuitive the product is.  Apple’s no bastion of light these days, but I will hand it to the Apple of five years ago.  They made a rock-solid product and ruled the ecosystem with an iron fist, much to the appreciation of their stockholders.  That level of quality is, erm, somewhat lacking these days.

I don’t know of a simple way out of it, but I think it will probably end up being AI.  Our systems have become so ludicrously complex, and so unfathomably labyrinthine, that mere mortals stand a snowball’s chance in hell of taming the beast.  I suspect that the only way our foundations get shored up is if a neutral third party with infinite patience, computational power, and memory is able to start fixing things for us.  The proverbial mother cleaning up the children’s mess so that we may make another mess tomorrow.

Or we’ll just accidentally create our own oblivion. I feel like the odds are pretty even.

VMware on Azure – You’re still doing it wrong

Sigh.  There’s an old adage that I always come back to: just because you can do something doesn’t mean that you should.  In this case I am thinking about the recent announcement by Microsoft that Azure will support bare-metal deployments of VMware on Azure hardware.  In case you’ve been living under a rock, AWS went GA with a very similar offering back in late August.  Of course there are some specifics that differ, but the overall theme is the same: you can run your VMware workloads in their public cloud on bare metal, yet still have close proximity to the respective public cloud services.  Alas, just because it’s on Azure now doesn’t make the idea any better, and I stand by my previous post.

Continue reading “VMware on Azure – You’re still doing it wrong”

VMware on AWS – You’re doing it wrong

This is going to be a controversial post, I am almost certain.  Basically, I am going to argue that the whole premise behind running VMware on AWS is fundamentally flawed and not a viable strategy, either for those who are currently running VMware or for VMware itself as a company.  Get your angry comments ready; here we go!

Continue reading “VMware on AWS – You’re doing it wrong”

What’s in a Name?

As the raging dumpster fire that is the Equifax breach continues to unfold, I find that I am thinking about identity and the way we use it in our modern life.  Equifax was criminally negligent with information that was incredibly valuable to individuals.  They should be penalized as an organization with fines and levies, and some of the individuals within the company who were responsible for the security of our data should face possible jail time.  But when you step back for a moment, it becomes readily apparent that this is just the latest in a series of data breaches over the past decade, and despite fines, levies, and jail time, this is the sort of thing that is likely to happen again.  Why?  First, the monetary value of the information is high, meaning that criminal elements are willing to spend the resources to steal it.  Second, organizations are rarely incentivized to take the necessary precautions to secure data.  As Greg Ferro likes to point out, as long as the cost of true security is higher than the cost of a breach, organizations are unlikely to adopt true security practices.  Third, even if an organization tries to embrace true security, human beings are fallible.  Applications have undiscovered exploits, misconfigurations happen, and hackers are always stepping up their game.
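Greg Ferro’s point is really just an expected-value calculation, and you can sketch it in a few lines of Python.  The function name and every dollar figure below are hypothetical, purely illustrative numbers, not real Equifax or industry data:

```python
# Toy expected-cost comparison: an organization rationally skimps on
# security whenever the cost of "true security" exceeds the expected
# cost of a breach (breach cost times the chance it happens).
def rational_to_invest(security_cost, breach_cost, breach_probability):
    """Return True if investing in security beats the expected breach cost."""
    expected_breach_cost = breach_cost * breach_probability
    return security_cost < expected_breach_cost

# Hypothetical figures: $50M of real security vs. a $700M breach with a
# 5% chance per year. Expected breach cost is $35M, so the "rational"
# move is to skip the investment.
print(rational_to_invest(50_000_000, 700_000_000, 0.05))  # False
```

Until fines and levies push the breach side of that inequality high enough, the spreadsheet math keeps pointing the wrong way.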

Continue reading “What’s in a Name?”

Dear Future HCI Partner

If there’s one thing I wish HyperConverged Infrastructure (HCI) vendors would stop doing, it’s promising that the product will be up and running “in a matter of minutes”.  First of all, it’s simply untrue.  Second, it’s irresponsible and sets those of us deploying the hardware up for failure.  When skewed perceptions intersect meaty reality, the deployment engineer is the first to be skewered.  And you know what else? Continue reading “Dear Future HCI Partner”