If you were going to build a brand new application today, your approach would probably be fundamentally different from five or ten years ago. And I do mean fundamentally, as in the fundaments of the architecture would be different. In the last ten years we have moved rapidly from traditional three-tier applications to 12-factor apps using microservices, and now things are shifting again to serverless. That’s all well and good for any business looking to build a new application, but what about organizations that have traditional applications? I’ve also heard them called legacy or heritage applications. These applications are deeply ingrained in the business and are often what is actually generating the bulk of a company’s revenue. The company cannot survive without these applications, and modernizing them will be costly and fraught with risk. Due to the inherent risk, most companies opt to either keep these applications running on-premises or move them as-is to the public cloud, aka lift and shift. That’s the reality we’re living with today, but tomorrow is knocking on the door and promising hybrid cloud to fix all this. What’s the reality and what’s the hype? And what is the most likely journey for most companies?
This week was AWS re:Invent, and I watched the keynote live on Wednesday.
The three. hour. keynote.
During which Andy Jassy announced new features at a pace that is frankly astounding. Three hours should be too long for a keynote, and if I wasn’t watching from the comfort of my office, it would have been. Not only did the announcements keep unfolding for the full 270 minutes, but some didn’t even make it into the keynote. Running a three-hour keynote is tough; creating enough new services and features to overflow a three-hour keynote is amazing. My hat goes off to the engineering teams at AWS. It is truly staggering what you manage to accomplish each year.
Sigh. There’s an old adage that I always come back to: just because you can do something, doesn’t mean that you should. In this case I am thinking about the recent announcement by Microsoft that Azure would be supporting bare-metal deployments of VMware on Azure hardware. In case you’ve been living under a rock, AWS went GA with a very similar offering back in late August. Of course there are some specifics that differ, but the overall theme is the same: you can run your VMware workloads in the public cloud on bare metal, while still having close proximity to that cloud’s native services. Alas, just because it’s on Azure now, doesn’t make the idea any better, and I stand by my previous post.
This is going to be a controversial post, I am almost certain. Basically, I am going to argue that the whole premise behind running VMware on AWS is fundamentally flawed and not a viable strategy, either for those who are currently running VMware or for VMware itself as a company. Get your angry comments ready, here we go!
This is a technical post for someone trying to reset a node or an entire HC250 appliance running VMware. It is specific to the latest release of the recovery software for the HC250 running ESXi 6.0 Update 2. If you have followed the directions for restoring the node, which are included in the HPE Hyper Converged 250 System for VMware vSphere User Guide, then you will have downloaded the necessary files and created a USB drive to perform the node reset. And that’s where things start to fall apart. Continue reading “System Recovery for an HPE HyperConverged 250 running VMware”