Any IT admin with some common sense will try to virtualise as much of their infrastructure and applications as possible. Greater agility, lower costs and risks, support for newer technologies like automation and machine learning—what’s not to like? Any IT admin who’s tried or even contemplated the practicalities of whole-of-environment virtualisation, however, will have a lot of answers to that question.
The thing is, as admins know, virtualisation is hard, and it’s getting harder. Once, all you had to deal with were VMs—the “devil you know” within any IT environment. Then came application virtualisation, and broader workload virtualisation, and containers. Today, just about every aspect of the IT ecosystem can be virtualised to some extent. That gives IT admins greater power over their workloads than ever before, but we also know that with great power comes great irritability.
Don’t get me wrong. Every IT leader should aim for full-scale virtualisation. The benefits to operations and innovation are not only substantial, but also vital for businesses to maintain a competitive footing. The question is: how can on-the-ground IT admins get to that 100%-virtual point without getting caught up in a web of costs, risks, and complexity that turns even the friendliest neighbourhood workload into a monstrosity?
Unless your business carries very little legacy IT infrastructure, trying to achieve total virtualisation all at once will prove hugely expensive and highly stressful. You’ll most likely want to take a more segmented approach by virtualising your infrastructure and workloads in stages—which makes having a decent plan of paramount importance.
Start by building a comprehensive picture of your current workloads: their resource consumption, their requirements (like availability), how they interact with your infrastructure and one another, and so on. Where possible, input from CIOs on upcoming workloads and deployments will help give that picture greater longevity.
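That inventory doesn’t need fancy tooling to get started. Even a simple structured record captures the essentials—something like the Python sketch below, where the workload names, figures, and fields are purely illustrative, not a real schema:

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    cpu_cores: float          # average cores consumed
    memory_gb: float          # peak memory footprint
    availability_target: str  # e.g. an SLA like "99.9%"
    depends_on: list = field(default_factory=list)  # workloads it interacts with

# Hypothetical inventory -- in practice this would come from monitoring data
inventory = [
    Workload("billing-db", 8, 64, "99.99%"),
    Workload("billing-api", 4, 16, "99.9%", ["billing-db"]),
    Workload("batch-reports", 16, 32, "95%", ["billing-db"]),
]

total_cores = sum(w.cpu_cores for w in inventory)
total_memory = sum(w.memory_gb for w in inventory)
print(f"Current footprint: {total_cores} cores, {total_memory} GB RAM")
```

Even this much gives you aggregate resource numbers and a dependency map—the two things every later planning decision leans on.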
The planning process will also help quantify current levels of virtualisation, which sets the stage for the next task: identifying what needs to be done. IT admins will need to make choices, often difficult ones, over which virtualisation approach to take for different workloads or applications. For some, containerisation might make sense, particularly if you’re thinking of porting the workloads into the cloud or another location at some point. Others may prove too complex or widely distributed for containerisation, requiring dedicated VMs or migration to the cloud. For certain workloads, like HPC ones or those with high security and compliance requirements, virtualisation may only be possible at the server level, or not at all. The process resembles a Choose Your Own Adventure book, except “flipping back” from a bad choice often requires sizable penalties in time, money, and morale. No pressure.
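The triage above can be summarised as a rough decision rule. This sketch is a deliberately simplified illustration—the attribute flags are assumptions an admin might record during planning, not a real framework—but it captures the order in which the constraints tend to bite:

```python
def choose_approach(workload: dict) -> str:
    """Rough triage for a virtualisation approach.

    The keys (hpc, strict_compliance, etc.) are hypothetical flags,
    not a real schema.
    """
    if workload.get("hpc") or workload.get("strict_compliance"):
        # High-performance or tightly regulated workloads may only
        # tolerate server-level virtualisation, or none at all.
        return "server-level or none"
    if workload.get("widely_distributed") or workload.get("complex"):
        # Too sprawling to containerise cleanly.
        return "dedicated VMs or cloud migration"
    if workload.get("portable"):
        # Good candidate if it may move to the cloud later.
        return "containerise"
    return "dedicated VMs"

print(choose_approach({"portable": True}))
```

In practice the rule won’t be this clean—but writing your criteria down, in whatever form, is what turns those difficult choices into defensible ones.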
The final planning stage involves counting the cost—which is where most IT admins will find themselves forced to rein in their ambitions. Moving all your workloads to the cloud, while tempting, will almost always blow out your budget further than the latest DC live-action film’s. Your choice of hypervisors and other platforms will not only affect workload performance and scalability, but also the talent you’ll need for their management, whether by hiring third-party experts or investing your internal team’s skill-points in a new discipline. Factor in licensing, planned downtime, and new hardware, and you’ll find the costs of this digital transformation—because really, that’s what it is—need to be balanced with what’s both possible and acceptable to the bigger organisation.
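A back-of-the-envelope comparison is often enough to rein in those cloud ambitions. The unit rates below are made-up placeholders, not real provider pricing; the point is only that the per-workload arithmetic is easy to sketch before committing:

```python
def monthly_cost_cloud(cores: float, memory_gb: float,
                       core_rate: float = 25.0, gb_rate: float = 3.0) -> float:
    """Illustrative cloud run-rate; the unit rates are placeholders."""
    return cores * core_rate + memory_gb * gb_rate

def monthly_cost_onprem(cores: float, memory_gb: float,
                        amortised_hw: float = 200.0,
                        ops_overhead: float = 150.0) -> float:
    """Illustrative on-prem cost: amortised hardware plus ops labour,
    both assumed flat here; real models scale with workload size."""
    return amortised_hw + ops_overhead

workload = {"cores": 8, "memory_gb": 64}
cloud = monthly_cost_cloud(**workload)    # 8*25 + 64*3 = 392
onprem = monthly_cost_onprem(**workload)  # 350 under these assumptions
print(f"cloud ${cloud:.0f}/mo vs on-prem ${onprem:.0f}/mo")
```

Swap in real quotes, run it across the whole inventory, and the workloads where the cloud genuinely pays off tend to separate themselves from the ones where it doesn’t.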
All this goes to show that 100% virtualisation will undoubtedly take time. But that time is much better spent in planning and preparation than on going for a “big bang” execution and cleaning up the pieces afterwards. IT admins can make use of various tools to render this planning process more accurate. Vendor-specific tools can help estimate the costs of moving different workloads, with different current and future requirements, to the cloud or into containers on certain infrastructure. Other tools allow for forecasting and optimisation of the bare-metal resources under any virtual machines or containers, both before and after the transition takes place.
At some point, however, IT admins will have to take the leap and execute their strategy—and when that happens, it’s best to move hard and fast. Whether the plan involves virtualising certain workloads in stages or transforming much larger parts of the IT environment in one go, a swift transition minimises the downtime and security risks to the organisation.
While that might sound terrifying, it’s nothing any IT admin with some experience should fear. Most of the issues that might occur—availability and compatibility issues, “bad actor” workloads contaminating other elements, hypervisors or other software not loading as they should—are, at their core, the same as those faced in any traditional IT cutover scenario. And the fixes—testing your failovers, double-checking backups, even choosing a safe window to make the transition—are equally common sense. Going 100% virtualised on your workloads and infrastructure isn’t just possible, it’s pragmatic, as long as IT brings good sense and a good plan to the table.