Types of Virtualization In Modern IT: Why It Still Matters
Note: This article was originally published in 2015, under the title: The 7 Types of Virtualization. It has been updated to reflect advancements in technology. For other current technology topics, visit the Kelser Learning Center for our latest blog articles.
In tech and non-tech business realms alike, virtualization has been a buzzword of sorts for years.
In fact, we hear constantly about “virtual” this and that.
But what does this term actually mean?
Is it relevant today? Are there different types? How does it affect security?
Is it still a “thing”? The short answer to this one is a resounding “Yes!”
Virtualization is nothing less than a cornerstone of modern computing.
We answer questions just like this all the time here at Kelser. We are dedicated to working with business leaders like you to ensure you have the information you need to select technology solutions that support your business goals.
Read on to learn about some of the various kinds of virtualization out there and how they apply to common business needs. Based on what you learn, you will be able to talk intelligently about virtualization and understand how to make the most of it.
What is Virtualization, Really?
Virtualization in computing is a surprisingly simple concept…at least in theory.
The key concept to understand is that virtualization uses software to control, split up, and even simulate the physical hardware on which it is running.
In modern virtualization systems, a piece of software, known as the Hypervisor, is in charge of the hardware. Everything else on the system can only see and interact with the Hypervisor.
The Hypervisor (or host) creates virtual hardware which is then presented to the systems (or guests) running on top of it. The Hypervisor then translates the guest’s requests over to the physical hardware.
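To make the host/guest relationship concrete, here is a toy Python sketch (purely illustrative – real hypervisors work at the hardware level, and all names here are made up). Each guest sees only its own virtual disk, and the Hypervisor translates guest requests into operations on the shared physical device:

```python
class PhysicalDisk:
    """Stands in for the real hardware, which guests never see directly."""
    def __init__(self):
        self.blocks = {}

    def write(self, addr, data):
        self.blocks[addr] = data

    def read(self, addr):
        return self.blocks.get(addr)


class Hypervisor:
    """Owns the hardware and hands each guest its own virtual view."""
    def __init__(self, disk):
        self._disk = disk

    def attach_guest(self, guest_id):
        return VirtualDisk(self, guest_id)

    def _translate(self, guest_id, addr):
        # Namespace each guest's addresses so guests cannot
        # see or overwrite each other's data.
        return (guest_id, addr)

    def write(self, guest_id, addr, data):
        self._disk.write(self._translate(guest_id, addr), data)

    def read(self, guest_id, addr):
        return self._disk.read(self._translate(guest_id, addr))


class VirtualDisk:
    """What a guest sees: a plain disk, with no hint it is shared."""
    def __init__(self, hypervisor, guest_id):
        self._hv = hypervisor
        self._id = guest_id

    def write(self, addr, data):
        self._hv.write(self._id, addr, data)

    def read(self, addr):
        return self._hv.read(self._id, addr)


# Two guests use "the same" address 0 without conflict.
hv = Hypervisor(PhysicalDisk())
guest_a = hv.attach_guest("a")
guest_b = hv.attach_guest("b")
guest_a.write(0, "alpha")
guest_b.write(0, "beta")
print(guest_a.read(0))  # alpha
print(guest_b.read(0))  # beta
```

The point of the sketch is the isolation: both guests believe they own address 0, and only the Hypervisor knows how those requests map onto the one physical device.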
Virtualization has historically been used to extract more performance, flexibility, and reliability from physical hardware.
3 Types of Modern Virtualization
Three main types (or components) of modern virtualization come together to create the virtual machines and associated infrastructure that power the modern world.
1. Virtual Compute
When we talk about virtualized compute resources, we mean the CPU – the “brain” of the computer.
In a virtualized system, the Hypervisor software creates virtual CPUs in software, and presents them to its guests for their use. The guests only see their own virtual CPUs, and the Hypervisor passes their requests on to the physical CPUs.
The benefits of this approach are immense.
Because the Hypervisor can create more virtual CPUs (often creating many more than the number of physical CPUs), the physical CPUs spend far less time in an idle state, making it possible to extract much more performance from them.
In a non-virtualized architecture, many CPU cycles (and therefore electricity and other resources) are wasted because most applications do not run a modern physical CPU at full tilt all the time.
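A quick back-of-the-envelope calculation shows why oversubscribing CPUs works. The numbers below are hypothetical, but the arithmetic is the same one capacity planners do:

```python
# Hypothetical example: a host with 16 physical cores running
# 10 guests, each given 4 virtual CPUs.
physical_cpus = 16
guests = 10
vcpus_per_guest = 4

total_vcpus = guests * vcpus_per_guest
oversubscription_ratio = total_vcpus / physical_cpus  # 2.5 vCPUs per core

# If each guest keeps its vCPUs busy only 20% of the time on average,
# expected demand still fits comfortably on the physical cores.
avg_utilization = 0.20
expected_busy_cores = total_vcpus * avg_utilization  # 8.0 of 16 cores

print(oversubscription_ratio)  # 2.5
print(expected_busy_cores)     # 8.0
```

Forty virtual CPUs on sixteen physical cores sounds reckless, but because typical workloads idle most of the time, the expected demand is only half of what the hardware can supply.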
2. Virtual Storage
Allow us a brief history lesson, as modern virtualized storage has been built up through the ages.
In the beginning, there were disk drives. (Okay, maybe not the beginning, but bear with us for the sake of brevity.)
Later, there was RAID (Redundant Array of Independent Disks). RAID allowed us to combine multiple disks into one, gaining additional speed and reliability.
Later still, the RAID arrays that used to live in each server moved into specialized, larger storage arrays which could provide virtual disks to any number of servers over a specialized high-speed network, known as a SAN (Storage Area Network).
Nowadays, the arrays are starting to move back into the physical servers – what the industry calls Hyper-Converged Infrastructure – enabled by even faster network links.
All of these methods are simply different ways to virtualize storage.
Over the years, the trend has been to abstract away individual storage devices, consolidate them, and distribute work across them. The ways we do this have gotten better, faster, and become networked – but the basic concept is the same.
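The two classic RAID ideas mentioned above can be sketched in a few lines of Python. This is a toy illustration only – real arrays do this in firmware at the block level, with parity schemes we skip here:

```python
def stripe(data, disks):
    """RAID-0 style: spread bytes round-robin across disks for speed.

    Every disk holds part of the data, so reads and writes
    can happen on all disks in parallel.
    """
    lanes = [[] for _ in range(disks)]
    for i, byte in enumerate(data):
        lanes[i % disks].append(byte)
    return lanes


def mirror(data, disks):
    """RAID-1 style: identical copy on every disk for reliability.

    Any single disk can fail and the data survives.
    """
    return [list(data) for _ in range(disks)]


striped = stripe(b"ABCDEF", 2)
mirrored = mirror(b"ABC", 2)
print(striped)   # [[65, 67, 69], [66, 68, 70]]
print(mirrored)  # [[65, 66, 67], [65, 66, 67]]
```

Striping buys speed, mirroring buys safety, and practical RAID levels combine variations of both – but as the article notes, they are all just ways of abstracting many physical disks into one virtual one.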
3. Virtual Networking
Network virtualization (or, software-defined networking) uses software to abstract away the physical network from the virtual environment.
The Hypervisor may provide its guest systems with many virtual network devices (switches, network interfaces, etc.) that have little to no correspondence with the physical network connecting the physical servers.
Within the Hypervisor’s configuration, there is a mapping of virtual to physical network resources, but this is invisible to the guests.
If two virtual computers in the same environment need to talk to each other over the network, that traffic may never touch the physical network at all if they are on the same host system.
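A toy virtual switch makes this visible. In this hypothetical sketch (the names are invented), a frame between two guests on the same host is just a memory copy inside the Hypervisor, and only traffic bound for another host touches the physical uplink:

```python
class VirtualSwitch:
    """Minimal sketch of a Hypervisor's software switch."""
    def __init__(self, host_name):
        self.host = host_name
        self.ports = {}            # guest name -> frames received
        self.physical_uplink = []  # frames that actually left the host

    def attach(self, guest):
        self.ports[guest] = []

    def send(self, src, dst, frame):
        if dst in self.ports:
            # Destination is local: deliver in software, never
            # touching the physical network.
            self.ports[dst].append(frame)
            return "delivered locally"
        # Destination is elsewhere: forward over the physical NIC.
        self.physical_uplink.append((dst, frame))
        return "sent to physical network"


vswitch = VirtualSwitch("host-1")
vswitch.attach("vm-a")
vswitch.attach("vm-b")
print(vswitch.send("vm-a", "vm-b", "hello"))  # delivered locally
print(vswitch.send("vm-a", "vm-z", "hello"))  # sent to physical network
```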
In combination, these fundamental types of virtualization allow the creation of a variety of virtual machines – computers within computers.
Virtual machines allow businesses to run the same applications on far fewer physical servers, saving money on energy, cables, hardware, rack space, and more.
What about the cloud?
Today, virtualization is more relevant than ever. It is how the “cloud” works.
The cloud takes the concept of virtualization to the next level and virtualizes practically everything. This allows for the realization of massive economies of scale and makes it possible to dial in performance to the exact needs of each customer.
Read this article for more information about the cloud: Cloud Migration: What It Means, How It Works (6 Questions To Ask)
Beyond the Basics – Application Virtualization, Containers, and Orchestration
Virtualization as we have discussed so far in this article has been de rigueur across our industry for nearly two decades.
Today, the concept is being extended even further. The latest way to deploy applications, and to gain the maximum possible benefit from virtualized infrastructure, is known as “containerization.”
A “container” is a bundle of files holding an application and an operating system – but not a complete one. The container includes only those parts of the operating system that the application requires to run.
The operating system hosting the container(s) (which may itself be a virtual machine) provides storage and networking services. The container is only responsible for running its one specific application.
This may sound like a lot of added complexity – why not just run the application directly on the virtual machine? The answer is that containers create consistency in deployment.
There is never a question of OS compatibility or of installing additional software. The app can be deployed and run instantly, with everything it needs, and this can be done over and over with the same results.
This is nice, but there is more.
The real power of containers comes from Orchestration.
By now, most people in or around the IT industry have heard of Kubernetes (or K8s), a container deployment and orchestration software. When deployed across a cluster of (usually virtual) servers managed by an orchestrator, containers become far more powerful.
The orchestrator can deploy multiple identical copies of a container and then load balance between them. It can also detect overloading or failure of component servers and move containers transparently over to healthy ones.
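The rescheduling behavior described above can be sketched in a few lines of Python. This is purely illustrative – real orchestrators like Kubernetes are far more sophisticated, and the node names here are made up:

```python
def reschedule(placements, healthy_nodes):
    """Move any replica on a failed node to the least-loaded healthy node.

    placements: list of node names, one entry per container replica.
    healthy_nodes: node names the orchestrator considers alive.
    """
    new_placements = []
    for node in placements:
        if node in healthy_nodes:
            new_placements.append(node)
        else:
            # Pick the healthy node currently running the fewest replicas.
            target = min(healthy_nodes, key=new_placements.count)
            new_placements.append(target)
    return new_placements


placements = ["node-1", "node-2", "node-3"]  # one replica per node
healthy = ["node-1", "node-3"]               # node-2 has failed

print(reschedule(placements, healthy))  # ['node-1', 'node-3', 'node-3']
```

The desired state (three replicas) stays fixed while the placement changes, which is the core idea behind orchestration: you declare what should be running, and the orchestrator continually reconciles reality with that declaration.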
Should My Company Use Virtualization?
If you are investing in an infrastructure overhaul, virtualization is going to be a key part of your solution no matter what. To take full advantage of this technology, seriously consider moving to the cloud.
Unless you have recently invested a significant amount in on-premises hardware or have another reason to keep your servers on-site, the cloud provides the latest technology, available on-demand.
The decision about which solution makes sense for your organization should depend on your needs. All options should be on the table.
Consider the following questions:
- What is your current investment in on-premises hardware?
- What would happen to your business should that hardware fail? Do you have off-site backups and a disaster recovery plan?
- Where do you have IT pain points, and are they primarily hardware- or software-based?
By working through questions like these and researching your options, you can make an informed decision about the best IT solution for your organization.
At Kelser, we provide the information you need to make an informed decision about which technology solution is right for your business. We provide a full complement of managed IT support services designed to free customers to focus on their business, while we focus on their IT needs.
We understand that our offerings, while comprehensive, may not be right for every organization. That is why we provide unbiased, informative articles like this one, so business leaders can learn about important technology topics and explore their options.
If you are considering managed IT support, check out our managed IT services and the offerings of several other organizations, so that you fully understand your options.
Want to know more about managed IT support? Read this article: What Is Managed IT? What’s Included? What Does It Cost?