Five Arguments Against Virtualization – And Why They Are Wrong
There are many misconceptions about virtualization and its effect on applications and existing infrastructure. Most stem from bad information, too little information, or preconceived notions with no basis in reality. This post separates fact from fiction for five of the biggest arguments against virtualization.
Application X Is Too Large to Virtualize
This is an old argument that dates back to earlier versions of VMware’s hypervisor architecture. In the past, virtual hardware was far more limited than it is in vSphere 5. Microsoft’s recommended hardware maximum for an Exchange server, for example, is 24 CPUs, whether the server runs a single role or multiple roles. A system that size can handle up to 17,000 mailboxes. In vSphere 5, virtual machines (VMs) can be configured with up to 32 vCPUs and up to 1TB of RAM, meaning that even the largest databases can run in a virtualized environment. vSphere 5 also allows storage I/O to be prioritized for specific VMs.
Virtualized Servers Run Slower Than Their Physical Counterparts
Some people assume that virtualization means things don’t run the way they would in a physical environment. One of the biggest misconceptions is that all hardware is being rebuilt in software, so everything must run slower because of the translation, just like hardware emulation. In reality, VMware was acutely aware of the problems with emulating hardware and deliberately chose not to build the virtualization technology behind vSphere on it.
Typically, when users and businesses see poor performance in a virtual environment and blame virtualization itself, the real cause is misconfiguration at one or more levels. One of the biggest reasons for moving to a virtual environment in the first place, consolidation, is often the largest cause of poor performance: when consolidation takes precedence and no thought is given to resource scheduling, that is when problems arise.
Many settings can be used to tune the performance of a virtual environment, such as resource pools and the shares, reservations, and limits that can be set on VMs and resource pools.
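As a rough illustration of how shares, reservations, and limits interact, here is a toy model of proportional-share allocation in Python. It is a sketch of the general technique only, not VMware’s scheduler or API; all names and numbers are hypothetical.

```python
def allocate_mhz(vms, capacity_mhz):
    """Toy proportional-share CPU allocator (illustrative, not the VMware API).

    Each VM is a dict with 'shares' (relative weight), 'reservation'
    (guaranteed MHz), and 'limit' (MHz cap, or None for unlimited).
    """
    # Step 1: every VM is guaranteed its reservation up front.
    alloc = {name: vm["reservation"] for name, vm in vms.items()}
    remaining = capacity_mhz - sum(alloc.values())

    # Step 2: hand out the rest in proportion to shares, honoring limits.
    active = {n for n, v in vms.items()
              if v["limit"] is None or alloc[n] < v["limit"]}
    while remaining > 1e-9 and active:
        total_shares = sum(vms[n]["shares"] for n in active)
        unit = remaining / total_shares
        capped = set()
        for name in list(active):
            grant = unit * vms[name]["shares"]
            limit = vms[name]["limit"]
            if limit is not None and alloc[name] + grant >= limit:
                grant = limit - alloc[name]   # clamp at the VM's limit
                capped.add(name)
            alloc[name] += grant
            remaining -= grant
        if not capped:
            break  # everyone received a full proportional grant
        active -= capped  # redistribute the leftover to uncapped VMs
    return alloc

pool = {
    "db":    {"shares": 4000, "reservation": 1000, "limit": None},
    "web":   {"shares": 2000, "reservation": 0,    "limit": None},
    "batch": {"shares": 1000, "reservation": 0,    "limit": 500},
}
result = allocate_mhz(pool, capacity_mhz=8000)
```

Here the batch VM is capped at 500 MHz despite its shares, the database VM’s 1,000 MHz reservation is honored before anything else is divided, and the capacity freed by the cap flows back to the uncapped VMs in proportion to their shares.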
Virtual CPUs are among the most misunderstood and most misconfigured of these settings. VMware does offer Symmetric Multi-Processing (SMP) VMs, that is, VMs with multiple virtual CPUs, but they should be used sparingly, and only when the application is specifically written for multi-threading.
Other performance improvements are broader. Simply placing VMs on different datastores can help, especially if one datastore is overutilized. Perhaps the Redundant Array of Independent Disks (RAID) level of the storage device is causing performance issues. Disk caching should also be enabled in almost every case, unless the array uses solid-state disks (in that case, writing to the disk can be faster than writing to the cache).
Most Applications Aren’t Supported in a Virtual Environment
This statement is completely unfounded, as a large number of independent software vendors (ISVs) support their applications in a VMware virtual environment. In fact, more than 1,400 ISVs support more than 2,400 applications. If customers have an application that isn’t currently supported, they can go to VMware’s ISV portal and request that it be added.
Of course, most applications run just fine in a VM and, in some instances, faster than they did natively. The big applications, such as Exchange, SQL Server, and SAP, are all supported.
You Shouldn’t Virtualize Your Domain Controllers
There is understandable hesitation when it comes to something as mission-critical as Domain Controllers (DCs). Yet many organizations have moved their entire Active Directory structure into a VMware environment with few, if any, issues. The key to successfully deploying any service in a virtual environment is proper configuration and testing.
There is a big fear of failure: DC failure, physical ESX Server host failure, and so on. If the ESX Server hosts are in a VMware HA cluster, the DCs are protected not only from physical hardware failure but also from operating system crashes. And if the entire infrastructure goes down at once in a major outage, the DC VMs can be pinned to a single ESX Server host, or at most a couple, so that when everything needs to be restarted, the IT staff already knows which ESX Server hosts to start first.
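The restart planning described above can be sketched in a few lines. This is purely illustrative Python with an invented data model (it does not touch the VMware API): given which hosts each VM is allowed to run on, it reports which hosts must be powered on first so the DCs come up before anything else.

```python
def hosts_to_start_first(vm_host_rules, critical_vms):
    """Return the set of hosts able to run any of the critical VMs."""
    first = set()
    for vm in critical_vms:
        # Each critical VM is pinned to a short, known list of hosts.
        first |= set(vm_host_rules.get(vm, []))
    return first

# Hypothetical inventory: both DCs are pinned, ordinary VMs are not.
rules = {
    "dc01": ["esx01"],
    "dc02": ["esx02"],
    "web01": ["esx01", "esx02", "esx03"],
}
boot_first = hosts_to_start_first(rules, ["dc01", "dc02"])
```

After a total outage, the staff powers on the hosts in `boot_first`, waits for the DCs, and only then brings up the rest of the cluster.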
You Shouldn’t Virtualize Microsoft Clustering Services (MSCS)
The biggest argument against virtualizing MSCS seems to be a fear that the Distributed Resource Scheduler (DRS), or an administrator, will perform a vMotion migration of one of the clustered servers. The funny thing about this misconception is that VMs in a Microsoft Cluster relationship with another virtual or physical machine cannot be migrated with vMotion at all: the quorum disk and shared disks are shared between the clustered servers, and VMs with shared disks cannot be vMotioned. In a DRS cluster, there is also a per-VM setting to disable automatic migration.
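To make the shared-disk restriction concrete, here is a hypothetical sketch (invented data model, not the VMware API) of a migration planner that filters out any VM whose disks are shared with a cluster partner, such as an MSCS quorum disk:

```python
def vmotion_candidates(vm_disks):
    """Return names of VMs with no shared disks; only these may migrate."""
    return sorted(name for name, disks in vm_disks.items()
                  if not any(disk["shared"] for disk in disks))

# Hypothetical inventory: two MSCS nodes share a quorum disk.
inventory = {
    "mscs-node1": [{"path": "quorum.vmdk", "shared": True},
                   {"path": "os1.vmdk", "shared": False}],
    "mscs-node2": [{"path": "quorum.vmdk", "shared": True},
                   {"path": "os2.vmdk", "shared": False}],
    "app01":      [{"path": "app01.vmdk", "shared": False}],
}
movable = vmotion_candidates(inventory)  # the MSCS nodes are excluded
```

Any VM touching a shared disk simply never appears in the candidate list, which is the same effect the platform’s restriction has on clustered VMs.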
For VMs that do not require multiple virtual CPUs, it can be easier to achieve the high availability associated with MSCS through VMware Fault Tolerance, with zero downtime and zero data loss.
There are more reasons than ever to move to virtualization. With higher levels of performance than ever before, comprehensive DR functionality, and the ability to prioritize high-profile VMs over less important ones, VMware is the industry leader in virtualization.