
Getting Started With Virtualization: Five Fundamental Factors to Consider

July 3

The virtualization trend has been steamrolling along for quite some time, but not all managed service providers have hopped on the bandwagon. Call these MSPs late to the party all you want, but you really can’t blame them. Adding a new IT solution to an existing portfolio of offerings is no easy feat. There are extra costs involved, additional management tasks, and a host of new considerations.

So how does one get started with this awesome virtual computing technology? By keeping factors like these in mind:

1. New Hardware: To Buy or Not to Buy

One of the main attractions of virtualization is that it lets you deploy multiple operating systems and applications without purchasing new hardware. That said, some hardware isn’t fit for the job no matter how impressive the specifications. For example, a server that’s already running a high-traffic website and a number of large applications would not be a good choice, for the simple fact that its resources are already tied up.

The good thing about virtualization is that the hardware requirements are pretty straightforward. With enough RAM and processing punch to support the individual guest environments you want to create, this technology will thrive on all types of hardware. That means new hardware can be optimized to deliver the performance of a dedicated server, and old hardware can be better utilized to consolidate resources and eliminate costs associated with maintenance or upgrades.
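
To put rough numbers on that, here’s a minimal back-of-the-envelope sizing sketch in Python. The host specs, guest list and hypervisor overhead figure are all hypothetical placeholders, not vendor guidance; plug in your own numbers before drawing conclusions.

```python
# Back-of-the-envelope capacity check (all figures hypothetical).
HOST_RAM_GB = 64
HOST_CORES = 16
HYPERVISOR_OVERHEAD_GB = 4  # RAM reserved for the hypervisor itself

planned_guests = [
    {"name": "web-vm",  "ram_gb": 8,  "vcpus": 4},
    {"name": "db-vm",   "ram_gb": 16, "vcpus": 4},
    {"name": "mail-vm", "ram_gb": 4,  "vcpus": 2},
]

ram_needed = sum(g["ram_gb"] for g in planned_guests)
vcpus_needed = sum(g["vcpus"] for g in planned_guests)
ram_spare = HOST_RAM_GB - HYPERVISOR_OVERHEAD_GB - ram_needed

if ram_spare >= 0:
    print(f"RAM: {ram_needed} GB needed, {ram_spare} GB to spare")
else:
    print(f"RAM: short by {-ram_spare} GB")

# Keeping total vCPUs at or below physical cores is a conservative
# starting point for latency-sensitive guests.
print(f"vCPUs: {vcpus_needed} planned against {HOST_CORES} physical cores")
```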

2. Storage Capacity

While virtualization offers the delicious taste of cost savings, those very savings can be gobbled up at the expense of storage. Hypervisor makers like to tout how quickly and easily their software spins up new virtual machines, but all those VMs draw on the same storage capacity. The more environments you need to isolate within a physical machine, the quicker you’ll eat through that space. Even external storage devices can add maintenance complexity once elements such as firmware, drivers and patches come into play.

Dealing with storage for a virtual infrastructure is one of the biggest management challenges IT administrators have on their plates. Managed service providers should look into cloud computing, storage appliances and other solutions to make sure they have the capacity to support a fully functioning virtual environment. If the storage challenges are not addressed, MSPs will be stuck with a complex solution almost completely devoid of the benefits they’re chasing.
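
As a rough illustration of how quickly that space disappears, here’s a short Python sketch that budgets a datastore against planned VM disks plus snapshot overhead. The capacity, VM sizes and 25 percent snapshot figure are assumptions for the example; real snapshot growth varies widely with workload.

```python
# Storage-budget sketch (capacity, sizes and overhead are assumptions).
DATASTORE_TB = 4.0        # usable capacity of the storage pool
SNAPSHOT_OVERHEAD = 0.25  # assume snapshots add ~25% per VM; varies widely

vms = [
    {"name": "client-a", "disk_gb": 200},
    {"name": "client-b", "disk_gb": 500},
    {"name": "client-c", "disk_gb": 350},
]

total_gb = sum(vm["disk_gb"] * (1 + SNAPSHOT_OVERHEAD) for vm in vms)
used_fraction = total_gb / (DATASTORE_TB * 1024)

print(f"Projected use: {total_gb:.0f} GB ({used_fraction:.0%} of the datastore)")
if used_fraction > 0.8:
    print("Warning: past 80% full -- plan more capacity before adding VMs")
```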

3. Choosing Your Software

If you’re looking for capable software, it helps to know that there are several options available to you. That abundance cuts both ways, though: all those options can make choosing the right solution a maddening process. Not only do you have vendors like VMware, Microsoft and Red Hat, you’ve got their respective solutions marketed for servers, desktops, applications and a host of other niches. The best thing MSPs on the prowl can do is match the software to their specific needs and put a qualified IT specialist on the job of hunting it down.
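
One informal way to structure that matchup is a weighted scoring exercise. The sketch below is purely illustrative: the criteria, weights and 1-to-5 scores are invented for the example, and any real evaluation would use the MSP’s own priorities and test results.

```python
# Weighted scoring sketch; criteria, weights and scores are invented.
weights = {"cost": 0.3, "management_tools": 0.3,
           "os_support": 0.2, "in_house_expertise": 0.2}

# Illustrative 1-5 scores per candidate, not real product ratings.
candidates = {
    "VMware":            {"cost": 2, "management_tools": 5, "os_support": 5, "in_house_expertise": 3},
    "Microsoft Hyper-V": {"cost": 3, "management_tools": 4, "os_support": 4, "in_house_expertise": 4},
    "Red Hat (KVM)":     {"cost": 5, "management_tools": 3, "os_support": 4, "in_house_expertise": 2},
}

for name, scores in candidates.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: {total:.2f}")
```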

4. Potential Overload

Every physical machine has its limitations, and even virtualization, for all its power, can’t change that. While a single server could technically run hundreds of virtual machines, that’s hardly an ideal scenario, especially when clients are involved. Not only do you have your web servers, control panels and database systems to account for, you’ve got the technology your clients want to incorporate, which may include a bundle of resource-intensive applications that demand a lot of the physical hardware. In short, be realistic about what your hardware can accommodate and resist the temptation to go overboard.
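
A quick sanity check on a consolidation plan might look like the sketch below. The 3:1 vCPU overcommit and 100 percent RAM thresholds are common rules of thumb rather than hard limits, and the host and guest figures are hypothetical.

```python
# Overcommit sanity check (thresholds are rules of thumb, not limits).
HOST_CORES = 24
HOST_RAM_GB = 128

# Twelve identical hypothetical guests.
vms = [{"vcpus": 4, "ram_gb": 8} for _ in range(12)]

vcpu_ratio = sum(v["vcpus"] for v in vms) / HOST_CORES
ram_ratio = sum(v["ram_gb"] for v in vms) / HOST_RAM_GB

print(f"vCPU overcommit: {vcpu_ratio:.1f}:1")
print(f"RAM committed: {ram_ratio:.0%} of physical")

if vcpu_ratio > 3 or ram_ratio > 1:
    print("Warning: this plan may be asking too much of one box")
```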

5. Probability of System Failure

Some businesses deploy virtualization with the goal of lowering risk and increasing availability. That’s attainable, but poor execution can actually raise the probability of failure. Although the concept aims to help hardware reach its full potential, consolidating more workloads onto one machine means a single hardware failure takes more systems down with it. A disaster recovery plan that covers backing up and restoring data on virtual systems needs to be worked out to ensure that clients suffer as few disruptions as possible.
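
One small piece of such a plan is simply verifying that every VM has a recent backup. Here’s a minimal Python sketch of that check; the VM names, timestamps and 24-hour recovery point objective (RPO) are assumptions for illustration.

```python
# Backup-freshness check (VM list, timestamps and RPO are assumptions).
from datetime import datetime, timedelta, timezone

RPO = timedelta(hours=24)  # assumed recovery point objective
now = datetime.now(timezone.utc)

last_backup = {
    "web-vm":  now - timedelta(hours=6),
    "db-vm":   now - timedelta(hours=30),  # stale; should be flagged
    "mail-vm": now - timedelta(hours=20),
}

for vm, ts in last_backup.items():
    age_hours = (now - ts).total_seconds() / 3600
    status = "OK" if now - ts <= RPO else "STALE"
    print(f"{vm}: last backup {age_hours:.0f}h ago [{status}]")
```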

A report by Reportstack forecasts that global virtualization software revenue will grow by nearly 20 percent from 2012 to 2016. I’d venture to guess that a huge chunk of that spending will come from service providers that are either updating their technology or deploying it for the first time. Hopefully the first-timers realize the importance of getting off on the right foot.
