Implement Virtualization for Application High-Availability

On its own, the term "high availability" does not mean your application never goes down (that would be "always on"), but highly available applications require only a few simple things you might already have within your infrastructure. If you run a virtual environment, application high availability is just a few clicks or scripts away if you choose to roll your own. There are also products you can purchase that do similar things, such as Symantec's ApplicationHA package, a scaled-down version of Veritas Cluster Server. But if what you need is protection right now, today, then you can get started by using the technologies built into your hypervisor.

HA Process

Regardless of the hypervisor, you can enable guest failover, relocation, takeover, replication, and so on. Each of these can be done through a series of steps and mostly involves mild scripting.

For all VM hosts there is some kind of scripting hook that can be used to achieve the following series of steps:

  1. Verify that the guest VM is online and booted
  2. Verify VM-related settings and store/export them if needed
  3. Control the VM (quiesce, freeze, power off, shut down, etc.) to prepare for the move
  4. Back up or copy the VM to new storage or a new host
  5. Restore or import the VM-related settings if needed
  6. Spin up the VM
  7. Verify that the guest VM is online, booted, and active on the new host or location

These are the basic steps, and they can be scripted either by using each hypervisor's API or by performing every step manually with standard OS-level automation (logging in as a user, executing a power-off or another set of commands to prepare the VM, then logging in after the VM has been powered on again to validate that it is up and running properly and to start applications). A rough sketch of the approach is shown below.
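
Each hypervisor vendor exposes its own hooks for these steps (PowerShell cmdlets for Hyper-V, the vSphere APIs or the vmrun utility for VMware, virsh for KVM/Xen). As a rough illustration only, here is a minimal Python sketch of the seven steps using VMware's vmrun command-line tool; the paths, the shared-storage layout, and the use of VMware Tools as a liveness check are my assumptions, not a prescription.

    #!/usr/bin/env python3
    """Illustrative sketch of the seven steps above using VMware's vmrun CLI.

    Assumptions: vmrun (VMware Workstation/Player) is on the PATH, the VM lives
    in a self-contained directory, and the destination is reachable as a mounted
    path. In a real failover you would invoke vmrun on the standby host (for
    example over SSH); here everything runs on one machine to keep it short.
    """
    import shutil
    import subprocess
    from pathlib import Path


    def vmrun(*args):
        """Invoke a vmrun subcommand and return its stdout."""
        result = subprocess.run(["vmrun", "-T", "ws", *args],
                                capture_output=True, text=True, check=True)
        return result.stdout


    def is_running(vmx):
        # Steps 1 and 7: 'vmrun list' prints the .vmx paths of powered-on guests.
        return str(vmx) in vmrun("list")


    def failover(src_vmx, dest_dir):
        src_vmx, dest_dir = Path(src_vmx), Path(dest_dir)

        if is_running(src_vmx):                      # 1. guest online and booted?
            vmrun("stop", str(src_vmx), "soft")      # 3. graceful guest shutdown

        # 2./4. The .vmx and .vmdk files *are* the settings and the disk, so
        # copying the whole VM directory covers export and backup in one go.
        dest_vm_dir = dest_dir / src_vmx.parent.name
        shutil.copytree(src_vmx.parent, dest_vm_dir)

        dest_vmx = dest_vm_dir / src_vmx.name        # 5. settings travel with the copy
        # Tip: uuid.action = "keep" in the .vmx avoids the moved/copied prompt.
        vmrun("start", str(dest_vmx), "nogui")       # 6. spin up the VM

        # 7. Confirm the guest is active; VMware Tools reporting "running" is a
        # reasonable stand-in for "booted and answering" in this sketch.
        return vmrun("checkToolsState", str(dest_vmx)).strip() == "running"


    if __name__ == "__main__":
        # Hypothetical paths, for illustration only.
        ok = failover("/vms/app01/app01.vmx", "/mnt/standby-host/vms")
        print("failover verified" if ok else "manual check required")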

Developing this type of capability can be fairly easy, but it does take some time, so be aware of any tools you might need. Because the VMs exist in containers, they can be moved pretty much anywhere on the same network segment or route; as long as the guest VM can run on the target platform, that is all that is required. You may of course need to integrate your host monitoring software in order to execute the scripts automatically, but if you don't have that level of monitoring in your environment you can script that too, all with assets you already own (a simple watchdog example follows). The main investment is time and testing to ensure the environment is sound after a move or failover.
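
As a simple example of the "script that too" idea, the watchdog below pings the guest's service address and calls the failover routine from the previous sketch after a few consecutive misses. The address, thresholds, and Linux-style ping flags are assumptions for illustration; any real monitoring tool you already own can fill this role.

    # Minimal watchdog sketch: trigger the failover() routine from the sketch
    # above after repeated health-check failures. Address and thresholds are
    # hypothetical; the ping flags shown are the Linux syntax.
    import subprocess
    import time

    GUEST_ADDR = "app01.example.local"     # hypothetical guest service address
    MISSES_BEFORE_FAILOVER = 3
    CHECK_INTERVAL_SECS = 30


    def guest_responds(addr):
        """One ICMP echo; a return code of 0 means the guest answered."""
        return subprocess.run(["ping", "-c", "1", "-W", "2", addr],
                              stdout=subprocess.DEVNULL).returncode == 0


    def watch():
        misses = 0
        while True:
            misses = 0 if guest_responds(GUEST_ADDR) else misses + 1
            if misses >= MISSES_BEFORE_FAILOVER:
                failover("/vms/app01/app01.vmx", "/mnt/standby-host/vms")
                return
            time.sleep(CHECK_INTERVAL_SECS)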

Express your thoughts and leave a comment!

 


Microsoft Windows Server 2012 Hyper-V Changes

Microsoft's recent release of Windows Server brings cutting-edge changes to Hyper-V that will finally give Microsoft more confidence in espousing the virtues of Hyper-V, and provide sales fodder for making a case against virtualizing with VMware. Hyper-V has been extensively reworked in this new version of Windows Server and allows for things like simplified Live Migration, templating, multi-tenancy, and deduplication technology, just to name a few. Evidence of this first came out when Microsoft started publishing comparison documents showing the vast improvements between its own Windows Server 2008 R2 (at the time a huge improvement in Hyper-V in its own right) and the then yet-to-be-released Windows Server 2012. That document, the Windows Server Comparison, is a great example of how seriously Microsoft took its renewed fervor.

From the PDF, for example:

Microsoft Hyper-V 2012 PDF Clip

 

As many therapists will tell you when you're looking to change something: look at what you need to fix first, then start working on competing. If you read the linked document you'll see how serious Microsoft is. Additionally, there is now a lot of competitive positioning coming directly from VMware against Microsoft, if for no other reason than that Microsoft is taking Hyper-V seriously in a way it did not in the past. Microsoft has vastly improved, simplified, and consolidated the plethora of old Hyper-V documentation into a much more approachable and digestible form that is genuinely useful and understandable. It's meaningful. Microsoft has fired a warning shot across the bow of every competing virtualization platform (in the x86/commodity space, anyway) and is signaling to its current Windows Server customer base that this new effort deserves serious attention for the future of Windows Server and built-in virtualization.

The jury is still out, since there is a lot of testing yet to do in the enterprise space from an administrative and operational perspective, but if Microsoft's follow-through on the product is anything like its preparation suggests, then VMware, Citrix, Xen, Red Hat and others will indeed have something to worry about.

The only gap left to jump is the reluctance to put all your eggs into the Microsoft basket, but Microsoft is working on that too.

Creating a Virtualization Strategy

The use of virtualization is pretty well accepted these days, and you find it pretty much everywhere; from the mid-range consumer PC to the high-end enterprise, virtualization is getting lots of play.

Using virtualization 10 or 12 years ago (yes, that long ago) was almost bleeding edge, and forget even suggesting running it in production. Most folks saw virtualization technologies as a nice-to-have, not a need-to-have. More recently, though, data center managers started trying to squeeze ever more utilization out of their existing environments through hardware consolidation and scale-up efforts, only to find that there was no magic bullet: they needed something to take the pain out of these massive projects and increase the success rate of consolidation and scale-up/out implementations. Virtualization was indeed an answer, but many who had considered it almost a toy were still unsure whether it could handle the stress and load of "production" capacity, and so they virtualized only the items they felt were less critical. This is where the recent tidal surge of virtualization has come from, inducing a subsequent wave of "cloud" technology hype.

Virtually There

Where some see a challenge, others see none. Virtualization is a huge boon to companies trying to squeeze every last penny out of their investments and long-term costs, but it requires care and feeding in a somewhat different way than non-virtualized infrastructure. The main reason is mathematics. When consolidating, and when the solution is sized properly, servers wind up on a single pool of resources as virtual machines, or VMs, which require the same or slightly fewer resources than their physical counterparts because all resources are shared from that pool. The math problem comes from something called "over-subscription," whereby a data center manager or administration team intentionally places "too many" guests on the available resources while still creating the VMs with the "proper" resource sizing known to be needed in the physical world. This is basically a bet that not all the resources on a given host will be needed at the same time, so there will always be free resources available. The other part of the math is simply the number of servers now placed on the VM host.

In a scenario where 100 physical servers are consolidated down to 2 or 4 VM hosts, all of those servers, if not reduced in number, will be housed on the VM hosts, and then additional services may be needed, resulting in new servers being created, or "spun up" as we say in the biz. With over-subscription this activity is almost limitless unless safeguards are put in place (like using a hierarchy of resource pools and assigning rules against them to stem the tide of over-subscription). Because of this ability to oversubscribe, data centers end up with a very rich, organically grown sprawl of virtual machines that ultimately increases the burden on the IT staff who support the infrastructure. The back-of-the-envelope math below shows how quickly the numbers add up.
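
To make the math problem concrete, here is a quick calculation using made-up but plausible numbers for the 100-server example; the host core counts, per-VM vCPU sizing, and ratio cap are all assumptions you would replace with your own figures.

    # Over-subscription arithmetic for the hypothetical 100-server consolidation.
    hosts = 4
    cores_per_host = 32            # physical cores per VM host (assumed)
    vms = 100
    vcpus_per_vm = 4               # the "proper" physical-world sizing (assumed)

    physical_cores = hosts * cores_per_host        # 128 cores in the pool
    allocated_vcpus = vms * vcpus_per_vm           # 400 vCPUs promised to guests
    ratio = allocated_vcpus / physical_cores       # 3.125 vCPUs per core

    print(f"{allocated_vcpus} vCPUs on {physical_cores} cores = {ratio:.2f}:1")

    # The bet: peak concurrent demand never needs all 400 vCPUs at once.
    # A simple safeguard is a per-pool cap on the ratio, e.g.:
    MAX_RATIO = 4.0
    if ratio > MAX_RATIO:
        raise RuntimeError("resource pool over-subscribed beyond policy")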

If you do not see this challenge coming, you have just sprung a trap on yourself. This is a common scenario for folks who begin to enjoy the ease of use and increased capability: servers, everywhere. It means that once servers go from physical to virtual, the DR, backup, and ultimately recovery and management scenarios must change as dramatically as the infrastructure has.

Displacement

Another challenge that arises from all this consolidation and capability is more of a people problem: job displacement. Responsibilities that were once tied to a specific group or person within the IT staff now become more of a shared responsibility. The environment is sprawling with virtual server fauna, if you will, and everyone who works in it has to know more about what everyone else in IT is doing. Once this happens and you have a broad set of resources with which to manage everything, the next step is often a reduction of those resources, because the IT staff has become commoditized to a degree, depending of course on the level of specialization within the infrastructure.

Adopting this paradigm is a huge mistake.

The resources you have before you virtualize can be readily reassigned to support individual business units or groups of them, improve service times, increase profits, prevent outages, and perform many other crucial IT functions. What often happens, though, is that the numbers look so good after consolidating that someone in upper management suggests saving "just a little more" capital or OpEx through headcount reduction. It's a short-term win but a long-term loss: inevitably the good people leave and the mediocre stay, and without significant investment in human capital (i.e., training), businesses suffer brain drain, and their newfound freedom, flexibility, and competitive edge are significantly reduced. Math strikes again.

Cloud … Could I?

Profit center, cost center; how about competitive center? Many companies have developed a way of using virtualization as more than a means to an end: an ongoing process that accomplishes many different kinds of goals, employing more than one kind of virtualization, each for the right reason, both to avoid putting all their corporate eggs in one basket and to remain nimble in the face of an ever-changing landscape. The hype calls it cloud; I call it ubiquity. Omnipresent computing infrastructure that never goes down, never breaks, and never needs maintenance at a general level, which also *never* shows signs of aging or bottlenecks and has unlimited capacity for resiliency and scalability. That is this so-called "cloud." It doesn't exist, anywhere, but we do have parts of it, and someday soon we might actually see a system that accomplishes this omnipresent computing. One key component of that infrastructure is automation. Virtualization infrastructures allow for this type of automation, but only on a small scale without lots of human capital to manage it. Once *simple* automatic deployment and provisioning capabilities are widely available and *very easily* integratable regardless of the virtualization infrastructure, then we may get closer to this "cloud." A rough sketch of what such a hypervisor-agnostic interface could look like follows.
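
As a purely illustrative sketch (none of these class or method names correspond to a real product API), the kind of hypervisor-agnostic provisioning layer described above might look like a single contract with swappable backends, each wrapping its own tooling such as vmrun, Hyper-V PowerShell, or virsh:

    # Purely illustrative sketch of a hypervisor-agnostic provisioning interface.
    # Nothing here maps to a real product API; every name is hypothetical.
    from abc import ABC, abstractmethod


    class Provisioner(ABC):
        """One contract, many hypervisors."""

        @abstractmethod
        def deploy(self, template: str, name: str, cpus: int, mem_gb: int) -> str:
            """Clone a template into a new guest and return its identifier."""

        @abstractmethod
        def destroy(self, guest_id: str) -> None:
            """Tear the guest back down when it is no longer needed."""


    class VMwareProvisioner(Provisioner):
        def deploy(self, template, name, cpus, mem_gb):
            # Would shell out to the VMware tooling and adjust .vmx sizing here.
            raise NotImplementedError

        def destroy(self, guest_id):
            raise NotImplementedError


    def spin_up_tier(backend: Provisioner, count: int):
        """Deployment logic stays the same no matter which backend is plugged in."""
        return [backend.deploy("web-template", f"web{i:02d}", cpus=2, mem_gb=4)
                for i in range(count)]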

You can use virtualization today to usher in a new era of computing, and that is the virtualization strategy you need to adopt in order to provide the level of service each business unit needs. First, work on the physical-to-virtual migration and decide which services will be consolidated down to which virtual machines; keep the relative number of machines low, but scale up their virtual beefiness. This allows a greater degree of flexibility without the need for massive recovery infrastructure jobs, and so on. Keeping the relative number of machines low will improve responsiveness for the applications that reside on them, simplify management, and allow easy use of technologies such as snapshots.

Next, prepare for scalability by adopting a Lego-block type of approach when budgeting for and building out the infrastructure. This will help maintain the environment's current resource utilization curve and keep performance in check, since the bottlenecks are well known after the P2V migration exercise you just performed. Use 10 Gbps network connections where possible and try to keep the hardware footprint to a minimum to avoid large amounts of capital asset depreciation.

(i.e., servers are usually commodity if built small enough; storage might even be the same way, depending on the contents of your "Lego block.")

The Kitchen Sink

Don't forget backup, DR, and remote access capabilities, technologies, and process updates. As I already outlined, these are usually crucial, and if you downsize physical servers and increase your capability you will rely on backup, recovery, DR, and remote access far more than you ever did before. There are no consoles to log on to, just remote terminals. You might as well find a way to add this to your arsenal of company competition capabilities. (Say that five times fast.)

Don't forget the user base. With all these changes they might start feeling like you don't care about their needs and complain that "IT doesn't listen to us" or "IT is so stupid, they just took X server offline and that's where Y app was." User acceptance testing (UAT) is often critical in determining how well IT is performing against business goals, especially when you're talking about a large effort like consolidation. Users are fickle and finicky, but if you listen to them and include them in large transitions like this, they might surprise you by actually helping IT efforts when virtualizing, consolidating, and recovering.

Don't forget where you're headed. Keeping to the plan once it is in progress is important, and anyone will tell you that scope creep is a huge problem, but so is simply losing focus on the end goal:

Transforming your existing environment from a cost center to a competitive center.

Keeping your focus, and communicating throughout the transition about your long-term goals and strategy as a business unit like all the rest, will help you get there, let you save the day once in a while, and allow you to increase your budget, so long, of course, as you can show how the strategy allows the business to make more profit. Doing this will make everyone happy and keep everyone else off your tail.

Don't forget the desktops either! Virtualizing desktops is a great win for a company that wants to virtualize its infrastructure, but it has to be done as a discrete process, either before or after the major infrastructure changes, because VDI, as it is known, can either drive the changes I'm writing about or be a product of them. How you proceed into that area is up to you, but DR gets a whole lot easier if you do go down the VDI path.

Finally

We're not talking about keeping up with the Joneses; we're talking about going someplace the Joneses aren't even going: IT that helps the company make money because it is truly integrated into every facet of what the company is doing (where it makes sense, of course), and that introduces a level of resiliency, flexibility, and nimbleness that keeps a company moving ever forward rather than stagnant and aging. It does mean a lot of changes, but it can be done. It takes time, effort, a lot of effort, even more effort, money, some more effort, a huge amount of communication, consulting time, buy-in, dedication and commitment, and good people to manage all those changes and give the business a good comfort level.

Did I mention effort? It can take months to roll out a highly virtualized environment in a large enterprise, so it should be looked at as a process, but just because it takes less time in a smaller environment doesn’t mean the same methodology shouldn’t apply. It does.

Plan, prepare, deploy, manage, upgrade, scale and repeat.

After all this, one day you'll ask yourself why you didn't virtualize sooner.

 

 

How-To: Install Snow Leopard as a VM [UPDATED]

If you're like me, you might be sitting in your kick-ass installation of Windows 7 and think, "I miss *good multiple desktops* and my Dashboard on the Mac." Well, this might just be your lucky day, for real. If you can get hold of VMware Player or VMware Workstation 7, you can install OS X Snow Leopard in a virtual machine with this nifty guide.

Installing Snow Leopard as a virtual machine

I think this is a fairly simple, low-difficulty sort of install for OS X, without hacking things over and over. Keep in mind, this doesn't mean you'll have awesome performance, but it sounds like you'll be mostly supported and it should work pretty well once installed. I'm looking forward to trying this myself!

[UPDATE]
I've successfully performed two installations of Snow Leopard on my HP 8530w system, but that's technically another story.

I used this procedure from the article posted above:

  • Boot up your copy of VMware Workstation 7, go to File >> Open, and select the .vmx file that you just downloaded.
  • Choose “Edit virtual machine settings”, go to CD/DVD (IDE), select “Use ISO image:”, and choose the Darwin_Snow.iso image you downloaded. AMD-based PCs should use the Darwin_Snow_Legacy.iso image instead.
  • Save your settings and then press the play button to boot the VM.
  • After booting, you may get some errors about BIOS reading; ignore them.
  • At the bottom of Workstation 7, there is a CD icon. Click that, go to settings, and it will bring you to the CD/DVD settings panel.
  • Once at the panel, insert your Snow Leopard DVD and select "Use a physical drive: Auto Detect".
  • Save your settings.
  • Click the CD/DVD icon again and choose connect. The OSX installer should boot.
  • Run the installer and reboot your virtual machine. Be sure to reset your CD/DVD settings to the Darwin_Snow image (a scripted way to do that swap is sketched after this list). You may have to power cycle the machine a few (10) times before it boots properly.
  • Enjoy.
So for those of you hoping to see some OS X booting on your system, at least via the how-to above, here's some hope for you.
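
For anyone who would rather script the repeated CD/DVD swap than click through the settings panel each time, here is a rough sketch that points a powered-off VM back at the Darwin_Snow image by rewriting the standard ide1:0.* entries in the .vmx file. The device name and the paths are assumptions; check your own .vmx to see which virtual device the CD/DVD drive actually uses, and keep a backup of the file before editing it.

    # Sketch: aim a powered-off VM's virtual CD/DVD at a given ISO by rewriting
    # the ide1:0.* keys in its .vmx file. Device name and paths are assumptions.
    from pathlib import Path


    def set_cd_iso(vmx_path, iso_path, device="ide1:0"):
        vmx = Path(vmx_path)
        wanted = {
            f"{device}.present": "TRUE",
            f"{device}.deviceType": "cdrom-image",
            f"{device}.fileName": str(iso_path),
            f"{device}.startConnected": "TRUE",
        }
        # Drop any existing lines for these keys, then append the new values.
        kept = [line for line in vmx.read_text().splitlines()
                if line.split("=")[0].strip() not in wanted]
        kept += [f'{key} = "{value}"' for key, value in wanted.items()]
        vmx.write_text("\n".join(kept) + "\n")


    # Example (hypothetical paths), before one of those repeated power cycles:
    # set_cd_iso("C:/VMs/SnowLeopard/SnowLeopard.vmx", "C:/isos/Darwin_Snow.iso")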