Implement Virtualization for Application High-Availability

On its own, the term "high availability" does not mean your application never goes down; that would be "always on." Highly available applications, however, require only a few simple things you might already have within your infrastructure. If you run a virtual environment, achieving application high availability is just a few clicks or scripts away if you choose to roll your own. There are also products you can purchase to do similar things, such as Symantec's ApplicationHA package, a scaled-down version of Veritas Cluster Server. But if what you need is protection right now, today, you can get started by using the technologies built into your hypervisor.

HA Process

Regardless of the type of hypervisor, you can enable guest failover, movement, takeover, replication, and so on. This can be done through a series of steps and mostly involves mild scripting.

For all VM hosts there is some kind of scripting hook that can be used to achieve the following series of steps:

  1. Verify whether or not the Guest VM is on-line and booted
  2. Verify VM related settings and store/export them if needed
  3. Control the VM (quiesce, freeze, power-off, shutdown, etc) to prepare for movement
  4. Backup or copy the VM to new storage or a new host
  5. Restore VM related settings/import them if needed
  6. Spin up the VM
  7. Verify whether or not the Guest VM is on-line, booted, and active on the new host or location

These basic steps can be achieved through scripting, either by utilizing each hypervisor's APIs or by performing each and every step with standard OS-level scripting automation (logging in as a user, executing a power-off or other set of commands to prepare the VM, then logging in after the VM has been powered on again to validate that it is up and running properly and to start applications).
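As a rough illustration, the Python sketch below walks through those seven steps for a libvirt/KVM environment using the virsh CLI. The host names, guest name, and file path are hypothetical, shared storage is assumed for the guest's disks, and other hypervisors would swap in their own CLI or API calls.

```python
#!/usr/bin/env python3
"""Minimal VM failover sketch: verify, export, shut down, re-define, and restart a guest.

Assumes a libvirt/KVM environment with the virsh CLI, shared storage for the guest's
disks, and passwordless SSH between hosts. All names and paths are placeholders.
"""
import subprocess

GUEST = "app-vm01"            # hypothetical guest domain name
SOURCE_HOST = "kvm-host-a"    # hypothetical source hypervisor
TARGET_HOST = "kvm-host-b"    # hypothetical target hypervisor
XML_PATH = f"/tmp/{GUEST}.xml"

def virsh(host, *args):
    """Run a virsh command against a remote libvirt host over SSH."""
    cmd = ["virsh", "-c", f"qemu+ssh://{host}/system", *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# 1. Verify the guest is online and booted on the source host
state = virsh(SOURCE_HOST, "domstate", GUEST).strip()
print(f"{GUEST} on {SOURCE_HOST}: {state}")

# 2. Export the VM definition so it can be re-created elsewhere
with open(XML_PATH, "w") as f:
    f.write(virsh(SOURCE_HOST, "dumpxml", GUEST))

# 3. Gracefully shut the guest down to prepare for the move
#    (a real script would poll domstate until the guest reports "shut off")
if state == "running":
    virsh(SOURCE_HOST, "shutdown", GUEST)

# 4./5. Define the guest on the target host from the exported settings
#       (the disks themselves are assumed to live on shared storage)
virsh(TARGET_HOST, "define", XML_PATH)

# 6. Spin the guest up on the target host
virsh(TARGET_HOST, "start", GUEST)

# 7. Verify it is online and active in its new location
print(f"{GUEST} on {TARGET_HOST}: {virsh(TARGET_HOST, 'domstate', GUEST).strip()}")
```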

Developing this type of capability can be fairly easy, but it does take some time, so be aware of any tools you might need. Because VMs exist in containers that can be moved pretty much anywhere on the same network segment or route, that mobility is all that is required, so long as the guest VM can run on the target platform. You may of course need to integrate your host monitoring software in order to execute the scripts automatically, but if you don't have that level of monitoring in your environment, you can script that too, all with assets you already own. The main investment is time and testing to ensure the environment is sound after a move or failover.
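If you have no monitoring layer to drive this automatically, even the watchdog can be a small script. Here is a bare-bones sketch under the same caveats: the guest address and thresholds are made up, Linux-style ping flags are assumed, and the failover() function is just a placeholder for the sequence sketched above.

```python
#!/usr/bin/env python3
"""Tiny availability watchdog sketch: ping a guest and trigger failover on repeated failure.

The guest address, interval, and failover() hook are placeholders; in a real
environment this role is usually played by your monitoring platform.
"""
import subprocess
import time

GUEST_ADDRESS = "app-vm01.example.com"  # hypothetical guest address
CHECK_INTERVAL = 30                      # seconds between health checks
FAILURE_THRESHOLD = 3                    # consecutive failures before acting

def guest_is_up(address):
    """Return True if the guest answers a single ICMP ping (Linux ping flags)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", address],
                            capture_output=True)
    return result.returncode == 0

def failover():
    """Placeholder for the export/shutdown/define/start sequence sketched earlier."""
    print("Guest unreachable -- kicking off failover script")

failures = 0
while True:
    if guest_is_up(GUEST_ADDRESS):
        failures = 0
    else:
        failures += 1
        if failures >= FAILURE_THRESHOLD:
            failover()
            break
    time.sleep(CHECK_INTERVAL)
```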

Express your thoughts and leave a comment!

 

Microsoft Windows Server 2012 Hyper-V Changes

Microsoft's recent release of Windows Server brings cutting-edge changes to Hyper-V that will finally give Microsoft more confidence in espousing the virtues of Hyper-V and provide sales fodder for making a case against virtualizing with VMware. Hyper-V has been completely rewritten in this new version of Windows Server and allows for things like simplified Live Migration, templating, multi-tenancy, and deduplication technology, just to name a few. Evidence of this first came out when Microsoft started publishing comparison documents showing the vast improvements between its own Windows Server 2008 R2 (at the time a huge improvement in Hyper-V in its own right) and the then-unreleased Windows Server 2012. This document, Windows Server Comparison, is a great example of how seriously Microsoft took its renewed fervor.

From the PDF for example:

[Image: Microsoft Hyper-V 2012 PDF clip]
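As one example of how much simpler the new feature set is to drive, Live Migration can be kicked off with a single cmdlet. The sketch below shells out to PowerShell's Move-VM from Python; the VM and host names are hypothetical, and it assumes Windows Server 2012 Hyper-V with live migration already configured between the hosts (in practice you would more likely run the cmdlet directly in PowerShell).

```python
#!/usr/bin/env python3
"""Sketch: trigger a Hyper-V Live Migration from Python by shelling out to PowerShell.

Assumes Windows Server 2012 (or later) with the Hyper-V PowerShell module and
live migration enabled between hosts. Names below are placeholders.
"""
import subprocess

VM_NAME = "app-vm01"           # hypothetical VM name
DESTINATION = "hyperv-host-b"  # hypothetical destination Hyper-V host

# Move-VM is the Hyper-V cmdlet that performs the live migration.
ps_command = f"Move-VM -Name '{VM_NAME}' -DestinationHost '{DESTINATION}'"

subprocess.run(["powershell.exe", "-NoProfile", "-Command", ps_command], check=True)
print(f"Requested live migration of {VM_NAME} to {DESTINATION}")
```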

 

As many therapists tell you when you're looking to change something: look at what you need to fix first, then start working on competing. If you read the linked document, you'll see how serious Microsoft is. Additionally, there is now a lot of competitive positioning coming from VMware directly against Microsoft, if for no other reason than that Microsoft is taking itself seriously in a way it never did before with regard to Hyper-V. Microsoft has vastly improved, simplified, and consolidated the plethora of old Hyper-V documentation into a much more approachable and digestible form that is actually useful and understandable. It's meaningful. Microsoft has fired a warning shot across the bow of every competing virtualization platform (in the x86/commodity space, anyway) and is signaling to its current Windows Server customer base that this new effort deserves their attention, with some alacrity, for the future of Windows Server and built-in virtualization.

The jury is still out, since there is a lot of testing yet to do in the enterprise space from an administrative and operational perspective, but if Microsoft's follow-through on the product is anything like its preparation has led us to believe, then the likes of VMware, Citrix, Xen, Red Hat, and others will indeed have something to worry about.

The only gap left to jump will be the concept of putting all your eggs into the Microsoft basket, but Microsoft is working on that too.

Windows "Ocho" sales ache ahead of Black Friday

Following a previous article I wrote right here about Windows 8's interface and what to expect, there is now much more evidence that Microsoft is indeed feeling the pinch.

An article over at Apple Insider highlights this phenomenon and comes after the recent departure of Steven Sinofsky, who, until recently, headed the Windows division at Microsoft.

Many of my friends and colleagues have little understanding of the Windows 8 interface as yet, and those I have discussed the matter with all seem to be taking a wait-and-see approach, especially where enterprise environments are concerned. Speaking of enterprise environments, most are only now rolling out Windows 7, driven by the impending end of Windows XP, which is still a staple of many environments today.

If businesses and enterprises can make heads or tails of Windows 8, then I predict the broad general user base of consumers and power users will too.

Until then, despite Microsoft’s wishes, we’ll all just have to wait and see.

Creating a Virtualization Strategy

The use of virtualization is pretty well accepted these days, and you find it pretty much everywhere; from the mid-range consumer PC to the high-end enterprise, virtualization is getting lots of play.

Using virtualization 10 or 12 years ago (yes, that long ago) was almost bleeding edge, and forget even suggesting running it in production. Most folks saw virtualization technologies as a nice-to-have, not a need-to-have. More recently, though, data center managers started trying to squeeze ever more utilization out of their existing environments through hardware consolidation and scale-up efforts, only to find there was no magic bullet; they needed something to take the pain out of these massive projects and increase the success rate of consolidation and scale-up/out implementations. Virtualization was indeed an answer, but many who had considered it almost a toy were still unsure whether it could handle the stress and load of "production" capacity, and so they only virtualized what they felt was less critical. This is where the recent tidal surge of virtualization has come from, inducing a subsequent wave of "Cloud" technology hype.

Virtually There

Where some see a challenge, others don't see any. The use of virtualization is a huge boon to companies trying to squeeze every last penny out of their investments and long-term costs, but it requires care and feeding in a somewhat different way than non-virtualized infrastructure. The main reason is mathematics. When consolidating, and when the solution is sized properly, servers wind up on a single pool of resources as virtual machines, or VMs, which require the same or slightly fewer resources than their physical counterparts because all resources are used as a pool. The math problem comes from something called "oversubscription," whereby a data center manager or administration team will intentionally place "too many" guests on the available resources while still creating the VMs with the "proper" resource sizing known to be needed in the physical world. This is basically a bet that not all the resources on a given host will need to be used at the same time, and that there will therefore always be free resources available. Another part of the math is simply the number of servers now placed on the VM host.

In a scenario where 100 physical servers are consolidated down to 2 or 4 VM hosts, those servers, if not reduced in number from 100, will all be housed on the VM hosts, and then additional services may be needed, resulting in new servers being created, or "spun up" as we say in the biz. With oversubscription this activity is almost limitless unless safeguards are put in place (like using a hierarchy of resource pools and assigning rules against them to stem the tide of oversubscription). Because of the ability to oversubscribe, data centers develop a very rich, organically grown sprawl of virtual machines that ultimately increases the burden on the IT staff supporting such infrastructures.
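To make the math concrete, here is a small back-of-the-envelope sketch for that 100-servers-onto-a-few-hosts scenario; every number in it is invented purely for illustration.

```python
#!/usr/bin/env python3
"""Back-of-the-envelope oversubscription math for a consolidation scenario.

All counts are illustrative: 100 guests kept at their physical-world sizing,
consolidated onto 4 hosts.
"""
GUESTS = 100           # VMs carried over from the physical environment
VCPUS_PER_GUEST = 4    # "proper" sizing kept from the physical world
HOSTS = 4              # consolidated VM hosts
CORES_PER_HOST = 32    # physical cores per host

total_vcpus = GUESTS * VCPUS_PER_GUEST
total_cores = HOSTS * CORES_PER_HOST
ratio = total_vcpus / total_cores

print(f"{total_vcpus} vCPUs on {total_cores} physical cores")
print(f"Oversubscription ratio: {ratio:.1f}:1")
# The bet is that not all guests peak at once; every new VM that gets
# "spun up" pushes this ratio higher unless resource-pool rules rein it in.
```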

If you don't see this challenge coming, you have just sprung a trap on yourself. This is a common scenario in which folks who begin to enjoy the ease of use and increased capability find themselves: servers, everywhere. It means that once servers go from physical to virtual, the DR, backup, and ultimately recovery and management scenarios must change as dramatically as the infrastructure.

Displacement

Another challenge that arises from all this consolidation and capability is more of a people problem: job displacement. Responsibilities that were once tied to a specific group or person within the IT staff now become shared. The environment is sprawling with virtual server fauna, if you will, and everyone who works in it has to know more about what everyone else in IT is doing. Once this happens and you have a broad set of people able to manage everything, the next step is often a reduction of those resources, because IT staff have become commoditized to a degree, depending on the level of specialization within the infrastructure, of course.

Adopting this paradigm is a huge mistake.

The staff you have before you virtualize can be readily reassigned to support individual business units or groups of them, improve service times, increase profits, prevent outages, and handle many other crucial IT functions. What often occurs, though, is that the numbers look so good after consolidating that someone in upper management suggests saving "just a little more" capital or OpEx through headcount reduction. It's a short-term win but a long-term loss: inevitably the good people leave and the mediocre stay, and without significant investment in human capital (i.e., training), businesses often suffer brain drain, and their newfound freedom, flexibility, and competitive edge are reduced significantly. Math strikes again.

Cloud … Could I?

Profit center, cost center; how about competitive center? Many companies have come to use virtualization as more than a means to an end: an ongoing process for accomplishing many different kinds of goals, employing more than one kind of virtualization for the right reasons, both to avoid putting all their eggs in one basket as a company and to remain nimble in the face of an ever-changing landscape. The hype calls it Cloud; I call it ubiquity. Omnipresent computing infrastructure that never goes down, never breaks, never needs maintenance, *never* shows signs of aging or bottlenecks, and has unlimited capacity for resiliency and scalability; that is this so-called "cloud." It doesn't exist, anywhere, but we do have parts of it, and someday soon we might actually see a system that accomplishes this omnipresent computing. One key component of that infrastructure is automation. Virtualization infrastructures allow for this type of automation, but on a small scale, without lots of human capital to manage it. Once *simple* automatic deployment and provisioning capabilities are widely available and *very easily* integrable regardless of the virtualization infrastructure, we may get closer to this "Cloud."

You can use virtualization today to usher in a new era of computing, and that is the virtualization strategy you need to adopt in order to provide the level of service each business unit needs. First, work on the physical-to-virtual migration and decide which services will be consolidated onto which virtual machines; keep the relative number of machines low but scale up their virtual beefiness. This allows a greater degree of flexibility without the need for massive recovery infrastructure jobs, and so on. Keeping the relative number of machines low will improve the responsiveness of the applications that reside on them, simplify management, and allow easy use of technologies such as snapshots.

Next, prepare for scalability by adopting a Lego-block approach when budgeting for and building out the infrastructure. This helps maintain the environment's current resource utilization curve and keeps performance in check, since bottlenecks are well known after the P2V migration exercise you just performed. Use 10 Gbps network connections where possible and try to keep the hardware footprint to a minimum to avoid large amounts of capital asset depreciation.

(i.e., servers are usually commodity items if built small enough; storage might be the same way, depending on the contents of your "Lego block.")
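To illustrate the Lego-block budgeting idea, the sketch below estimates how many identical building blocks would be needed as demand grows. The block sizes, utilization target, and growth figures are all invented; a real plan would use the utilization data gathered during your P2V exercise.

```python
#!/usr/bin/env python3
"""Rough 'Lego-block' capacity sketch: how many identical blocks cover projected demand?

All numbers below are illustrative placeholders.
"""
import math

# One hypothetical building block: a small commodity host plus its share of storage
BLOCK_CORES = 32
BLOCK_RAM_GB = 256
TARGET_UTILIZATION = 0.70   # leave headroom so the utilization curve stays healthy

def blocks_needed(demand_cores, demand_ram_gb):
    """Return the number of blocks required to serve demand at the target utilization."""
    by_cpu = demand_cores / (BLOCK_CORES * TARGET_UTILIZATION)
    by_ram = demand_ram_gb / (BLOCK_RAM_GB * TARGET_UTILIZATION)
    return math.ceil(max(by_cpu, by_ram))

# Example: demand measured today, with a projected 25% yearly growth for three years
cores, ram = 180, 1500
for year in range(1, 4):
    cores, ram = cores * 1.25, ram * 1.25
    print(f"Year {year}: {blocks_needed(cores, ram)} blocks "
          f"({cores:.0f} cores, {ram:.0f} GB RAM demanded)")
```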

The Kitchen Sink

Don't forget backup, DR, and remote access capabilities, technologies, and process updates. As I already outlined, these are usually crucial, and if you downsize physical servers and increase your capability, you will rely on backup, recovery, DR, and remote access far more than you ever did before. There are no consoles to log on to, just remote terminals. You might as well find a way to add this to your arsenal of company competition capabilities (say that five times fast).

Don't forget the user base. With all these changes they might start feeling like you don't care about their needs and complain that "IT doesn't listen to us" or "IT is so stupid; they just took X server offline and that's where Y app was," and so on. User acceptance testing (UAT) is often critical in determining how successfully IT is performing against business goals, especially when you're talking about a large effort like consolidation. Users are fickle and finicky, but if you listen to them and include them in large transitions like this, they might surprise you by actually helping the IT effort when virtualizing, consolidating, and recovering.

Don't forget where you're headed. Keeping to the plan once it is in progress is important, and anyone will tell you that scope creep is a huge problem, but so is simply losing focus on the end goal:

Transforming your existing environment from a cost center to a competitive center.

Keeping your focus, and communicating through the transition as well as your long-term goals and strategy as a business unit like all the rest, will help you get there, save the day once in a while, and allow you to increase your budget; that is, of course, so long as you can show how the strategy allows the business to make more profit. Doing this will make everyone happy and keep everyone else off your tail.

Don't forget the desktops either! Virtualizing desktops is a great win for a company that wants to virtualize infrastructure, but it has to be done as a discrete project, either before or after the major infrastructure changes, for two reasons: VDI, as it is known, can either drive the changes or be a product of the changes I'm writing about. How you proceed into that area is up to you, but DR will get a whole lot easier if you do go down the VDI path.

Finally

We're not talking about keeping up with the Joneses; we're talking about going someplace the Joneses aren't even going: IT that helps the company make money because it is truly integrated into every facet of what the company is doing (where it makes sense, of course), introducing a level of resiliency, flexibility, and nimbleness that keeps a company moving ever forward rather than stagnant and aging. It does mean a lot of changes, but it can be done. It takes time, effort, a lot of effort, even more effort, money, some more effort, a huge amount of communication, consulting time, buy-in, dedication and commitment, and good people to manage all those changes and give the business a good comfort level.

Did I mention effort? It can take months to roll out a highly virtualized environment in a large enterprise, so it should be looked at as a process, but just because it takes less time in a smaller environment doesn’t mean the same methodology shouldn’t apply. It does.

Plan, prepare, deploy, manage, upgrade, scale and repeat.

After all this one day you’ll ask yourself why you didn’t virtualize sooner.

 

 

Look Here: Windows 8 has arrived … sort of.

Depending on your affinity for different technologies in today's cultural environment, you will have a different take on Windows 8.

If you use Windows day to day, I think you will either laud it as a more usable and accessible set of features or be left wondering what happened to your beloved Windows desktop.

There is actually a lot to like about Windows 8 and the “metro” start panel, but it is *quite* disorienting for those not up to speed with the “latest advancement” in Windows.

Gartner, CNET, and ZDNet have all published articles echoing this sentiment, like This Story.

Split Brain Syndrome

The massive change for Windows stems from the fact that Microsoft has finally figured out its strategy for the "post-PC era" of computing with the release of the recent Surface tablet.

You see, Windows 8's Start panel is much easier for those of us who use tablets, or want to, as the primary computing device in everyday work, as the moniker "post-PC era" is meant to convey. Windows 8 also retains the ability to deliver, to those users who need it, a Windows desktop, Explorer, and so on. On top of those two integrated interfaces there is a sort of third interface, the window/task management methodology, which itself has two possible methods of access. This can leave some users wondering what those at Microsoft were thinking, but if you consider the touchy-feely interface of a tablet, it makes a lot of sense (no pun intended, but probably should have been).

All this tablet-ready integration is great, but one thing Microsoft is also trying to accomplish is to push its methods of access in one direction: toward the tablet.

Microsoft is surrounded by challengers, enemies if you will, all eating away at its precious PC base. It's a beat-'em-or-join-'em kind of scenario, but Microsoft has employed the one thing it is really good at when "joining them": its "embrace and extend" mantra. It remains to be seen whether this will be sustainable in the long term, but for now, Microsoft is trying to at least get to the playing field to see how level it already is without its participation. It's still climbing, in my opinion.

Ease Your Transition

One thing I have experienced that really helps the transition to Windows 8 is using a multitouch trackpad such as Apple's Magic Trackpad. This one thing alone can greatly increase usability, because it consolidates the functionality of the touchy-feely interface and helps reduce the split-brain interface problems I outlined. I have found that using a multitouch interface makes everything much better. One problem, though, is that certain kinds of power users and games still require a mouse. So for a standard PC to truly take advantage of things, you now need three interface devices: a keyboard, a mouse, and a multitouch surface.

So, what to do?

1. Go with the “Surface” or add hardware?

2. Jump ship (get a Mac and thereby upgrade hardware, UI, etc.)

3. Wait and see

4. Hack Windows 8 into looking like Windows 7

Good luck everyone.

Let me know how you like Windows 8 and your experiences with it in the comments!

Why multi-booting Android on PCs is wrong-headed.

The Android operating system has a lot going for it and has become a very useful player in the mobile device landscape. With the recent advances leading up to the 3.0 (Honeycomb) release, the OS has gone from its simpler, sometimes shoehorned one-size-fits-all approach to a one size that fits all because it is truly meant to, sort of.

Now that 3.0 has arrived and has finally been delivered to consumers on tablets, phones, and other interesting mobile devices, many companies are even suggesting it should boot on PCs too. This is truly a mistake. It's fine for hobbyists to have access to the OS to boot on their PCs, but it would be much the same as dual-booting, say, WebOS on a PC. Really kind of pointless, except for marketers at these companies getting to say "we have our own flavor of Android or some OS and it comes with every PC we sell." It's a good marketing game, but in the end it offers no true benefit to consumers, as nothing has changed with the delivery of this OS except more pre-used space on a bundled PC from a branded PC maker.

Adding touch capabilities definitely makes Android 3.0 much better and more up to date, as do the speed, graphics, and many other optimizations, like increased hardware support, that this new version offers. So you'd think that dual-booting on a touchscreen PC like the HP TouchSmart might make at least a little more sense. Personally, I still say no, not even for the recently acquired WebOS.

The reason I’m saying this is wrong is very simple indeed, and it comes down to one word: Revolution

The addition of such OSes is just that: extra. Fluff. Flab. Superfluous. Much like the appendix in our bodies, it doesn't make much sense overall, causes problems if it gets messed up, and ultimately has to be removed if it does. Dual-booting PCs into an OS the customer didn't order will surely expose them to it, but the question is whether we are exposing them the way riders on a packed subway car are exposed to someone with the flu, or the way our children are exposed when we take them to a museum and provide lots of explanation and hand-holding.

Adding another OS to the PC landscape is a great idea, but one that has surely been rushed. Consumers and PC makers will both have to adjust to the OS once this starts happening, but if it fails in the slightest, this type of integration will be viewed as a failure for the same reason most of us still run Windows and Mac OS today, and not Linux. That's not to say North America doesn't have a fairly large base of Linux desktops, but the average consumer will stick with familiarity and ease of use, often one and the same.

So what can these companies do to get it right? I think it's simple, and I'll say it here: start a revolution. Don't just say how cool this OS is, or what it can do; use that old Apple motto to seize the opportunity and "think different." Go another direction and actually innovate, invent, and discover things you can offer people to make our lives easier. Not this "add a different topping on the same sandwich" approach, calling it better or disguising it as choice. It isn't. I don't usually rant too much about this kind of stuff, but like so many examples before, we live in a time when there is another opportunity to change how people who use computers and devices think, or how those who don't might be enticed to. Otherwise those people will change anyway, and these companies will be left behind trying to catch up a decade later.

The ball is now in your court, HP, DELL, IBM, SAMSUNG, MOTOROLA, and the countless others out there trying this approach. Bring real changes, or the world will change without you.

[UPDATE] Mac Office 2011 SP1 not installing on some systems?

It appears that some installations of Microsoft Office for Mac 2011 are either bitten by a bug or the SP1 installer is faulty: both the DMG and the auto-update versions fail to install, stating that the proper software was not found for updating. I attempted this on an iMac and a hackintosh PC, running 10.6.3 and 10.6.7, with the same result. I have previously updated Office 2011 without issue, but this SP1 seems to be broken. Has anyone else experienced this?

[Update 1]

I found this very (un)-helpful link that also shows this is happening to others:

http://answers.microsoft.com/en-us/office/forum/officeversion_other-office_install/office-mac-2011-unable-to-install-sp1-update/40d29dc1-2045-48b0-baf9-39e03689aac0