Apple’s Mapocalypse continues: Google Maps on iOS

Late last night, just before midnight EST, Apple graciously allowed Google's Maps application through the firewall and out to the general App Store consuming public.

One key addition, of course, is turn-by-turn voice-guided directions, which work much like Apple's own Maps. You also get 3D buildings, 2D/3D views (though not *the same* as Apple's built-in maps), public transit, Street View, and honestly even more features. And one of the nicest things about this maps app, if the fact that it's here isn't enough for you, is that you get all of those awesome search capabilities that Google just brings by default.

Lastly, Google Maps for iOS provides one thing not seen before: synchronization between devices, something Apple has yet to offer for maps specifically. Overall, I think this is a massive thumbs up, and I can't tell you how much nicer it is down in Manhattan than Apple Maps. No really, I can't, since I haven't tried it in person yet, but I can tell you that addresses in Manhattan, at least, show up exactly where they should. That feature, as any Australian will tell you, is indeed priceless.

One final note on the Apple Maps subject: I still like Apple Maps for some very good reasons, one being that it integrates directly with Siri, and I happen to love using Siri for many things. I also like its 3D view much better than Google Maps', but when it comes to accuracy, sometimes it's really, really important to get it right the first time. The gauntlet has been thrown and answered, reluctantly; now we'll see whether both Google and Apple are up to the challenge of cooperating.

Get your Google Maps in the iTunes Store.

Also, others have written cool stories; here's one from Lifehacker that I'm sure a lot of you will like.

Implement Virtualization for Application High-Availability

On its own, the term "high availability" does not mean your application never goes down; that would be "always on." Highly available applications require only a few simple things you might already have within your infrastructure. If you run a virtual environment, then achieving application high availability is just a few clicks or scripts away if you choose to roll your own, but there are also products you can purchase to do similar things, such as Symantec's ApplicationHA package, a scaled-down version of Veritas Cluster Server. If what you need is protection right now, today, then you can get started using the technologies built into a given hypervisor.

HA Process

Regardless of the type of hypervisor, you can enable guest failover, migration, takeover, replication, and so on. It can all be done through a series of steps and mostly involves mild scripting.

For all VM hosts there is some kind of scripting hook that can be used to achieve the following series of steps:

  1. Verify whether or not the Guest VM is on-line and booted
  2. Verify VM related settings and store/export them if needed
  3. Control the VM (quiesce, freeze, power-off, shutdown, etc) to prepare for movement
  4. Backup or copy the VM to new storage or a new host
  5. Restore VM related settings/import them if needed
  6. Spin up the VM
  7. Verify whether or not the Guest VM is on-line, booted, and active on the new host or location

These basic steps can be scripted either by using each hypervisor's APIs or by performing each step through standard OS-level scripting automation (logging in as a user, executing a power-off or other commands to prepare the VM, then logging in after the VM has been powered on again to validate that it is up and running properly and to start applications).
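To make that concrete, here is a minimal sketch in Python of how those seven steps might be wired together. The host names, the VM name, and the "hypervisor-cli" commands are all placeholders, since the real calls depend entirely on which hypervisor you run; treat this as an outline, not a working tool for any particular platform.

```python
import subprocess
import sys
import time

# Hypothetical host and guest names; "hypervisor-cli" is a stand-in for
# whatever CLI or API your actual hypervisor provides.
SOURCE_HOST = "vmhost-a.example.com"
TARGET_HOST = "vmhost-b.example.com"
VM_NAME = "app-vm-01"

def run(cmd):
    """Run a shell command, raise on failure, and return its output."""
    return subprocess.run(cmd, shell=True, check=True,
                          capture_output=True, text=True).stdout

def vm_is_running(host, vm):
    # Steps 1 and 7: query the guest's power state on a given host.
    return "running" in run(f"ssh {host} hypervisor-cli status {vm}")

def export_settings(host, vm):
    # Step 2: capture the VM's configuration for re-import on the target.
    run(f"ssh {host} hypervisor-cli export-config {vm} /shared/{vm}.cfg")

def shutdown_vm(host, vm):
    # Step 3: quiesce and shut the guest down cleanly before moving it.
    run(f"ssh {host} hypervisor-cli shutdown {vm}")

def copy_vm(src, dst, vm):
    # Step 4: copy the VM's disks to the new host or storage.
    run(f"ssh {src} hypervisor-cli copy {vm} {dst}")

def import_and_start(host, vm):
    # Steps 5 and 6: restore the configuration and power the guest on.
    run(f"ssh {host} hypervisor-cli import-config /shared/{vm}.cfg")
    run(f"ssh {host} hypervisor-cli start {vm}")

if __name__ == "__main__":
    if not vm_is_running(SOURCE_HOST, VM_NAME):
        sys.exit(f"{VM_NAME} is not running on {SOURCE_HOST}; aborting.")
    export_settings(SOURCE_HOST, VM_NAME)
    shutdown_vm(SOURCE_HOST, VM_NAME)
    copy_vm(SOURCE_HOST, TARGET_HOST, VM_NAME)
    import_and_start(TARGET_HOST, VM_NAME)
    time.sleep(60)  # give the guest time to boot before checking on it
    print("Failover OK" if vm_is_running(TARGET_HOST, VM_NAME)
          else "Failover FAILED")
```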

Developing this type of capability can be fairly easy, but it does take some time, so be aware of any tools you might need. Because the VMs exist as containers that can be moved pretty much anywhere on the same network segment or route, that is really all that is required, so long as the guest VM can run on the target platform. You may of course need to integrate your host monitoring software in order to execute the scripts automatically, but if you don't have that level of monitoring in your environment you can script that too, all with assets you already own. The main investment is time and testing to ensure the environment is sound after a move or failover.
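If you do need to script the monitoring piece yourself, it doesn't have to be fancy either. Here's a similarly hedged sketch: it watches a hypothetical application port and kicks off the failover script above after a few consecutive failures.

```python
import socket
import subprocess
import time

APP_HOST = "app-vm-01.example.com"  # hypothetical guest address
APP_PORT = 443                      # hypothetical application port
CHECK_INTERVAL = 30                 # seconds between health checks
FAILURES_BEFORE_FAILOVER = 3        # avoid failing over on a single blip

def app_is_healthy(host, port):
    """Consider the app healthy if its service port accepts a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=5):
            return True
    except OSError:
        return False

def main():
    failures = 0
    while True:
        if app_is_healthy(APP_HOST, APP_PORT):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURES_BEFORE_FAILOVER:
                # Kick off the failover orchestration from the previous sketch.
                subprocess.run(["python", "failover.py"], check=False)
                break
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    main()
```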

Express your thoughts and leave a comment!

 

Microsoft Windows Server 2012 Hyper-V Changes

Microsoft's recent release of Windows Server brings cutting-edge changes to Hyper-V that will finally give Microsoft more confidence in espousing the virtues of Hyper-V and provide sales fodder for making a case against virtualizing with VMware. Hyper-V has been completely rewritten in this new version of Windows Server and allows for things like simplified Live Migration, templating, multi-tenancy, and deduplication, just to name a few. Evidence of this first appeared when Microsoft began publishing comparison documents showing the vast improvements between its own Windows Server 2008 R2 (at the time a huge improvement in Hyper-V in its own right) and the yet-to-be-released Windows Server 2012. That document, Windows Server Comparison, is a great example of how seriously Microsoft has taken its renewed fervor.

From the PDF, for example:

[Image: Microsoft Hyper-V 2012 PDF clip]

As many therapists will tell you when you're looking to change something: look at what you need to fix first, then start working on competing. If you read the linked document you'll see how serious Microsoft is. Additionally, there is now a lot of competitive positioning coming from VMware directly against Microsoft, if for no other reason than because Microsoft is taking itself seriously in a way it did not in the past with regard to Hyper-V. Microsoft has vastly improved, simplified, and consolidated the plethora of old Hyper-V documentation into a much more approachable and digestible form that is actually useful and understandable. It's meaningful. Microsoft has fired a warning shot across the bow of every competing virtualization platform (in the x86/commodity space, anyway) and is signaling to its current Windows Server customer base that this new effort deserves their attention, with some alacrity, for the future of Windows Server and built-in virtualization.

The jury is still out, since there is a lot of testing yet to do in the enterprise space from an administrative and operational perspective, but if Microsoft's follow-through on the product is anything like its preparation has led us to believe, then the likes of VMware, Citrix, Xen, Red Hat, and others will indeed have something to worry about.

The only gap left to jump will be the concept of putting all your eggs into the Microsoft basket, but Microsoft is working on that too.

Apple’s MapGate: Fallout

The person in charge of Apple's latest iteration of Maps has been let go, as per This Story, but in my opinion there is still a long way to go until Maps is ready. Here is my first-hand account of using Maps compared with many others; I'll be posting a comparison of several mapping apps later this week or early next week.

First, folks complained about Apple Maps' 3D view being all screwed up, and yeah, those kinds of things suck, but I've had a lot of problems where I expect Maps to be able to tell me something and it simply hasn't worked. In the field, like in Manhattan, there are *tons* of problems (Penn Plaza, anyone?), and in NJ looking for a gas station, or something like one, has been a huge pain. For example, BP is both a gas station and a business: searching for the corporate headquarters told me it was a gas station, while the gas station was in fact labeled as the headquarters. Trying to meet someone when time is tight has caused me some hair-raising moments with regard to timeliness.

Simple, stupid, blatant issues like these number in the tens of thousands at least (look how long Google has been at it and there are still mistakes). Not to mention that in NYC you no longer have subway schedules, in NJ no more NJT schedules (sometimes; it depends on the day 😉), poor directions for walking versus driving, and a lack of efficiency (telling you to drive 3 miles up the highway to find someplace to turn around and come back almost 3 miles to the "correct side of the highway" when you could've gone 0.1 mile south and done the same thing).

I've been giving it a fair shot, and driving on major roads has been fine, but it's going to all those places not on the main strip that tends to suck, and when you've sold however many *million* phones, you have that many users with a potentially crappy situation where Maps is concerned. At the very least it's a huge embarrassment for Apple; at worst, lost mind share for the quality they hold dear, resulting in lost customers or lost revenue streams from partners.

When you look at the immediate impact of rolling out a sub-standard maps application the way they did, there are very real business implications, from reputation to valuation, and that is why Williamson was let go, not just because "Maps sucked too bad."

It's because of these issues that I still have 3 other mapping tools on my iPhone, and 4 if you count the Google Maps web link!

FaceTime on unlimited

FaceTime over cellular has long been a sticking point for many of us on AT&T, but as of the last few days there appears to have been some sort of lift of the restriction they previously held in place. Reports have been coming in from several sources around the Internet. Here is one example, a tweet from @MacRumors:

@MacRumors: AT&T Seems to Be Extending FaceTime-Over-Cellular to All Customers http://t.co/MgvAyMaT

I have tested this myself on my iPhone 5 on AT&T, and it works after a reboot; you must be on iOS 6.0.1 or higher!

Update: while this was working yesterday, the feature is restricted again today. That's either really stupid or just sad. Anyone else see it work and then disappear?

Windows "Ocho" sales ache ahead of Black Friday

In a previous article right here I wrote about Windows 8's interface and what to expect; now there is much more evidence that Microsoft is indeed feeling the pinch.

An article over at Apple Insider highlights this phenomenon and comes after the recent departure of Steven Sinofsky, who, until recently, headed the Windows division at Microsoft.

Many of my friends and colleagues have little understanding of the Windows 8 interface as yet, and those I have discussed the matter with all seem to be taking a wait-and-see approach, especially where enterprise environments are concerned. Speaking of enterprise environments, most are just now rolling out Windows 7 because of the soon-to-be-retired Windows XP, which is still a staple of many environments today.

If businesses and enterprises can make heads or tails of Windows 8, then I predict the broad user base of consumers and power users will too.

Until then, despite Microsoft’s wishes, we’ll all just have to wait and see.

Creating a Virtualization Strategy

The use of virtualization is pretty well accepted these days, and you find it pretty much everywhere; from the mid-range consumer PC to the high-end enterprise, virtualization is getting lots of play.

Using virtualization 10 or 12 years ago, yes, that long ago, was almost bleeding edge, and forget even suggesting running it in production. Most folks saw virtualization technologies as a nice-to-have, not a need-to-have. More recently, though, data center managers started trying to squeeze ever more utilization out of their existing environments through hardware consolidation and scale-up efforts, only to find that this was no magic bullet; they needed something to take the pain out of these massive projects and increase the success rate of consolidation and scale-up/out implementations. Virtualization was indeed an answer, but many who had considered it almost a toy were still unsure whether it could handle the stress and load of "production" capacity, and so they only virtualized the items they felt were less critical. This is where the recent tidal surge of virtualization has come from, inducing a subsequent wave of "Cloud" technology hype.

Virtually There

Where some see a challenge, others don't see any. The use of virtualization is a huge boon to companies trying to squeeze every last penny out of their investments and long-term costs, but it requires care and feeding in a somewhat different way than non-virtualized infrastructure. The main reason is mathematics. When consolidating, and when the solution is sized properly, servers wind up being placed onto a single pool of resources as virtual machines, or VMs, which require the same or slightly fewer resources than their physical counterparts because all resources are used as a pool. The math problem comes from something called "oversubscription," whereby a data center manager or administration team will intentionally add "too many" guests for the available resources but still create the VMs with the "proper" resource sizing known to be needed in the physical world. This is basically a bet that not all the resources on a given host will need to be used at the same time, and thus there will always be free resources available. Another part of the math is simply the number of servers now placed on the VM host.

In a scenario where 100 physical servers are consolidated down to 2 or 4 VM hosts, those servers, if not reduced in total number from 100, will all be housed on the VM hosts, but then additional services may be needed, resulting in new servers being created, or "spun up" as we say in the biz. With oversubscription this activity is almost limitless unless safeguards are put in place (like using a hierarchy of resource pools and assigning rules against them to stem the tide of oversubscription). Because of the ability to oversubscribe, though, data centers develop a very rich, organically grown sprawl of virtual machines that ultimately increases the burden on the IT staff who support such infrastructures.
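To put some made-up numbers to that math, here's a quick back-of-the-envelope sketch; the host and VM sizings are purely illustrative:

```python
# Hypothetical consolidation scenario: 100 physical servers onto 4 hosts.
hosts = 4
cores_per_host = 32          # physical cores per VM host (assumed)
ram_per_host_gb = 256        # physical RAM per VM host (assumed)

vms = 100
vcpus_per_vm = 4             # "proper" sizing carried over from the physical world
ram_per_vm_gb = 16

cpu_ratio = (vms * vcpus_per_vm) / (hosts * cores_per_host)
ram_ratio = (vms * ram_per_vm_gb) / (hosts * ram_per_host_gb)

print(f"vCPU:pCPU oversubscription ratio: {cpu_ratio:.1f}:1")   # 3.1:1
print(f"vRAM:pRAM oversubscription ratio: {ram_ratio:.2f}:1")   # 1.56:1
```

The bet is that those 400 vCPUs are never all busy at once; spin up another 20 VMs at the same sizing and the CPU ratio climbs past 3.7:1 without anyone buying new hardware.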

If you don't see this challenge coming, you've just sprung a trap on yourself. This is a common scenario in which folks who begin to enjoy the ease of use and increased capability find themselves: servers, everywhere. It means that once servers go from physical to virtual, the DR, backup, and ultimately recovery and management scenarios must change as dramatically as the infrastructure.

Displacement

Another challenge that arises from all this consolidation and capability is more of a people problem: job displacement. Responsibilities that were once tied to a specific group or person within the IT staff now become shared responsibilities. The environment is sprawling with virtual server fauna, if you will, and everyone who works in it has to know more about what everyone else in IT is doing. Once this happens, and you have a broad set of resources with which to manage everything, the next step is often a reduction of those resources, because the IT staff has become commoditized to a degree, depending on the level of specialization within the infrastructure, of course.

Adopting this paradigm is a huge mistake.

The staff you have before you virtualize can be readily reassigned to support individual business units or groups of them, improve service times, increase profits, prevent outages, and handle many other crucial IT functions. What often occurs, though, is that the numbers look so good after consolidating that someone in upper management suggests saving "just a little more" capital or OpEx through headcount reduction. It's a short-term win but a long-term loss, since it is inevitable that the good people will leave and the mediocre will stay, and without significant investment in human capital (i.e., training) businesses will often suffer brain drain, and their new-found freedom, flexibility, and competitive edge will be reduced significantly. Math strikes again.

Cloud … Could I?

Profit center, cost center; how about competitive center? Many companies have come to use virtualization as more than a means to an end: as an ongoing process to accomplish many different kinds of goals, employing more than one kind of virtualization for the right reasons, both to avoid putting all their eggs in one basket as a company and to remain nimble in the face of an ever-changing landscape. The hype calls it Cloud; I call it ubiquity. Omnipresent computing infrastructure that never goes down, never breaks, and never needs maintenance at a general level, which also *never* shows signs of aging or bottlenecks and has unlimited capacity for resiliency and scalability: that is this so-called "cloud." It doesn't exist, anywhere, but we do have parts of it, and someday soon we might actually see a system that accomplishes this omnipresent computing. One key component of that infrastructure is automation. Virtualization infrastructures allow for this type of automation, but only at a small scale without lots of human capital to manage it. Once *simple* automatic deployment and provisioning capabilities are widely available and *very easily* integrable regardless of the virtualization infrastructure, then we may get closer to this "Cloud."
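As a small illustration of the spirit of that kind of automation, here's a sketch of declarative provisioning: you describe the VMs you want, and a reconciler creates whatever is missing. The names, sizes, and the provision_vm stub are all hypothetical; in practice that function would call whatever deployment API your hypervisor or cloud provider actually exposes.

```python
# Desired state: what we want running (purely illustrative names and sizes).
desired = {
    "web-01": {"cpus": 2, "ram_gb": 4,  "template": "web-base"},
    "web-02": {"cpus": 2, "ram_gb": 4,  "template": "web-base"},
    "db-01":  {"cpus": 8, "ram_gb": 32, "template": "db-base"},
}

# Current state: what already exists (in real life you would query the
# hypervisor for this rather than hard-coding it).
existing = {"web-01"}

def provision_vm(name, spec):
    """Placeholder: call your hypervisor's clone/deploy API here."""
    print(f"Deploying {name} from template {spec['template']} "
          f"({spec['cpus']} vCPU, {spec['ram_gb']} GB RAM)")

def reconcile(desired, existing):
    """Create anything that is desired but missing; leave the rest alone."""
    for name, spec in desired.items():
        if name not in existing:
            provision_vm(name, spec)
            existing.add(name)

reconcile(desired, existing)
```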

You can use virtualization today to usher in a new era of computing, and that is the virtualization strategy you need to adopt in order to provide the level of service each business unit needs. First, work on the physical-to-virtual migration: decide which services will get consolidated down to which virtual machines, and keep the relative number of machines low while scaling up their virtual beefiness. This allows a greater degree of flexibility without the need for massive recovery infrastructure jobs, and so on. Keeping the relative number of machines low will improve the responsiveness of the applications that reside on them, simplify management, and allow easy use of technologies such as snapshots.

Next, prepare for scalability by adopting a Lego-block type of approach when budgeting for and building out the infrastructure. This will help maintain the environment's current resource utilization curve and keep performance in check, since bottlenecks are well known after the P2V migration exercise you just performed. Use 10 Gbps network connections where possible and try to keep the hardware footprint to a minimum to avoid large amounts of capital asset depreciation.

(i.e., servers are usually commodity if built small enough, and storage might even be the same way, depending on the contents of your "Lego block.")

The Kitchen Sink

Don't forget backup, DR, and remote access capabilities, technologies, and process updates. As I already outlined, these are usually crucial, and if you downsize physical servers and increase your capability you will rely on backup, recovery, DR, and remote access a ton more than you ever did before. No consoles to log on to, just remote terminals. You might as well find a way to add this to your arsenal of company competition capabilities (say that five times fast).

Don't forget the user base. With all these changes they might start feeling like you don't care about their needs and complain that "IT doesn't listen to us" or "IT is so stupid, they just took X server offline and that's where Y app was," and so on. User acceptance testing (UAT) is often critical in determining how well IT is performing against business-related goals, especially when you're talking about a large effort like consolidation. Users are fickle and finicky, but if you listen to them and include them in large transitions like this, they might surprise you by actually helping IT's efforts when virtualizing, consolidating, and recovering.

Don't forget where you're headed. Keeping to the plan once it is in progress is important, and anyone will tell you that scope creep is a huge problem, but so is simply losing focus on the end goal:

Transforming your existing environment from a cost center to a competitive center.

Keeping your focus and communicating throughout the transition, along with your long-term goals and strategy as a business unit like all the rest, will help you get there, save the day once in a while, and allow you to increase your budget. This is, of course, so long as you can show how the strategy allows the business to make more profit. Doing this will make everyone happy and keep everyone else off your tail.

Don't forget the desktops either! Virtualizing desktops is a great win for a company that wants to virtualize its infrastructure, but it has to be done as a discrete process either before or after the major infrastructure changes, because VDI, as it is known, can either drive the changes I'm writing about or be a product of them. How you proceed into that area is up to you, but DR will get a whole lot easier if you do go down the VDI path.

Finally

We're not talking about keeping up with the Joneses; we're talking about going someplace the Joneses aren't even going: IT that helps the company make money because it is truly integrated into every facet of what the company is doing, where it makes sense of course, and that introduces a level of resiliency, flexibility, and nimbleness that keeps the company moving ever forward rather than stagnant and aging. It does mean a lot of changes, but it can be done. It takes time, effort, a lot of effort, even more effort, money, some more effort, a huge amount of communication, consulting time, buy-in, dedication and commitment, and good people to manage all those changes and give the business a good comfort level.

Did I mention effort? It can take months to roll out a highly virtualized environment in a large enterprise, so it should be looked at as a process, but just because it takes less time in a smaller environment doesn’t mean the same methodology shouldn’t apply. It does.

Plan, prepare, deploy, manage, upgrade, scale and repeat.

After all this, one day you'll ask yourself why you didn't virtualize sooner.