Desktop Virtualization for the SMB
One of the criticisms that’s been leveled at XenDesktop by its competitors is that it is too complex – too many components that have to be configured to get everything to work. And while that’s partially true, it’s not the whole story. As we’ve discussed in previous posts, XenDesktop is extremely flexible in that it allows you to mix and match different kinds of virtual desktops in your environment to best meet the needs of various groups of users. As you bring more kinds of virtual desktops into the mix, you add more infrastructure components to manage them. More infrastructure components = more complexity but also more flexibility.
If you don’t need all that flexibility – if, for example, you just want to deploy “classic” VDI, by which I mean a bunch of virtual PCs running on the hypervisor of your choice – then you don’t need all that complexity, either.
In this video, Dan Feller of Citrix presents a reference architecture for a straightforward VDI deployment of up to 500 users. The video takes about 50 minutes to watch, but it’s worth your time. You’ll learn some interesting things.
For example, you’ll note that Dan recommends that the XenServers in the pool supporting the virtual Windows 7 machines have local disk drives, in a RAID 10 configuration, to be used as the local host cache for the provisioned Windows 7 systems – for two reasons. First, it’s less expensive than SAN storage. Second, the limiting factor for how many virtual PCs a XenServer host can run is not processing power, and it’s not RAM – it’s IOPS. He walks you through calculating how many functional IOPS the XenServer’s local storage can deliver, and therefore how many virtual desktops you can reasonably expect that host to support.
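As a rough sketch of that kind of sizing calculation – all of the numbers below are my own illustrative assumptions, not figures from Dan’s reference architecture:

```python
# Hypothetical IOPS sizing sketch for a RAID 10 local-storage VDI host.
# Spindle counts, per-disk IOPS, read/write mix, and per-desktop IOPS
# are illustrative assumptions -- substitute your own measured values.

DISKS = 8                 # local drives in the RAID 10 set
IOPS_PER_DISK = 175       # rough figure for a 15K RPM SAS spindle
READ_RATIO = 0.2          # steady-state VDI workloads tend to be write-heavy
WRITE_RATIO = 0.8
RAID10_WRITE_PENALTY = 2  # each logical write costs two physical writes on RAID 10

raw_iops = DISKS * IOPS_PER_DISK

# Functional (front-end) IOPS: a read costs one back-end IO,
# a write costs RAID10_WRITE_PENALTY back-end IOs.
functional_iops = raw_iops / (READ_RATIO + WRITE_RATIO * RAID10_WRITE_PENALTY)

IOPS_PER_DESKTOP = 5      # assumed steady-state load per virtual Windows 7 PC
desktops_supported = int(functional_iops // IOPS_PER_DESKTOP)

print(f"Raw back-end IOPS:     {raw_iops}")
print(f"Functional IOPS:       {functional_iops:.0f}")
print(f"Desktops (IOPS-bound): {desktops_supported}")
```

The point of running numbers like these is that the IOPS-bound desktop count is usually lower than what the CPU or RAM math would suggest – which is exactly why the storage design matters so much.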
In fact, my only reservation about this video is that, like just about every other discussion I’ve seen of Windows 7 virtualization, it doesn’t mention the Microsoft license activation issue inherent in provisioning Vista and Windows 7 desktops – the need for the Microsoft Key Management Service (KMS), and the nuances of getting KMS to work properly. But we’ve pummeled that issue elsewhere in this blog.
So, with that in mind, heeeerrrrrreeee’s Dan (P.S.: the audio doesn’t start until about 15 seconds into the video):
Good question, Joe. First of all, if a virtual host machine has a catastrophic failure, the virtual PCs running on that host machine will die along with it. This will be the case with ANYBODY’S VDI technology, and it has nothing to do with whether we’re using a SAN. The question is, what happens next?
And the answer is that the user would log on to another VDI instance on one of your surviving virtual hosts. Any unsaved data in running applications at the moment the host crashed would be lost, just as it would be if a standalone PC crashed unexpectedly. Likewise, any unsaved changes to the user’s profile would be lost, just as they would be on a standalone PC.
In this reference architecture, the focus is on providing HA for the supporting server components. The supporting servers that run as virtual machines can be protected using XenServer’s HA functionality; those VMs are presumed to reside on a SAN, since otherwise an HA restart on another host wouldn’t be possible.
The Provisioning Servers, which are not virtualized, are redundant. The vDisk files used for provisioning *could* be placed on a SAN, but this introduces another layer of complexity. If you want multiple Provisioning Servers to be able to simultaneously access the same vDisk files, you’re going to need some kind of clustered file system (e.g., Sanbolic’s Melio product) to allow more than one server OS to access the same LUN. Dan’s point in this video is that if you only have a few vDisk images to maintain – which would probably be the case in a deployment of this size – it just isn’t that difficult to manually copy the vDisk files from one Provisioning Server to the other, which means you can use less expensive local storage, and you don’t have to worry about a clustered file system.
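To illustrate how simple that manual copy can be, here is a hypothetical sketch of a sync script – the file extensions, store paths, and share names are my assumptions, not anything prescribed by Provisioning Services:

```python
# Hypothetical sketch: copy vDisk image files (.vhd and their companion
# property files) from one Provisioning Server's local store to another's.
# All paths here are illustrative assumptions -- adjust for your environment.
import shutil
from pathlib import Path

VDISK_EXTENSIONS = {".vhd", ".avhd", ".pvp"}  # assumed file types to sync

def sync_vdisks(src: Path, dst: Path) -> list:
    """Copy vDisk files that are missing from, or newer than, the destination."""
    copied = []
    for f in src.glob("*"):
        if f.suffix.lower() not in VDISK_EXTENSIONS:
            continue
        target = dst / f.name
        # copy2 preserves timestamps, so the "newer than" check stays stable
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            shutil.copy2(f, target)
            copied.append(f.name)
    return copied

# Example usage (paths are assumptions):
# sync_vdisks(Path(r"D:\vDisks"), Path(r"\\PVS2\vDisks$"))
```

With only a handful of vDisk images, even a script is arguably overkill – a manual copy after each image update does the job, which is Dan’s point.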
Perhaps the confusion stems from a misunderstanding of the purpose of the local storage in the XenServers that host the virtual desktops. Since Provisioning Services boots and runs the virtual PCs from read-only vDisk images, we need somewhere to store everything a Windows PC needs to write to disk in normal operation – changes to the user profile, the Windows swap file, and so on. What I took from Dan’s presentation is that using local storage in the XenServers for this “Local Host Cache” is the most cost-effective way to provide high IOPS for the virtual PCs.
To your other question, yes, we are very aware of the Kaviza product. However, if you’re using Kaviza and a virtual host experiences a catastrophic failure, the VMs on that host will still fail right along with it. So neither Kaviza nor XenDesktop (nor anyone else’s VDI solution) will provide HA in the sense of allowing a virtual PC to survive the failure of its underlying host and continue to run. There are ways to do that (e.g., Marathon Technologies’ Level 3 “lock-step” operation in their everRun 2G product), but they would be prohibitively expensive for any desktop OS requirement I’ve ever seen.
Interesting – how do we get high availability if we’re using local storage for the 500 desktops? What happens to the desktops running on a server when it has a catastrophic failure?
My understanding is that the proposed architecture will not provide high availability, since XD relies on a SAN for HA. Have you looked at Kaviza VDI-in-a-box (www.kaviza.com)? They provide high availability with local direct-attached storage.