Some Straight Talk about VDI-in-a-Box
Update: The advent of solid-state drives allows you to eliminate IOPS as a potential bottleneck. The calculations below are based on 15K SAS drives that support roughly 175 IOPS each. A typical 200 GB SSD will support tens of thousands of IOPS. On the other hand, although SSD prices are coming down, they’re still rather pricey. Replacing the eight 146 GB, 15K SAS drives in the example below with eight 200 GB SSDs, and loading it up with RAM so you can support more virtual desktops, will push the price of the server to nearly $20,000. So the primary point of this post still stands: While VDI-in-a-Box is a great product, and can be competitive with physical PCs when the entire lifecycle cost is compared, you’re just not going to see significant savings in the capital expense of ViaB vs. physical PCs. That doesn’t mean it isn’t a great product, and it doesn’t mean you shouldn’t consider it. It just means that you need to validate what it’s really going to cost in your environment.
Original Post (April, 2012):
There is a lot of buzz about Citrix VDI-in-a-Box (“ViaB”), and rightly so: it’s a great product, and much simpler to install and easier to scale than a full-blown XenDesktop deployment. You don’t need a SAN, you don’t need special broker servers, you don’t need a separate license server or a SQL Server to hold configuration data. Unfortunately, some of the buzz – particularly some of the cost comparisons that show a $3,000 – $4,000 server supporting 30 or more virtual desktops – is misleading. So let’s talk seriously about the right way to deploy ViaB. For this exercise, I’m going to assume we need 50 virtual desktops. Once we’ve worked through this, you should be able to duplicate the exercise for any number you want.
First of all, I’m going to assume that we are building a system that will support Windows 7 virtual desktops – because I can’t see any valid reason why someone would invest in a virtual desktop infrastructure that couldn’t support Windows 7. There are two important data points that follow from this: (1) We should allow at least 1.5 GB per virtual PC, and preferably 2 GB per virtual PC. (2) We should design for an average of about 15 IOPS per Windows 7 virtual PC, because, depending on the user, a Windows 7 desktop will generate 10 – 20 IOPS. Let’s tackle the IOPS issue first.
Thanks to Dan Feller of Citrix, we know how to calculate the “functional IOPS” of a given disk subsystem. Here are the significant factors that go into that formula:
- A desktop Operating System – unlike a server Operating System – has a read/write ratio of roughly 80% writes and 20% reads.
- A 15K SAS drive will support approximately 175 IOPS. The total “raw IOPS” of a disk array built from 15K SAS drives is simply 175 x the number of drives in the array.
- A RAID 10 array, which probably offers the best balance of performance and reliability, has a “write penalty” of 2.
With that in mind, the formula is:
Functional IOPS = ((Total Raw IOPS x Write %) / RAID Penalty) + (Total Raw IOPS x Read %)
If we put eight 15K SAS drives into a RAID 10 array, the formula becomes:
Raw IOPS = 175 x 8 = 1,400
Functional IOPS = ((1,400 x 0.8) / 2) + (1,400 x 0.2) = 560 + 280 = 840
If we are assuming an average of 15 IOPS per Win7 virtual PC, this suggests that the array in question will support roughly 56 virtual PCs. So this array should be able to comfortably support our 50 Win7 virtual PCs, unless all 50 are assigned to power users.
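The calculation above can be sketched as a few lines of Python (a minimal sketch using the figures from this post: 175 IOPS per 15K SAS drive, an 80/20 write/read split, and a RAID 10 write penalty of 2):

```python
# Functional IOPS for a RAID array hosting a desktop-OS workload,
# per the formula in the post. Defaults match the post's figures.
def functional_iops(drives, iops_per_drive=175, write_pct=0.8, read_pct=0.2, raid_penalty=2):
    raw = drives * iops_per_drive
    return (raw * write_pct) / raid_penalty + raw * read_pct

fiops = functional_iops(8)     # ((1,400 x 0.8) / 2) + (1,400 x 0.2) = 840
desktops = int(fiops // 15)    # ~56 Win7 desktops at an average of 15 IOPS each
```

Swapping in a different drive type or RAID level is just a matter of changing the per-drive IOPS and the write penalty.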
That’s all well and good, but we haven’t yet talked about how much actual storage space this array needs. That depends on the size of our Win7 master image, how many different master images we’re going to be using, and whether we can use “linked clones” for VDI provisioning – in which case each virtual PC will consume an average of 15% of the size of the master – or whether we’re permanently assigning desktops to users, in which case each virtual PC will consume 100% of the size of the master. For the sake of this exercise, let’s assume we’re using linked clones, and that we have three different master images, each 20 GB in size. According to the Citrix best practice, we need to reserve 120 GB for our master images (2 x master image size x number of master images). We then need to reserve 3 GB per virtual PC (15% of 20 GB), which totals another 150 GB. The ViaB virtual appliance will require 70 GB. We also need room for the hypervisor itself (unless we’re provisioning another set of disks just for that) and for swap files, transient activity, etc., so let’s throw in another 150 GB. That’s 490 GB minimum. So we need to use, at a minimum, 146 GB drives in our array, which would give us 584 GB in our RAID 10 array.
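Here is the same storage budget as a quick sketch (the quantities are the ones assumed in this exercise; adjust them for your own image count and sizes):

```python
# Storage budget for the example: three 20 GB masters, 50 linked clones
# at 15% of master size, the 70 GB ViaB appliance, and ~150 GB of
# hypervisor/swap/transient overhead.
masters, master_gb, desktops = 3, 20, 50
master_reserve = 2 * master_gb * masters          # 120 GB (Citrix best practice)
clone_gb = desktops * master_gb * 0.15            # 150 GB (3 GB per desktop)
total_gb = master_reserve + clone_gb + 70 + 150   # 490 GB minimum
raid10_usable_gb = 8 * 146 // 2                   # 584 GB from eight 146 GB drives
```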
How about RAM? If we allow 1.5 GB per Win7 desktop, then 50 virtual desktops will consume 75 GB. We need at least 1 GB for the ViaB appliance, at least 1 GB for the hypervisor, plus some overhead for server operations, so let’s just call it 96 GB.
We can handle 6 to 10 virtual desktops per CPU core – more if the cores are hyper-threaded – so we’re probably OK with a dual-proc, quad-core server.
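The RAM and CPU rules of thumb above work out like this (a sketch using the post’s numbers; the 6–10 desktops per core range is the post’s planning guideline, not a hard limit):

```python
# Rough RAM and CPU sizing for 50 Win7 desktops.
desktops = 50
ram_gb = desktops * 1.5 + 1 + 1   # 77 GB floor; spec 96 GB for headroom
cores_low = desktops / 10         # 5 cores at 10 desktops per core
cores_high = desktops / 6         # ~8.3 cores at 6 desktops per core
# A dual-socket quad-core server (8 cores, 16 hyper-threads) covers the range.
```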
Now, I don’t know about you, but if I’m going to put 50 users onto a single server, I’m going to want some redundancy. I will at least want hot-plug redundant power supplies, and hot-plug disk drives. Ideally, I would provision “N+1” redundancy, i.e., I would have one more server in my ViaB array than I need to support my users. I’m also going to want a remote access card, and probably an uplift on the manufacturer’s warranty so that if it breaks, the manufacturer will come on site and fix it.
By now, you’ve probably figured out that we are not talking about a $4,000 server here. I priced out a Dell R710 – using their public-facing configuration and quoting tool – with the following configuration, and it came out to roughly $11,000:
- Two Intel E5640 quad-core, hyper-threaded processors, 2.66 GHz
- 96 GB RAM
- Eight 146 GB, 15K SAS drives
- PERC H700 controller with 512 MB cache
- Redundant hot-plug power supplies
- iDRAC Enterprise remote access card
- Warranty uplift to 3-year, 24×7, 4-hour-response, on-site coverage
(NOTE: This is a point-in-time price, and hardware prices are subject to change at any time.)
The ViaB licenses themselves will cost you $195 each. Be careful of the comparisons that show the price as $160 each. ViaB is unique among Citrix products in that the base cost of the license does not include the first year of Subscription Advantage – yet the purchase of that first year is required (although you don’t necessarily have to renew it in future years). That adds $35 each to the cost of the licenses.
Finally, if you don’t have Microsoft Software Assurance on your desktop PCs – and my experience is that most SMBs do not – you need to factor in a Microsoft Virtual Desktop Access (VDA) license for every user. This license is only available as an annual subscription, and will cost you approximately $100/year.
So, your up-front acquisition cost for the system we’ve been discussing looks like this:
- Dell R710 server – $11,000
- 50 ViaB licenses @ $195 – $9,750
- 50 Microsoft VDA licenses @ $100 – $5,000
Total acquisition cost: $25,750, or $515/user. Not bad.
But wait – if we’re going to compare this to the cost of buying new PCs, shouldn’t we look at the cost of ViaB over the same period of time that we would expect those new PCs to last? If we assume, as many companies do, that a PC has a useful life of about 3 years, then we should actually factor in another two years of VDA licenses, and two years of Subscription Advantage renewal for the ViaB licenses. That pushes the 3-year cost of the ViaB licenses to $13,250, and the cost of the VDA licenses to $15,000. So the total 3-year cost of our solution is $39,250, or $785/user.
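The three-year math is easy to check (a sketch using the post’s point-in-time prices):

```python
# Three-year cost model for 50 users on one server.
users, server = 50, 11_000
viab_3yr = users * (195 + 2 * 35)   # license + 2 years of SA renewal = $13,250
vda_3yr = users * 100 * 3           # Microsoft VDA subscription, 3 years = $15,000
total_3yr = server + viab_3yr + vda_3yr   # $39,250
per_user = total_3yr / users              # $785
```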
If you want N+1 redundancy, you’re going to need to buy a second server. That would push the cost to $50,250, or $1,005/user.
What conclusions can we draw from all this? Well, first, that VDI-in-a-Box is not going to be significantly less expensive than buying new PCs, if you actually do it right. However, it is competitive with the price of new PCs, which is worth noting. As long as the price is comparable, which it is, we can then start talking about the business advantages of VDI, such as being able to remotely access your virtual desktop from anywhere, with just about any device, including iPad and Android tablets, and about the ongoing management advantages of having a single point of control over multiple desktops.
Also, as you scale up the environment, the incremental cost of that extra server that’s required for N+1 redundancy gets spread over more and more users, and becomes less significant. For example, if we’re building an infrastructure that will support 150 virtual desktops, we would need four servers. Total 3-year cost: $128,750, or $858.33/user for a robust, highly redundant virtual desktop infrastructure. In my opinion, that’s a pretty compelling price point, and you won’t be able to hit that price point with a 150-user XenDesktop deployment, because of the other server and storage infrastructure components that you need to build a complete solution. On the other hand, XenDesktop does include more functionality, like the rights to use XenApp for virtual application delivery, ability to stream a desktop OS to a blade PC or a desktop PC, rights to use XenClient for client-side virtualization, etc.
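You can see the dilution of the N+1 overhead by parameterizing the same cost model (a sketch; unit prices are the ones used throughout this post):

```python
# Three-year cost as a function of user count and server count: the N+1
# server's cost per user shrinks as the deployment grows.
def three_year_cost(users, servers, server_price=11_000):
    viab = users * (195 + 2 * 35)   # ViaB license + 2 years SA renewal
    vda = users * 100 * 3           # Microsoft VDA, 3 years
    return servers * server_price + viab + vda

cost_50 = three_year_cost(50, 2)     # $50,250, i.e. $1,005/user
cost_150 = three_year_cost(150, 4)   # $128,750, i.e. ~$858/user
```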
But if all you want is a VDI solution, ViaB is, in my opinion, the obvious choice. It’s clear that Citrix wants to position VDI-in-a-Box as the preferred VDI solution for SMBs, meaning anyone with 250 or fewer users…and there’s no reason why ViaB can’t scale much larger than that.
For more information on ViaB, check out this video from Citrix TV, then head on over to the Citrix TV site to view the entire ViaB series…
**** EDIT April 12, 2012 ****
You may already be aware of this, but Dell has announced a ViaB appliance that comes pre-configured, with both XenServer and the ViaB virtual appliance already installed. Oddly enough, even though ManageOps is a Dell partner, I couldn’t get Dell to tell me what one would cost. Their answer was that I should call back when I had a specific customer need, and they would work up a specific configuration and quote it. I considered calling back with a fictitious customer requirement, but decided that I didn’t want to know badly enough to play that game.
They did, however, tell me what the basic server configuration was – and it was very close to the configuration I’ve outlined above: two X5675 processors, 96 GB of RAM, eight 146 GB drives in a RAID 10 array, a PERC H700 array controller (I don’t know how much cache, though), and an iDRAC Enterprise remote access card. I do not know whether it has redundant power supplies (although I would certainly hope so), nor exactly what warranty is included…perhaps that option is left up to the customer.
That gave me at least enough information to run a sanity check on the configuration. By the formula above, the array would provide 840 functional IOPS – roughly 10.5 IOPS per desktop for an 80-user system, which is how the appliance is advertised – so it should be adequate, depending, of course, on the percentage of power users. Also, the array should provide enough storage to handle the needs of most SMBs, unless they have an unusually large number of images to maintain.
One of my Citrix contacts recently told me that the Dell appliance was priced at $440/desktop for an 80 concurrent user configuration, which is very much in line with the cost per user in the post above, considering that $100 of my $515/user number was for the first year of Microsoft VDA licenses, which, to my knowledge, are not included with the Dell appliance.
Helpful, real-world scenario. Thanks for this article!
The cost benefit may not be that significant, but looking at data protection and security, ViaB is a good choice for SMBs.
It’s actually a great and helpful piece of information. I am satisfied that you simply shared this useful info with us. Please keep us up to date like this. Thanks for sharing.
We’re using VDI-in-a-Box with Windows 8.1 at the moment. There are 10-15 users working with those machines. Unfortunately, the performance is horrible!
Would you mind sharing what your hardware configuration is for the ViaB server?
Hi thanks for this article!
One simple question:
Citrix VDI in a box is Client-Side VDI right?
So the data is stored on the client?
First off, GREAT write up! Now, can anyone explain why whenever anyone discusses the cost savings of VDI in an environment over using a PC, they do all the math but somehow forget the fact that you still actually NEED a device (like a PC) to access your VDI?
Even if you take the approach that you can get a stripped-down unit or some cheap tablet to run the client/connector, there is still a cost associated with that – more than likely the cost of an average low-end PC and display. More and more SMBs are also going to dual displays.
So taking the advantages VDI brings to the party and placing them aside for the moment, where exactly is all this savings when you still need to buy a device?
There is rarely ANY cost savings against a traditional PC – if you doing it poorly to save money, don’t go VDI.
Markos – I’m guessing you meant to say “if you’re doing it purely to save money, don’t go VDI.” I agree that there are rarely any savings in up-front acquisition costs. The savings, if any, tend to be “soft costs” such as the ease of ongoing management, greater control over application deployment and desktop images, workstyle flexibility, etc.
Late last year, we priced out and purchased a full XenDesktop Enterprise configuration using Dell/Wyse Xenith 2 “zero clients” and twelve 600 GB SAS drives in a RAID 6 configuration. I can realistically push 4,000 IOPS per tray, so I’m not sure how close your formula is to reality. I guess the environment may make a difference as well, because the write/read ratio we see is not as high as 4:1.
In addition, the full five-year cost – assuming 60 users per server and including all licensing, servers, storage, maintenance, etc. – came out to be something like $890 per seat (plus tax, extra!), and after five years it gets even cheaper as the initial license cost is spread out over the initial 5 years. Granted, we do get an educational discount, but still, not a bad deal.
Our conclusion was – in spite of the added complexity – to go with XenDesktop Enterprise (which includes XenApp), and by leveraging free XenServer hosting, it came out to be a better investment for what you get. Above all, apps are where it’s at, so why limit oneself to a physical PC? BTW, the Xenith 2 units draw about 7 watts at full power, so there’s easily another $50 or so per year in savings on electricity and A/C costs.
With a RAID 6 configuration, there is no speed improvement for writes – and a virtual desktop I/O profile is write-intensive. This is where RAID 10 has an advantage over other RAID types: although there is a 50% penalty in disk space, write performance scales up with every additional pair of disks (spindles) added to the array, while still providing redundancy.
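To put rough numbers to this point (a sketch; the write penalties of 2 for RAID 10 and 6 for RAID 6 are the commonly cited figures, applied to the same eight-drive array from the post):

```python
# Functional IOPS of an eight-drive, 175-IOPS-per-drive array under
# different RAID levels, with an 80/20 write/read desktop workload.
# Commonly cited write penalties: RAID 10 = 2, RAID 5 = 4, RAID 6 = 6.
def functional_iops(drives, penalty, iops=175, write_pct=0.8, read_pct=0.2):
    raw = drives * iops
    return raw * write_pct / penalty + raw * read_pct

raid10 = functional_iops(8, 2)   # 840
raid6 = functional_iops(8, 6)    # ~467 -- the write penalty dominates
```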
Is anyone still at the party? 🙂
“Total aquisition cost: $25,750, or $515/user. Not bad.”
Have you taken into account the desktop hardware needed to access the VDI?
Another late party arrival 🙂 Read your article above with great interest. It would be nice to get your views by a comparison: cost, features, benefits etc for SMBs of ViaB to MokaFive (MokaFive.com)
I’d love to oblige, but I have no experience with MokaFive, so anything I said about them would be purely speculative…
Late to the party on this article, but you hit the nail right on the head. We actually deployed a 400+ station ViaB solution at my college and it was very successful. We’re running a similar config to what you outlined, with SSD drives being the only major difference. In our environment we are not storing any user data, as these are open labs, and we’re running a variety of software from Office to SolidWorks. One cost you didn’t touch on was the physical network; depending on the size of your environment, this cost could become substantial. Also, you talked about server costs + license costs + VDA costs, but I didn’t see any mention of client costs.
In addition, for cost savings there is the possibility of re-purposing existing thick clients, and, depending on your licensing structure/version of Windows, that may save you significantly on VDA.
Thanks for the comment, Hsiawen. We have discussed client costs in other posts. In fact, a little over a year ago, we specifically addressed Thin Clients vs. Cheap PCs (https://www.manage-ops.com/blog/thin-clients-vs-cheap-pcs), and pointed out that PCs have become SO cheap that in some cases it can be as cost-effective (or even more cost-effective) to buy a cheap PC and then buy Software Assurance coverage for it as it is to buy a thin-client and then have to buy the VDA license for it. When PCs can be had for less than $500, they arguably become throw-away items. Now, to be fair, there are indirect cost savings to be had from thin-clients: they consume far less power, have a longer MTBF, survive better in hostile environments (e.g., dusty factory floors) because they have no fans to suck dust and debris inside the case, tend to be immune to malware, etc. But in terms of straight acquisition cost, it’s getting tough to make the case for thin-clients.
I have a small business (7 to 10 employees) with three locations across California. ViaB looks like a good solution over Remote Desktop, which is way too slow. Any suggestions?
Have you considered using SSD drives instead of SAS? If so, what disadvantages will such a system have?
Sergey – SSD drives would certainly deliver a LOT more IOPS. The only disadvantage I see at present is the cost, which is coming down, but is still pretty high compared to 15K SAS drives.