What’s New in DataCore SANsymphony-V 10 PSP2

A few weeks ago, DataCore released Product Service Pack 2 (PSP2) for SANsymphony-V 10. As you know if you’ve followed this blog, DataCore is a leader in Software Defined Storage…in fact, DataCore was doing Software Defined Storage before it was called Software Defined Storage. PSP2 brings some great enhancements to the product in the areas of Cloud Integration, Platform Services, and Performance management:

  • OpenStack Integration – SANsymphony-V now integrates with Cinder, the OpenStack block storage service, so storage can be provisioned from the OpenStack Horizon administrative interface. That means that from a single administrative interface, you can create a new VM on your OpenStack infrastructure, and provision and attach DataCore storage to that VM. But that’s not all – remember, SANsymphony-V can manage more than just the local storage on the DataCore nodes themselves. Because SANsymphony-V runs on a Windows Server OS, it can manage any storage that you can present to that Windows Server. Trying to figure out how to integrate your legacy SAN storage with your OpenStack deployment? Put SANsymphony-V in front of it, and let DataCore talk to OpenStack and provision the storage!
  • Random Write Accelerator – A heavy transactional workload that generates a lot of random write operations can be problematic for magnetic storage, because you have to wait while the disk heads move to the right track, then wait some more while the disk rotates to bring the right block under the head. For truly random writes, the average latency is the time it takes to move the heads halfway across the platters plus the time it takes the platters to make half a rotation (see the quick latency sketch after this list). One of the benefits of SSDs, of course, is that you don’t have to wait for those things, because there are no spinning platters or heads to move. But SSDs are still pretty darned expensive compared to magnetic disks. With random write acceleration enabled on a volume, write requests are fulfilled immediately by simply writing the data at the current (or nearest available) head/platter position; a “garbage collection” process then goes back later, when things are not so busy, and deletes the “dirty” blocks of data at the old locations. This can deliver SSD-like speed from spinning magnetic disks, and the only cost is that storage consumption will be somewhat increased by the old data that the garbage collection process hasn’t gotten around to yet.
  • Flash Memory Optimizations – Improvements have been made in how SANsymphony-V caches reads from PCIe flash cards, in order to better utilize what can be a very costly resource.
  • Deduplication and Compression – Both have been added as options that you can enable at the storage pool level.
  • Veeam Backup Integration – When you use Veeam to back up a vSphere environment, as many organizations do, the Veeam software typically triggers vSphere snapshots, which are retained for however long it takes to back up the VM in question. This adds to the load on the VMware hosts and can slow down critical applications. With DataCore’s Veeam integration, DataCore snapshots are taken at the SAN level and used for the backup operation instead of VMware snapshots.
  • VDI Services – DataCore has added specific support for highly available, stateful VDI deployments across clustered Hyper-V server pairs.
  • Centralized console for multiple groups – If you have multiple SANsymphony-V server groups distributed among, e.g., multiple branch offices, you no longer have to explicitly connect to each group in turn to manage it. All server groups can be integrated into the same management UI, with delegated, role-based administration for multiple individuals.
  • Expanded Instrumentation – SANsymphony-V now has tighter integration with S.M.A.R.T. alerts generated by the underlying physical storage to allow disk problems to be addressed before they become serious.
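
To put rough numbers on the random-write latency described in the Random Write Accelerator item above, here’s a quick back-of-the-envelope calculation. The 7200 RPM rotational speed and 9 ms average seek time are typical figures assumed for illustration, not numbers taken from DataCore:

```python
# Back-of-the-envelope latency for a truly random write on a spinning disk.
# The rotational speed and average seek time are typical 7200 RPM figures
# (assumptions for illustration), not numbers from DataCore documentation.

rpm = 7200
avg_seek_ms = 9.0                        # move the heads roughly halfway across the platters

full_rotation_ms = 60_000 / rpm          # one revolution in milliseconds (~8.33 ms)
avg_rotation_ms = full_rotation_ms / 2   # on average, wait half a rotation (~4.17 ms)

avg_random_write_ms = avg_seek_ms + avg_rotation_ms
print(f"Average random-write latency: ~{avg_random_write_ms:.1f} ms "
      f"(~{1000 / avg_random_write_ms:.0f} random write IOPS per spindle)")
```

Writing each request sequentially at the current head position sidesteps the seek and most of the rotational delay, which is where the SSD-like behavior comes from.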

For more details on PSP2 and the features listed above, you can view the following 38-minute recording of a recent Webinar we presented on the subject:

You Can’t Afford to Ignore Windows 10 Anymore

Over the last several months, I’ve been watching the headlines about Windows 10 go by without really paying a lot of attention to them. Perhaps you have as well. Win10 fell into the category of “Things I Need to Look More Closely At When I Have Time.” After all, it hasn’t been that long since I upgraded to Windows 8.1. Then the news broke that “general availability” will be July 29, leading to one of those “Wait…what!?” moments. With the release now less than two months away, I realized I needed to make time.

So I bought another 8 GB of RAM for my 64-bit Fujitsu laptop (blowing it out to a total of 12 GB, woo hoo!), installed Client Hyper-V (which was amazingly easy to do in Windows 8.1 Enterprise), signed up for the Windows Insider program, downloaded the Win10 ISO image, and built myself a Win10 VM.

My initial reaction is that it looks pretty good. The current preview build (10074) looks stable, seems to run everything that I’ve thrown at it, and my complaints are pretty minor. I can’t really test multimedia performance, as the preview build doesn’t have drivers that will allow audio pass-through from my Win10 VM to my host PC, but that’s not surprising at this point.

The Start menu is definitely a step in the right direction, but still doesn’t have that one piece of functionality that drove me to install Stardock Software’s Start8 utility: I love being able to click on the Start button, mouse up to, say, the Word or Excel icon, and immediately see the last several documents/spreadsheets I’ve opened, so I can jump directly to them. In Win10, if I pin, say, Word to my taskbar, I can right-click on the Word icon and see a list of recent files – but my personal preference is to reserve my taskbar for programs I’m actually running rather than taking up space with icons for programs I might want to run. Instead, I use the QuickLaunch toolbar for quick access to programs. (What – you didn’t know you could have a QuickLaunch toolbar in Windows 8.x? You can, and it works in the Win10 build I’m running as well, but that’s a subject for another post.) So, when Stardock releases a version for Win10, I’ll probably upgrade to it.

Speaking of upgrades, you’ve probably also heard that users who are running Windows 7, 8, or 8.1 will get a free upgrade to Windows 10. That’s true, depending on what version you’re currently running. There is an upgrade matrix at www.thurrott.com that tells you, for the Home and Pro editions, which edition of Win10 you’ll get. And if you’re running Win7 SP1 or Win8.1 Update, you can get the upgrade pushed to you via the Windows Update function.

Windows Enterprise users will not get free upgrades…apparently the rationale is that most Windows Enterprise users are part of, well, large enterprises that typically have a corporate license agreement with Microsoft that entitles them to OS upgrades anyway, and these enterprises also want to have tighter control over who gets what upgrade and when.

There are also a few other caveats to bear in mind. First, if you’re running Win7 SP1 or later, the chances are pretty good that your system will run Win10 without any problems…but “pretty good” doesn’t mean “guaranteed.” There’s a helpful article over on ZDNet that will walk you through how to find Microsoft’s compatibility-checking utility.

You may also be surprised at the things Windows 10 will remove from your system as part of the Win10 upgrade.

And bear in mind that if you just happily accept the automatic upgrade to Win10, you’re also opting in for all new features, security updates, and other fixes to the operating system for “the supported lifetime” of your PC. These will all be free, but you won’t have a choice as to which updates you do or don’t get – they’ll all be pushed to you via Windows Update. Businesses, whether running Windows Pro or Enterprise, will have more control over how and when new features and fixes roll out to their users, as Mary Jo Foley explains over on ZDNet.

Finally, Ed Bott is maintaining a great Win10 FAQ over on ZDNet that he’s been updating regularly as more information becomes available. You might want to bookmark that one and come back to it occasionally.

I confess that I’m kind of excited about the new release, and I’ll probably upgrade to it as soon as the Win10 Enterprise bits show up on our Microsoft Partner portal. It will be interesting to see how these major changes in how the Windows OS will be distributed and updated will play out over time. How about you? Feel free to share your thoughts in the comments below…

Trend Micro Releases Q1 Security Update Report

Trend Micro has released their 2015 Q1 Security Roundup report. It makes for some interesting reading. While we recommend that you click through and read the full report for yourself, here are some of the main points:

  • None of the prominent threats in the early part of 2015 were new…yet they were still effective. That suggests we still have a lot of work to do in educating our users about what not to do in order to stay safe online.
  • “Malvertising” was a major issue. It’s particularly dangerous in that it doesn’t require people to actually click on a link – the malware is downloaded when an online ad is displayed.
  • A lot of metaphorical rocks have been thrown at Apple over the (sometimes long and drawn-out) vetting process for getting an app listed in their app store. But the dangers of not having a thorough vetting process were demonstrated once again when mobile attackers were able to slip disguised adware into Google Play™.
  • “Crypto-ransomware” infection counts increased roughly fivefold – from 1,540 in 2014 Q1 to 7,844 in 2015 Q1. Some variants directly target enterprises by encrypting files in network shares.
  • Malware in the form of macros for Office apps also increased almost fivefold – from 19,842 in 2014 Q1 to 92,837 in 2015 Q1. This is an area that clearly demands more user training, as many infections were transmitted via email attachments where the recipients were instructed to enable macros in order to read the attachments.
  • Healthcare data is the “holy grail” of data theft, because it frequently includes social security numbers which are arguably much more valuable to a criminal than a credit card number that can only be used until the card is cancelled.

It’s still a dangerous world out there…surf safely, my friends!

Email Security & Archiving: Achieving Peace of Mind

Nearly 150 billion emails are sent daily. Close to 50% of business email users believe email reduces the day-to-day need for file storage. Now, you may account for only a small portion of that massive number. But, that said … is your data safe? Is downtime in your future?

Undoubtedly, many of those emails carry attachments and sensitive information, and many are vulnerable to security breaches. That makes this highly relevant to you – especially if you share sensitive information.
Consider:
• 88% of companies experience data loss; email’s the primary culprit
• 78% of employees access personal email from business computers; that’s double what’s authorized
• 68% of organizations currently don’t use a secure email service or email archiving solution

Nowadays, when it comes to stopping spam, securing email systems, combating rapidly evolving email threats, and keeping systems running at full speed, small and medium businesses (SMBs) just don’t have the resources for email protection.

What’s at stake is data loss and downtime, which create financial and operational burdens that harm your business. So it’s critical to keep up with continually evolving email security technologies and best practices.
Doing so gives you peace of mind. Read more

MSPs Lead Businesses through the SaaS Maze

Countless businesses trust Managed Service Providers (MSPs) to deploy, manage and support infrastructure solutions both in the cloud and on-premises. The reason is simple: they are the best at eliminating downtime and providing superb user support.

Savvy businesses also recognize the value of engaging their MSP in the search for, and support of, the right applications and Software as a Service (SaaS) solutions for their business. Forrester Research says that IT partners aren’t just critical for the successful deployment of SaaS products – the management aspect is a real value-add.

SaaS is software that runs on remote hardware owned and managed by the provider and is delivered via the Internet on a pay-for-use or subscription basis. Without purchasing hardware or software, businesses just connect to the cloud. Deployment can take as little as a day.

Fixes and new features are implemented regularly, which does away with obsolescence. There are also Service Level Agreements (SLAs) that guarantee availability. And there’s no commitment beyond the subscription period, so risk is minimal.

SaaS apps are readily scalable, with capacity available on demand. There’s no up-front capital expenditure. And those using SaaS are “greener” because they share the MSP’s computing resources.

When an MSP provides businesses with SaaS, it starts with Planning & Design. From there, the steps are Procurement, Deployment, Management and Governance. In essence, the MSP handles the entire service process.

Read more

Get Ready for the End of Your (Business) World

Windows Server 2003 Deadline Nears
Nearly a year ago, Microsoft Windows XP support came to an end. Now we are rapidly approaching end of life for Windows Server 2003 (including SBS 2003, which is built on it).

Are you ready? If you aren’t, what will you do when you wake up the morning of July 15, 2015 … the day after the end of your business world as you know it today? Despite Microsoft having warned about Windows Server 2003 end of life as much as two years in advance, many small to medium-sized businesses have yet to begin their migration away from the platform to a newer Windows Server alternative. Worse yet, many of you are largely unaware of the huge financial costs and security risks of continuing to run Windows Server 2003 past the end of life date.

Again, that date is July 14, 2015, when Microsoft will end extended support on all versions of Windows Server 2003/R2, according to the Microsoft Support Lifecycle section. Mark your calendar; set your alarm.
Read more

The Case for Office 365

Update – May 7, 2015
In the original post below, we talked about the 20,000 “item” limit in OneDrive for Business. It turns out that even our old friend and Office 365 evangelist Harry Brelsford, founder of SMB Nation, and, more recently, O365 Nation, has now run afoul of this obstacle, as he describes in his blog post from May 5.

Turns out there’s another quirk with OneDrive for Business that Harry didn’t touch on in his blog (nor did we in our original post below) – OneDrive for Business is really just a front end for a Microsoft hosted SharePoint server. “So what?” you say. Well, it turns out that there are several characters that are perfectly acceptable for you to use in a Windows file or folder name that are not acceptable in a file or folder name on a SharePoint server. (For the definitive list of what’s not acceptable, see https://support.microsoft.com/en-us/kb/905231.) And if you’re trying to sync thousands of files with your OneDrive for Business account and a few of them have illegal characters in their names, the sync operation will fail and you will get to play the “find-the-file-with-the-illegal-file-name” game, which can provide you with hours of fun…
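
If you’d rather hunt down the offending names before a big sync fails, a few lines of script will do it. Here’s a minimal sketch – the character list is my reading of the KB article linked above (double-check it against the KB for your SharePoint version), and the folder path is just a placeholder:

```python
import os

# Characters SharePoint rejects in file and folder names, per the KB article
# referenced above (verify this list against the KB for your SharePoint version).
ILLEGAL_CHARS = set('~"#%&*:<>?/\\{|}')

def find_illegal_names(root):
    """Walk a folder tree and yield paths whose names SharePoint would reject."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            if any(ch in ILLEGAL_CHARS for ch in name):
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    # Hypothetical folder -- point this at whatever you intend to sync.
    for path in find_illegal_names(r"C:\Users\me\Documents"):
        print(path)
```

Run it against the folders you plan to sync before kicking off the initial synchronization, and the “find-the-file-with-the-illegal-file-name” game gets a lot shorter.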

Original Post Follows
A year ago, in a blog post targeted at prospective hosting providers, we said, “…in our opinion, selling Office 365 to your customers is not a cloud strategy. Office 365 may be a great fit for customers, but it still assumes that most computing will be done on a PC (or laptop) at the client endpoint, and your customer will still, in most cases, have at least one server to manage, backup, and repair when it breaks.”

About the same time, we wrote about the concept of “Data Gravity” – that, just as objects with physical mass exhibit inertia and attract one another in accordance with the law of gravity, large chunks of data also exhibit a kind of inertia and tend to attract other related data and the applications required to manipulate that data. This is due in part to the fact that (according to former Microsoft researcher Jim Gray) the most expensive part of computing is the cost of moving data around. It therefore makes sense that you should be running your applications wherever your data resides: if your data is in the Cloud, it can be argued that you should be running your applications there as well – especially apps that frequently have to access a shared set of back-end data.

Although these are still valid points, they do not imply that Office 365 can’t bring significant value to organizations of all sizes. There is a case to be made for Office 365, so let’s take a closer look at it:

First, Office 365 is, in most cases, the most cost-effective way to license the Office applications, especially if you have fewer than 300 users (which is the cut-off point between the “Business” and “Enterprise” O365 license plans). Consider that a volume license for Office 2013 Pro Plus without Software Assurance under the “Open Business” license plan costs roughly $500. The Office 365 Business plan – which gets you just the Office apps without the on-line services – costs $8.25/month. If you do the math, you’ll see that $500 would cover the subscription cost for five years.

But wait – that’s really not an apples-to-apples comparison, because with O365 you always have access to the latest version of Office. So we should really be comparing the O365 subscription cost to the volume license price of Office with Software Assurance, which, under the Open Business plan, is roughly $800 for the initial purchase (which includes two years of S.A.), and $295 every two years after that to keep the S.A. in place. Total four-year cost under Open Business: $1,095. Total four-year cost under the Office 365 Business plan: $396. Heck, even the Enterprise E3 plan (at $20/month) is only $960 over four years.
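
If you want to check the math (or rerun it with your own pricing), here’s the comparison from the last two paragraphs as a tiny script; the dollar figures are the ones quoted above and will obviously change over time:

```python
# Four-year Office licensing comparison, using the prices quoted above.
YEARS = 4
MONTHS = YEARS * 12

# Open Business volume license with Software Assurance:
# ~$800 up front (includes two years of S.A.), then ~$295 per two-year renewal.
open_business_sa = 800 + 295 * ((YEARS - 2) // 2)

# Office 365 subscriptions.
o365_business = 8.25 * MONTHS   # Office apps only
o365_e3 = 20.00 * MONTHS        # Enterprise E3

print(f"Open Business + S.A. over {YEARS} years: ${open_business_sa:,.0f}")   # $1,095
print(f"Office 365 Business over {YEARS} years:  ${o365_business:,.0f}")      # $396
print(f"Office 365 E3 over {YEARS} years:        ${o365_e3:,.0f}")            # $960
```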

But (at the risk of sounding like a late-night cable TV commercial) that’s still not all! Office 365 allows each user to install the Office applications on up to five different PCs or Macs and up to five tablets and five smart phones. This is the closest Microsoft has ever come to per-user licensing for desktop applications, and in our increasingly mobile world where nearly everyone has multiple client devices, it’s an extremely attractive license model.

Second, at a price point that is still less than comparable volume licensing over a four-year period, you can also get Microsoft Hosted Exchange, Hosted SharePoint, OneDrive for Business, Hosted Lync for secure instant messaging and Web conferencing, and (depending on the plan) unlimited email archiving and eDiscovery tools such as the ability to put users and/or SharePoint document libraries on discovery hold and conduct global searches across your entire organization for relevant Exchange, Lync, and SharePoint data. This can make the value proposition even more compelling.

So what’s not to like?

Well, for one thing, email retention in Office 365 is not easy and intuitive. As we discussed in our recent blog series on eDiscovery, when an Outlook user empties the Deleted Items folder, or deletes a single item from it, or uses Shift+Delete on an item in another folder (which bypasses the Deleted Items folder), that item gets moved to the “Deletions” subfolder in a hidden “Recoverable Items” folder on the Exchange server. As the blog series explains, these items can still be retrieved by the user as long as they haven’t been purged. By default, they will be purged after two weeks. Microsoft’s Hosted Exchange service allows you to extend that period (the “Deleted Items Retention Period”), but only to a maximum of 30 days – whereas if you are running your own Exchange server, you can extend the period to several years.

But the same tools that allow a user to retrieve items from the Deletions subfolder will also allow a user to permanently purge items from that subfolder. And once an item is purged from the Deletions subfolder – whether explicitly by the user or by the expiration of the Deleted Items Retention Period – that item is gone forever. The only way to prevent this from happening is to put the user on Discovery Hold (assuming you’ve subscribed to a plan which allows you to put users on Discovery Hold), and, unfortunately, there is currently no way to do a bulk operation in O365 to put multiple users on Discovery Hold – you must laboriously do it one user at a time. And if you forget to do it when you create a new user, you run the risk of having that user’s email messages permanently deleted (whether accidentally or deliberately) with no ability to recover them if, Heaven forbid, you ever find yourself embroiled in an eDiscovery action.

One way around this is to couple your Office 365 plan with a third-party archiving tool, such as Mimecast. Although this obviously adds expense, it also adds another layer of malware filtering, an unlimited archive that the user cannot alter, a search function that integrates gracefully into Outlook, and an email continuity function that allows you to send/receive email directly via a Mimecast Web interface if the Office 365 Hosted Exchange service is ever unavailable. You can also use a tool like eFolder’s CloudFinder to back up your entire suite of Office 365 data – documents as well as email messages.

And then there’s OneDrive. You might be able, with a whole lot of business process re-engineering, to figure out how to move all of your file storage into Office 365’s Hosted SharePoint offering. Of course, there would then be no way to access those files unless you’re on-line. Hence the explosive growth in the business-class cloud file synchronization market – where you have a local folder (or multiple local folders) that automatically synchronizes with a cloud file repository, giving you the ability to work off-line and, provided you’ve saved your files in the right folder, synchronize those files to the cloud repository the next time you connect to the Internet. Microsoft’s entry in this field is OneDrive for Business…but there is a rather serious limitation in OneDrive for Business as it exists today.

O365’s 1 TB of Cloud Storage per user sounds like more than you would ever need. But what you may not know is that there is a limit of 20,000 “items” per user (both a folder and a file within that folder are “items”). You’d be surprised at how fast you can reach that limit. For example, there are three folders on my laptop where all of my important work-related files are stored. One of those folders contains files that also need to be accessible by several other people in the organization. The aggregate storage consumed by those three folders is only about 5 GB – but there are 18,333 files and subfolders in those three folders. If I were trying to use OneDrive for Business to synchronize all those files to the Cloud, I would probably be less than six months away from exceeding the 20,000-item limit.
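
If you’re curious how close your own folders are to that ceiling, it only takes a few lines to count “items” the way OneDrive for Business does (every file and every subfolder counts). A quick sketch – the folder paths are hypothetical placeholders:

```python
import os

def count_onedrive_items(root):
    """Count files plus folders, which is how OneDrive for Business counts 'items'."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        total += len(dirnames) + len(filenames)
    return total

# Hypothetical folder paths -- substitute the folders you actually sync.
folders = [r"C:\Work\Projects", r"C:\Work\Clients", r"C:\Work\Shared"]
grand_total = sum(count_onedrive_items(f) for f in folders)
print(f"{grand_total:,} items ({grand_total / 20_000:.0%} of the 20,000-item limit)")
```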

Could I go through those folders and delete a lot of stuff I no longer need, or archive them off to, say, a USB drive? Sure I could – and I try to do that periodically. I dare say that you probably also have a lot of files hanging around on your systems that you no longer need. But it takes time to do that grooming – and what’s the most precious resource that most of us never have enough of? Yep, time. My solution is to use Citrix ShareFile to synchronize all three of those folders to a Cloud repository. We also offer Anchor Works (now owned by eFolder) for business-class Cloud file synchronization. (And there are good reasons why you might choose one over the other, but they’re beyond the scope of this article.)

The bottom line is that, while Office 365 still may not be a complete solution that will let you move your business entirely to the cloud and get out of the business of supporting on-prem servers, it can be a valuable component of a complete solution. As with so many things in IT, there is not necessarily a single “right” way to do anything. There are multiple approaches, each with pros and cons, and the challenge is to select the right combination of services for a particular business need. We believe that part of the value we can bring to the table is to help our clients select that right combination of services – whether it be a ManageOps hosted private cloud, a private cloud on your own premises, in your own co-lo, or in a public infrastructure such as Amazon or Azure, or a public/private hybrid cloud deployment – and to help our clients determine whether one of the Office 365 plans should be part of that solution. And if you use the Office Suite at all, the answer to that is probably “yes” – it’s just a matter of which plan to choose.

Hyperconvergence and the Advent of Software Defined Everything (Part 2)

As cravings go, the craving for the perfect morning cup of tea in jolly old England rivals that of the most highly-caffeinated Pacific Northwest latte-addict. So, in the late 1800s, some inventive folks started thinking about what was actually required to get the working man (or woman) out of bed in the morning. An alarm clock, certainly. A lamp of some kind during the darker parts of the year (England being at roughly the same latitude as the State of Washington). And, most importantly, that morning cup of tea. A few patent filings later, the “Teasmade” was born. According to Wikipedia, they reached their peak of popularity in the 1960s and 1970s…although they are now seeing an increase in popularity again, partly as a novelty item. You can buy one on eBay for under $50.

The Teasmade, ladies and gentlemen, is an example of a converged appliance. It integrates multiple components – an alarm clock, a lamp, a teapot – into a pre-engineered solution. And, for its time, a pretty clever one, if you don’t mind waking up with a pot of boiling water inches away from your head. The Leatherman multi-tool is another example of a converged appliance. You get pliers, wire cutters, knife blades, Phillips-head and flat-head screwdrivers, a can/bottle opener, and, depending on the model, an awl, a file, a saw blade, etc., etc., all in one handy pocket-sized tool. It’s a great invention, and I keep one on my belt constantly when I’m out camping, although it would be of limited use if I had to work on my car.

How does this relate to our IT world? Well, in traditional IT, we have silos of systems and operations management. We typically have separate admin groups for storage, servers, and networking, and each group maintains the architecture and the vendor relationships, and handles purchasing and provisioning for the stuff that group is responsible for. Unfortunately, these groups do not always play nicely together, which can lead to delays in getting new services provisioned at a time when agility is increasingly important to business success.

Converged systems attempt to address this by combining two or more of these components as a pre-engineered solution…components that are chosen and engineered to work well together. One example is the “VCE” system, so called because it is a bundle of VMware, Cisco UCS hardware, and EMC storage.

A “hyperconverged” system takes this concept a step further. It is a modular system from a single vendor that integrates all functions, with a management overlay that allows all the components to be managed via a “single pane of glass.” They are designed to scale by simply adding more modules. They can typically be managed by one team, or, in some cases, one person.

VMware’s EVO:RAIL system, introduced in August of last year, is perhaps the first example of a truly hyperconverged system. VMware has arrangements with several hardware vendors, including Dell, HP, Fujitsu, and even SuperMicro, to build EVO:RAIL on their respective hardware. All vendors’ products include four dual-processor compute nodes with 192 GB RAM each, one 400 GB SSD per node (used for caching), and three 1.2 TB hot-plug disk drives per node, all in a 2U rack-mount chassis with dual hot-plug redundant power supplies.
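
For a sense of scale, here’s what that configuration adds up to per appliance – simple arithmetic on the figures above, nothing more:

```python
# Aggregate resources in a single EVO:RAIL appliance, from the per-node figures above.
nodes = 4
ram_gb = nodes * 192             # 768 GB of RAM
ssd_cache_gb = nodes * 400       # 1,600 GB (1.6 TB) of SSD cache
hdd_raw_tb = nodes * 3 * 1.2     # 14.4 TB of raw HDD capacity

print(f"Per 2U appliance: {ram_gb} GB RAM, {ssd_cache_gb / 1000:.1f} TB SSD cache, "
      f"{hdd_raw_tb:.1f} TB raw HDD")
```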

Update – June 10, 2015
VMware has now given hardware vendors more flexibility in configuring the appliances: They can now include dual six-, eight-, ten-, or 12-core Intel Haswell or Ivy Bridge CPUs per node, 128 GB to 512 GB of RAM per node, and an alternate storage configuration of one 800 GB SSD and five 1.5 TB HDDs per node.

The hardware is bundled with VMware’s virtualization software, as well as their Virtual SAN. The concept is appealing – you plug it in, turn it on, and you’re 15 minutes away from building your first VM. EVO:RAIL can be scaled out to four appliances (today), with plans to increase the number of nodes in the future.

The good news is that it’s fast and simple, it has a small footprint (meaning it enables high-density computing), and places lower demands on power and cooling. Todd Knapp, writing for searchvirtualdesktop.techtarget.com, says, “Hyperconverged infrastructure is a good fit for companies with branch locations or collocated facilities, as well as small organizations with big infrastructure requirements.”

Andy Warfield (from whom I borrowed the Teasmade example), writing in his blog at www.cohodata.com, is a bit more specific: “…converged architectures solve a very real and completely niche problem: at small scales, with fairly narrow use cases, converged architectures afford a degree of simplicity that makes a lot of sense. For example, if you have a branch office that needs to run 10 – 20 VMs and that has little or no IT support, it seems like a good idea to keep that hardware install as simple as possible. If you can do everything in a single server appliance, go for it!”

But Andy also points out some not-so-good news:

However, as soon as you move beyond this very small scale of deployment, you enter a situation where rigid convergence makes little or no sense at all. Just as you wouldn’t offer to serve tea to twelve dinner guests by brewing it on your alarm clock, the idea of scaling cookie-cutter converged appliances begs a bit of careful reflection.

If your environment is like many enterprises that I’ve worked with in the past, it has a big mix of server VMs. Some of them are incredibly demanding. Many of them are often idle. All of them consume RAM. The idea that as you scale up these VMs on a single server, that you will simultaneously exhaust memory, CPU, network, and storage capabilities at the exact same time is wishful thinking to the point of clinical delusion…what value is there in an architecture that forces you to scale out, and to replace at end of life, all of your resources in equal proportion?

Moreover, hyperconverged systems are, at the moment, pretty darned expensive. An EVO:RAIL system will cost you well over six figures, and locks you into a single vendor. Unlike most stand-alone SAN products, VMware’s virtual SAN won’t provision storage to physical servers. And EVO:RAIL is, by definition, VMware only, whereas many enterprises have a mixture of hypervisors in their environment. (According to Todd Knapp, saying “We’re a __________ shop” is just another way of saying “We’re more interested in maintaining homogeneity in the network than in taking advantage of innovations in technology.”) Not to mention the internal political problems: Which of those groups we discussed earlier is going to manage the hyperconverged infrastructure? Does it fall under servers, storage, or networking? Are you going to create a new group of admins? Consolidate the groups you have? It could get ugly.

So where does this leave us? Is convergence, or hyperconvergence, a good thing or not? The answer, as it often is in our industry, is “It depends.” In the author’s opinion, Andy Warfield is exactly right in that today’s hyperconverged appliances address fairly narrow use cases. On the other hand, the hardware platforms that have been developed to run these hyperconverged systems, such as the Fujitsu CX400, have broader applicability. Just think for a moment about the things you could do with a 2U rack-mount system that contained four dual-processor server modules with up to 256 GB of RAM each, and up to 24 hot-plug disk drives (6 per server module).

We’ve built a number of SMB virtualization infrastructures with two rack-mount virtualization hosts and two DataCore SAN nodes, each of which was a separate 2U server with its own power supplies. Now we can do it in ¼ the rack space with a fraction of the power consumption. Or how about two separate Stratus everRun fault-tolerant server pairs in a single 2U package?

Innovation is nearly always a good thing…but it’s amazing how often the best applications turn out not to be the ones the innovators had in mind.

Hyperconvergence and the Advent of Software-Defined Everything (Part 1)

The IT industry is one of those industries that is rife with “buzz words” – convergence, hyperconvergence, software-defined this and that, etc., etc. It can be a challenge for even the most dedicated IT professionals to keep up on all the new trends in technology, not to mention the new terms invented by marketeers who want you to think that the shiny new product they just announced is on the leading edge of what’s new and cool…when in fact it’s merely repackaged existing technology.

What does it really mean to have “software-defined storage” or “software-defined networking”…or even a “software-defined data center”? What’s the difference between “converged” and “hyperconverged”? And why should you care? This series of articles will suggest some answers that we hope will be helpful.

First, does “software-defined” simply mean “virtualized?”

No, not as the term is generally used. If you think about it, every piece of equipment in your data center these days has a hardware component and a software component (even if that software component is hard-coded into specialized integrated circuit chips or implemented in firmware). Virtualization is, fundamentally, the abstraction of software and functionality from the underlying hardware. Virtualization enables “software-defined,” but, as the term is generally used, “software defined” implies more than just virtualization – it implies things like policy-driven automation and a simplified management infrastructure.

An efficient IT infrastructure must be balanced properly between compute resources, storage resources, and networking resources. Most readers are familiar with the leading players in server virtualization, with the “big three” being VMware, Microsoft, and Citrix. Each has its own control plane to manage the virtualization hosts, but some cross-platform management is available. vCenter can manage Hyper-V hosts. System Center can manage vSphere and XenServer hosts. It may not be completely transparent yet, but it’s getting there.

What about storage? Enterprise storage is becoming a challenge for businesses of all sizes, due to the sheer volume of new information that is being created – according to some estimates, as much as 15 petabytes of new information world-wide every day. (That’s 15 million billion bytes.) The total amount of digital data that needs to be stored somewhere doubles roughly every two years, yet storage budgets are increasing only 1% – 5% annually. Hence the interest in being able to scale up and out using lower-cost commodity storage systems.
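
To see why that gap matters, compare the two growth curves over a few years – a quick sketch using the figures above, with the budget growing at the top of that 1% – 5% range:

```python
# Data doubling every two years vs. a storage budget growing 5% per year.
data_growth = 2 ** 0.5    # doubling every 2 years is ~41% growth per year
budget_growth = 1.05      # top of the 1%-5% range quoted above

data, budget = 1.0, 1.0
for year in range(1, 7):
    data *= data_growth
    budget *= budget_growth
    print(f"Year {year}: data x{data:.1f}, budget x{budget:.2f}")

# After six years the data has grown ~8x while the budget has grown only ~1.34x.
```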

But the problem is often compounded by vendor lock-in. If you have invested in Vendor A’s enterprise SAN product, and now want to bring in an enterprise SAN product from Vendor B because it’s faster/better/less costly, you will probably find that they don’t talk to one another. Want to move Vendor A’s SAN into your Disaster Recovery site, use Vendor B’s SAN in production, and replicate data from one to the other? Sorry…in most cases that’s not going to work.

Part of the promise of software-defined storage is the ability to not only manage the storage hardware from one vendor via your SDS control plane, but also pull in all of the “foreign” storage you may have and manage it all transparently. DataCore, to cite just one example, allows you to do just that. Because the DataCore SAN software is running on a Windows Server platform, it’s capable of aggregating any and all storage that the underlying Windows OS can see into a single storage pool. And if you want to move your aging EMC array into your DR site, and have your shiny, new Compellent production array replicate data to the EMC array (or vice versa), just put DataCore’s SANsymphony-V in front of each of them, and let the DataCore software handle the replication. Want to bring in an all-flash array to handle the most demanding workloads? Great! Bring it in, present it to DataCore, and let DataCore’s auto-tiering feature dynamically move the most frequently-accessed blocks of data to the storage tier that offers the highest performance.

What about software-defined networking? Believe it or not, in 2013 we passed the tipping point: there are now more virtual switch ports in the world than physical ones. Virtual switching technology is built into every major hypervisor. Major players in the network appliance market are making their technology available in virtual appliance form. For example, WatchGuard’s virtual firewall appliances can be deployed on both VMware and Hyper-V, and Citrix’s NetScaler VPX appliances can be deployed on VMware, Hyper-V, or XenServer. But again, “software-defined networking” implies the ability to automate changes to the network based on some kind of policy engine.

If you put all of these pieces together, vendor-agnostic virtualization + policy-driven automation + simplified management = software-defined data center. Does the SDDC exist today? Arguably, yes – one could certainly make the case that the VMware vCloud Automation Center, Microsoft’s Azure Pack, Citrix’s CloudStack, and the open-source OpenStack all have many of the characteristics of a software-defined data center.

Whether the SDDC makes business sense today is not as clear. Techtarget.com quotes Brad Maltz of Lumenate as saying, “It will take about three years for companies to learn about the software-designed data center concept, and about five to ten years for them to understand and implement it.” Certainly some large enterprises may have the resources – both financial and skill-related – to begin reaping the benefits of this technology sooner, but it will be a challenge for small and medium-sized enterprises to get their arms around it. That, in part, is what is driving vendors to introduce converged and hyperconverged products, and that will be the subject of Part 2 of this series.

Windows Server 2003 – Four Months and Counting

Unless you’ve been living in a cave in the mountains for the last several months, you’re probably aware that Windows Server 2003 hits End of Life on July 14, 2015 – roughly four months from now. That means Microsoft will no longer develop or release security patches or fixes for the OS. You will no longer be able to call Microsoft for support if you have a problem with your 2003 server. Yet, astoundingly, only a few weeks ago Microsoft was estimating that there were still over 8 million 2003 servers in production.

Are some of them yours? If so, consider this: As Mike Boyle pointed out in his blog last October, you’re running a server OS that was released the year Myspace (remember them?) was founded; the year the Tampa Bay Buccaneers won the Super Bowl; the year before Mark Zuckerberg launched Facebook. Yes, it was that long ago.

Do you have to deal with HIPAA or PCI compliance? What would it mean to your organization if you didn’t pass your next audit? Because you probably won’t if you’re still running 2003 servers. And even if HIPAA and PCI aren’t an issue, what happens when (not if) the next big vulnerability is discovered and you have no way to patch for it?

Yes, I am trying to scare you – because this really is serious stuff, and if you don’t have a migration plan yet, you don’t have much time to assemble one. Please, let’s not allow this to become another “you can have it when you pry it from my cold dead hands” scenario like Windows XP. There really is too much at stake here. You can upgrade. You can move to the cloud. Or you can put your business at risk. It’s your call.