High Availability and Fault Tolerance Part Two

In my last post on High Availability and Fault Tolerance (HA/FT), we talked a little bit about redundant power, meaning you have more than one source of electricity to run your servers. But there are numerous other internal threats that can cause unplanned server outages.

After backup power, the next level of redundancy comes in the servers themselves. Most server-class machines have redundant components, such as hard drives and power supplies, built right in. This means that right off the shelf, these systems have some level of Fault Tolerance (FT), which can keep applications and data available when a single component fails. However, there are still plenty of threats that can cause unplanned outages, such as when a non-redundant component fails or when multiple components fail.

Remember that High Availability means that if a virtual or physical machine goes down, it will automatically restart and come back online. Fault Tolerance means that multiple components can fail with no loss of data and no interruption of application availability.

To take HA/FT to a higher level, we can turn to one of several products available on the market. Software from companies like Vision Solutions (Double-Take) allows you to create a standby server. More sophisticated products from VMware and Stratus allow you to mirror applications and data on identical servers using a concept known as lock-step, which means that applications and data are processed in real time across two hosts. With these products, multiple components or even an entire server can fail and your applications continue to be available to users.

With Double-Take software from Vision Solutions, IT staff can create a primary/standby server pair, with all of your data replicated to the standby server in real time. This is a sufficient solution for most small to medium enterprises. However, if the primary server fails, there is still a brief interruption in application availability while the failover to the standby server occurs. In special situations that require the highest levels of High Availability and Fault Tolerance, we turn to solutions from VMware or Stratus, which provide a scenario where multiple components can fail on multiple servers and your application will continue to run.

Determining which approach is right for you is really an economic decision based on the cost of downtime. If you can’t put a dollar value on what it costs your business per hour or per day when a critical application is unavailable, then that application probably isn’t sufficiently critical for you to spend a lot of money on an HA/FT solution. If you do know what that cost is, then, just like buying any other kind of business insurance, you can make a business decision as to how much money you can justify spending to protect against that risk of loss.

eDiscovery Part 2 – PST Files vs. Exchange Archiving

This is the second in a series of blog posts on eDiscovery, which will include video excerpts from the presentation we made at the O365 Nation Fall Conference held in Redmond last month. In Part 1 of this series, we discussed the lifecycle of an Exchange email message, what the “Recoverable Items” folder is all about, and the role of the “Single Item Recovery” feature in Microsoft Exchange.

In this segment, we discuss PST files – why you may not want people using them, how to prevent their use, and why the archiving functionality built into Exchange 2010 and 2013 is a better option.
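As a preview of the prevention piece, the usual approach is an Outlook Group Policy setting; the sketch below shows the equivalent per-user registry values using Python's winreg module. The key path assumes Outlook 2013 (the "15.0" branch), and the value names are the ones commonly cited for the Outlook administrative templates, so verify them against Microsoft's Group Policy documentation for your Outlook version before deploying anything:

    # Illustrative only: set per-user Outlook policy values commonly used to
    # discourage PST usage. Paths and value names assume Outlook 2013 ("15.0");
    # verify against Microsoft's documentation for your Outlook version.
    import winreg

    OUTLOOK_POLICY = r"Software\Policies\Microsoft\Office\15.0\Outlook"

    def set_policy_dword(subkey: str, name: str, value: int) -> None:
        """Create (if needed) and set a REG_DWORD policy value under HKCU."""
        key = winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, subkey, 0,
                                 winreg.KEY_SET_VALUE)
        try:
            winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
        finally:
            winreg.CloseKey(key)

    # Prevent users from adding PST files to their Outlook profiles.
    set_policy_dword(OUTLOOK_POLICY, "DisablePst", 1)
    # Prevent new content from being added to existing PST files.
    set_policy_dword(OUTLOOK_POLICY + r"\PST", "PstDisableGrow", 1)

In practice you would push these settings through Group Policy rather than a script, but the registry view makes it clear exactly what the policy is doing on each workstation.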

How Do You Back Up Your Cloud Services?

I recently came across a post on spiceworks.com that, although it’s a couple of years old, makes a great point: “IT professionals would never run on-premise systems without adequate backup and recovery capabilities, so it’s hard to imagine why so many pros adopt cloud solutions without ensuring the same level of protection.”

This is not a trivial issue. According to some articles I’ve read, over 100,000 companies are now using Salesforce.com as their CRM system. Microsoft doesn’t reveal how many Office 365 subscribers they have, but they do reveal their annual revenue run-rate. If you make some basic assumptions about the average monthly fee, you can make an educated guess as to how many subscribers they have, and most estimates place it at over 16 million (users, not companies). Google Apps subscriptions are also somewhere in the millions (they don’t reveal their specific numbers either). If your organization subscribes to one or more of these services, have you thought about backing up that data? Or are you just trusting your cloud service provider to do it for you?

Let’s take Salesforce.com as a specific example. Deleted records normally go into a recycle bin, and are retained and recoverable for 15 days. But there are some caveats there:

  • Your recycle bin can only hold a limited number of records. That limit is 25 times your storage allowance in megabytes. (According to the Salesforce.com “help” site, this usually translates to roughly 5,000 records per license.) For example, if you have 500 MB of storage, your record limit is 12,500 records. If that limit is exceeded, the oldest records in the recycle bin get deleted, provided they’ve been there for at least two hours.
  • If a “child” record – like a contact or an opportunity – is deleted, and its parent record is subsequently deleted, the child record is permanently deleted and is not recoverable.
  • If the recycle bin has been explicitly purged (which requires “Modify All Data” permissions), you may still be able to get the purged records back using the Data Loader tool, but the window of time is very brief. Exactly how long you have is not well documented, but research indicates it’s around 24 – 48 hours.

A quick Internet search will turn up horror stories of organizations where a disgruntled employee deleted a large number of records, then purged the recycle bin before walking out the door. If this happens to you on a Friday afternoon, it’s likely that by Monday morning your only option will be to contact Salesforce.com to request their help in recovering your data. The Salesforce.com help site mentions that this help is available, and notes that there is a “fee associated” with it. It doesn’t mention that the fee starts at $10,000.

You can, of course, periodically export all of your Salesforce.com data as a (very large) .CSV file. Restoring a particular record or group of records will then involve deleting everything in the .CSV file except the records you want to restore, and then importing them back into Salesforce.com. If that sounds painful to you, you’re right.
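If you do go the export route, a scheduled script can at least take the drudgery out of producing the snapshot. Here is a minimal sketch, assuming Python with the third-party requests package, an OAuth access token you have already obtained, and placeholder values for the instance URL and field list, that pulls Account records through the Salesforce REST API and writes them to a CSV file (a different mechanism than the built-in export, but one that is easy to automate):

    # Illustrative sketch only: export Salesforce Account records to CSV via
    # the REST API. The instance URL, access token, and field list below are
    # placeholders; obtain a real token via one of Salesforce's OAuth flows.
    import csv
    import requests

    INSTANCE_URL = "https://yourInstance.salesforce.com"   # placeholder
    ACCESS_TOKEN = "00D...your_session_token"              # placeholder
    API_VERSION = "v31.0"
    FIELDS = ["Id", "Name", "Phone", "LastModifiedDate"]

    headers = {"Authorization": "Bearer " + ACCESS_TOKEN}
    url = INSTANCE_URL + "/services/data/" + API_VERSION + "/query/"
    params = {"q": "SELECT {} FROM Account".format(", ".join(FIELDS))}

    with open("accounts_backup.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        while url:
            resp = requests.get(url, headers=headers, params=params)
            resp.raise_for_status()
            data = resp.json()
            for record in data["records"]:
                writer.writerow({field: record.get(field) for field in FIELDS})
            # Large result sets are paged; follow nextRecordsUrl until done.
            next_url = data.get("nextRecordsUrl")
            url = INSTANCE_URL + next_url if next_url else None
            params = None   # the original query is baked into nextRecordsUrl

Restoring then becomes a matter of re-importing only the rows you need, rather than hand-editing one enormous export.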

The other alternative is to use a third-party backup service, of which there are several, to back up your Salesforce.com data. There are several advantages to using a third-party tool: backups can be scheduled and automated, it’s easier to search for the specific record(s) you want to restore, and you can roll back to any one of multiple restore points. One such tool is Cloudfinder, which was recently acquired by eFolder. Cloudfinder will back up data from Salesforce.com, Office 365, Google Apps, and Box. I expect that list of supported cloud services to grow now that they’re owned by eFolder.

We at ManageOps are excited about this acquisition because we are an eFolder partner, which means that we are now a Cloudfinder partner as well. For more information on Cloudfinder, or any eFolder product, contact scott@manage-ops.com, or just click the “Request a Quote” button on this page.

DataCore Releases “State of Software Defined Storage” Report

Last week, we wrote about DataCore’s recent release of v10.0 of their flagship SANsymphony-V software-defined storage product. The features and functionality in the new release were, no doubt, driven in part by the research DataCore has done, as reflected in their recently released fourth annual survey of global IT professionals – conducted to identify current storage challenges and the forces that are driving demand for software-defined storage. Here are some of the highlights from that survey, which can be downloaded in its entirety from the DataCore Web site:

  • 41% of respondents said that a primary factor impeding organizations from considering different models and manufacturers of storage devices was the plethora of tools required to manage them.
  • 37% of respondents said that the difficulty of migrating between different models and generations of storage devices was also a major impediment.
  • 39% of respondents said that these issues were not a concern for them, because they were using independent storage virtualization software to pool storage devices of different models from different manufacturers and manage them centrally.
  • Despite all the talk about the “all flash data center,” more than half of the respondents (63%) said that they currently have less than 10% of their capacity assigned to flash storage.
  • Nearly 40% of respondents said they were not planning to use flash or SSDs for server virtualization projects because of cost concerns.
  • 23% of respondents ranked performance degradation or the inability to meet performance expectations as the most serious obstacle when virtualizing server workloads; 32% ranked it as somewhat of an obstacle.
  • The highest-ranking reasons that organizations deployed storage virtualization software were the improvement of disaster recovery and business continuity (32%) and the ability to enable storage capacity expansion without disruption (30%).

DataCore is a pioneer and market leader in software-defined storage. Read more about DataCore and ManageOps at www.ManageOps.com/DataCore.

Is It Time to Upgrade Your DataCore SANsymphony-V?

A few months ago, DataCore released SANsymphony-V 10.0. If you’re running an earlier version of SANsymphony-V, there are several reasons to start planning your upgrade. There are some great new features in v10, and we’ll get to those in a moment, but you should also bear in mind that DataCore’s support policy covers the current full release (v10) and the release immediately prior to it (v9.x). Support for v8.x officially ends on December 31, 2014, and support for v7.x ended last June.

That doesn’t mean DataCore won’t help you if you have a problem with an earlier version. It does mean that their obligation is limited to “best effort” support, and does not extend to bug fixes, software updates, or root-cause analysis of issues you may run into. So, if you’re on anything earlier than v9.x, you really should talk to us about upgrading.

But even if you’re on v9.x, there are some good reasons why you may want to upgrade to 10.0:

  • Scalability has doubled from 16 to a maximum of 32 nodes.
  • Support for high-speed 40/56 GbE iSCSI, 16 Gb Fibre Channel, and iSCSI target NIC teaming.
  • Performance visualization/heat map tools to give you better insight into the behavior of flash and disk storage tiers.
  • New auto-tiering settings to optimize expensive resources like flash cards.
  • Intelligent disk rebalancing to dynamically redistribute the load across available disks within a storage tier.
  • Automated CPU load leveling and flash optimization.
  • Disk pool optimization and self-healing storage – disk contents are automatically restored across the remaining storage in the pool.
  • New self-tuning caching algorithms and optimizations for flash cards and SSDs.
  • Simple configuration wizards to rapidly set up different use cases.

And if that’s not enough, v10 now allows you to provision high-performance virtual SANs that can scale to more than 50 million IOPS and up to 32 petabytes of capacity across a cluster of 32 servers. Not sure whether a virtual SAN can deliver the performance you need? DataCore will give you a free virtual SAN for non-production evaluation use.

Check out this great overview of software-defined storage virtualization:

eDiscovery Part 1 – Lifecycle of an Email Message

Last Friday, September 26, ManageOps was invited to present at the O365 Nation fall conference in Redmond on the subject of eDiscovery and Organizational Search in Microsoft Office. O365 Nation is a new organization created by our long-time friend Harry Brelsford, the founder of SMB Nation, and, as you might expect, most of the content at the conference was related to Office 365. However, since the eDiscovery and Search tools in question are built into Exchange, SharePoint, and Lync, the subject matter of our presentation is equally applicable to on-premises deployments of these products.

This is the first of a series of blog posts on this topic, which will include video excerpts from the presentation.

It is important to note that the Microsoft tools discussed here only cover a portion of the Electronically Stored Information (“ESI”) that an organization may be required to produce as part of a discovery action. ESI can include Web content, social media content, videos, voice mails, etc., in addition to the information contained in email and Lync messages and SharePoint content. The primary purpose of these tools is to enable you to preserve email, Lync, and SharePoint content in its original form, perform integrated searches across all three platforms (plus file shares that are being indexed by SharePoint), and export the results in an industry-standard format that can be ingested into third-party eDiscovery tools for further processing.

Since, by sheer volume, email is likely to be the largest component an organization will have to deal with, this series will begin with a discussion of the lifecycle of an email message in Microsoft Exchange – specifically, what happens to an email message when the user’s “Deleted Items” folder is emptied, and how we can ensure that if a user attempts to modify an existing message, a copy of that message in its original form is preserved.

September 2014 ManageOps Partners

ManageOps signed three new partners to its Cloud Hosting Partner Program in September 2014. A big welcome to:

Equinox IT Services of Orem, UT.

On Demand IT of Vancouver, WA.

A collaboration with ManageOps ensures that the technology running a customer’s business becomes almost invisible to its users. By becoming a partner, you can keep your current in-house or managed-services customers who want to move to a cloud-based system, without having to build your own environment. To learn about our partner program, please visit www.ManageOps.com/partners.

How IT Leaders Can Get More Bang for Their Buck

Earlier I wrote about the value proposition for IT Leaders of SMBs to engage in a Prime Vendor Model to satisfy the diverse needs of their departments. That article was based on principles defined and implemented by many larger enterprises across the US and around the world. There has been much debate and discussion about this topic, with many asking, “How can one vendor really meet all of my needs? And if I think there could be one, how do I know which one is the best fit for my firm?” In this article, I will address these two critical questions facing IT Leaders.

First of all, we have been speeding toward the Prime Vendor Model for a number of years, ever since procurement departments started developing an echelon of “Preferred” vendors with whom they expected to spend more money year after year. The goal of the procurement department was simple: beat down the vendor’s price to ensure the budget numbers were met. This is a 20th-century way of doing business, where someone has to win and someone else has to lose. Our firm, and really the avant-garde set of firms like us, is not interested in a 20th-century model. We embrace the 21st-century models of co-opetition, win/win business scenarios, and loyalty generated by consistently delivering value and exceeding expectations.

The Prime Vendor Model can be constructed in a number of ways. The simplest (and thus easiest to implement) is a pre-determined cost sheet for products and services that are purchased year over year. The best example is laptop or server purchases. In a Prime Vendor model, the single vendor will provide the required hardware at the same price for a set period of 12 to 24 months. The price should be low enough that the customer feels the deal is pretty good (it doesn’t have to be the absolute lowest), and the vendor takes on the responsibility of delivering machines in the timeframes required and at the lowest cost the vendor can secure. This scenario requires a consistent purchase pattern and few changes to the product mix in order to operate seamlessly.

The next easiest model requires more transparency: the cost-plus, or fixed-margin, model. In this model, the vendor shows the customer actual cost numbers from orders, and a margin percentage is agreed upon for the duration of the contract. The margin number will obviously be affected by the make-up of the annual purchases, but when a customer shares their annual spend, the vendor can count on a guaranteed revenue stream for the coming year or two. This scenario allows for a bit more fluidity in purchases (type, number, frequency, etc.) because both sides know exactly what they are getting.
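To make the arithmetic concrete, here is a minimal sketch of the fixed-margin idea in Python, assuming the agreed margin is applied as a markup on the vendor's actual cost (a margin calculated on the selling price would use a slightly different formula); the numbers are purely illustrative:

    # Illustrative only: fixed-margin (cost-plus) pricing as described above,
    # treating the agreed margin as a markup on the vendor's actual cost.
    def cost_plus_price(vendor_cost: float, agreed_margin: float) -> float:
        """Return the customer price for a given vendor cost and agreed margin."""
        return round(vendor_cost * (1.0 + agreed_margin), 2)

    # Example: a server the vendor sources for $1,200 with a 12% agreed margin
    # is billed to the customer at $1,344, regardless of list-price haggling.
    print(cost_plus_price(1200.00, 0.12))   # 1344.0

The point is that the formula, not a quarterly negotiation, determines the price, which is what makes the vendor's revenue predictable and the customer's spend transparent.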

One of the more advanced models is an SLA-based model. In this model, the vendor guarantees delivery of the services required by the customer, and the price, up front, but it is the vendor’s choice how to deliver those services. More advanced contract techniques can be employed here as well, including bonuses and penalties based on SLA performance. This model requires the most trust between the customer and the Prime Vendor and can even be tied to metrics outside of the IT organization to ensure alignment.

Now that we’ve discussed what the business relationship in this model can look like, how will an IT organization know which of its vendors to promote to this “Prime Vendor” position? Remember, this is not just a guaranteed spend amount; it also means transparency in intimate business details and requires rigorous planning to provide the Prime Vendor the information necessary to support the business. And if you want the maximum value out of the relationship, your Prime Vendor’s involvement in broader business conversations will have to increase. Because of these increased expectations on all sides, the Prime Vendor could look very different from firm to firm. There really isn’t a set description for a Prime Vendor, but there is a good framework of expectations you can use to assess your vendors and determine whether any could fit this role, or which could fit it best.

When determining the best “fit” for a Prime Vendor, the critical areas to focus on are:

Proven track record – Measure the relationship not only by the times the vendor achieved the expected service level, but also by the times when things went sideways (supplier shock, disaster, etc.).

Capabilities match the needs of the organization – Ensure the skills the vendor has (or has access to) match the long-term needs of the organization. It’s better to get high-value skills at a discount by leveraging the power of large purchases. The vendor must also give you access to something your firm needs but hasn’t acquired or cannot acquire on its own.

Knowledge of the firm/industry – The Prime Vendor should know your organization very well, whether through the length of the relationship or through industry expertise. While neither of these is absolutely required, a vendor versed in your industry can also bring contextual and comparative information to the arrangement.

Willingness to engage – Vendors not willing to do business in this manner might have more to hide than you think. And if vendors are not willing to help build new operating models with shared responsibilities, you might reconsider where to spend your firm’s resources.

Once you have chosen to go down the Prime Vendor path, you will also need to consider the impact on your own organization. For instance, the procurement department will not need to be as large if all the business terms are agreed upon in the contract. Also, your internal planning meetings will need to include representatives from the Prime Vendor so they can get the inside scoop on what to expect for upcoming orders. Your Prime Vendor should also be providing you expertise in things like hardware selection, service delivery, and solution requirements/capabilities.

These are just a few of the things to consider when embracing a Prime Vendor model for your organization. The business case is there, and the benefits are significant for all involved, as long as you and your Prime Vendor are willing to do business in a more open, more transparent, more 21st century way.

Karl Burns is the Chief Strategy Officer at ManageOps. He can be reached at karl.burns@manage-ops.com.

ManageOps Cloud Infrastructure NOT Impacted by GNU Bash Vulnerability

On September 24, 2014, a vulnerability in the Bash shell was publicly announced. This vulnerability, also known as “ShellShock,” has been identified by NIST as CVE-2014-7169. It primarily affects GNU/Linux distributions and other products that are based upon those distributions.
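For readers who do run their own Linux systems, here is a minimal sketch (assuming Python 3.7+ and bash at /bin/bash) of the widely published environment-variable probe for the original ShellShock flaw, CVE-2014-6271; the follow-on variants, including CVE-2014-7169, require separate probes, so treat a clean result here as only a first check:

    # Illustrative check for the original ShellShock flaw (CVE-2014-6271).
    # A vulnerable bash executes the command appended to the exported
    # function definition and prints "vulnerable"; a patched bash does not.
    import subprocess

    def bash_is_vulnerable(bash_path="/bin/bash"):
        env = {"probe": "() { :;}; echo vulnerable"}
        result = subprocess.run(
            [bash_path, "-c", "echo shellshock probe"],
            env=env, capture_output=True, text=True,
        )
        return "vulnerable" in result.stdout

    if __name__ == "__main__":
        print("bash vulnerable to CVE-2014-6271:", bash_is_vulnerable())

Regardless of the result, the right response is simply to apply your distribution's updated bash packages as they are released.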

ManageOps’s hosting infrastructure primarily consists of Windows servers, which are not subject to this vulnerability. Furthermore, both WatchGuard (whose XTMv firewall appliances we use for our perimeter security) and Citrix (whose NetScaler virtual appliances we use to provide SSL/VPN access to our cloud infrastructure) have now issued statements confirming that these products are not subject to this vulnerability.

We will continue to monitor information on this vulnerability as it becomes available. In the meantime, ManageOps’s cloud hosting customers can be confident that the security of their data is not at risk from this vulnerability.

Part Two: When Should an IT Leader Use a Vendor?

In the IT space, as in all business areas, there is a constant need to do more with less every year. If you are only producing the same amount of revenue while still requiring the same amount of resources to produce it, you may not be long for this world. I know that sounds harsh, but in this accelerated age of shrinking business-model lifetimes, you have to keep speeding toward growth or you may be left in your competition’s wake. We had a fun saying: “You don’t have to be the fastest when running from a bear, you only have to be faster than one person.” In business, it’s more like a long-distance run where you need stamina, ideas, and sharp elbows to keep tight competition from getting into your lane and taking your place.

As businesses grow, their needs change. That’s true whether the goal is supporting new products and services, managing more diverse customers, or constantly improving the internal mechanics of the organization. The old way of doing business looked something like this: a company buys infrastructure one year, then buys services for the next five years to get the most out of that hardware, and it may also need new add-ons and additional services during the life of that hardware.

The company of the future will not get handcuffed to assets (depreciated or not) and will instead look for suppliers that can be as dynamic as the business itself. In the old model, companies needed multiple vendors to get the best price. GE was famous for this: they would issue an RFP, down-select to 3-5 vendors, then take the cheapest price, subtract 30%, and offer that price to the firm they thought could do the best work. It was a great strategy (for GE only) and not one built for the long term. But this is still how many SMBs operate, asking for multiple quotes and going with the cheapest. That may have worked in the past, but now, with Vendor Management Offices (VMOs) and Procurement Departments chasing smaller and smaller margins, SMBs need to adopt the strategy of the larger enterprises and engage in a Prime Vendor Model.

The Prime Vendor Model is one that puts the responsibility for the company’s IT spend on both the supplier and the buyer. Shared responsibility, shared benefits. The model is enabled by a service-oriented supplier and a client who wants to get out of the margin-squeezing business (a zero-sum game) and trust a specialist to deliver value while sharing in the reward (a positive-sum game). A Prime Vendor Model requires new styles of measurement, too. Gone are the days of scrutinizing the cost of every contract and item; instead, your supplier provides a valuable service that the SMB can depend on, for a reasonable (agreed-upon) margin, while lowering the costs (FTE effort) to identify, spec, solicit RFPs, assess, and finally manage delivery.

When this model is put to the test, an SMB will experience better overall performance from their IT staff for these key reasons:

1) Reduced time (FTE) spent managing the RFP process (identify, spec, solicit RFPs, assess, manage, and enforce)

2) Reduced knowledge required to plan and execute complex services (time spent researching, debating, etc.)

3) Reduced annual spend on IT

It’s the third one that will get the most attention; it may not occur every year, but it will occur over the course of a contract. This model has been around for larger firms that needed a blend of products, services, and strategic advice. When we implemented it for a client, we understood that we would face significant competition on a number of purchases (hardware, contracts, etc.) and that many vendors would “drop their shorts” to try to gain entry into our client’s annual budget. It took a lot of patience and commitment not to be short-sighted, and to trust another organization to deliver on the promises made to each other. Luckily, we had the full backing of leadership at both organizations, and the annual spend is being reduced. It doesn’t mean the client gets the lowest price point on every purchase, but year over year, we are seeing the downward trend. And that has built a relationship between both organizations that will benefit all of us for years.

At the end of the day, do you want to do business the old way, constantly spending days or weeks defining what you think you need, hosting vendor bake-offs, and ultimately having vendors deliver their service to the “T” with little regard for your company after taking a haircut on every deal? Or do you want to do business the way of the future, with limitless skills available at the ready, the entire stakeholder set committed to year-over-year improvement, and less headcount to justify? I choose the latter.

Karl Burns is the Chief Strategy Officer at ManageOps. He can be reached at karl.burns@manage-ops.com.