Part 2 – How we approach AntiVirus in Small Business


My approach to small-business antivirus is multi-layered; antivirus is just one component of my data protection strategy under the security umbrella. My first strategy in protecting a client (or myself, for that matter) is to take measures to stop viruses and other attacks before they reach our shores. A few years back, after several of our clients were attacked by crypto viruses, we rolled out OpenDNS to all clients. OpenDNS is a DNS filtering service that protects your network and mobile devices by screening out malicious sites. A few years after we implemented it, OpenDNS was purchased by Cisco, which in my opinion validates the value of their product. In the years since we implemented it, we have not had a single crypto infection, or any other virus for that matter.

The good news is that for the clients who did get crypto infections, our comprehensive RESTORE strategy had clean copies of their data; we removed the crypto and recovered all of their data by the time the next shift started. We recovered 100% of the clients’ data, but we still suffered downtime and disruption that we have since eliminated.

Next, we cleanse inbound email through cloud services; Mimecast and the tools built into Office 365 are our go-to products. This gives us a high level of control over cleansing our mail before it ever reaches our inboxes and our systems. Once the inbound data stream is cleaned up, we ensure that an up-to-date, properly configured firewall is in place along with good centralized antivirus and anti-malware solutions, and we are ready to compute.

Anywhere Access to Business-Critical Apps

Mobile Productivity

Provide Employees with Anywhere Access to Business-Critical Information with ManageOps and Citrix

With today’s mobile workforce, employees in businesses of all sizes and in all industries want – or actually demand – remote access to business-critical apps and data to help them achieve work-life balance. They’re working from home offices, collaborating with customers and colleagues, working with stakeholders in different time zones, and using a variety of networks.

Forward-thinking businesses like yours are stepping up to deliver mobility because they see the value of moving beyond basic network access to secure, portable, always-on, always-connected working environments that follow employees and empower them with access to business-critical apps regardless of their location, choice of device, or connectivity.

According to analyst firm ESG, nearly one-third (32%) of IT professionals believe that mobile devices have become crucial for their organization’s business processes and productivity, while another 55% consider them to be very important.

When asked why their organizations were embracing mobile computing, more than half the respondents identified a desire to increase employee productivity (52%), improve specific business processes (51%), and/or provide employees with access to specific mobile applications and services for various lines of business (51%).

For some industries like banking, financial services, government and healthcare, real-time information enabled by secure mobility can make all the difference in critical decision making – having a drastic impact on revenue, economic decisions or even saving lives.

The end-user experience is of paramount importance in delivering mobile access to business-critical apps, and the evolving threat and regulatory landscape is forcing IT organizations to achieve better security. They now need to protect critical business apps and data, maintain internal corporate standards and governance, and meet compliance and policy regulations for their industries.

But all of this is complicated many times over by the fact that employees are accessing apps and business-sensitive data on personally owned devices that are not controlled by IT. This creates security and management challenges for your IT organization, as you need to provision, secure, and support hundreds or maybe even thousands of devices and their data.

You know this is a necessary step for your business, but how do you make it all work?

App and desktop virtualization solutions from ManageOps powered by Citrix are the answer. Our solution empowers you to deliver the mobile access your employees need to business-critical information while delivering the security your business requires. And it simplifies IT management rather than adding complexity.

 

What is Virtualization?

Virtualization can mean different things depending on who you ask, so we are going to take a broad look at what virtualization is, the different forms it comes in, and why it is so popular.

This is going to be pretty basic stuff, so if you are looking for more advanced material, I promise we will get to it in future posts.

Virtualization has been getting a lot of buzz the last few years as it moved from being “bleeding edge” technology to becoming an industry standard. You may have even heard that there are lots of benefits to virtualizing your datacenter…but you may not be sure whether it’s for you, how it works, or even what it means.

There are several kinds of virtualization, including server virtualization, storage virtualization, application virtualization, network virtualization, and desktop virtualization. But when most folks talk about virtualization, they’re referring to server virtualization, so that’s what we will cover today.

So, what is server virtualization? Simply put, server virtualization is technology that allows multiple (virtual) servers to reside on a single piece of (physical) hardware and share the resources of the physical server – while still maintaining separate operating environments, so that a problem that crops up in one virtual server won’t affect the operation of others running on the same physical “host.” To help explain what this means, I’m going to use the house-and-condo analogy.

Let’s say you’re a land developer and you build residential property. You cut your land into smaller plots and build one house per plot. As part of the land development, you need to bring in all the utilities from the main street to each and every plot, and all of this development costs money. To make matters worse, you know that your city’s population is growing, you’re running out of land to build on, and you also need to control the spiraling costs of building materials. How do you cut costs and provide more homes for a growing population on a limited amount of land?

Perhaps instead of building single-family homes with one resident per plot, you start building condominiums that hold several residents each. Now the utilities that are brought into the condo complex are shared by all the residents, and yet no one ever sees the other residents’ bills. You’re making more efficient use of the land you have and not wasting time and money bringing in utilities to each individual house. Plus, one yard is easier to take care of than ten yards.

So how does this relate to server virtualization?

Each plot of land is a physical server, the structure you build on that plot is a server “workload” (i.e., Exchange, SQL, file server, print server, etc.), and the city is your data center. The utilities are things like power, cooling, and network connectivity. When there is only one workload per physical server, a lot of space and resources get wasted. It’s common to see only 10-15% (if that) processor utilization on physical servers which run only one operating system and one application.

With server virtualization we can now create several “virtual” servers on one physical piece of hardware – think of the hardware as little “server condos” if you like. Just as you can have one-bedroom, two-bedroom, and three-bedroom units in a single building, you can allocate differing amounts of processing and memory resources to the virtual servers depending on the requirements of each individual workload. Each virtual server can now share the physical resources of the host machine with the other virtual servers and never know that they are sharing. In fact, each virtual server “thinks” it’s running on its own dedicated hardware platform. By doing this you can now utilize 80-90% of the processing power of the hardware you own, and cut down on the total amount of power, cooling, and floor space you need in your data center.
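
To make that concrete, here is a minimal sketch using the libvirt Python bindings (one common open-source hypervisor API – this is an illustration, not necessarily the tooling you’d use). The VM names and resource sizes are invented, and the domain definitions are stripped down – a bootable VM would also need disks, networking, and so on – but it shows each virtual server getting its own slice of the host’s CPU and memory:

    import libvirt  # pip install libvirt-python; assumes a local KVM/QEMU host

    # Two "condo units" of different sizes on the same physical host.
    # Each domain definition carves out its own vCPU and memory allocation.
    SMALL_VM = """
    <domain type='kvm'>
      <name>print-server</name>
      <memory unit='GiB'>2</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
    </domain>"""

    LARGE_VM = """
    <domain type='kvm'>
      <name>sql-server</name>
      <memory unit='GiB'>16</memory>
      <vcpu>8</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
    </domain>"""

    conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
    for xml in (SMALL_VM, LARGE_VM):
        dom = conn.defineXML(xml)          # register the VM with the host
        print("Defined virtual server:", dom.name())
    conn.close()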

For example (just pulling numbers out of the air), let’s say that you’ve been paying an average of $5K each for servers that handle a single workload. If you need four of them, that’s $20K in hardware cost. But if you can buy one server for $8–10K and virtualize those four machines on it, that’s a significant reduction in hardware cost. And with fewer machines to plug in and keep cool, your savings can be up to 40% on power consumption alone. (Did you know that we’ve now reached the point where, over the service life of a typical new server, it’s going to cost you more to keep it cool than it cost you to buy it?)
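
If you like, here’s that back-of-the-envelope math in a few lines of Python, using the made-up numbers above (your actual prices will vary):

    # The hypothetical numbers from the example: four workloads at $5K per
    # single-workload server, vs. one $10K host running all four as VMs.
    workloads = 4
    cost_per_physical = 5_000
    cost_virtual_host = 10_000

    physical_total = workloads * cost_per_physical   # $20,000
    savings = physical_total - cost_virtual_host     # $10,000
    print(f"Hardware savings: ${savings:,} ({savings / physical_total:.0%})")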

Since the virtual servers are all located on one physical box, you now have fewer pieces of hardware to maintain – allowing the IT staff to use their time more efficiently. You’ll save space in your data center. You’ll also cut down on the amount of waste (some of it hazardous) that must be recycled or disposed of when your hardware finally reaches its end-of-life.

You’ve also cut down the time needed to bring a new server online. In the past you would have had to acquire the hardware, assemble it, rack it, connect it to the network, install and patch the OS, install and configure the application, test it all, and finally put it into service. Now that servers are virtual, they can be created, configured, and put into production in a few hours as opposed to the weeks it used to take. In some cases, by using templates for commonly needed workloads, it can take only minutes. This makes for a much more flexible and scalable environment.

So server virtualization can:

  • Cut hardware costs
  • Cut energy costs (for both power and cooling)
  • Cut system maintenance time and costs
  • Create a very scalable and flexible data center
  • Save space
  • Create a more environmentally friendly data center (a.k.a. “green computing”)

These are the main reasons that server virtualization has become an industry standard. According to folks like Gartner, we’ve now reached the point where the majority of new servers placed into service are being virtualized, and the majority of enterprises have made it a standard practice to virtualize all new servers unless there is a compelling reason why a server can’t or shouldn’t be virtualized. Virtualization also makes it easier to implement things like high availability, disaster recovery, and business continuity, but that’s a subject for a future post.

SSL and Certificates – Part 1 of 3

We’ve seen a lot of confusion regarding what SSL certificates are all about – what they are, what they do, how you use them to secure a Web site, what the “gotchas” are when you’re trying to set up mobile devices to synchronize with an Exchange server, etc. So we’re going to attempt, over a few posts, to explain in layman’s terms (OK, a fairly technical layman) what it’s all about. However, before you can really understand what SSL is all about, you need to understand a little bit about cryptography.

When we were kids, we probably all played around at one time or another with a simple substitution cipher – where each letter of the alphabet was substituted for another letter, and the same substitution was used for the entire message. It may have been done by simply reversing the alphabet (e.g., Z=A, Y=B, etc.), by shifting all the letters “x” places to the right or left, or by using your Little Orphan Annie Decoder Ring. (The one-letter-to-the-left substitution cipher was famously used by Arthur C. Clarke in 2001: A Space Odyssey to turn “IBM” into “HAL” – the computer that ran the spaceship.)
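
To make this concrete, here’s that kind of fixed-shift substitution cipher in a few lines of Python (a quick illustrative sketch, not anything you should use to protect real data):

    def shift_cipher(text, shift):
        """Encode text by shifting every letter the same fixed distance."""
        out = []
        for ch in text.upper():
            if ch.isalpha():
                out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
            else:
                out.append(ch)  # leave spaces and punctuation alone
        return "".join(out)

    print(shift_cipher("IBM", -1))  # -> HAL
    print(shift_cipher("HAL", +1))  # -> IBM (shifting back decodes it)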

The problem with such a simple cipher is that it may fool your average six-year-old, but that’s about it – because (among other things) it does nothing to conceal frequency patterns. The letter “e” is, by far, the most frequently used letter in the English language, followed by “t,” “a,” “o,” etc. (If you want the full list, you can find it at http://en.wikipedia.org/wiki/Letter_frequency.) So whichever letter shows up most frequently in your encoded message is likely to represent the letter “e,” and so forth…and the longer the message is, the more obvious these patterns become. It would be nice to have a system that used a different substitution method for each letter of the message so that the frequency patterns are also concealed.
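
That frequency attack is easy to automate. Here’s a toy sketch (the “intercepted” ciphertext is just an invented placeholder, and with a message this short the counts are only suggestive):

    from collections import Counter

    # Hypothetical intercepted message, encoded with some fixed substitution
    ciphertext = "MJQQT BTWQI MJQQT FLFNS"
    letters = [c for c in ciphertext if c.isalpha()]

    # Whatever shows up most often is a good first guess for plaintext "e",
    # the runner-up for "t", and so on down the frequency table.
    for letter, count in Counter(letters).most_common(3):
        print(letter, count)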

One approach to this is the so-called “one-time pad,” which is nearly impossible to break if it is properly implemented. This is constructed by selecting letters at random, for example, drawing them from a hopper similar to that used for a bingo game. A letter is drawn, it’s written down, then it goes back into the hopper which is again shuffled, and another letter is drawn. This process is continued until you have enough random letters written down to encode the longest message you might care about. Two copies are then made: one which will be used to encode a message, and the other which will be used to decode it. After they are used once, they are destroyed (hence the “one-time” portion of the name). One-time pads were commonly used in World War II to encrypt the most sensitive messages.

To use a one-time pad, you take the first letter of your message and assign it a numerical value of 1 to 26 (1=A, 26=Z). Then you add to that numerical value the numerical value of the first letter of the pad. That gives you the numerical value of the first letter of your ciphertext. If the sum is greater than 26, you subtract 26 from it. This kind of arithmetic is called “modulo 26,” and while you may not have heard that term, we do these kinds of calculations all the time: If it’s 10:00 am, and you’re asked what time it will be in five hours, you know without even thinking hard that it will be 3:00 pm. Effectively, you’re doing modulo 12 arithmetic: 10 + 5 = 15, but 15 is more than 12, so we have to subtract 12 from it to yield 3:00. (Unless you’re in the military, in which case 15:00 is a perfectly legitimate time.) So as we work through the following example, it might be helpful to visualize a clock that, instead of having the numbers 1 – 12 on the face, has the letters A – Z…and when the hand comes around to “Z,” it then starts over at “A.”
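
In Python, that clock arithmetic is just the % (modulo) operator:

    # "Modulo 12" clock arithmetic: 10:00 plus 5 hours
    print((10 + 5) % 12)  # -> 3, i.e. 3:00 pm
    # (A real clock face has no 0, so a result of 0 would map back to 12.)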

Let’s say that your message is, “Hello world.” Let’s further assume that the first ten characters of your one-time pad are: DKZII MIAVR. (By the way, I came up with these by going to www.random.org, and using their on-line random number generator to generate ten random numbers between 1 and 26.) So let’s write out our message – I’ll put the numerical value of each letter next to it in parentheses – then write the characters from the one-time pad below them, and then do the math:


  H(8)  E(5)  L(12) L(12) O(15) W(23) O(15) R(18) L(12) D(4)
+ D(4)  K(11) Z(26) I(9)  I(9)  M(13) I(9)  A(1)  V(22) R(18)
-------------------------------------------------------------
= L(12) P(16) L(12) U(21) X(24) J(10) X(24) S(19) H(8)  V(22)

So our ciphertext is: LPLUX JXSHV. Note that, in the addition above, there were three times (L + Z, W + M, and L + V) when the sum exceeded 26, so we had to subtract 26 from that sum to come up with a number that we could actually map to a letter. Our recipient, who presumably has a copy of the pad, simply reverses the calculation by subtracting the pad from the ciphertext to yield the original message.
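
If you’d like to check the arithmetic yourself, here’s the whole scheme as a short Python sketch using the A=1…Z=26 convention from above; it reproduces the example exactly:

    def to_num(c):          # A=1 ... Z=26, as in the example above
        return ord(c) - ord('A') + 1

    def to_letter(n):
        return chr(n - 1 + ord('A'))

    def otp_encrypt(message, pad):
        out = []
        for m, p in zip(message, pad):
            s = to_num(m) + to_num(p)
            if s > 26:       # the modulo-26 wraparound described above
                s -= 26
            out.append(to_letter(s))
        return "".join(out)

    def otp_decrypt(cipher, pad):
        out = []
        for c, p in zip(cipher, pad):
            s = to_num(c) - to_num(p)
            if s < 1:        # wrap the other way when subtracting
                s += 26
            out.append(to_letter(s))
        return "".join(out)

    print(otp_encrypt("HELLOWORLD", "DKZIIMIAVR"))  # -> LPLUXJXSHV
    print(otp_decrypt("LPLUXJXSHV", "DKZIIMIAVR"))  # -> HELLOWORLD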

While one-time pads are very secure, you do have the logistical problem of getting a copy of the pad to the intended recipient of the message. So this approach doesn’t help us much when we’re trying to secure computer communications – where often you don’t know in advance exactly who you will need to communicate with, e.g., a banking site or a typical Internet e-commerce site. Instead, we need something that lends itself to automated coding and decoding.

During World War II, the Germans had a machine that the Allies referred to by the code name “Enigma.” This machine had a series of wheels and gears that operated in such a way that each time a letter was typed, the wheels would rotate into a new position, which would determine how the next letter would be encoded. The first Enigma machine had spaces for three wheels; a later model had spaces for four. All the recipient needed to know was which wheels to use (they generally had more wheels to choose from than the machine had spaces for) and how to set the initial positions of the wheels, and the message could be decoded. In modern terms, we would call this information the “key.”

One of the major turning points in the war occurred when the British were able to come up with a mathematical model (or “algorithm”) of how the Enigma machine worked. Alan Turing (yes, that Alan Turing) was a key player in that effort, and the roots of modern digital computing trace back to Bletchley Park and that code-breaking effort. (For a very entertaining read, I highly recommend Cryptonomicon by Neal Stephenson, in which Bletchley Park and the code breakers play a leading role.)

Today, we have computers that can perform complex mathematical algorithms very quickly, and the commonly used encryption algorithms are generally made public, specifically so that researchers will attack and attempt to break them. That way, the weak ones get weeded out pretty quickly. But they all work by performing some kind of mathematical manipulation of the numbers that represent the text (and to a computer, all text consists of numbers anyway), and they all require some kind of key, or “seed value,” to get the computation going. Therefore, since the encryption algorithm itself is public knowledge, the security of the system depends entirely on the key.

One such system is the “Advanced Encryption Standard” (“AES”), which happens to be the one adopted by the U.S. government. AES allows for keys that are 128 bits, 192 bits, or 256 bits long. Assuming there isn’t some kind of structural weakness in the AES algorithm – in which case it would presumably have been weeded out before anyone who was serious about security started using it – the logical way to attack it is to sequentially use all possible keys until you find the one that decodes the message. This is called a “brute force” attack. Of course, with a key length of n bits, there are 2^n possible keys. So every bit that’s added to the length of the key doubles the number of possible keys.
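
For a feel of what this looks like in practice, here’s a minimal sketch using the third-party Python cryptography package (just one of many implementations; the 256-bit key and CTR mode are illustrative choices, not a recommendation for your particular use case):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)     # a random 256-bit AES key
    nonce = os.urandom(16)   # per-message value required by CTR mode

    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    ciphertext = encryptor.update(b"Hello world") + encryptor.finalize()

    # The same key (and nonce) decrypts it – AES is a symmetric-key system
    decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    assert decryptor.update(ciphertext) + decryptor.finalize() == b"Hello world"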

It is generally accepted that the computing power required to try all possible 128-bit keys will be out of reach for the foreseeable future, unless some unanticipated breakthrough in technology occurs that dramatically increases processing power. Of course, such a breakthrough is entirely possible, which is why AES also allows for 192-bit and 256-bit keys – and remember, a 256-bit key isn’t just twice as hard to break as a 128-bit key, it’s 2^128 times as hard. (And 2^128 is roughly equal to the digit “3” followed by 38 zeros.) Therefore the government requires 192- or 256-bit keys for “highly sensitive” data.

AES uses a symmetrical key, meaning that the same key is used both to encrypt and decrypt the message, just as was the case with the old Enigma machine. In the next post of this series, we’ll talk about asymmetrical encryption systems, and try to work our way around to talking about SSL certificates.