Fixing The Most Problematic Backup Methods

August 13

Most companies understand the importance of data backup, but when it comes down to it, some businesses put in the bare minimum to actually implement it. With that in mind, we’d like to explore some of the most pathetic, sad, untrustworthy, or inefficient backup methods we’ve encountered, and how to fix them.

Problem method: No method

First, let’s point out that the absolute worst backup methods for businesses are the ones they don’t have. We know this not only because it’s obvious, but because about a year ago we asked nearly 400 IT providers about hardware failure. Not surprisingly, 99 percent of them had experienced some form of hardware failure, and 71 percent had worked with clients during a major data loss. Hardware failure and data loss are extremely common, and not backing up crucial data is a huge mistake.

Solution: Backups

Regardless of your specific needs for backup and recovery, you definitely need to protect yourself with some form of backups. The methods on this list are ones you should avoid, but something is certainly better than nothing. The safest way to protect data is to keep a copy of your crucial data onsite, and a copy of your data offsite, whether your secondary copy is in the cloud or on an encrypted drive in a safe at home. Redundancy is crucial if you hope to retain your most important data.
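The onsite-plus-offsite idea is easy to automate. Here’s a minimal Python sketch (the paths and file names are hypothetical) that copies a backup to two destinations and verifies each copy with a checksum, so you know the redundant copies actually match the original:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file so copies can be verified against the original."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate(source: Path, destinations: list[Path]) -> None:
    """Copy a backup to every destination and verify each copy byte-for-byte."""
    original = sha256(source)
    for dest_dir in destinations:
        dest_dir.mkdir(parents=True, exist_ok=True)
        copy = dest_dir / source.name
        shutil.copy2(source, copy)
        if sha256(copy) != original:
            raise IOError(f"copy to {copy} is corrupt")

# Hypothetical layout: one onsite copy, one staged for offsite rotation.
# replicate(Path("backups/system.img"), [Path("/mnt/nas"), Path("/mnt/offsite_drive")])
```

A scheduled task running something like this gives you the onsite copy, the offsite copy, and proof that both are intact.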

Problem method: Taking data offsite

I recently spoke with an IT provider who specialized in servicing the dental industry. He was asking a new client a few questions as he onboarded them for managed IT services. The first question he had for them was: do you take backups, and if so, how? They said yes, they have backups, but they also confessed that they were copying and pasting their most important files to a USB drive and taking the USB offsite. Yikes.

The problem with this method is twofold. First, these are sensitive medical records being placed on a USB drive, which could easily be lost or stolen. Plus, the Health Insurance Portability and Accountability Act (HIPAA) requires files like these to be encrypted, but in this case, they weren’t. Lost medical records mean compliance violations, which can expose a business to massive fines of up to $1.5 million, not to mention a huge privacy violation for patients.

Second, a USB drive like this can only hold a few files. If a major equipment failure happened at their office, those few files are all they would have, with no way to quickly get business back on track.

Solution: Encrypted, hot-swappable drives

StorageCraft works with a handful of hardware manufacturers. A number of these, such as CRU and Highly Reliable Systems, provide hard drive technologies that allow businesses to have a backup stored at the office, while they take another identical backup offsite—a sort of poor-man’s cloud. Many of these have built-in encryption technologies, though any backup software worth its salt will have military-grade encryption standards built in.

Using these two technologies together, you can have a backup that stays onsite, a backup that goes offsite, full encryption, and a full system image—not just a few files. This keeps medical records safe and is a crucial piece of HIPAA compliance, plus it lets you get back on track following any emergency.

Check out this case study to learn more about hot-swapping with CRU hardware and the StorageCraft Recover-Ability Solution.

Problem method: Tape backups

Sure, we hear about the benefits of tape every day, but there are really only two: cost and longevity. Tape doesn’t cost much, and tapes can last a very long time—up to twenty years in some cases, which makes them useful for archiving. These are benefits to consider, but let’s look at what tape doesn’t have.

First, tape is linear. There’s no random access to files and folders. Second, tape is slow. You can’t quickly record changes made between now and your last backup, which means you might only take backups once a day. Backing up only once a day really hurts your recovery point objectives, and since recovering from tape can be a slow process, your recovery time objectives suffer as well.

Solution: Disk-based backups

The cost per GB of hard disk drive storage has been on a steady decline for some time. It may not match the cost of tape quite yet, but disks are becoming inexpensive and I doubt anyone would balk at the meager additional costs involved in disk backups compared to tape, especially when compared to the benefits.

A good image-based backup gives you not only instant access to the files and folders in your backup, but also the ability to recover entire systems quickly. Not only that, but backup products like StorageCraft ShadowProtect can allow you to schedule backups as often as every fifteen minutes, so you can record new changes in your backup as often as you like. This way you can set recovery point objectives that fit your needs and recovery time objectives that get you back on track very quickly following small or large disasters.
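The recovery point math here is simple enough to write down. This plain-Python sketch (the numbers are illustrative) shows the worst-case data loss implied by a backup interval: a failure just before the next backup costs you one full interval of changes:

```python
def worst_case_rpo(interval_minutes: int) -> int:
    """Worst-case recovery point objective: if a failure strikes just
    before the next backup runs, you lose one full interval of changes."""
    return interval_minutes

# Once-a-day tape backup vs. image backups every fifteen minutes:
tape_rpo = worst_case_rpo(24 * 60)  # up to 1440 minutes (a full day) lost
disk_rpo = worst_case_rpo(15)       # at most 15 minutes lost
print(f"Tape exposes {tape_rpo // disk_rpo}x more work to loss")
```

Run the numbers for your own schedule; the interval you pick is, in the worst case, exactly how much work you agree to lose.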

To top it all off, you can test that a ShadowProtect backup is working without much effort at all, which leads us to our next section.

Problem method: Taking backups, never testing

Let’s say you did the smart thing and started taking image-based backups that you’re storing on a local NAS device. That’s great, now you’re protected right? Not necessarily. To illustrate what I mean, let’s look at a concept that was introduced in an interesting Spiceworks article about a year ago: Schrödinger’s backup.

Many of us are familiar with Schrödinger’s cat, an interesting thought experiment from quantum mechanics. The simplified version is this: a cat is sealed in a box along with a flask of poison rigged to a random quantum event (full detail here). Depending on that event, the flask may or may not break and release the poison. Until we open the box and look, we can’t know whether the cat is alive or dead, so it can be considered both at once. The concept of Schrödinger’s backup applies the same idea: “The condition of any backup is unknown until a restore is attempted.”

Solution: Testing

If you don’t test your backups, you can’t know whether they will work. That’s why you’ve got to do some testing. Testing a backup with software like ShadowProtect is as simple as mounting a backup image as a drive letter and attempting to recover a file, which takes only a few clicks. For an even more reliable testing method, you can spin the backup up as a virtual machine using VirtualBoot. This lets you actually see that a backup will run as a virtual machine. If it boots, you’ll undoubtedly be able to use it for a full restore.
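You can apply the principle with any backup tooling: a backup is only proven once something restored from it matches the live original, and a backup that hasn’t passed a restore test recently should be treated as unknown. A minimal sketch, with hypothetical file paths and a made-up seven-day staleness threshold:

```python
import hashlib
from datetime import datetime, timedelta
from pathlib import Path

def _digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_and_verify(restored: Path, original: Path) -> bool:
    """Stand-in for a real restore test: compare a file pulled back out
    of a backup against the live original, byte for byte."""
    return _digest(restored) == _digest(original)

def backup_is_known_good(last_restore_test: datetime, max_age_days: int = 7) -> bool:
    """Treat any backup that hasn't passed a restore test recently as
    unknown: it's Schrodinger's backup until you look."""
    return datetime.now() - last_restore_test < timedelta(days=max_age_days)
```

A scheduled job that restores one sample file, runs `restore_and_verify`, and records the timestamp turns “we think we have backups” into “we know we can restore.”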

Problem method: Backing up to the cloud

There are a number of ways to back up data in the cloud, whether you’re talking about file-and-folder methods like Dropbox, Box, and Google Drive, or more robust methods that allow you to back up and recover full systems at the drop of a hat. Whatever your choice, the cloud can pose problems. If you’re using cloud-only backups, you might not have access to them if there’s an outage from the cloud vendor or your ISP, or if a local disaster causes your office troubles. On top of that, depending on what you use, you may only have files and folders stored in the cloud, which won’t do you much good in a post-disaster scenario.

Solution: Redundancy

There are three things to think about with cloud backups. Is storing only files and folders enough, or would you rather have entire system images stored? What will you do if you lose Internet access? And what if your cloud vendor loses your data?

Redundancy can certainly save your butt when it comes to cloud backup. Remember that since you’re relying on the Internet, you’ll want as many ways to access the net as possible. Make sure you’ve got at least two ways to connect to the Internet from two different ISPs to have redundancy. If you’ve got entire infrastructure items running from the cloud, it’s wise to back them up and replicate them somewhere locally. You never know what might happen to something you put in someone else’s hands, so take matters into your own hands by having a local copy you control in addition to what you’ve got in the cloud. Lastly, file and folder backup services are really easy and inexpensive, but are they enough to get your business back on track if your office is destroyed? Be sure to use a backup solution that’s adequate for full recovery, not something that’s just cheap and easy to use.

For additional information, this article discusses some other considerations you might take when you’re thinking about cloud backups.

Problem method: Backing up virtual machines at the host level

Virtualizing machines has a lot of advantages, not the least of which are cost savings, scalability, and enhanced testing capabilities. The issue is that VMs add another layer of complexity to the backup process. For instance, software like ShadowProtect leverages Microsoft VSS directly when it performs backups. The ShadowProtect agent resides at a low level in the operating system, allowing it to directly access disk resources and communicate with applications to create efficient backup images. When you’ve got a VM client running on a hypervisor host, you can either work with VSS directly inside the VM client, or you can rely on the hypervisor host to communicate with VSS on the client and to properly handle any errors that may occur. This added layer of communication and coordination with VSS may add unnecessary risk to the backup process.

Solution: Back up the guest and the host

It’s important to understand that the more lines of communication between the host and the VM, the more opportunity there is to introduce risk and prevent a reliable backup. It’s always best to back up the host and each individual VM separately so you can be positive you have everything backed up.

However, it may not always be necessary to back up the host; many IT professionals simply expect that after a disaster they’ll need to reinstall the host software on a new physical machine before they can begin recovery. With ShadowProtect, it’s possible to back up both a Windows Hyper-V host and the VM clients if that’s what you want to do. One reason you may want to back up the host is if you’ve done any customizing or have applications or roles installed in addition to the Hyper-V role.

Backing up a Microsoft host server makes sense in this case, as it minimizes the time needed to recover the host by capturing those additional programs and any customization you’ve done. StorageCraft recommends backing up the VM client directly with ShadowProtect. With popular hypervisor hosts like VMware, you may want to consider running the host OS from a USB flash drive, with at least one additional USB flash drive available for recovery purposes.

This allows you to quickly restore the host OS and then focus your efforts on recovering the VM clients. With Microsoft Hyper-V, it’s also possible to back up your hypervisor host in addition to each client. Since we are known for legendary reliability (or, a better word, Recover-Ability), we provide you with the tools to back up and recover your critical business systems the way you want, every time and everywhere. You know that ShadowProtect will provide you with fast and efficient backups of your virtual systems. You also know we provide protection for your physical systems as well. If you want to protect both, we’ve got you covered there too. After all, it’s all about the recovery and about providing users with peace of mind.

Problem method: Backing up database servers

Many vendors leverage Microsoft’s Volume Shadow Copy Service (VSS) when taking a backup of database servers like Microsoft SQL Server. Most of the time, VSS is sufficient to quiesce a database and get a good snapshot. But “most of the time” is not a very good answer when it comes to taking reliable backup images of your critical business systems. For that matter, who would want to fly in an airplane that functions properly “most of the time”? Or go in for surgery with a doctor who is successful “most of the time”? If your data is important, you want a backup that works every time.

Solution: Multiple methods of database backup

This is where StorageCraft goes the extra mile to ensure that your backup images are as reliable as they can be. Whenever ShadowProtect takes a backup image of your system, it first attempts to use VSS to get a reliable snapshot. If VSS fails to get a clean backup, it makes a second attempt in which our code calls the DLLs used by VSS directly. This second process gives us better management of the backup process and better handling of any errors that occur, offering another opportunity to get a reliable backup of your data at that point in time where other backup systems might simply fail.

If this second method fails as well, we attempt the backup a third time using our own snapshot driver, which is installed as part of the backup software. In essence, we’re being as thorough as we possibly can to ensure that a backup can be produced at that point in time, and that the backup produced is rock solid.
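This fallback chain is a pattern worth borrowing in your own tooling: try the most standard snapshot method first, fall back to more invasive ones, and fail loudly only when everything has been tried. A minimal sketch of the pattern; the provider names mirror the three attempts described here but are illustrative, not StorageCraft’s actual internals:

```python
def take_snapshot(providers):
    """Try each snapshot method in order; the first that succeeds wins.

    `providers` is a list of (name, callable) pairs, ordered from the
    most standard method to the last-resort one.
    """
    errors = []
    for name, provider in providers:
        try:
            return name, provider()  # a successful snapshot ends the chain
        except Exception as exc:
            errors.append(f"{name}: {exc}")  # record the failure, try the next
    raise RuntimeError("all snapshot methods failed: " + "; ".join(errors))

# Hypothetical usage mirroring the three attempts: VSS, then the DLLs
# behind VSS, then a dedicated kernel snapshot driver.
# take_snapshot([("vss", vss_snapshot),
#                ("vss_dll", dll_snapshot),
#                ("kernel_driver", driver_snapshot)])
```

Collecting every error before raising matters: when all three methods fail, you want the full story in one message, not just the last failure.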

We do everything possible to ensure that you get the most reliable backup of your business systems. And really, if you don’t have a solid backup, you won’t have a reliable recovery, and we’re all about the recovery.

For a little more detail on StorageCraft’s approach to backing up SQL and other database servers, check out this article.

As you can see, the StorageCraft Recover-Ability solution can solve a lot of the biggest problems when it comes to backing up your essential data. To learn more about what it can do for you, visit this page.

Photo credit: Tomasz Stasiuk via Flickr