There’s no denying the importance of backup and disaster recovery plans. The goal of any DR plan is to keep data secure and keep downtime to a tolerable minimum. With so many plates spinning, it’s easy for MSPs to overlook these four essentials that will ensure success when their plans are tested for real:
Recovery Objectives
When you tell clients how expensive downtime can be, they’ll begin to wonder what it takes to reduce it. This leads us to recovery objectives. Without recovery point and recovery time objectives, how can you know what a client’s tolerance for downtime is? How do you know what specifications your DR plan must meet? First, set recovery objectives with your client. These usually hinge on their downtime tolerance and budget. Without goals, it’s difficult to say whether a DR plan is ever really successful. Next, build your DR plan to suit.
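Recovery objectives are concrete enough to express in code. The following is a minimal, hypothetical Python sketch (the names and thresholds are illustrative, not from any product) showing how an RPO and RTO translate into a simple pass/fail check against a plan:

```python
from datetime import datetime, timedelta

# Hypothetical objectives agreed with a client (illustrative values):
RTO = timedelta(hours=4)      # max tolerable time to restore service
RPO = timedelta(minutes=30)   # max tolerable data loss, i.e. backup interval

def meets_objectives(last_backup: datetime, restore_estimate: timedelta,
                     now: datetime) -> bool:
    """True if the DR plan currently satisfies both objectives."""
    data_at_risk = now - last_backup   # work created since the last backup
    return data_at_risk <= RPO and restore_estimate <= RTO

now = datetime(2015, 6, 1, 12, 0)
# Backup 15 minutes ago, 2-hour restore: within both objectives.
print(meets_objectives(now - timedelta(minutes=15), timedelta(hours=2), now))  # True
# Backup 2 hours ago: exceeds the 30-minute RPO.
print(meets_objectives(now - timedelta(hours=2), timedelta(hours=2), now))     # False
```

Without numbers like these written down, "successful recovery" is a matter of opinion; with them, it is a check either passed or failed.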
Redundancy Redundancy
There’s a long list of items to include in a DR plan, but as you know, redundancy is key. Most DR plans include redundancy for things like backups, power, and communications. But to really fortify a business against downtime, more redundancy doesn’t hurt. As you look at which systems are most critical (i.e., which systems must run immediately after a downtime event), consider what redundancies to include, whether it’s for power, recovery methods, or what have you.
Documentation
For some MSPs, a DR plan is a general set of instructions that never end up on paper. To ensure a quick, effective response to various downtime events, write everything down. Be sure to store hard copies in safe places and soft copies in the cloud where you can access them remotely. In general, a good DR plan will include the who, what, where, and when for various downtime events:
- What types of downtime events might you need to recover from?
- Who are the key people? How do you contact them?
- Who else might you need to contact (utilities, service providers, etc.)?
- Which machines must be restored first?
The list goes on and on. Remember that your documentation can determine your success. It can be the difference between meeting your SLAs and your client losing thousands to downtime and data loss.
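One lightweight way to keep the who, what, where, and when consistent between hard copies and cloud copies is to maintain the plan as structured data and generate documents from it. A minimal, hypothetical sketch (all names and numbers are placeholders):

```python
# Hypothetical DR runbook kept as structured data (placeholder values).
dr_plan = {
    "events": ["ransomware", "hardware failure", "power outage", "flood"],
    "contacts": {
        "incident lead": "555-0100",   # placeholder phone numbers
        "ISP support": "555-0101",
        "power utility": "555-0102",
    },
    # Restore order encodes which machines must come back first.
    "restore_order": ["domain controller", "database server", "file server"],
}

def first_to_restore(plan: dict) -> str:
    """Which machine must be restored first after a downtime event?"""
    return plan["restore_order"][0]

print(first_to_restore(dr_plan))  # domain controller
```

Keeping the plan in one structured source also makes it easy to diff, review, and version, so the printed copy in the safe never silently drifts from the copy in the cloud.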
Tests and Simulations
So, you created your plan. Everyone, from your team to your client, understands it and knows how to proceed if something happens. You’re ready, right? Not quite. No DR plan is complete without testing. From the individual backups to the redundancies that will keep your clients running, it’s wise to test everything. There are a few ways to approach this.
A checklist is a basic way to ensure that you can recover individual machines and that backup power and utilities all work. This gives you some degree of certainty that you can recover, but to be truly confident, you should simulate downtime events. These can be anything from DDoS attacks to hardware failures, or even power outages and natural disasters. Work with your client to schedule some time after hours or on weekends to simulate a few different scenarios.
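The checklist approach can itself be automated so that no step is skipped during a simulation. A minimal sketch, with stand-in functions where a real test would restore a VM or cut over to backup power:

```python
# Hypothetical DR test checklist: each check is a named function that
# returns True/False. The stand-ins below simulate one lagging check.
def restore_test_vm():      return True   # stand-in for a real restore test
def verify_backup_power():  return True   # stand-in for a power cutover test
def verify_offsite_copy():  return False  # e.g. offsite replication lagging

checklist = [
    ("restore a test VM from last night's image", restore_test_vm),
    ("switch to backup power", verify_backup_power),
    ("confirm offsite copy is current", verify_offsite_copy),
]

# Run every check and collect the names of any failures.
failures = [name for name, check in checklist if not check()]
print("all checks passed" if not failures else f"failed: {failures}")
```

Even a simple harness like this leaves an audit trail of what was tested and when, which is exactly the evidence you want when an SLA is questioned.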
Conclusion
As you create DR plans, be certain that your client’s uptime expectations are mapped to recovery objectives, SLAs, and the equipment it takes to succeed. If you take your time developing a strategy, documenting a plan, and testing it, you’ll be ready when the time comes.
Comments
-
Yes, a span size of two means that each span is as small as possible. So a span size of two in RAID 100 means that you are actually getting RAID 10 without anything extra (it is the middle RAID 0 that is eliminated). So the advice is good: basically, you always want a span size of two if the option exists. Some controllers cannot handle a RAID 10 large enough to accommodate all attached drives, and so larger spans are required. Typically this does not happen until you have roughly 18 drives or more.
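To make the span arithmetic concrete, here is an illustrative sketch (not any controller vendor's actual algorithm) of grouping drives into spans. With a span size of two, every span is a single mirrored pair, which is just plain RAID 10; larger spans produce the extra striping layer that makes it RAID 100:

```python
# Illustrative only: how a controller might partition drives into spans.
def make_spans(drives, span_size):
    """Split a drive list into equal spans of span_size drives each."""
    assert span_size % 2 == 0, "mirrored spans need an even drive count"
    assert len(drives) % span_size == 0, "drives must divide evenly into spans"
    return [drives[i:i + span_size] for i in range(0, len(drives), span_size)]

drives = [f"disk{n}" for n in range(8)]
print(make_spans(drives, 2))  # four mirrored pairs: effectively plain RAID 10
print(make_spans(drives, 4))  # two 4-drive spans: a RAID 100-style layout
```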
-
The one question I have coming out of this results from the conversation that I believe prompted this blog post, namely this thread on SpiceWorks:
http://community.spiceworks.com/topic/548896-raid-10-2-spans-a-cautionary-tale-it-can-happen-to-you
The recommendation/default for at least one Dell controller model was a span size of 2, with comments describing this as the optimal configuration for larger arrays. Is there any evidence to support this being the optimal configuration? Your blog post, and my (albeit limited) understanding of RAID, would suggest that this advice is flawed. Then again, maybe I am misunderstanding something at a fundamental level?
Furthermore, would there be any benefit to adding in multiple RAID-0 layers above the RAID-100 so that the member size of all arrays involved is kept as small as possible?
-
I like the article. To be honest, I've seen many posts in newspapers, magazines, and even blogs about open source, but few that present it as it is, neither glorified nor condemned, just neutral.
I'd like to add some other software, like Thunderbird (for email) and Git (for developers), and maybe replace Notepad++ with Geany/Gedit/Kate (or whichever text editor you prefer, yours being Notepad++). Otherwise I like your choices; those are apps I use a lot, even though at my workplace they don't want to replace them.
-
I have over 100 VHS tapes to discard. Are they recyclable?
What should I do with them? Dom -
Hey Dom, depending on where you're located there are a number of ways you can dispose of VHS tapes. Most thrift shops will take them off your hands, assuming they're actual movies and not simply blank tapes. Another option is to use Greendisk (greendisk.com), which allows you to mail in your old VHS tapes for recycling. Beyond that, there may be some options specific to your location (there are waste recycling facilities that can handle this type of trash all over), a quick Google search might reveal some of them.
-
Hi there, I think your web site may be having browser compatibility problems. Whenever I look at your site in Safari it looks fine, but when opening it in IE it has some overlapping issues. I simply wanted to give you a quick heads up! Besides that, wonderful site! -
Thanks for letting us know, we really appreciate it. Do you happen to know which version of IE you're using? I know that sometimes the older versions don't cooperate. I can't seem to reproduce the results you're seeing, but we're looking into it. Thanks again for bringing this to our attention.
-
I think you are missing the point entirely here. I have a home with 5 PCs all running same Windows OS version and same versions of Office. MOST of the file data on the machines are copies of same files on other machines: the Windows OS files and Office binaries. I want to backup full system snapshot images (not just photos and music) daily to a NAS on my LAN, or even a headless Windows machine acting as a NAS (like the old Windows Home Server product). I want the bandwidth savings of laptops backing up over wifi to notice that those windows files are already stored and not transmit them over wifi. I also want the total NAS storage of all combined backups reduced so that I can copy the NAS storage to either external drive for offsite storage, or more interesting up to the cloud for redundancy. ISP bandwidth caps, limited upstream bandwidth, and cloud storage annual cost per GB mean that deduplicated backup storage is essential. The cost of additional local storage is NOT the only consideration.
I don't care about Windows Server's integrated deduplication. The deduplication has to be part of the backup system itself, especially if you are doing cluster or sector level deduplication, to avoid sending the duplicate data over the wire to the data storage in the first place.
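For readers curious what source-side deduplication looks like mechanically, here is a hypothetical sketch (all names are illustrative, not any product's API) of chunk-hash dedup: the client hashes each fixed-size chunk and only transmits chunks the target has not already stored, so identical Windows and Office binaries cross the wire once:

```python
import hashlib

# Hypothetical target-side index of chunk hashes already stored on the NAS.
server_index = set()

def backup(data: bytes, chunk_size: int = 4096) -> int:
    """Simulate a dedup-aware backup; return bytes actually transmitted."""
    sent = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in server_index:   # target lacks this chunk
            server_index.add(digest)     # "transmit" it and record the hash
            sent += len(chunk)
    return sent

image = b"A" * 8192 + b"B" * 4096        # repeated 'A' chunks within one image
print(backup(image))         # 8192: the duplicate A-chunk is sent only once
print(backup(b"A" * 4096))   # 0: a second machine's copy is already stored
```

The key point the comment makes is visible here: because the hash comparison happens before transmission, the duplicate data never touches the wire, which target-side dedup alone cannot achieve.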
I've been looking at different backup solutions to replace Windows Home Server (a decade-old product that offered deduplication), and your product looked very interesting, but unfortunately the lack of built-in deduplication rules it out for me. I can only imagine how this affects 100-desktop customers when I won't even consider it for 5-desktop home use.
-
Thank you for your comments. We appreciate all points of view on this topic.
I agree that ISP bandwidth caps, limited upstream bandwidth, and cloud storage cost per GB show how critical it is to minimize data transmitted offsite. I also believe that, much like modems and Betamax tapes, today's bandwidth limits are giving way to faster access everywhere. For example, Google Fiber is now available to some of my peers at the office. Cellular LTE and satellite technologies are also increasing bandwidth for small business and home offices. At the same time, our data consumption and creation are increasing at a rate that may outpace this growing supply of bandwidth. Either way, there are ways to work around data transmission limits.
One way we help with data transmission over slower networks is by incorporating WAN acceleration and bandwidth scheduling technologies into our offsite replication tools. These allow you not only to make the most efficient use of available bandwidth but also to schedule your data replication during off-peak hours. Another way we help is through compression. Deduplication is, after all, simply another form of data compression, one that reduces data on the near side (the source) before it is transmitted over the wire to the target.
In your case, you could use our product to store images on a local volume which has deduplication. You could then replicate data over the wire to offsite storage using ImageManager or some other tool. Many of our customers do this very thing.
Keep in mind that the deduplication process has to occur at some point: either at the source or at the target. If you wanted to deduplicate your 5 PCs you would be best served with a BDR solution that can read each of those PCs, see the duplicate files on each, and avoid copying those files to storage. In this example, deduplication would occur on your BDR but you're still reading data from each PC over the wire to your BDR. In addition, your BDR would control the index for data stored on a separate volume or perhaps has the storage volume incorporated in the BDR. This creates a single point of failure because if your BDR crashes then the backup images for your 5 PCs wouldn't be recoverable and current backup processes cease.
At StorageCraft we focus on recovery. Our philosophy means that we take the smallest, fastest backup images we can, and then we give you ways to automatically test those images for reliability, compress them into daily/weekly/monthly files according to your retention policy, and replicate them locally and offsite. This gives you a solid foundation from which to recover those images quickly to almost any new environment. I have yet to see a faster, more reliable solution among our competitors.
Cheers,
Steven -
Regarding ShadowProtect Desktop:
I am looking for the following capabilities:
1. Windows 8.1 compatibility. Everything I've examined says Win 8 but nothing about Win 8.1.
2. I want to be able to do the following on an ACER S-3 with a 320 GB HD running Win 8.1: create an image of the 320 GB drive, install a 120 GB drive in the ACER, and install the image to the 120 GB drive. I am assuming that I can boot from the ShadowProtect CD, use an external USB-connected dock holding the 320 GB image, and successfully restore the image from the external dock to the 120 GB drive installed in the ACER.
3. Does ShadowProtect take care of setting up the needed partition and format for the target drive (120 GB in this case)?
I've looked at several of the alternatives to your product, posing the same questions above, and get vague or downright misleading answers to my items 1, 2, and 3. If I purchase your product, will I be able to do what I want as stated in items 1, 2, and 3? I have done exactly what I described above for Win 7 using a product called EZGIG II and am pleased with the results. I am looking for the same capability for Win 8.1. Please advise,
Joe O'Loughlin -
Is the whitepaper still available? The link seems to be broken. Thanks!
-
I just fixed that download link. Thanks for letting us know and enjoy the paper!
-
Hello,
I'm just wondering if any of you have actually tested this scenario in the end and come to any conclusion since this article was published.
Thank you!
VMware Player is not a Type 1 hypervisor, and therefore does not have better performance than VirtualBox "because it runs directly on the hardware."