3 Points To Consider When Dealing With Bandwidth Constraints

May 21
Bandwidth. You know it can vary greatly depending on the pipes being used. You also know that latency, packet loss, and even cabling issues mean data rarely moves at a connection's rated speed. And you know that bigger, faster pipes carry higher costs.

So how do you incorporate bandwidth constraints into an effective backup and recovery plan? Here are three points to consider (hat-tip to StorageCraft technical marketing manager Steve Snyder for his help):

1. Make sure your primary backup is onsite.

You have a lot more options for bandwidth speed when you perform an onsite backup. USB 3.0 and 3.1 and Gigabit Ethernet (GbE) are always going to be faster—and significantly cheaper—than the fastest Internet connection. And most data failures are going to be the result of something that doesn’t bring down your entire data center, such as a failed hard drive within a NAS or a multitude of human error scenarios.
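To put rough numbers on that gap, here is a quick back-of-the-envelope sketch in Python comparing how long a full backup might take over common links. The 2TB backup size, the link speeds, and the 70% efficiency factor are illustrative assumptions, not benchmarks.

```python
# A minimal sketch comparing how long a full backup would take to move over
# common local links versus a typical Internet uplink. The 2 TB backup size
# and the efficiency factor are illustrative assumptions, not measurements;
# real throughput is usually well below a link's rated speed.

BACKUP_SIZE_GB = 2_000  # assumed size of a full disk image, in gigabytes

# Rated link speeds in megabits per second (theoretical maximums).
LINKS_MBPS = {
    "USB 3.0 (5 Gbps)": 5_000,
    "Gigabit Ethernet": 1_000,
    "100 Mbps Internet": 100,
    "20 Mbps Internet": 20,
}

def transfer_hours(size_gb: float, rate_mbps: float, efficiency: float = 0.7) -> float:
    """Estimate transfer time in hours, assuming only a fraction of the
    rated speed is usable (latency, protocol overhead, packet loss)."""
    size_megabits = size_gb * 8_000          # 1 GB = 8,000 megabits
    effective_rate = rate_mbps * efficiency  # usable throughput
    return size_megabits / effective_rate / 3600

for name, rate in LINKS_MBPS.items():
    print(f"{name:22s} ~{transfer_hours(BACKUP_SIZE_GB, rate):6.1f} hours")
```

Even with generous assumptions, the local links finish in hours while the Internet links take days, which is why the primary copy belongs onsite.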

It helps if that onsite backup is a disk image that is updated in an iterative fashion (i.e. incrementally), rather than a series of files and folders. Disk images, like the ones created by ShadowProtect, provide three obvious advantages:

  • The ability to choose among multiple versions of a corrupted or missing file
  • The ability to easily recover or migrate the information stored on the corrupted drive onto a new physical hard drive, virtual machine, or other destination
  • The ability to test your backups continually, so you know how long a restore will take and can catch any problems in the backup image long before that image is needed (a rough sketch of this kind of routine check follows the list).
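As a rough illustration of that last point, here is a hypothetical Python sketch that times a test restore and verifies the copy with a checksum. The paths are made up and the file copy is only a stand-in for a real ShadowProtect restore, but the habit of measuring restore time and checking integrity on a schedule is the point.

```python
# A minimal, hypothetical sketch of a routine backup check: it times a test
# "restore" of an image file to a scratch location and verifies the copy with
# a SHA-256 hash. The paths and the copy-based restore are stand-ins (a real
# ShadowProtect restore goes through its own tools), but the idea of measuring
# restore time and verifying integrity on a schedule is the same.

import hashlib
import shutil
import time
from pathlib import Path

IMAGE_PATH = Path("/backups/server01-full.img")         # hypothetical image file
SCRATCH_PATH = Path("/restore-test/server01-full.img")  # hypothetical test target

def sha256_of(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Hash a file in chunks so large images don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

start = time.monotonic()
shutil.copyfile(IMAGE_PATH, SCRATCH_PATH)   # stand-in for a real test restore
elapsed = time.monotonic() - start

ok = sha256_of(IMAGE_PATH) == sha256_of(SCRATCH_PATH)
print(f"Test restore took {elapsed / 60:.1f} minutes; "
      f"integrity check {'passed' if ok else 'FAILED'}")
```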

2. Make an inventory of your mission-critical data and develop an offsite storage scenario for data retrieval.

Cloud storage is great. If you’ve been following the tech press recently, you know that Amazon, Google, and Microsoft are in a price war that has driven storage prices down to $0.01 per GB.

However, there’s a big difference in cost between storage and retrieval. For example, Amazon Glacier doesn’t charge anything for the first TB of data transferred to another location. The next 10TB only costs $0.12 per GB to retrieve, which adds up to about $1,200 for those next 10TB.

But Glacier then charges $0.90 per GB to retrieve the next 40TB worth of data, which means you will pay another $9,000 to pull back just 10TB more beyond those first tiers. Moreover, a service like Glacier is designed primarily for archiving data, not for accessing it regularly. You would need a different AWS service to get acceptable bandwidth, and that will cost you significantly more than a service like Glacier does.
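To see how quickly those tiers add up, here is a small Python sketch that reproduces the arithmetic above. The tier sizes and prices simply mirror the figures quoted in this post; check current AWS pricing before planning around them.

```python
# A minimal sketch of the tiered retrieval math discussed above. The tiers
# mirror the prices quoted in this post (first 1 TB free, the next 10 TB at
# $0.12/GB, the next 40 TB at $0.90/GB); check current AWS pricing before
# relying on any of these numbers.

TIERS = [
    (1_000, 0.00),    # first 1 TB (1,000 GB) transferred out: free
    (10_000, 0.12),   # next 10 TB: $0.12 per GB
    (40_000, 0.90),   # next 40 TB: $0.90 per GB
]

def retrieval_cost(total_gb: float) -> float:
    """Walk the tiers and sum the cost of pulling total_gb back out."""
    remaining, cost = total_gb, 0.0
    for tier_size_gb, price_per_gb in TIERS:
        in_this_tier = min(remaining, tier_size_gb)
        cost += in_this_tier * price_per_gb
        remaining -= in_this_tier
        if remaining <= 0:
            break
    return cost

for tb in (1, 11, 21):
    print(f"Retrieving {tb:2d} TB costs about ${retrieval_cost(tb * 1_000):,.0f}")
```

Run against these assumed tiers, 1TB costs nothing, 11TB costs about $1,200, and 21TB jumps to about $10,200, which is exactly the cliff described above.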

Unless you have an unlimited budget, you’re going to have to decide what data you can’t possibly live without and then choose the right service that allows you to retrieve that data at a reasonable bandwidth speed.

3. When choosing an offsite storage and recovery provider, check the SLAs.

An offsite backup and recovery service will have SLAs covering factors like availability and bandwidth. If the provider guarantees 99.999% availability and 100 Mbps of download speed and fails to deliver on those benchmarks for any reason (check the small print for exceptions), it will owe you money. And money tends to motivate providers to hold to their guarantees.
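As a quick sanity check on what a guarantee like that actually permits, here is a short Python sketch that converts an availability percentage into allowed downtime per year. The SLA figures are just the ones quoted above; an actual provider's credit schedule lives in its contract.

```python
# A minimal sketch of what an availability guarantee actually allows:
# convert an SLA percentage into permitted downtime per year. The example
# percentages are illustrative; a real SLA spells out its own thresholds
# and the credits owed when they are missed.

HOURS_PER_YEAR = 365 * 24

for sla in (99.9, 99.99, 99.999):
    allowed_hours = HOURS_PER_YEAR * (1 - sla / 100)
    print(f"{sla}% availability allows about "
          f"{allowed_hours * 60:.1f} minutes of downtime per year")
```

Five nines works out to roughly five minutes of downtime a year, so a provider that misses it by much should be paying you back.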

Can you recommend other ways to overcome bandwidth constraints in BDR plans? Let us know in the comments!

Photo Credit: Univ. of Michigan via Flickr