There is no such thing as a computer error. It might sound crazy, but it’s completely true: in one form or another, a computer cannot be held responsible for its actions. Humans write the programs, and humans build the hardware. Until computers reach the technological singularity or people become infallible, there will be errors.
But if it’s never the computer’s fault, how does data get corrupted? If you write code that is theoretically perfect, how can it ever produce an error?
The problem is that your code isn’t the only thing involved. You could feasibly write a program that works perfectly all by itself. The trouble is that this perfect program will almost certainly depend on somebody else’s code, which may or may not be perfect.
Let’s use a simple analogy to explore what I mean.
Suppose my significant other and I share a car. Ninety-five percent of the time everything works out perfectly: she drops me off at work, then travels to her own work, and all is well. But one day she needs to go in early for a big presentation, and because she isn’t programmed for this situation, she doesn’t tell me the night before. Morning comes, and I wake to a significant other who doesn’t understand why I’m not ready to leave.
So she goes to work without me, and I’ve got to take a cab. A slight communication issue caused a pretty big error. Since we depend on perfect communication for everything to function properly, when certain requests aren’t made or can’t be made, we have an error. Worse yet, if there were a third or fourth member of our carpool, we’d have a pile of other potential communication errors to contend with every day.
Computers are just like me and my imaginary carpool: they depend on perfect communication for everything to work out, but sometimes things get lost in conversion.
According to an article by Martyr2, the founder of The Coders Lexicon, any program you write has the potential to fail through no fault of your own, simply because any program of considerable size has to rely on somebody else’s programming. Your software is always going to be a piece of the puzzle, not the puzzle itself, and try as we might, the pieces don’t always fit.
For another example, we could look at an article we posted a while back about backing up Hyper-V host machines and virtual machines separately. I mentioned in that article that the more layers of separation between ShadowProtect and the backup you’re taking, the higher the likelihood of error.
The reason is that ShadowProtect has to work with the operating system to take backups, but when you virtualize a machine, ShadowProtect and the operating system both have to work with a hypervisor for a backup job. Because of that mediation, any imperfections in the way these pieces of software communicate can become very apparent and errors that would never have existed can start to appear.
Taking it a little further, if you were to run a virtual machine inside a virtual machine (it’s starting to sound like Inception), you increase the likelihood of error even more because of the layers of mediation between the operating system, ShadowProtect, the virtual machine, and the virtual machine inside it (not to mention the strain on the hardware). Again, as we’ve mentioned before, that’s why it’s best to periodically review the VSS records on a virtual machine to check for backup errors.
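To put a rough number on that intuition, here’s a minimal sketch. It assumes, purely for illustration, that every hand-off between layers succeeds independently with the same probability; real software failures aren’t that tidy, but the math shows why each extra layer of mediation compounds the risk.

```python
def chain_success(p: float, layers: int) -> float:
    """Probability that every hand-off in a chain of n layers succeeds,
    assuming each hand-off succeeds independently with probability p."""
    return p ** layers

# Hypothetical 99%-reliable hand-off between any two layers:
print(round(chain_success(0.99, 1), 4))  # 0.99   - backup software -> OS
print(round(chain_success(0.99, 2), 4))  # 0.9801 - add a hypervisor
print(round(chain_success(0.99, 3), 4))  # 0.9703 - add a nested VM
```

Even with each individual hand-off working 99% of the time, the nested-VM chain fails roughly three times as often as the simple one, which is the whole point of keeping the stack between your backup software and the data as shallow as you can.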
So what’s the point of all this? Well, the next time you scream at your computer for not working properly, it’s likely that either you’re not doing something right or, in one way or another, somebody else screwed up (I always put my money on somebody else). No matter what the problem is, people are always to blame.