Working in the SMB world, it is actually pretty rare that we need to talk about latency. The SMB world is almost universally focused on system throughput and is generally unaware of latency as a concern. But there are times when latency becomes important, and when it does it is critical that we understand the interplay of throughput and latency and just what “speed” means to us. Once we start moving into the enterprise space, latency is more often viewed as a concern, but even there throughput nearly always reigns supreme, to the point that the concept of speed almost universally revolves around throughput while latency is ignored or forgotten.
Understanding the role of latency in a system can be complicated, even though latency itself is relatively simple to understand.
A great comparison between latency and throughput that I like to use is the idea of a Ferrari and a tractor trailer. Ferraris are “fast” in the traditional sense: they have a high “miles per hour.” One might say that they are designed for speed. But are they?
We generally consider tractor trailers to be slow. They are big, lumbering beasts that have a low top-end speed. But they haul a lot of stuff at once.
In computer terms we normally think of speed like hauling capacity – we think in terms of “items” per second. While a Ferrari going two hundred miles per hour is pretty cool, it can haul maybe one box at a time. A tractor trailer can only go one hundred miles per hour but can haul closer to one thousand boxes at a time. When we talk about throughput or speed on a computer, this is more what we think about. In network terms we think of gigabits per second and are rarely concerned with the speed of an individual packet, as a single packet is rarely important. In computational terms we think about ideas like floating-point operations per second, a similar concept. No one really cares how long a single FLOP (floating-point operation) takes, only how many we can get done in one or ten seconds.
So when looking at a Ferrari we could say that it has a useful speed of two hundred box-miles per hour. That is, for every hour of operation, a Ferrari can move one box up to two hundred miles. A tractor trailer has a useful speed of one hundred thousand box-miles per hour. In terms of moving packages around, the throughput of the tractor trailer is easily five hundred times “faster” than that of the Ferrari.
So in terms of how we normally think of computers and networks a tractor trailer would be “fast” and a Ferrari would be “slow.”
But there is also latency to consider. Assuming that our payload is tiny, say a letter or a small box, a Ferrari can move that one box over a thousand miles in just five hours! A tractor trailer would take ten hours to make the same journey (but could have a LOT of letters all arriving at once). If what we need is to get a message or a small parcel from one place to another very quickly, the Ferrari is the better choice, because its latency (the delay from the time we initiate the delivery until the first package arrives) is half that of the tractor trailer.
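To make the analogy concrete, here is a minimal sketch in Python using the made-up numbers from the example above (the speeds and capacities are purely illustrative, not real vehicle specifications) that computes both figures for each vehicle: throughput in box-miles per hour and latency as the time until the first box is delivered on a thousand-mile trip.

```python
# Toy numbers from the analogy above; speeds and capacities are illustrative only.
TRIP_MILES = 1000

vehicles = {
    "Ferrari":         {"mph": 200, "boxes_per_trip": 1},
    "Tractor trailer": {"mph": 100, "boxes_per_trip": 1000},
}

for name, v in vehicles.items():
    throughput = v["mph"] * v["boxes_per_trip"]   # box-miles per hour
    latency = TRIP_MILES / v["mph"]               # hours until the first box arrives
    print(f"{name:16} {throughput:>7} box-miles/hr, first delivery after {latency:.0f} hours")
```

Run it and the tractor trailer shows five hundred times the throughput, while the Ferrari delivers the first box in half the time – the same numbers as above, just stated as two different kinds of “speed.”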
As you can imagine, in most cases tractor trailers are vastly more practical because their throughput is so much higher. And, this being the case, we actually see large trucks on the highways all of the time while Ferraris are quite rare – even though each costs very roughly the same amount to purchase. But in special cases, the Ferrari makes more sense. Just not very often.
This is a general concept and applies to numerous areas: caching systems, memory, CPUs, networking, operating system kernels and schedulers, cars and more. Latency and throughput are generally inversely related – we accept worse latency in order to obtain more throughput. For most operations this makes the best sense. But sometimes it makes more sense to tune for latency.
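The same trade-off shows up anywhere work is batched. A rough sketch, using hypothetical numbers and assuming a fixed overhead per trip, shows why: larger batches amortize the overhead and raise throughput, but the first item has to wait for the whole batch.

```python
# Hypothetical costs: each trip has 10 ms of fixed overhead plus 1 ms per item.
TRIP_OVERHEAD_MS = 10.0
PER_ITEM_MS = 1.0

for batch_size in (1, 10, 100, 1000):
    trip_time_ms = TRIP_OVERHEAD_MS + PER_ITEM_MS * batch_size
    throughput = batch_size / (trip_time_ms / 1000)   # items per second
    latency_ms = trip_time_ms                         # wait until the first item arrives
    print(f"batch={batch_size:>4}  throughput={throughput:>6.0f} items/s  latency={latency_ms:>6.0f} ms")
```

With a batch of one, latency is 11 ms but throughput is only about 91 items per second; with a batch of one thousand, throughput climbs to nearly 1,000 items per second while latency balloons to over a second.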
Storage is actually an odd duck in computing: nearly all focus on storage performance is on IOPS (input/output operations per second), which is roughly a proxy measurement for latency, rather than on throughput, which is measured in data transferred per second. Rarely do we care about this second number, as it is almost never the source of storage bottlenecks. But storage is the exception, not the rule.
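The arithmetic connecting the two storage numbers is straightforward. A small sketch, using hypothetical drive figures and assuming a queue depth of one (so each operation must finish before the next begins, tying IOPS directly to latency):

```python
# Hypothetical drive figures, assuming queue depth 1 so IOPS is simply 1 / latency.
block_size_kib = 4     # size of each I/O operation
latency_ms = 0.1       # time to complete one operation

iops = 1000 / latency_ms                          # operations per second
throughput_mib_s = iops * block_size_kib / 1024   # data moved per second

print(f"{iops:,.0f} IOPS of {block_size_kib} KiB operations = {throughput_mib_s:.0f} MiB/s")
```

Ten thousand small operations per second sounds like a busy drive, yet it only moves about 39 MiB/s of data – which is why the IOPS (latency) side of storage is exhausted long before the raw transfer rate ever becomes the bottleneck.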
Latency and throughput can have some surprising interactions in the computing world. When we talk about networks, for example, we typically measure only throughput (Gb/s) and rarely care much about latency (normally measured in milliseconds). Typically this is because nearly all networking systems have similar latency numbers and most applications are pretty much unconcerned with latency delays. It is only in rare applications, like VoIP over international links or satellite, that latency affects the average person; it can also catch people off guard when they attempt something uncommon, like iSCSI over a long distance WAN connection, and latency suddenly pops up as an unforeseen problem.
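A back-of-the-envelope sketch shows why something like iSCSI over a long WAN link hurts so much. Assuming strictly synchronous operations (each request waits for its acknowledgement before the next is sent), the round-trip time alone caps the operation rate regardless of how much bandwidth the link has. The round-trip figures below are rough illustrative assumptions:

```python
# Assumed round-trip times; real links add queuing and protocol overhead on top.
links = {
    "Same rack":          0.1,    # RTT in milliseconds
    "Same city":          2.0,
    "Cross-country WAN": 70.0,
    "Satellite":        600.0,
}

for name, rtt_ms in links.items():
    # Strictly synchronous I/O completes only one operation per round trip.
    max_ops_per_sec = 1000 / rtt_ms
    print(f"{name:18} RTT={rtt_ms:>6.1f} ms -> at most {max_ops_per_sec:>8.1f} sync ops/s")
```

The link might be ten gigabits in every case, but at a 70 ms round trip a synchronous workload is limited to roughly fourteen operations per second – which is why the latency “surprise” feels so sudden.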
One of the places where the interaction of latency and throughput starts to become shocking and interesting is when we move from electrical or optical data networks to physical ones. A famous quote in the industry is:
Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.
— Andrew S. Tanenbaum
This is a great demonstration of huge bandwidth with very high latency. Driving a single station wagon or SUV fifty miles across town could haul hundreds of petabytes of data, hitting data rates that 10Gb/s fiber could not come close to. But the time for the first data packet to arrive is about an hour. We often discount this kind of network because we assume that latency must be bounded at under about 500ms. But that is not always the case.
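The arithmetic behind the quote is easy to check. A rough sketch, with illustrative assumptions about the cargo (a thousand tapes of roughly LTO-9 capacity and an hour of driving), makes the point:

```python
# Illustrative assumptions: 1,000 tapes of 18 TB each (roughly an LTO-9 cartridge),
# driven across town in about an hour, compared against a 10 Gb/s link.
tapes = 1000
tape_capacity_tb = 18
drive_time_hours = 1.0

payload_bits = tapes * tape_capacity_tb * 1e12 * 8
effective_gbps = payload_bits / (drive_time_hours * 3600) / 1e9

print(f"Vehicle: {payload_bits / 8 / 1e15:.0f} PB delivered, ~{effective_gbps:,.0f} Gb/s effective")
print("Fiber:   10 Gb/s, but the first bits arrive in milliseconds, not after an hour")
```

Even this modest load works out to roughly 40,000 Gb/s of effective throughput, thousands of times what the fiber link can carry – yet the fiber delivers its first packet a few milliseconds after you ask, while the station wagon makes you wait an hour.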
A group in Australia performed a test to see if a pigeon carrying an SD card could, in terms of network throughput, outperform the region's ISP – and the pigeon won!
In terms of computing performance we often ignore latency to the point of not even being aware of it as a context in which to discuss performance. But in low latency computing circles it is considered very carefully. System throughput is deliberately sacrificed (it becomes common to target only ten percent CPU utilization where more traditional systems target closer to ninety percent), with techniques like real time kernels, CPU affinity, processor pinning, cache hit ratio tuning and careful measurement all used to obtain the most immediate response possible from a system rather than the most total processing out of it.
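Processor pinning, for example, is one of the simpler items on that list. A minimal sketch on Linux (the core number is arbitrary and os.sched_setaffinity is a Linux-only call), assuming the goal is to keep one latency-sensitive process from ever being migrated between cores:

```python
import os

# Linux-only sketch: pin this process to CPU core 2 so the scheduler never migrates it,
# keeping its caches warm and its response time predictable. The core number is purely
# illustrative; on a real system it would be a core reserved for this workload
# (for example, isolated from general work with the isolcpus boot option).
TARGET_CORE = 2

os.sched_setaffinity(0, {TARGET_CORE})   # 0 means "the calling process"
print("Now restricted to cores:", os.sched_getaffinity(0))
```

The machine as a whole gets less total work done this way, but the pinned task responds more consistently – exactly the throughput-for-latency trade described above.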
Common places where low computational latency is desired are critical controller systems (such as manufacturing controllers, where even a millisecond of latency can cause problems on the factory floor) and financial trading systems, where a few milliseconds of delay can mean that a price has already changed or that a product has already been sold and is no longer available. Speed, in terms of latency, is often the deciding factor between making money and losing money – even a single millisecond can be crippling.
Technically even audio and video processing systems have to be latency sensitive, but most modern computing systems have so much spare processing overhead, and latency is generally low enough, that most systems, even VoIP PBXs and conferencing systems, can function today while only very rarely needing to be aware of latency concerns on the processing side (even networking latency is becoming less and less of a concern). The average system administrator or engineer might easily go through an entire career without ever needing to work on a system that is latency sensitive, or at least one without enough spare overhead to mask that sensitivity.
Defining speed, whether that means throughput, latency, something else entirely, or some combination of the two, is very important in all aspects of IT and in life. Understanding how they affect us in different situations, how they interact with each other (generally in an inverse relationship, where improvements in throughput come at a cost to latency or vice versa), and how to balance them as needed to improve the systems that we work on is extremely valuable.