Servers can fail from overheating, which is one reason server rooms and data centers are kept in climate-controlled environments. A loss of air conditioning during the summer months can be a death knell for a server farm.
But what about the cold months of winter? Can low temperatures adversely affect server functionality? Ultimately, maintaining a quality HVAC system to control the server room climate is where IT operations staff and service providers should focus their attention.
Are there Standards for Cold Weather Server Room Operations?
Service providers and IT administrators located in colder climates should be aware that the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has published guidelines covering minimum temperatures for server operation. ASHRAE states that Class 1 and 2 equipment shouldn't operate at temperatures lower than 18 degrees Celsius, a relatively mild 64.4 degrees Fahrenheit. The standard for Class 3 and 4 equipment is looser: 5 degrees Celsius, or 41 degrees Fahrenheit.
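For readers who think in Fahrenheit, the conversion behind those figures is simple arithmetic. A minimal sketch (the function name is ours, not ASHRAE's):

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# ASHRAE minimum operating temperatures cited above
print(c_to_f(18))  # Class 1 and 2 floor: 64.4 F
print(c_to_f(5))   # Class 3 and 4 floor: 41.0 F
```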
Those ASHRAE guidelines reveal that servers don't appreciate even mild cold, let alone extreme cold, so it is vital to ensure that all climate control systems for server rooms are working smoothly during the winter. Still, the fear of server failure in the cold didn't stop Facebook from building a large data center in Lulea, Sweden, a small town just south of the Arctic Circle. Winter temperatures in Lulea can drop to -35 degrees Celsius, about -31 degrees Fahrenheit.
Temperature Fluctuations Matter More to Server Operation than Extreme Cold
While delivering custom server equipment to its Swedish data center, Facebook discovered that humidity, not cold itself, plays the larger role in hardware failures: rapid temperature fluctuations cause condensation to form on the vital circuitry inside computer equipment.
Frank Frankovsky, Facebook’s Director of Hardware and Supply Chain, commented on the discovery: “A rapid rate of change (in temperature) can create condensation on the electronics, and that’s no good. The transition is the important part. We want to make sure we don’t have a big rate of change,” said Frankovsky.
A study from the University of Toronto backed up what Facebook learned in the field: “Even failure conditions, such as node outages, that did not show a correlation with temperature, did show a clear correlation with the variability in temperature. Efforts in controlling such factors might be more important in keeping hardware failure rates low, than keeping temperatures low.”
So while winter brings other cold-related problems, like cars that won't start, it is rapidly changing temperatures that wreak the most havoc on computer hardware. Climate control is the key for IT operations staff, with a focus on maintaining a steady temperature and keeping humidity low.
Want more about hardware failure? You might be interested in the results of our latest survey on hardware failure.