Why Hybrid Cooling is the Future for Data Centres

Rising rack and power densities are driving significant interest in liquid cooling for many reasons. Yet the suggestion that one size fits all ignores a fundamental point that could hinder adoption: many data centre applications will continue to use air as the most efficient and cost-effective solution for their cooling requirements. The future is undoubtedly hybrid, and by using air cooling, containment, and liquid cooling together, owners and operators can optimise and future-proof their data centre environments. By Gordon Johnson, Senior CFD Manager, Subzero Engineering.

Today, many data centres are experiencing increasing power density per IT rack, rising to levels that just a few years ago seemed extreme and out of reach but are now considered common and typical, even while air cooling is still deployed. In 2020, for example, the Uptime Institute found that, driven by compute-intensive workloads, racks with densities of 20 kW and higher were becoming a reality for many data centres.

This increase has left data centre stakeholders wondering if air-cooled IT equipment (ITE), along with containment used to separate the cold supply air from the hot exhaust air, has finally reached its limits, and whether liquid cooling is the long-term solution. The answer, however, is not as simple as yes or no.

Moving forward, data centres are expected to transition from 100% air cooling to a hybrid model encompassing both air- and liquid-cooled solutions, with all new and existing air-cooled data centres requiring containment to improve efficiency, performance, and sustainability. Additionally, those moving to liquid cooling may still require containment to support their mission-critical applications, depending on the type of server technology deployed.

One might ask why the debate between air and liquid cooling is such a hot topic in the industry right now. To answer this question, we need to understand what is driving the need for liquid cooling, what the other options are, and how we can evaluate those options while continuing to use air as the primary cooling mechanism.

Can Air and Liquid Cooling Coexist?

For those newer to the industry, this is a position we have been in before: air and liquid cooling successfully coexisted for years, with substantial amounts of heat removed via intra-board air-to-water heat exchangers. That approach continued until the industry shifted primarily to CMOS technology in the 1990s, and we have been using air cooling in our data centres ever since.

With air as the primary medium used to cool data centres, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has worked towards making this technology as efficient and sustainable as possible. Since 2004, it has published, with the participation of ITE and cooling system manufacturers, a common set of criteria for cooling IT servers entitled “TC9.9 Thermal Guidelines for Data Processing Environments”.

ASHRAE has focused on the efficiency and reliability of cooling the ITE in the data centre. Several revisions have been published, with the latest released in 2021 (revision 5). This latest generation of TC9.9 highlights a new class of high-density air-cooled ITE (the H1 class), which focuses on cooling high-density servers and racks at the cost of some energy efficiency, because lower supply air temperatures are recommended to cool the ITE.

As to the question of whether or not air and liquid cooling can coexist in the data centre white space, it’s done so for decades already, and moving forward, many experts expect to see these two cooling technologies coexisting for years to come.

What Do Server Power Trends Reveal?

It’s easy to assume that, when it comes to power and cooling, one size fits all, both now and in the future, but that’s not accurate. It’s more important to focus on the actual workload of the data centre we’re designing or operating.

In the past, a common assumption with air cooling was that once you went above 25 kW per rack it was time to transition to liquid cooling. The industry has since made changes in this regard, enabling data centres to cool racks of 35 kW and beyond with traditional air cooling.

Scientific data centres, which largely run GPU-driven applications such as machine learning, AI, and compute-heavy analytics like cryptocurrency mining, are the areas of the industry typically transitioning towards liquid cooling. For other workloads, such as cloud and most business applications, densities are rising but air cooling still makes sense in terms of cost. The key is to look at the issue from a business perspective: what are we trying to accomplish with each data centre?

What’s Driving Server Power Growth?

Up to around 2010, businesses used single-core processors and then, once available, transitioned to multi-core processors; even so, power consumption remained relatively flat with these dual- and quad-core processors. This enabled server manufacturers to concentrate on lower airflow rates for cooling ITE, which resulted in better overall efficiency.

Around 2018, with processors continuing to shrink, higher core counts became the norm. As these processors approached their performance limits, the only way to keep delivering the performance demanded by compute-intensive applications was to increase power consumption. Server manufacturers have been packing as much as they can into servers, but because of rising CPU power consumption, some data centres have had difficulty removing the heat with air cooling, creating a need for alternative cooling solutions such as liquid.

Server manufacturers have also been increasing the temperature delta across servers for several years, which again has been good for efficiency, since the higher the temperature delta, the less airflow is needed to remove the heat. However, this approach is also reaching its limits, and data centre operators are having to increase airflow to cool high-density servers and keep up with rising power consumption.
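To make the relationship between heat load, temperature delta, and airflow concrete, the short Python sketch below applies the standard sensible-heat relation for air (Q = ρ · V̇ · cp · ΔT). It is an illustration, not part of the article: the rack loads and delta-T values are hypothetical, and the nominal air properties (density of roughly 1.2 kg/m³, specific heat of roughly 1005 J/(kg·K)) ignore altitude and humidity effects that a real design would need to account for.

```python
# Minimal sketch of the airflow/delta-T trade-off described above.
# Assumption: sea-level air properties; values below are illustrative only.

AIR_DENSITY = 1.2         # kg/m^3, nominal sea-level value
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def required_airflow_m3h(heat_load_w: float, delta_t_c: float) -> float:
    """Airflow (m^3/h) needed to remove heat_load_w at a server delta-T of delta_t_c."""
    m3_per_s = heat_load_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_c)
    return m3_per_s * 3600

# Hypothetical rack loads: a wider delta-T cuts the airflow requirement,
# but high-density racks still demand very large volumes of air.
for rack_kw in (5, 20, 35):
    for dt in (10, 15, 20):
        flow = required_airflow_m3h(rack_kw * 1000, dt)
        print(f"{rack_kw} kW rack, dT={dt} C -> ~{flow:,.0f} m^3/h")
```

Running the sketch shows the scale of the problem: at a 10 °C delta-T, a 35 kW rack needs on the order of 10,000 m³/h of air, roughly seven times what a 5 kW rack requires, which is why recirculation and bypass air become so costly at high densities.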

Additional Options For Air Cooling

Thankfully, there are several approaches the industry is embracing to successfully cool power densities of 35 kW per rack and beyond, often with traditional air cooling. These options start with deploying either cold- or hot-aisle containment. Without containment, rack densities should typically be no higher than 5 kW per rack, with additional supply airflow needed to compensate for recirculated air and hot spots.

What about lowering temperatures? In 2021, ASHRAE released its 5th-generation TC9.9, which highlighted a new class of high-density air-cooled IT equipment that will need to use more restrictive supply temperatures than previous classes of servers.

At some point, high-density servers and racks will also need to transition from air to liquid cooling, especially with CPUs and GPUs expected to exceed 500 watts per processor in the next few years. But this transition is not automatic and is not going to be for everyone.

Liquid cooling is not going to be the ideal solution or remedy for all future cooling requirements. Instead, choosing liquid cooling over air cooling depends on a variety of factors, including location, climate (temperature and humidity), power densities, workloads, efficiency, performance, heat reuse, and the physical space available.

This highlights the need for data centre stakeholders to take a holistic approach to cooling their critical systems. It will not and should not be an approach where we’re considering only air or only liquid cooling moving forward. Instead, the key is to understand the trade-offs of each cooling technology and deploy only what makes the most sense for the application.
