New efficiency standard will spur energy innovation in data centres

By Victor Avelar, Director and Senior Research Analyst at Schneider Electric’s Data Center Science Center.

By focusing on the trade-offs between mechanical load and electrical losses as a means of ensuring energy efficiency, ASHRAE's new Energy Standard for data centres is paving the way for industry best practices and a standards-based approach to data centre design.

Last week a UK news article publicised a long-awaited Energy Standard for Data Centres from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). Designated Standard 90.4-2016, it establishes minimum energy-efficiency requirements for data centres and includes recommendations on their design, construction, operation and maintenance, as well as on the use of on-site and off-site renewable energy.

ASHRAE's earlier 90.1 standard applies to energy efficiency in buildings generally and is widely referred to in building regulations. Standard 90.4 is a performance-based design standard that takes into account the special considerations affecting data centres, including variations in both mechanical load and electrical losses across different climate zones.

Calculations for both the electrical and mechanical components are made and then compared with the maximum allowable values for the appropriate climate zone. Compliance is achieved when the calculated values do not exceed those set out in the standard.
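To make that comparison concrete, here is a minimal sketch of how such a check might look in code. The climate-zone labels follow ASHRAE's naming convention, but the maximum allowable values and function names are illustrative placeholders, not figures taken from Standard 90.4-2016.

```python
# Illustrative sketch of a 90.4-style compliance check.
# The limits below are placeholders, not the values published in the standard.

# Hypothetical maximum allowable values per ASHRAE climate zone
MAX_MECHANICAL_LOAD_COMPONENT = {"4A": 0.35, "5B": 0.30}
MAX_ELECTRICAL_LOSS_COMPONENT = {"4A": 0.14, "5B": 0.14}

def complies(climate_zone: str, design_mlc: float, design_elc: float) -> bool:
    """True when both calculated design values stay within the zone's maxima."""
    return (design_mlc <= MAX_MECHANICAL_LOAD_COMPONENT[climate_zone]
            and design_elc <= MAX_ELECTRICAL_LOSS_COMPONENT[climate_zone])

print(complies("5B", design_mlc=0.28, design_elc=0.12))  # True: both within limits
print(complies("4A", design_mlc=0.40, design_elc=0.12))  # False: mechanical value too high
```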

 

Crucially, the new standard does not require a Power Usage Effectiveness (PUE) rating to ensure compliance, although this was considered at an earlier stage of the drafting process. In this, the Society clearly recognises that energy management in data centres is a more complex problem than can be resolved with a single metric such as PUE, useful though that figure certainly is in guiding energy-efficiency efforts.

Recent research detailed in Schneider Electric's White Paper 221, 'The Unexpected Impact of Raising Data Centre Temperatures', found that only a full understanding of both the cooling and power infrastructure of the data centre and the operational requirements of the IT equipment itself will yield optimum results in terms of efficiency and power consumption.

Laying undue emphasis on a single metric such as PUE, or on simple strategies such as allowing ambient temperatures to rise as a means of reducing overall power consumption, is insufficient in itself. The theory supporting raised temperatures is that cooling equipment can operate in economy mode and will not need to run as frequently, resulting in a lower energy requirement.

However, experience shows that the results of this strategy have been mixed.

 

PUE has the advantage of simplicity, in that it represents efficiency as a single metric, allowing data centre operators to measure the effectiveness of the power and cooling systems over time. However, it is quite limited, as it captures only the ratio between the total energy consumed by the IT equipment and infrastructure combined and the energy consumed by the IT equipment alone.

Therefore, lowering your PUE rating does not necessarily mean that your overall energy consumption has been reduced. In fact, PUE is only a measure of how efficient the physical infrastructure systems are in providing power to the IT load. It says nothing about the total energy being consumed by the data centre: it is a ratio, not a value that indicates a quantity of energy.

In essence, your PUE can improve (i.e., the power and cooling systems are more efficient) while your energy use throughout the data centre stays the same or even rises.
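A minimal worked example may make this clearer. The annual energy figures below are invented purely for illustration.

```python
# Worked example: PUE can fall while total energy consumption rises.

def pue(it_energy_kwh: float, infrastructure_energy_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy."""
    return (it_energy_kwh + infrastructure_energy_kwh) / it_energy_kwh

# Year 1: 1,000 MWh of IT load plus 600 MWh of power and cooling overhead
year1 = pue(1_000_000, 600_000)   # 1.60

# Year 2: the IT load grows to 1,500 MWh and the infrastructure becomes
# relatively more efficient, adding 750 MWh of overhead
year2 = pue(1_500_000, 750_000)   # 1.50

print(f"Year 1: PUE {year1:.2f}, total {1_000_000 + 600_000:,} kWh")
print(f"Year 2: PUE {year2:.2f}, total {1_500_000 + 750_000:,} kWh")
# PUE improves from 1.60 to 1.50, yet total consumption rises by roughly 40%.
```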

Although allowing chillers to operate in economiser mode for a greater part of the year does indeed produce immediate energy savings, these can be offset by the greater burden placed on other parts of the cooling infrastructure. Air coolers, for example, must operate when the chillers are in economiser mode, and the fans, both in the server racks themselves and in the CRAH (computer room air handler) units, have to work harder, and use more energy, as temperatures rise.
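That trade-off can be sketched with some back-of-the-envelope arithmetic. The figures below (economiser hours, chiller and fan power draws) are invented for illustration and will vary widely between sites; the point is simply that extra fan energy eats into the savings from longer economiser operation.

```python
# Back-of-the-envelope trade-off: longer economiser operation cuts chiller
# energy, but higher temperatures make server and CRAH fans draw more power.
# All numbers are hypothetical.

HOURS_PER_YEAR = 8760

def annual_cooling_energy_kwh(chiller_hours: float, chiller_kw: float, fan_kw: float) -> float:
    """Chillers run only outside economiser hours; fans run all year round."""
    return chiller_hours * chiller_kw + HOURS_PER_YEAR * fan_kw

# Baseline set point: chillers needed 5,000 h/year, fans average 40 kW
baseline = annual_cooling_energy_kwh(chiller_hours=5_000, chiller_kw=120, fan_kw=40)

# Raised set point: economiser hours grow (chillers only 3,000 h/year),
# but the fans ramp up and now average 55 kW
raised = annual_cooling_energy_kwh(chiller_hours=3_000, chiller_kw=120, fan_kw=55)

print(f"Baseline cooling energy: {baseline:,.0f} kWh/year")
print(f"Raised set point:        {raised:,.0f} kWh/year")
# Depending on the climate and the IT load, the extra fan energy can offset
# much of the chiller saving, and in hot climates can outweigh it entirely.
```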

Schneider Electric has completed studies of data centres in very different climatic regions, and the consequences of allowing temperatures to rise can vary greatly depending on the location and on whether or not a data centre is operating at full load.

When the data centre was operating at full load and temperatures were allowed to float between 15.6°C and 25.7°C, rather than being maintained at the lower level, energy efficiency and total cost of ownership were both improved in Seattle; energy efficiency improved slightly but total costs were unchanged in Chicago; and in Miami, a hotter climate, both efficiency and total costs were worsened.

At half load, energy efficiency and total costs improved in both Chicago and Seattle, but in Miami they again worsened. One reason for increased overall cost at high temperatures is the effect on the reliability of IT equipment: servers and storage products tend to have higher rates of failure when operating at higher temperatures.

The team at Schneider Electric's Data Center Science Center concluded that although operating at higher temperatures can be a useful strategy, care must be taken when implementing it to ensure optimal effects. Necessary steps include adopting air-management practices, such as hot- or cold-aisle containment, to reduce the risk of hot spots; designing the cooling architecture of the data centre to handle elevated temperatures; and taking the business growth plan into account, since data centre behaviour may vary as the IT load changes.

In addition, greater collaboration with IT equipment manufacturers is necessary to gain a better understanding of how the operational IT load and its reliability are affected at high temperatures.

By allowing data centre designers greater latitude to build facilities to their specific requirements, and by taking into account the differing load and cooling strategies that must be deployed in different climatic regions, ASHRAE's new 90.4 standard will encourage innovation in the development of efficient data centres, resulting in more reliable, efficient and cost-effective IT services.
