AI compute is accelerating power demand: global data centre electricity consumption is projected to more than double by 2030, to around 945 terawatt-hours (TWh), putting pressure on data centres to meet rising capacity requirements.
At the same time, new EU regulations are pushing for greater efficiency and a reduced environmental footprint: data centres with an IT power demand of 500 kW or above must now publish their environmental KPIs, including energy use, annually.
These developments are reshaping expectations and prompting a reassessment of what data centre flexibility, scalability, and reliability should look like now and in the future. The next era of data infrastructure must meet this demand for capacity and efficiency, and that requires rethinking how data centres are designed, constructed, and operated.
Smarter cooling solutions
With the rising demand for AI and high-performance computing, traditional cooling methods are no longer sufficient for evolving data centres. These workloads require much more powerful hardware, particularly densely packed GPUs, which generate significantly more heat per task than conventional systems. Typical air-based systems rely on large volumes of chilled air and energy-intensive equipment. As a result, liquid cooling is becoming a pivotal component of data centre infrastructure, redefining how data centres tackle thermal challenges.
Liquid cooling uses fluids to absorb heat directly from the source. Beyond improving thermal efficiency, this approach permits higher operating temperatures, significantly reducing data centres' reliance on chillers and compressors. This innovation elevates cooling from a technical operation to a strategic differentiator in data centre design.
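A rough sense of why liquid is so much more effective comes from basic heat-transfer arithmetic. The sketch below compares the coolant volume needed to carry away the same heat load with air and with water; the 100 kW rack load, 10 K temperature rise, and fluid property values are illustrative assumptions, not design figures.

```python
# Back-of-envelope comparison of coolant volume needed to remove a fixed heat
# load with air vs. water. Property values are rough room-temperature figures;
# the 100 kW load and 10 K temperature rise are illustrative assumptions.

def volumetric_flow_m3_per_s(heat_load_w, density_kg_m3, specific_heat_j_kg_k, delta_t_k):
    """Flow required so the coolant carries away heat_load_w at a delta_t_k rise:
    Q = rho * V_dot * c_p * dT  =>  V_dot = Q / (rho * c_p * dT)."""
    return heat_load_w / (density_kg_m3 * specific_heat_j_kg_k * delta_t_k)

HEAT_LOAD_W = 100_000   # one high-density AI rack (assumed)
DELTA_T_K = 10          # allowable coolant temperature rise (assumed)

air_flow = volumetric_flow_m3_per_s(HEAT_LOAD_W, 1.2, 1005, DELTA_T_K)
water_flow = volumetric_flow_m3_per_s(HEAT_LOAD_W, 997, 4186, DELTA_T_K)

print(f"Air:   {air_flow:.2f} m^3/s")          # ~8.3 m^3/s of chilled air
print(f"Water: {water_flow * 1000:.2f} L/s")   # ~2.4 L/s of water
print(f"Water moves the same heat in ~{air_flow / water_flow:,.0f}x less volume")
```

The three-orders-of-magnitude gap is why capturing heat in a liquid loop at the chip is so much less energy-intensive than pushing chilled air through a room.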
NVIDIA has already proposed equipment running at peak performance using 45°C liquid cooling, unlocking a future of greater efficiency across the industry. Such advancements would make it possible to eliminate compressors and their associated F-gases in all but the most extreme climates.
Consequently, operating at higher temperatures isn’t a concession – it’s a technological leap. Imagine getting rid of the air conditioning but still keeping the room cool. Liquid cooling is the smarter, simpler, and far more sustainable way forward.
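To illustrate why warmer coolant matters, the sketch below estimates how many hours a year a dry cooler alone could meet a given supply temperature, with no compressors running. The 10 K cooler approach and the synthetic temperature profile are assumptions standing in for real site weather data.

```python
# Rough estimate of compressor-free ("free cooling") hours at different liquid
# supply temperatures. The 10 K dry-cooler approach and the synthetic sinusoidal
# climate below are illustrative assumptions, not measured site data.

import math

def free_cooling_fraction(hourly_ambient_c, supply_temp_c, approach_k=10.0):
    """Fraction of hours where a dry cooler alone can deliver the supply temperature."""
    ok = sum(1 for t in hourly_ambient_c if t + approach_k <= supply_temp_c)
    return ok / len(hourly_ambient_c)

# Synthetic temperate-climate year: 10 C mean, +/-12 C seasonal, +/-6 C diurnal swing.
ambient = [10 + 12 * math.sin(2 * math.pi * h / 8760) + 6 * math.sin(2 * math.pi * h / 24)
           for h in range(8760)]

print(f"45 C supply: {free_cooling_fraction(ambient, 45.0):.0%} of hours compressor-free")
print(f"20 C supply: {free_cooling_fraction(ambient, 20.0):.0%} of hours compressor-free")
```

Under these assumptions a 45°C loop never needs a compressor, while a traditional 20°C chilled-water loop does for roughly half the year.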
Higher density, smaller footprint
The rise of AI is also forcing a rethink of how infrastructure is designed and deployed. Data centres will continue to see more concentrated computing power, with specialised hardware delivering greater processing capability per rack. And while data centre facilities continue to sprawl, higher-density architectures can reduce the overall impact by making more efficient use of space.
Notably, higher-density compute means more efficient use of capital by maximising the return on physical infrastructure investment. It also creates opportunities for better operational predictability, because it demands tighter integration with customer workloads.
Density-led design is setting a new benchmark for performance, efficiency, and cost-effectiveness. AI is accelerating this shift, not by shrinking data centre footprints, but by redefining how space is engineered and optimised.
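As a simple illustration of the footprint effect, the sketch below compares the rack count and whitespace area needed for a fixed IT capacity at different per-rack densities; the 20 MW load, the density tiers, and the floor space allowed per rack are illustrative assumptions.

```python
# Illustrative sketch of how rack density changes whitespace footprint for a
# fixed IT capacity. The 20 MW load, per-rack densities, and ~2.5 m^2 of floor
# space per rack (including aisle allowance) are all assumptions.

def whitespace(it_load_kw, kw_per_rack, m2_per_rack=2.5):
    racks = it_load_kw / kw_per_rack
    return racks, racks * m2_per_rack

for density in (10, 40, 100):  # kW per rack: legacy air, dense air, liquid-cooled AI
    racks, area = whitespace(20_000, density)
    print(f"{density:>3} kW/rack -> {racks:>5.0f} racks, ~{area:>6.0f} m^2 of whitespace")
```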
Bringing IT and facility systems together
The shift in design and thermal strategy is just one part of a broader transformation. As data centre demands evolve, IT and facility systems are being integrated to operate as a single, coordinated engine.
Traditionally, data centre infrastructure was designed and operated independently from the IT equipment it supports, but the relationship between IT and facility systems is shifting to enable the next generation of compute.
This separation is no longer sufficient: combining servers, power, cooling, and controls into a unified, intelligent infrastructure is now essential for delivering the performance, efficiency, and reliability that high-intensity compute demands.
Joining these systems together represents not only a change in infrastructure but also in attitude. Operators are treating IT hardware less like general-purpose tools and more like special-purpose factory equipment: highly specialised, precisely engineered, and tightly integrated with the facility systems that support it.
This approach necessitates a fresh perspective on how operators coordinate space, power, cooling, and orchestration. With better integration of systems, data centres unlock smarter energy use, faster problem resolution, and scalable, high-density deployments.
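What that integration can look like in practice is sketched below: a minimal control loop that nudges a coolant distribution unit (CDU) flow setpoint based on server coolant telemetry. The read_rack_return_temps and set_cdu_flow functions are hypothetical placeholders for whatever DCIM, BMC, or BMS interfaces a given site exposes, and the setpoints are illustrative.

```python
# A minimal sketch of IT/facility coordination, assuming hypothetical telemetry
# and control interfaces. Not a real site's API: read_rack_return_temps() and
# set_cdu_flow() stand in for DCIM/BMC polling and BMS setpoint writes.

import random
import time

TARGET_RETURN_C = 55.0   # coolant return temperature target (assumed)
DEADBAND_K = 1.0
FLOW_STEP_PCT = 5.0

def read_rack_return_temps():
    """Hypothetical stand-in for polling rack coolant return temperatures."""
    return [TARGET_RETURN_C + random.uniform(-3.0, 3.0) for _ in range(8)]

def set_cdu_flow(flow_pct):
    """Hypothetical stand-in for writing a flow setpoint to the CDU."""
    print(f"CDU flow setpoint -> {flow_pct:.0f}%")

def control_loop(iterations=5, flow_pct=60.0):
    for _ in range(iterations):
        hottest = max(read_rack_return_temps())
        if hottest > TARGET_RETURN_C + DEADBAND_K:
            flow_pct = min(100.0, flow_pct + FLOW_STEP_PCT)   # more flow, more heat removed
        elif hottest < TARGET_RETURN_C - DEADBAND_K:
            flow_pct = max(20.0, flow_pct - FLOW_STEP_PCT)    # back off to save pump energy
        set_cdu_flow(flow_pct)
        time.sleep(1)  # a real controller would poll on a much longer interval

control_loop()
```

In production this logic would live in the BMS or a dedicated controller with proper safety interlocks, but the principle is the same: facility plant reacting directly to IT-side signals rather than to room-level averages.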
Sustainable design strategies
With the rapid rise of power-hungry AI workloads, there is more pressure than ever for data centres to address their energy-intensive operations. Innovations in liquid cooling, energy optimisation, and efficient hardware design are helping to reduce environmental impact, but their effect is limited if the underlying energy source remains carbon intensive.
That’s why operators must consider where compute happens when managing these demands. Locating AI infrastructure in data centres powered by renewable energy offers a smarter, lower-carbon path forward. Locations such as the Nordics are rich in clean energy, enabling organisations to significantly reduce the carbon cost of innovation and high-density compute.
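The siting effect is easy to quantify at a high level. The sketch below multiplies an assumed facility energy use by a grid carbon-intensity figure; the 5 MW average IT load, PUE of 1.2, and per-grid intensities are rough, illustrative assumptions rather than measured values.

```python
# Illustrative carbon arithmetic for siting decisions. The 5 MW average IT load,
# the PUE of 1.2, and the grid carbon intensities are assumed, rounded figures.

def annual_tonnes_co2(avg_it_load_mw, pue, grid_gco2_per_kwh):
    annual_kwh = avg_it_load_mw * 1000 * pue * 8760   # hours in a year
    return annual_kwh * grid_gco2_per_kwh / 1_000_000  # grams -> tonnes

for label, intensity in [("coal-heavy grid (~700 gCO2/kWh)", 700),
                         ("EU average grid (~250 gCO2/kWh)", 250),
                         ("Nordic hydro/wind grid (~30 gCO2/kWh)", 30)]:
    print(f"{label}: ~{annual_tonnes_co2(5, 1.2, intensity):,.0f} tCO2/year")
```

Even with generous rounding, the same workload can differ by an order of magnitude or more in annual emissions depending purely on where it runs.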
Designing for the future
Against a backdrop of rapid technological, economic, and social change, adaptability in data centre design isn’t just beneficial – it’s essential.
Those who embrace innovation in build processes, design infrastructure for flexibility at scale, and integrate sustainability not as an afterthought, but as a foundational principle, hold the key to success. These strategies aren’t passing trends. They’re fast becoming the new standard for high-performing, future-ready data centres.