
AI deployment at Deep Green’s Urmston data centre

Deep Green has announced AI-ready colocation capacity at its Urmston facility in Manchester, with infrastructure designed for high-density AI and HPC workloads and deployment timelines of around four weeks.

The facility is positioned as a rapid-deployment option for organisations seeking AI infrastructure in the UK.

The site is designed to support high-density artificial intelligence and high-performance computing (HPC) workloads. For many organisations, scaling AI is increasingly constrained by infrastructure rather than by GPUs or software: power availability, planning delays and legacy data centre designs can stretch the delivery of new capacity to several years.

Deep Green uses a modular architecture intended to enable AI workloads to be deployed in weeks, providing organisations with access to UK-hosted compute capacity. The Urmston facility supports rack densities of up to 150kW, suitable for GPU clusters and high-performance computing workloads.

The infrastructure operates at a Power Usage Effectiveness (PUE) of below 1.2. PUE is the ratio of total facility energy to the energy consumed by IT equipment, so values closer to 1.0 indicate less energy spent on cooling and other overheads, and a sub-1.2 figure is more efficient than many conventional data centres. This combination of high-density capability and operational efficiency is designed to let organisations run AI workloads with consistent performance while managing operational costs.
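To make the PUE figure concrete, the ratio can be sketched as a small calculation. The energy figures below are illustrative assumptions, not measurements from the Urmston facility:

```python
# Illustrative sketch of Power Usage Effectiveness (PUE).
# PUE = total facility energy / IT equipment energy; values closer to
# 1.0 mean less energy goes to cooling and overheads.
# The numbers used here are hypothetical, not Deep Green's data.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return PUE: total facility energy divided by IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,150 kWh in total against 1,000 kWh of IT load
# has a PUE of 1.15, inside the sub-1.2 range cited for the site.
print(pue(1150, 1000))
```

At a PUE of 1.15, only 15% of the energy on top of the IT load is spent on cooling and other facility overheads; older facilities commonly run at 1.5 or higher.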

Mark Lee, CEO of Deep Green, said that infrastructure availability is a common challenge raised by customers. While advancements have been made in software and GPU technology, organisations often face delays in securing suitable infrastructure. He noted that the Manchester site allows organisations to deploy high-density AI racks in weeks.

Unlike conventional facilities, the site captures waste heat generated by AI compute and repurposes it locally. The heat can be used by nearby buildings and community facilities, integrating heat reuse into the facility design and reducing the environmental impact associated with high-performance computing.

The development reflects growing demand for infrastructure capable of supporting AI workloads while also incorporating approaches aimed at improving energy efficiency and local heat reuse.
