Increasing Power for Growing Workloads
As demand grows, so does the requirement for hyperscale data centres to provide much more IT power. For most existing cloud platforms, where central processing units (CPUs) are used in system infrastructure, standard power requirements have typically been 5-8kW per rack, with peak levels at 20kW. With AI workloads, where graphics processing units (GPUs) are used, power consumption has grown significantly, with some requirements reaching as high as 130kW per rack. Although these enormous power levels are still regarded as rare, demand at this level will become more common as the technology continues to grow and evolve.
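To put those figures in perspective, here is a quick back-of-the-envelope comparison. The per-rack figures (5-8kW typical, 20kW peak, 130kW for AI) come from the text above; the 100-rack data hall is a purely hypothetical example for illustration:

```python
# Illustrative rack power-density comparison.
# Per-rack figures are taken from the article; the hall size is assumed.

RACKS = 100            # hypothetical data hall size

cpu_typical_kw = 6.5   # midpoint of the 5-8 kW range
cpu_peak_kw = 20.0
ai_rack_kw = 130.0

print(f"CPU hall (typical): {RACKS * cpu_typical_kw / 1000:.2f} MW")
print(f"CPU hall (peak):    {RACKS * cpu_peak_kw / 1000:.2f} MW")
print(f"AI hall:            {RACKS * ai_rack_kw / 1000:.2f} MW")
print(f"AI vs typical CPU:  {ai_rack_kw / cpu_typical_kw:.0f}x per rack")
```

In other words, a hall of AI racks can draw on the order of twenty times the power of the same floor area filled with conventional CPU racks, which is what drives the cooling and site-design changes discussed below.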
Shifting to Liquid Cooling Methods
With such high levels of power consumption, it is critical to ensure equipment is cooled adequately. The dense power requirement of AI system infrastructure cannot be adequately cooled by conventional air-cooling methods. There is therefore a need to shift from transferring heat to air to transferring it to liquid, since liquids have a far higher heat capacity. Accommodating liquid-based cooling systems within data halls, close to high-power infrastructure, requires a considerable redesign of data centre architecture.
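The heat-capacity argument can be made concrete with the steady-state relation Q = ṁ·c_p·ΔT. A minimal sketch, taking the 130kW rack load from the text and assuming an illustrative 10K coolant temperature rise and approximate room-temperature fluid properties:

```python
# Why liquid cooling scales better: Q = m_dot * c_p * dT.
# The 130 kW load is from the article; the 10 K rise and the
# fluid properties below are illustrative assumptions.

Q = 130_000.0   # heat load per rack, W
DT = 10.0       # assumed coolant temperature rise, K

# Approximate properties at room temperature
cp_air, rho_air = 1005.0, 1.2         # J/(kg*K), kg/m^3
cp_water, rho_water = 4186.0, 998.0   # J/(kg*K), kg/m^3

def vol_flow(q, cp, rho, dt):
    """Volumetric flow (m^3/s) needed to absorb q watts at a dt rise."""
    return q / (cp * dt * rho)

air = vol_flow(Q, cp_air, rho_air, DT)
water = vol_flow(Q, cp_water, rho_water, DT)
print(f"Air:   {air:.1f} m^3/s")
print(f"Water: {water * 1000:.1f} L/s")
print(f"Air needs ~{air / water:.0f}x the volumetric flow of water")
```

Under these assumptions, removing 130kW takes roughly 11 cubic metres of air per second but only a few litres of water per second, which is why liquid loops can be routed to the rack itself rather than moving enormous volumes of air through the hall.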
Site Modifications to Cater for Changes in Demand
Design modifications at existing hyperscale data centre facilities will be required to meet this demand. However, this can prove challenging, with obstacles such as the availability of sufficient power and local planning rules that may interfere with the installation of further equipment at some data centres. The availability of space can also prove a challenge and will need to be considered when modifying or upgrading systems on existing sites. It therefore won't always be the case that existing data centres can accommodate new AI platforms.
Expanding Capacity for Denser Requirements
Whilst power densities have increased, data hall areas have remained the same size, even as AI nodes are built at massive scale. Where sites were traditionally built to offer 20-30MW, newer sites now need to be designed, built and efficiently operated at capacities that can deliver ideally over 70MW of IT power. With sustainability also a key strategic driver for many providers and their wider value chain, facilities need to be located where power demand can be met using renewable energy sources.
At Colt DCS, we are prepared to cater for the AI surge. For some time, we have been preparing for an increase in HPC requirements, and as a result have implemented a few processes, protocols and strategies accordingly:
- Adopting a hybrid cooling solution based on air and liquid cooling to manage increased heat generated from higher power rack densities. All of our new hyperscale data centres, currently in construction or that are to be developed across London, Paris, Frankfurt, Mumbai and Tokyo will be delivered using this approach.
- Using liquid cooling, we have the potential to support increased IT power loads.
- We have been building new campus-sized sites with larger total power capacities. For example, the phase one launch of our new Mumbai hyperscale data centre in 2023 was our first facility in India, and the biggest in our entire global estate, offering 134MW of IT power on completion.
- Implementing ‘active-active’ HPC architectures where backup generators are replaced with battery storage from uninterruptible power supplies (UPS) to protect critical IT loads during potential outages or failures.
- Exploring AI tools in future hyperscale data centres to optimise efficiency and reduce our impact on the environment; whilst also enhancing our global Environmental, Social and Governance (ESG) performance.
To find out more about our hyperscale data centre solutions and how we can support your growing or future planned HPC workloads, simply contact us.