How AI Workloads Are Forcing a Complete Rethink of Data Center Design

In 2023, a typical enterprise data center rack drew 7-10kW of power. By 2025, AI training racks routinely draw 40-70kW. NVIDIA’s GB200 NVL72 rack — the workhorse of large language model training — pulls 120kW.

That’s not an incremental change. It’s a 10x increase in heat density per square foot. And the cooling infrastructure built for the cloud era simply cannot cope.
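To put the "per square foot" claim in rough numbers, here is a quick back-of-envelope sketch. The ~7.5 sq ft rack footprint is an illustrative assumption (a standard rack enclosure, excluding aisle and clearance space), not a figure from this article.

```python
# Rough heat-density comparison, assuming a ~7.5 sq ft rack footprint
# (illustrative assumption; excludes aisle and clearance space).
RACK_SQ_FT = 7.5

racks = [
    ("2023 enterprise rack", 10),   # kW, top of the 7-10kW range
    ("2025 AI training rack", 70),  # kW, top of the 40-70kW range
    ("GB200 NVL72 rack", 120),      # kW
]

for label, kw in racks:
    print(f"{label}: {kw / RACK_SQ_FT:.1f} kW per sq ft")

print(f"Roughly a {120 / 10:.0f}x jump over the 2023 baseline")
```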

The Thermal Wall

Traditional air cooling works by pushing cold air through server racks and extracting the warmed air. At 7-10kW per rack, this is straightforward. At 40kW and above, it becomes impractical to move enough air through the rack: fan power and airflow limits mean the servers throttle themselves to avoid thermal damage.
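To see why, a back-of-envelope airflow calculation helps. The numbers below are illustrative assumptions, not figures from this article: a 15°C air temperature rise across the rack and standard air properties.

```python
# How much air would you need to push through a rack to carry its heat away?
# Illustrative assumptions: 15 K inlet-to-outlet temperature rise, air density
# 1.2 kg/m^3, specific heat 1005 J/(kg*K). Real designs vary.

CP_AIR = 1005.0     # J/(kg*K)
RHO_AIR = 1.2       # kg/m^3
DELTA_T = 15.0      # K, assumed temperature rise across the rack

def required_cfm(rack_kw: float) -> float:
    """Volumetric airflow (CFM) needed to remove rack_kw of heat: Q = m_dot * cp * dT."""
    mass_flow = rack_kw * 1000.0 / (CP_AIR * DELTA_T)  # kg/s
    volume_flow = mass_flow / RHO_AIR                   # m^3/s
    return volume_flow * 2118.88                        # m^3/s -> cubic feet per minute

for kw in (10, 40, 120):
    print(f"{kw:>3} kW rack needs ~{required_cfm(kw):,.0f} CFM")

# Roughly: 10 kW -> ~1,200 CFM, 40 kW -> ~4,700 CFM, 120 kW -> ~14,000 CFM --
# far beyond what a single rack's fans and front-to-back airflow can deliver.
```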

The industry’s response has been liquid cooling — pumping coolant directly to chips or immersing entire servers in dielectric fluid. Liquid cooling handles the density problem brilliantly. A rear-door heat exchanger or direct-to-chip cold plate can manage 100kW+ per rack without breaking a sweat.

But liquid cooling creates two new problems:

  • Water consumption increases 3-5x per rack. The heat has to go somewhere, and the secondary rejection loop (from liquid to atmosphere) typically uses evaporative cooling towers that consume enormous volumes of water; a rough estimate of the scale follows this list.
  • Waste heat concentration increases. Instead of diffuse warm air at 35°C, you have concentrated hot water at 45-60°C. More heat, in a more useful form — if you have a system that can capture it.
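To make the first bullet concrete, here is a rough, hedged estimate of evaporative water use. The assumptions are illustrative, not from this article: essentially all rack heat is rejected through an evaporative cooling tower, water's latent heat of vaporization is about 2.45 MJ/kg near ambient, and drift and blowdown losses are ignored.

```python
# Rough evaporative water use for continuous heat rejection.
# Illustrative assumptions: all rack heat rejected evaporatively, latent heat
# of vaporization ~2.45 MJ/kg, drift and blowdown losses ignored.

LATENT_HEAT_J_PER_KG = 2.45e6   # J/kg, water near ambient conditions
J_PER_KWH = 3.6e6

def litres_per_day(rack_kw: float) -> float:
    """Water evaporated (litres/day) to reject rack_kw of heat around the clock."""
    heat_joules = rack_kw * 24 * J_PER_KWH              # one day of heat, in joules
    kg_evaporated = heat_joules / LATENT_HEAT_J_PER_KG
    return kg_evaporated                                 # 1 kg of water ~= 1 litre

for kw in (10, 40, 120):
    print(f"{kw:>3} kW rack -> ~{litres_per_day(kw):,.0f} litres of water per day")

# Roughly: 10 kW -> ~350 L/day, 40 kW -> ~1,400 L/day, 120 kW -> ~4,200 L/day.
```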

The Opportunity in the Problem

Here’s the counterintuitive insight: AI workloads are actually better suited to waste heat recovery than traditional compute.

Traditional servers produce low-grade heat at 30-35°C — barely warm enough to drive any useful thermodynamic process. But AI training GPUs run at 70-80°C junction temperatures, producing waste heat at 45-60°C in the cooling loop. That’s a significantly more useful temperature differential.
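Why the temperature matters can be seen from the Carnot limit, which bounds how much of a heat stream any recovery cycle (THA, ORC, or otherwise) can theoretically convert to work. The sketch below is illustrative and assumes heat rejection to ambient at 25°C; real cycles achieve only a fraction of the Carnot limit.

```python
# Carnot limit on converting waste heat to work: eta = 1 - T_cold / T_hot (temperatures in kelvin).
# Illustrative assumption: heat rejected to ambient at 25 C.

def carnot_limit(t_hot_c: float, t_cold_c: float = 25.0) -> float:
    """Maximum theoretical conversion efficiency between two temperatures (inputs in Celsius)."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

for source_c in (35, 45, 60):
    print(f"{source_c} C waste heat -> Carnot limit ~{carnot_limit(source_c):.1%}")

# 35 C (air-cooled exhaust)      -> ~3.2%
# 45 C (low end of liquid loop)  -> ~6.3%
# 60 C (high end of liquid loop) -> ~10.5%
```

The absolute numbers are small, but the jump from roughly 3% to roughly 10% is the difference between heat that is essentially unrecoverable and heat worth building a recovery system around.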

Project Saguaro’s THA system is specifically designed to operate on waste heat in the 40-60°C range. Higher heat densities from AI workloads mean more thermal energy per rack to recover — and at temperatures that improve the THA’s thermodynamic efficiency.

In other words: the industry’s biggest thermal challenge is our system’s ideal operating condition.

What This Means for Data Center Planning

Operators planning new AI-capable facilities face a choice:

  1. Build conventional + liquid cooling: Handles the density, but locks in massive water consumption and grid dependency for the facility’s 15-20 year lifetime.
  2. Build for heat recovery from day one: Design the liquid cooling loop to feed waste heat into a recovery system rather than rejecting it to cooling towers.

Option 2 doesn’t require waiting for Project Saguaro to reach TRL 9. It requires designing the plumbing and thermal architecture to be recovery-ready — hot water loops that can connect to a THA or organic Rankine cycle (ORC) system when the technology is validated.

The incremental cost of recovery-ready design is minimal compared to the cost of retrofitting a facility that was built to dump heat. Operators who plan for heat recovery today will have a significant competitive advantage when the technology matures.

The Next 5 Years

By 2030, AI workloads are projected to consume 3-4% of global electricity — up from roughly 1% today. That’s tens of gigawatts of additional heat that will be generated, concentrated, and (in almost all current plans) wasted.
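For a rough sense of scale, the sketch below converts those percentages into continuous gigawatts, assuming global electricity generation of roughly 30,000 TWh per year (an illustrative assumption, not a figure from this article) and that essentially all compute energy ends up as heat.

```python
# Converting "percent of global electricity" into continuous gigawatts of heat.
# Illustrative assumptions: ~30,000 TWh/year of global electricity generation,
# with essentially all compute energy dissipated as heat.

GLOBAL_TWH_PER_YEAR = 30_000
HOURS_PER_YEAR = 8_760

avg_global_gw = GLOBAL_TWH_PER_YEAR * 1_000 / HOURS_PER_YEAR   # average global power, GW

for share in (0.01, 0.03, 0.04):
    print(f"{share:.0%} of global electricity ~= {avg_global_gw * share:,.0f} GW of continuous heat")

# 1% ~= 34 GW (roughly today's share), 3% ~= 103 GW, 4% ~= 137 GW.
```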

The operators who capture that heat and convert it into electricity and water will have facilities that are cheaper to run, easier to permit, and more attractive to ESG-conscious tenants.

The operators who don’t will be running 2020-era infrastructure in a 2030 regulatory environment. That’s not a comfortable position to be in.

Project Saguaro exists to make sure option 2 is available when the industry needs it. The validation work we’re doing now — THA component testing, ADE CFD simulation, integrated system modelling — is laying the engineering foundation for the next generation of data center infrastructure.

Join the consortium to help shape what that generation looks like.
