Blog

  • How AI Workloads Are Forcing a Complete Rethink of Data Center Design

    In 2023, a typical enterprise data center rack drew 7-10kW of power. By 2025, AI training racks routinely draw 40-70kW. NVIDIA’s GB200 NVL72 rack — the workhorse of large language model training — pulls 120kW.

    That’s not an incremental change. It’s a roughly tenfold increase in heat density per square foot. And the cooling infrastructure built for the cloud era simply cannot cope.

    The Thermal Wall

    Traditional air cooling works by pushing cold air through server racks and extracting the warmed air. At 7-10kW per rack, this is straightforward. At 40kW+, you physically cannot move enough air through the rack to prevent thermal throttling. The servers slow down to avoid damaging themselves.
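
    To see why, run the numbers. Here is a minimal sketch in Python, assuming standard sea-level air properties and a 15 K inlet-to-outlet temperature rise (illustrative values, not design figures):

    ```python
    # Volumetric airflow needed to carry away rack heat: Q = rho * V * cp * dT
    RHO_AIR = 1.2      # kg/m^3, sea-level air (assumed)
    CP_AIR = 1005.0    # J/(kg*K), specific heat of air
    DELTA_T = 15.0     # K, assumed inlet-to-outlet temperature rise

    def required_airflow_m3s(heat_kw: float) -> float:
        """Volumetric flow (m^3/s) needed to remove heat_kw of heat."""
        return heat_kw * 1000.0 / (RHO_AIR * CP_AIR * DELTA_T)

    for rack_kw in (10, 40, 120):
        flow = required_airflow_m3s(rack_kw)
        print(f"{rack_kw:>4} kW rack -> {flow:.2f} m^3/s ({flow * 2118.88:,.0f} CFM)")
    ```

    A 10kW rack needs roughly 0.55 m³/s of air; a 120kW rack needs about 6.6 m³/s, roughly 14,000 CFM through a single rack, which is beyond what practical fan and chassis geometry can deliver.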

    The industry’s response has been liquid cooling — pumping coolant directly to chips or immersing entire servers in dielectric fluid. Liquid cooling handles the density problem brilliantly. A rear-door heat exchanger or direct-to-chip cold plate can manage 100kW+ per rack without breaking a sweat.

    But liquid cooling creates two new problems:

    • Water consumption increases 3-5x per rack. The heat has to go somewhere, and the secondary rejection loop (from liquid to atmosphere) typically uses evaporative cooling towers that consume enormous volumes of water.
    • Waste heat concentration increases. Instead of diffuse warm air at 35°C, you have concentrated hot water at 45-60°C. More heat, in a more useful form — if you have a system that can capture it.

    The Opportunity in the Problem

    Here’s the counterintuitive insight: AI workloads are actually better suited to waste heat recovery than traditional compute.

    Traditional servers produce low-grade heat at 30-35°C — barely warm enough to drive any useful thermodynamic process. But AI training GPUs run at 70-80°C junction temperatures, producing waste heat at 45-60°C in the cooling loop. That’s a significantly more useful temperature differential.
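
    The Carnot limit makes this concrete. A quick comparison, assuming a 20°C ambient sink (an illustrative assumption):

    ```python
    # Carnot ceiling for heat-to-work conversion: eta = 1 - T_sink / T_source.
    # Real cycles achieve only a fraction of this, but the ceiling shows why
    # source temperature matters so much.
    T_SINK = 293.15  # K (20 C ambient, assumed)

    def carnot_limit(t_source_c: float) -> float:
        return 1.0 - T_SINK / (t_source_c + 273.15)

    print(f"35 C source (traditional servers): {carnot_limit(35):.1%}")    # ~4.9%
    print(f"60 C source (AI liquid-cooling loop): {carnot_limit(60):.1%}")  # ~12.0%
    ```

    Moving the source from 35°C to 60°C more than doubles the theoretical ceiling available to any recovery cycle.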

    Project Saguaro’s THA system is specifically designed to operate on waste heat in the 40-60°C range. Higher heat densities from AI workloads mean more thermal energy per rack to recover — and at temperatures that improve the THA’s thermodynamic efficiency.

    In other words: the industry’s biggest thermal challenge is our system’s ideal operating condition.

    What This Means for Data Center Planning

    Operators planning new AI-capable facilities face a choice:

    1. Build conventional + liquid cooling: Handles the density, but locks in massive water consumption and grid dependency for the facility’s 15-20 year lifetime.
    2. Build for heat recovery from day one: Design the liquid cooling loop to feed waste heat into a recovery system rather than rejecting it to cooling towers.

    Option 2 doesn’t require waiting for Project Saguaro to reach TRL 9. It requires designing the plumbing and thermal architecture to be recovery-ready — hot water loops that can connect to a THA or ORC system when the technology is validated.

    The incremental cost of recovery-ready design is minimal compared to the cost of retrofitting a facility that was built to dump heat. Operators who plan for heat recovery today will have a significant competitive advantage when the technology matures.

    The Next 5 Years

    By 2030, AI workloads are projected to consume 3-4% of global electricity — up from roughly 1% today. That’s tens of gigawatts of additional heat that will be generated, concentrated, and (in almost all current plans) wasted.
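
    A sanity check on "tens of gigawatts", assuming global generation of roughly 30,000 TWh per year (our assumption; the projection above gives only the percentages):

    ```python
    # Converting a share of annual global generation into continuous load.
    GLOBAL_TWH_PER_YEAR = 30_000  # assumed global electricity generation
    HOURS_PER_YEAR = 8760

    for share in (0.01, 0.03, 0.04):
        twh = GLOBAL_TWH_PER_YEAR * share
        avg_gw = twh * 1000 / HOURS_PER_YEAR  # TWh/yr -> average GW
        print(f"{share:.0%} -> {twh:,.0f} TWh/yr ≈ {avg_gw:,.0f} GW continuous")
    ```

    Even the increment from 1% to 3-4% is on the order of 70-100 GW of continuous load, and essentially every watt of it ends up as heat.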

    The operators who capture that heat and convert it into electricity and water will have facilities that are cheaper to run, easier to permit, and more attractive to ESG-conscious tenants.

    The operators who don’t will be running 2020-era infrastructure in a 2030 regulatory environment. That’s not a comfortable position to be in.

    Project Saguaro exists to make sure option 2 is available when the industry needs it. The validation work we’re doing now — THA component testing, ADE CFD simulation, integrated system modelling — is laying the engineering foundation for the next generation of data center infrastructure.

    Join the consortium to help shape what that generation looks like.

  • TRL Explained: How Deep Tech Goes From Concept to Commercial Deployment

    When we say Project Saguaro’s technologies are at TRL 2-4, what does that actually mean? Technology Readiness Levels (TRLs) are a 9-point scale originally developed by NASA to assess the maturity of new technologies. They’re now standard across aerospace, energy, defence, and increasingly, the data center industry.

    The Scale

    TRL 1 — Basic Principles Observed. The fundamental science is understood. For a waste heat recovery system, this means the thermodynamic principles of adsorption, phase change, and pressure generation are known. Status: Complete.

    TRL 2 — Technology Concept Formulated. A specific application has been identified. “We can use waste heat to drive an adsorption cycle that generates hydraulic pressure.” Status: Complete for both THA and ADE.

    TRL 3 — Proof of Concept. The critical function has been demonstrated analytically or experimentally. Key parameters have been modelled. Status: THA is here — analytical models show the pressure pathway is feasible. ADE requires CFD validation.

    TRL 4 — Component Validation in Lab. Individual components work in a laboratory setting. MOF sorbent beds adsorb and desorb at target rates. Pressure vessels hold target pressures. Status: THA is approaching this — component testing planned for 2026.

    TRL 5 — Component Validation in Relevant Environment. Components work outside the lab, in conditions approximating the real deployment environment.

    TRL 6 — System Demonstration in Relevant Environment. A complete prototype system operates in near-real conditions. This is where integrated THA+ADE testing would occur.

    TRL 7 — System Prototype in Operational Environment. The system works at or near full scale in an actual data center environment.

    TRL 8 — System Complete and Qualified. The technology has been proven to work in its final form under expected conditions.

    TRL 9 — Operational. Full commercial deployment.

    Where Project Saguaro Sits

    The Thermo-Hydraulic Amplifier (THA) is at TRL 3-4. The concept is proven analytically. The next step is component-level validation — testing individual subsystems (MOF beds, pressure vessels, hydraulic motors) under controlled conditions.

    The Atmospheric Density Engine (ADE) is at TRL 2-3. The concept is formulated and the physics are sound, but CFD simulation and scaled demonstrator testing are needed to validate the buoyancy-driven airflow assumptions.

    Why This Matters for Partners

    TRL 2-4 is where deep tech is most capital-efficient to join. The science risk is largely retired (we know the physics works). What remains is engineering risk — can we build it at the target specs, at the target cost?

    By TRL 6-7, the technology is proven but the early-mover advantage is gone. Licensing costs increase. Steering committee seats are taken. The founding consortium window closes.

    Project Saguaro is deliberately structured as a consortium precisely because TRL 3-4 validation requires shared investment and shared risk. No single organisation should fund deep tech validation alone — and no single organisation needs to.

    The 2026-2029 Roadmap

    • 2026: Phase 1 — Component validation (THA modules, ADE CFD + scaled demonstrator)
    • 2027-2028: Phase 2 — Integrated prototype testing (coupled THA+ADE system)
    • 2028+: Phase 3 — Pilot deployment at selected site with audit-grade metering
    • 2029+: Commercial licensing based on validated pilot performance

    Each phase has defined go/no-go gates. If a subsystem doesn’t meet targets, we know before committing to the next phase. That’s responsible deep tech development — and it’s why consortium partners can invest with confidence.
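
    As a sketch of what gate-driven development looks like in practice, here is a minimal phase-gate model in Python. The metric names and thresholds are hypothetical placeholders, not the programme's actual gate criteria:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Gate:
        metric: str
        target: float
        measured: float | None = None  # None until the test campaign reports

        def passes(self) -> bool:
            return self.measured is not None and self.measured >= self.target

    # Hypothetical Phase 1 gates -- placeholder values, not real criteria.
    phase1 = [
        Gate("MOF bed water uptake (g/g)", target=0.30),
        Gate("Desorption pressure (bar)", target=50.0),
        Gate("Skid net energy balance (kW)", target=0.0),
    ]

    def go_no_go(gates: list[Gate]) -> bool:
        """Proceed to the next phase only if every gate is met."""
        return all(g.passes() for g in gates)

    print("Proceed to Phase 2:", go_no_go(phase1))  # False until results land
    ```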

  • Waste Heat Recovery: The Data Center Industry’s $25 Billion Blind Spot

    Every watt of electricity consumed by a server is eventually converted into heat. Not all of it can be captured at a useful temperature, but a 100MW data center still produces roughly 35MW of recoverable waste heat — enough thermal energy to heat 10,000 homes or generate several megawatts of electricity.
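
    The arithmetic behind those headline figures, assuming a 3.5 kW average per-home heating load (our assumption) and the low-grade conversion efficiencies discussed below:

    ```python
    RECOVERABLE_MW_TH = 35   # thermal, per the 100 MW facility example
    HOMES = 10_000

    print(f"Heat per home: {RECOVERABLE_MW_TH * 1000 / HOMES:.1f} kW average")  # 3.5 kW

    # "Several megawatts of electricity" at low-grade conversion efficiencies:
    for eta in (0.05, 0.12):
        print(f"At {eta:.0%} conversion: {RECOVERABLE_MW_TH * eta:.2f} MW electric")
    ```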

    Globally, data centers reject an estimated 50-60 TWh of waste heat per year. At current industrial heat prices, that represents roughly $25 billion in untapped thermal energy — simply vented into the atmosphere through cooling towers and heat exchangers.

    Why Nobody Captures It

    The challenge isn’t thermodynamic — it’s economic and logistical.

    Data center waste heat is “low-grade” — typically 40-60°C. That’s too cool for most industrial processes, which need heat above 100°C. It’s warm enough for district heating, but that requires proximity to residential networks and complex off-take agreements with local authorities.

    A handful of operators have made it work. Stockholm Data Parks pipes waste heat from facilities to the city’s district heating network. Facebook’s Odense data center in Denmark supplies heat to 6,900 homes. Amazon has a similar arrangement in Dublin.

    But these are exceptions. Globally, less than 2% of data center waste heat is recovered for any productive purpose.

    The Temperature Problem

    The fundamental challenge is temperature uplift. To generate electricity from 40-60°C waste heat, you need a thermodynamic cycle that can extract useful work from a small temperature differential. Organic Rankine Cycle (ORC) systems can do this, but at low thermal-to-electric efficiencies of 5-12% — often not enough to justify the capital expenditure.
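
    To see why the capital case is marginal, here is the electric output and annual energy value at those efficiencies for the 35MW-thermal example above, priced at the $0.15-0.25/kWh grid rates cited later in this post (a rough sketch; ORC capital costs are not modelled here):

    ```python
    WASTE_HEAT_MW_TH = 35
    HOURS_PER_YEAR = 8760

    for eta in (0.05, 0.12):
        mw_e = WASTE_HEAT_MW_TH * eta
        gwh_yr = mw_e * HOURS_PER_YEAR / 1000  # assumes continuous operation
        for price_usd_kwh in (0.15, 0.25):
            value_musd = gwh_yr * price_usd_kwh  # 1 GWh = 1e6 kWh, so GWh * $/kWh = $M
            print(f"eta {eta:.0%}, ${price_usd_kwh}/kWh: {mw_e:.2f} MWe, "
                  f"{gwh_yr:.1f} GWh/yr, ~${value_musd:.1f}M/yr")
    ```

    At the low end, that is around $2M a year of displaced electricity, which struggles to repay a multi-megawatt ORC installation.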

    Project Saguaro’s Thermo-Hydraulic Amplifier (THA) takes a different approach. Rather than trying to run a turbine from low-grade heat, the THA uses waste heat to drive an adsorption-regeneration cycle that converts atmospheric moisture into high-pressure hydraulic energy.

    The concept: waste heat at 40-60°C drives moisture desorption from a Metal-Organic Framework (MOF) sorbent bed. The released water vapour is compressed through the regeneration cycle to 50-100 bar — pressure that can drive a hydraulic motor to generate electricity, while simultaneously producing distilled water as a byproduct.
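
    For intuition on what those pressures are worth, hydraulic power is simply pressure times volumetric flow. The flow rates below are illustrative assumptions; the post gives only the 50-100 bar pressure target:

    ```python
    # Hydraulic power: P = delta_p * Q (pressure drop across the motor x flow).
    def hydraulic_power_kw(pressure_bar: float, flow_l_per_s: float) -> float:
        return (pressure_bar * 1e5) * (flow_l_per_s / 1000.0) / 1000.0  # W -> kW

    for bar in (50, 100):
        for lps in (1.0, 10.0):
            print(f"{bar:>3} bar at {lps:>4.1f} L/s -> {hydraulic_power_kw(bar, lps):6.0f} kW")
    ```

    At 100 bar, even 10 litres per second represents 100kW of hydraulic power before motor losses.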

    From Waste to Resource

    If the THA design targets are validated at pilot scale, the economics shift dramatically:

    • Revenue from electricity: Self-generated power displaces grid purchases at $0.15-0.25/kWh
    • Revenue from water: Produced water can supply the facility’s own cooling needs or be sold
    • Avoided carbon costs: As carbon pricing rises (currently ~$90/tCO2 in the EU ETS), avoiding grid electricity avoids the embedded carbon cost
    • Planning advantage: Facilities that don’t draw grid power or mains water face fewer planning objections

    Validation Status

    The THA system is currently at TRL 3-4. The key validation milestones for 2026 include:

    • Working fluid and pressure pathway demonstration at bench scale
    • MOF adsorption bed capacity and kinetics under real humidity conditions
    • Condenser duty vs. sink temperature envelope mapping
    • Net energy balance at skid scale (target: positive net energy output)

    These are hard engineering challenges, not theoretical ones. The thermodynamics are well-understood. The question is whether the system can achieve the target pressures and flow rates at a cost that makes commercial sense.

    We believe it can. The consortium validation program is designed to prove it.

  • The Data Center Water Crisis Nobody Is Talking About

    In 2024, Google’s data centers consumed 6.1 billion gallons of water. Microsoft used 7.8 billion gallons. Meta’s facilities in Arizona and New Mexico drew water from aquifers that are already critically depleted.

    These numbers are growing at 20-30% year-on-year, driven almost entirely by AI training and inference workloads that generate significantly more heat per rack than traditional compute.

    Why Data Centers Need So Much Water

    Most large data centers use evaporative cooling — essentially, they spray water across heat exchangers to absorb waste heat through evaporation. It’s efficient at removing heat, but it’s fundamentally consumptive. The water doesn’t come back.

    A typical 100MW facility consumes 3-5 million gallons of water per day. That’s equivalent to the daily water consumption of a city of 50,000 people.
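
    That equivalence checks out under a typical per-capita figure of about 80 US gallons per person per day (our assumption; actual municipal figures vary):

    ```python
    GAL_PER_PERSON_PER_DAY = 80  # assumed typical per-capita use
    for facility_mgd in (3, 5):
        people = facility_mgd * 1_000_000 / GAL_PER_PERSON_PER_DAY
        print(f"{facility_mgd}M gal/day ≈ daily water use of {people:,.0f} people")
    ```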

    Liquid cooling — often presented as the solution to AI-density thermal challenges — actually makes the problem worse. Direct-to-chip liquid cooling handles higher heat densities brilliantly, but it requires 3-5x more water per rack than traditional air cooling once you account for the secondary rejection loop.

    The Regulatory Squeeze

    Regulators are waking up. The Netherlands imposed a moratorium on new data center construction in Amsterdam partly due to water and power concerns. Singapore has capped data center capacity. Ireland — Europe’s data center capital — has flagged grid and water constraints.

    In the UK, the Environment Agency has warned that several regions face water stress. Thames Water supplies water to most of London’s data center cluster. Southern Water serves the growing Hampshire corridor. Both utilities are under severe financial and operational pressure.

    Planning permissions for new facilities increasingly require Water Usage Effectiveness (WUE) commitments. But WUE, like PUE, only measures how efficiently you consume water — not whether you should be consuming it at all.
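
    For reference, The Green Grid defines WUE as litres of site water consumed per kilowatt-hour of IT equipment energy. A quick calculation using this article's example figures (100MW IT load, 4 million gallons per day):

    ```python
    def wue(annual_water_litres: float, annual_it_energy_kwh: float) -> float:
        """WUE in L/kWh: site water consumed per unit of IT energy."""
        return annual_water_litres / annual_it_energy_kwh

    it_kwh = 100_000 * 8760              # 100 MW IT load, running all year
    water_l = 4_000_000 * 3.785 * 365    # 4M US gal/day -> litres/year
    print(f"WUE ≈ {wue(water_l, it_kwh):.1f} L/kWh")  # ≈ 6.3 L/kWh
    ```

    A lower WUE is better, but the metric still normalises consumption rather than asking whether mains water should be drawn at all.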

    The Atmospheric Alternative

    What if a data center could produce water instead of consuming it?

    The atmosphere contains approximately 12,900 cubic kilometres of water vapour at any given time. Atmospheric water harvesting — extracting moisture from air — is well-established technology in arid regions for drinking water.

    Project Saguaro’s Atmospheric Density Engine (ADE) takes this principle and integrates it directly into the data center cooling loop. By using subterranean convection driven by waste heat differentials, the system targets net water production — meaning the facility would discharge more clean water than it consumes.

    At TRL 2-3, this is still in the validation phase. The critical unknowns include buoyancy budget calculations, pressure-loss profiles, and seasonal variability in atmospheric moisture content. But the thermodynamic basis is sound: where there is waste heat and atmospheric moisture, there is recoverable water.
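
    The buoyancy budget is worth a worked example. The stack-effect driving pressure for a warm internal air column against cooler ambient is dP = g * h * (rho_out - rho_in); the heights and temperatures below are illustrative assumptions, not ADE design parameters:

    ```python
    G, P_ATM, R_AIR = 9.81, 101_325, 287.05  # m/s^2, Pa, J/(kg*K)

    def air_density(t_celsius: float) -> float:
        """Ideal-gas density of dry air at atmospheric pressure."""
        return P_ATM / (R_AIR * (t_celsius + 273.15))

    def stack_pressure_pa(height_m: float, t_in_c: float, t_out_c: float) -> float:
        return G * height_m * (air_density(t_out_c) - air_density(t_in_c))

    # 30 m column, 45 C inside (waste-heat warmed) vs 25 C ambient -- assumed values:
    print(f"Driving pressure: {stack_pressure_pa(30, 45, 25):.0f} Pa")  # ~22 Pa
    ```

    Driving pressures of a few tens of pascals leave very little margin, which is exactly why the pressure-loss profile sits on the critical-unknowns list.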

    What Happens If We Don’t Act

    AI workloads are projected to consume 4.2-6.6 billion cubic metres of water globally by 2027 — and that’s a conservative estimate based on current growth trajectories. As GPT-scale models become standard enterprise infrastructure rather than research curiosities, every major cloud region will face water competition between data centers and communities.

    The data center industry has two choices: fight for water allocations against hospitals, farms, and households — or develop infrastructure that doesn’t need water from the mains at all.

    Project Saguaro is pursuing the second option.

  • Why PUE Is the Wrong Metric for Sustainable Data Centers

    Power Usage Effectiveness has been the data center industry’s go-to sustainability metric since 2006. A PUE of 1.0 means perfect efficiency — every watt goes to compute, none to overhead. Google proudly reports a fleet-wide PUE of 1.10. Equinix targets 1.25. The industry average sits around 1.58.
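
    The definition is simple: PUE is total facility energy divided by IT equipment energy. Worked against the figures above:

    ```python
    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        return total_facility_kw / it_equipment_kw

    # Per 100 kW of IT load:
    for name, total in [("Google fleet", 110), ("Equinix target", 125),
                        ("Industry average", 158)]:
        print(f"{name}: PUE {pue(total, 100):.2f} -> {total - 100} kW of overhead")
    ```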

    But PUE has a fundamental blind spot: it says nothing about what happens to the energy after it’s used.

    The Efficiency Trap

    A data center with a PUE of 1.10 is extraordinarily efficient at converting grid electricity into compute. But it still draws 100% of its power from the grid. It still rejects 100% of its waste heat into the atmosphere. It still consumes millions of litres of water for cooling.

    PUE measures how well you use energy. It doesn’t measure whether you should be using that energy at all.

    What We Should Be Measuring Instead

    A truly sustainable data center metric needs to account for three things:

    • Net energy position: How much grid power does the facility actually draw, after accounting for any energy it generates?
    • Net water position: Does the facility consume water, or does it produce it?
    • Waste heat utilisation: Is rejected heat captured for productive use, or simply dumped?

    This is why Project Saguaro measures Net Resource Impact (NRI) rather than PUE. NRI captures the full resource lifecycle — energy in, energy generated, water consumed, water produced, heat rejected, heat recovered.

    The Numbers That Matter

    Consider a conventional 100MW facility:

    • Grid draw: 150MW (PUE 1.5)
    • Waste heat rejected: 35MW thermal
    • Water consumed: 3-5 million gallons/day
    • Net energy position: -150MW

    Now consider the same facility with integrated THA waste heat recovery and ADE atmospheric water harvesting (design targets, pending validation):

    • Grid draw: 7.5MW (95% reduction target)
    • Waste heat recovered: 35MW thermal converted to electricity + distilled water
    • Water produced: +300,000 litres/day net surplus
    • Net energy position: approaching neutral

    Both facilities could report excellent PUE numbers. Only one is approaching genuine sustainability.
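
    The post defines NRI's dimensions but not a formula, so here is a minimal bookkeeping sketch of how the two facilities above compare; the ledger structure is an illustrative assumption, not the actual NRI methodology:

    ```python
    from dataclasses import dataclass

    @dataclass
    class ResourceLedger:
        grid_draw_mw: float      # electricity drawn from the grid
        generated_mw: float      # electricity generated on site
        water_in_l_day: float    # mains/ground water consumed
        water_out_l_day: float   # clean water discharged

        def net_energy_mw(self) -> float:
            return self.generated_mw - self.grid_draw_mw

        def net_water_l_day(self) -> float:
            return self.water_out_l_day - self.water_in_l_day

    # 4M US gal/day ≈ 15.1M litres/day (mid-range of the figures above)
    conventional = ResourceLedger(150, 0, 15_140_000, 0)
    recovery_target = ResourceLedger(7.5, 0, 0, 300_000)  # design targets, unvalidated

    for name, f in [("conventional", conventional),
                    ("recovery-integrated", recovery_target)]:
        print(f"{name}: net energy {f.net_energy_mw():+.1f} MW, "
              f"net water {f.net_water_l_day():+,.0f} L/day")
    ```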

    The Industry Is Starting to Notice

    The EU Energy Efficiency Directive now requires data centers above 500kW to report energy performance annually. The UK’s Climate Change Committee has flagged data center water consumption as a growing concern. Investors increasingly demand ESG metrics that go beyond simple efficiency ratios.

    PUE served the industry well for nearly two decades. But as we approach the physical limits of efficiency optimisation, the question is no longer “how efficiently do we use resources?” but “can we give back more than we take?”

    That’s the question Project Saguaro was designed to answer.

  • Analysis: Towering South Asia: India’s Digital Leap

    Catching up on industry news recently, one statistic jumped out at me: India is home to over 40% of the world’s data center capacity growth. That got me thinking: what’s driving this rapid expansion? A recent article in Data Center Dynamics, “Towering South Asia: India’s Digital Leap”, sheds light on the trends shaping India’s data center market.

    According to the article, India’s market is shifting from serving telecom giants to serving hyperscalers, prioritizing space-efficient design and green power to meet growing demand beyond Tier 1 cities. This shift is driven by a surge in cloud adoption, e-commerce growth, and increasing reliance on digital services.

    The Real Challenge

    As data center operators, we’re faced with the daunting task of meeting this exploding demand while minimizing our environmental footprint. The article highlights the pressure to adopt sustainable practices, from reducing energy consumption to utilizing green power sources. But what does this mean for us on the ground?

    Take space efficiency, for instance. As hyperscalers continue to drive growth, we need to optimize designs to fit more capacity into existing facilities or build new ones that are inherently efficient. That means rethinking traditional approaches such as evaporative water cooling and exploring alternatives such as subterranean cooling.

    Our Approach

    At Project Saguaro, we’re tackling this challenge head-on by developing integrated solutions that cut a facility’s carbon footprint while increasing operational efficiency. Our approach centers on two key technologies: the THA (waste heat recovery) and the ADE (subterranean cooling). By combining them, we aim to create a fully integrated, symbiotic system that sets new standards for sustainability.

    We’re targeting net-positive water production (300k L/day) and 95% grid independence, though both targets still require validation through the consortium’s rigorous testing process. What makes us different is the holistic approach: where some competitors pursue heat-to-power or water recycling separately, we integrate both.

    Join Us

    If you’re as excited about the potential for net-positive data centers as we are, join the conversation! Learn more about Project Saguaro and our consortium’s efforts to validate innovative solutions that can transform the industry. Visit our website or reach out to us at consortium@netpositivedatacenters.org. Let’s work together to create a more sustainable future for data centers.