Hardware alone won't close the gap
AI infrastructure is growing faster than our ability to measure its impact. This project connects the dots between hardware efficiency, energy mix, and carbon output to surface where the real leverage is.
We make three core assertions about how AI data center energy and carbon scale. Each is supported by the live data on this site, and each represents an independent lever — together they multiply.
GPU efficiency improves roughly 2.5× per generation (the V100 → H100 jump delivers ~3.8× more TFLOPS per watt), but AI workloads have grown ~10× over the same window. Hardware gains are real, but they lag the demand curve. [1]
Identical workloads draw roughly 2× more grid energy in legacy enterprise facilities (PUE ~2.4) than in best-in-class hyperscale facilities (PUE ~1.1). Cooling and power delivery are where that gap lives: invisible to the workload, but roughly doubling the bill. [2]
Generating the same kWh emits ~30 gCO₂ on Norway's hydro-dominated grid and 700+ gCO₂ on coal-heavy grids. Where a data center is built matters more for emissions than how efficient it is internally. Geography is the largest single lever in the model. [3]
Together, these three levers multiply. The best-to-worst spread across the model is 10× or more, and no single lever closes that gap on its own: hardware, facility, and location must all improve together. The sketch below works through the arithmetic.
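To make the multiplication concrete, here is a minimal sketch of the three-lever arithmetic. The function names and constants are illustrative assumptions chosen to echo the figures quoted above (≈3.8× hardware gap, PUE 2.4 vs 1.1, 700 vs 30 gCO₂/kWh), not values pulled from the live dataset.

```python
# Minimal sketch of the three-lever model, with illustrative (assumed) constants.

JOULES_PER_KWH = 3.6e6  # 1 kWh = 3.6 MJ


def grid_energy_kwh(work_tflop: float, tflops_per_watt: float, pue: float) -> float:
    """Grid energy drawn to complete `work_tflop` of compute.

    TFLOPS/W is equivalent to TFLOP per joule, so work divided by efficiency
    gives IT energy in joules; PUE scales that up for cooling and power delivery.
    """
    it_energy_j = work_tflop / tflops_per_watt          # lever 1: GPU generation
    return it_energy_j / JOULES_PER_KWH * pue           # lever 2: facility overhead


def emissions_g(work_tflop: float, tflops_per_watt: float, pue: float,
                grid_gco2_per_kwh: float) -> float:
    """Carbon emitted for the same work: grid energy times grid carbon intensity."""
    return grid_energy_kwh(work_tflop, tflops_per_watt, pue) * grid_gco2_per_kwh  # lever 3: location


# Same fixed workload, deployed two ways (figures are assumptions echoing the text).
WORK = 1e9  # total work in TFLOP, arbitrary but identical for both deployments

worst = emissions_g(WORK, tflops_per_watt=0.4, pue=2.4, grid_gco2_per_kwh=700)
best = emissions_g(WORK, tflops_per_watt=1.5, pue=1.1, grid_gco2_per_kwh=30)
print(f"best-to-worst carbon spread: {worst / best:.0f}x")  # ~190x with these inputs
```

Because the levers enter the calculation as plain multipliers, improving any one of them scales the result linearly, and the combined spread is the product of the three individual ratios.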
How do energy consumption patterns, electricity source, and GPU hardware characteristics interact to determine the most energy-efficient strategies for scaling AI data centers?
The data shows that the best-practice combination of a high-efficiency facility (PUE ~1.1), a low-carbon grid, and latest-gen GPUs can cut the total energy and carbon footprint by a factor of 10 or more compared with a worst-case deployment. Location and hardware choice are the most underutilized levers.
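To see which lever matters most, compare each one in isolation while holding the other two fixed. The figures below are the same illustrative numbers quoted in the assertions above; with these inputs the location lever dominates, which is why the combined spread lands well above the conservative 10× floor.

```python
# Per-lever best-to-worst ratios, holding the other two levers fixed
# (illustrative figures taken from the assertions above).
hardware = 3.8        # latest-gen vs older-gen TFLOPS per watt
facility = 2.4 / 1.1  # legacy enterprise PUE vs best-in-class hyperscale PUE
location = 700 / 30   # coal-heavy grid vs hydro grid, gCO2 per kWh

for name, ratio in [("hardware", hardware), ("facility", facility), ("location", location)]:
    print(f"{name:8} alone: {ratio:5.1f}x")
```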