Powering Intelligence:
Energy & AI Data Centers

Text will go here.

Sources: Text will go here.

Global Data Center Electricity Consumption by Region

Text will go here.

| Region | 2022 (TWh) | 2024 (TWh) | 2030 Projected (TWh) | 2024–2030 Growth | Share of Global (2024) | Primary Energy Mix | Carbon Intensity |
| --- | --- | --- | --- | --- | --- | --- | --- |
| United States | 130 | 183 | 426 | +133% | ~45% | Gas 41%, Nuclear 20% | High |
| China | 88 | 104 | 279 | +168% | ~25% | Coal-heavy mix | Very High |
| Europe (EU+UK) | 56 | 70 | 115 | +64% | ~15% | Renewables-leading | Medium |
| Japan | 14 | 19 | 34 | +79% | ~5% | Gas & Nuclear mix | High |
| Rest of World | 40 | 39 | 91 | +133% | ~10% | Mixed | Varies |
| Global Total | 460 | 415 | 945 | +128% | 100% | Gas 40%+ projected | High |
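The growth and total figures above can be recomputed directly from the 2024 and 2030 columns. A minimal Python sketch (the `growth_pct` helper and the `regions` dict are ours for illustration; the TWh values are copied from the table):

```python
# 2024 and projected 2030 consumption in TWh, per region (from the table above).
regions = {
    "United States": (183, 426),
    "China": (104, 279),
    "Europe (EU+UK)": (70, 115),
    "Japan": (19, 34),
    "Rest of World": (39, 91),
}

def growth_pct(start_twh: float, end_twh: float) -> int:
    """Percent growth from start_twh to end_twh, rounded to the nearest percent."""
    return round((end_twh - start_twh) / start_twh * 100)

for name, (y2024, y2030) in regions.items():
    print(f"{name}: +{growth_pct(y2024, y2030)}%")

# The regional 2024 and 2030 columns sum to the global totals (415 and 945 TWh).
global_2024 = sum(v[0] for v in regions.values())
global_2030 = sum(v[1] for v in regions.values())
print(f"Global: {global_2024} -> {global_2030} TWh (+{growth_pct(global_2024, global_2030)}%)")
```

Running this reproduces the table's growth column, including the +128% global figure, confirming the regional rows and the global total are internally consistent for 2024–2030.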

Sources: Text will go here.


GPU Hardware Efficiency Comparison

Text will go here.

| GPU | Architecture | Release Year | TDP (Watts) | FP16 TFLOPS | VRAM (GB) | Mem. Bandwidth (TB/s) | TFLOPS/Watt | Use Case Tier |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| V100 SXM2 | Volta | 2017 | 300 | 112 | 32 | 0.90 | 0.37 | Data Center |
| RTX 3090 | Ampere (consumer) | 2020 | 350 | 142 | 24 | 0.94 | 0.41 | Prosumer |
| A100 SXM4 | Ampere | 2020 | 400 | 312 | 80 | 2.00 | 0.78 | Data Center |
| RTX 4090 | Ada Lovelace (consumer) | 2022 | 450 | 330 | 24 | 1.01 | 0.73 | Prosumer / Edge |
| H100 SXM5 | Hopper | 2022 | 700 | 990 | 80 | 3.35 | 1.41 | Data Center (AI) |
| H200 SXM | Hopper | 2024 | 700 | 990 | 141 | 4.80 | 1.41 | Data Center (AI) |
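The TFLOPS/Watt column is simply FP16 throughput divided by TDP. A short Python sketch recomputing it (the `gpus` dict and `tflops_per_watt` helper are ours; specs are copied from the table above, rounded to two decimals as in the table):

```python
# (TDP in watts, FP16 TFLOPS) per GPU, from the table above.
gpus = {
    "V100 SXM2": (300, 112),
    "RTX 3090": (350, 142),
    "A100 SXM4": (400, 312),
    "RTX 4090": (450, 330),
    "H100 SXM5": (700, 990),
    "H200 SXM": (700, 990),
}

def tflops_per_watt(tdp_w: float, fp16_tflops: float) -> float:
    """Efficiency as FP16 TFLOPS per watt of TDP, rounded to two decimals."""
    return round(fp16_tflops / tdp_w, 2)

# Rank GPUs from most to least efficient.
for name, (tdp, tflops) in sorted(gpus.items(),
                                  key=lambda kv: -tflops_per_watt(*kv[1])):
    print(f"{name}: {tflops_per_watt(tdp, tflops):.2f} TFLOPS/W")
```

Note that TDP is a board power rating, not measured draw under an AI workload, so this ratio is a comparative ceiling rather than an observed efficiency.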

Sources: Text will go here.

The Data Center Electricity Surge, 2017–2030

In the final version, hovering over any point on the chart will display a tooltip showing the exact TWh value for each region and the running global total for that year. Clicking a region in the legend will toggle that layer on or off, allowing comparison of individual contributors. A slider will let users switch between the IEA base case, headwinds, and lift-off projection scenarios, animating the chart between the three forecast ranges.

TWh · IEA base case projection

Legend: United States · China · Europe · Japan · Rest of World · Projected (post-2024)
Global data center electricity: 200 TWh in 2017 to a projected 945 TWh in 2030.

Sources: Text will go here.

GPU Efficiency vs. Power Draw — The Hardware Landscape

In the final version, hovering over any bubble will display a tooltip with the GPU's full specs — TDP, FP16 TFLOPS, memory bandwidth, VRAM, and release year. Bubbles will be filterable by GPU tier (data center vs. consumer) using toggle buttons above the chart, and an annotation line will appear on hover showing the efficiency-per-watt tradeoff between the selected GPU and the H100 baseline.

X axis: TDP (watts) · Y axis: FP16 TFLOPS per watt · Bubble size: memory bandwidth

Legend: Data center GPU · Consumer / prosumer GPU
H100 and H200 lead on efficiency at 1.41 TFLOPS/W.

Sources: Text will go here.