
Methodology & Sources

How we estimate AI’s water, electricity, CO₂ and GPU-hours. Transparent data sources, assumptions, and update cadence.

Scope

We provide live estimates of selected AI activity and impact metrics. Values are indicative: they are meant to inform public discussion, not to replace primary reporting from operators or regulators.

Data sources

  • Disclosures from data center and cloud operators (efficiency, cooling, PUE/WUE).
  • Academic literature and independent studies on AI compute and resource use.
  • Hardware vendor specifications (TDP, typical utilization) and training/inference workload reports.
  • National and regional grid factors (energy mix, emission factors).
  • Press releases, public filings, and reputable technical blogs.

General approach

We combine public baselines with reasonable assumptions about workload growth, utilization, and efficiency. Where ranges exist, we prefer conservative central values.

Counters refresh server-side at fixed intervals and interpolate client-side at a per-second rate for a live experience. Annual values start at Jan 1 of the current year; daily values at local midnight.
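The snapshot-plus-interpolation pattern described above can be sketched in TypeScript. The `Snapshot` shape and field names below are assumptions for illustration, not the site's actual code:

```typescript
// A server-side snapshot of one counter, refreshed at intervals.
interface Snapshot {
  value: number;         // metric value at snapshotTime (e.g. litres so far this year)
  snapshotTime: number;  // Unix time in ms when the server computed `value`
  ratePerSecond: number; // estimated growth rate used for interpolation
}

// Client-side: interpolate the displayed value at render time `now` (Unix ms),
// clamped so clock skew never shows a value below the snapshot itself.
function interpolate(snap: Snapshot, now: number): number {
  const elapsedSeconds = Math.max(0, (now - snap.snapshotTime) / 1000);
  return snap.value + snap.ratePerSecond * elapsedSeconds;
}
```

The clamp matters in practice: a client clock slightly behind the server would otherwise make a counter briefly run backwards.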

Water

Water estimates include data center cooling water and, when relevant, upstream water for power generation. We aggregate by workload class (training vs inference) and location (when known).

Formula (simplified)

AI water ≈ (data-center WUE × AI electricity) + (power-generation water intensity × AI electricity)

Where site-specific WUE is unknown, we use regional or operator medians.
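To make the arithmetic concrete, here is a minimal sketch of the simplified water formula. All figures (1.8 L/kWh WUE, 2.0 L/kWh generation water intensity, 1 GWh of AI electricity) are hypothetical placeholders, not TheAImeters inputs:

```typescript
// Hypothetical inputs -- illustrative only, not the site's actual data.
const aiElectricityKWh = 1_000_000;   // AI electricity for the period, kWh
const dataCenterWUE = 1.8;            // data-center water use, L per kWh
const powerGenWaterIntensity = 2.0;   // upstream generation water, L per kWh

// AI water ≈ (WUE × electricity) + (generation water intensity × electricity)
const aiWaterLitres =
  dataCenterWUE * aiElectricityKWh +
  powerGenWaterIntensity * aiElectricityKWh; // ≈ 3,800,000 L
```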

See the live water counters

Electricity

Electricity use is derived from compute demand and typical utilization by workload class, adjusted by PUE where applicable.

Formula (simplified)

AI electricity ≈ (IT load × utilization × hours) × PUE

When PUE is unknown, we assume a conservative value based on recent operator disclosures.
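A worked instance of the electricity formula, using hypothetical figures (500 kW IT load, 60% utilization, one day, PUE 1.2) chosen only to show the units lining up:

```typescript
// Hypothetical inputs -- illustrative only.
const itLoadKW = 500;     // average IT (server) load, kW
const utilization = 0.6;  // fraction of capacity doing AI work
const hours = 24;         // period length
const pue = 1.2;          // assumed conservative PUE when undisclosed

// AI electricity ≈ (IT load × utilization × hours) × PUE
const aiElectricityKWh = itLoadKW * utilization * hours * pue; // ≈ 8,640 kWh
```

Note that PUE multiplies the IT energy to account for cooling and other facility overhead, which is why an unknown PUE is replaced with a conservative (higher) value rather than 1.0.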

See the live electricity counters

CO₂

CO₂e is estimated from electricity use and grid emission factors, accounting for regional mixes when available.

Formula (simplified)

AI CO₂e ≈ (AI electricity × grid emission factor)

For multi-region workloads, we apply a weighted average emission factor where data exists.
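The weighted-average emission factor for a multi-region workload can be sketched as follows; the regional shares and factors are hypothetical, not real grid data:

```typescript
// One region's share of a workload and its grid emission factor.
interface RegionShare {
  shareOfLoad: number;    // fraction of AI electricity consumed in this region
  factorKgPerKWh: number; // grid emission factor, kg CO2e per kWh
}

// Weighted average across regions (shares should sum to 1).
function weightedEmissionFactor(regions: RegionShare[]): number {
  return regions.reduce((sum, r) => sum + r.shareOfLoad * r.factorKgPerKWh, 0);
}

const factor = weightedEmissionFactor([
  { shareOfLoad: 0.5, factorKgPerKWh: 0.4 },
  { shareOfLoad: 0.3, factorKgPerKWh: 0.2 },
  { shareOfLoad: 0.2, factorKgPerKWh: 0.6 },
]); // ≈ 0.38 kg CO2e/kWh

// AI CO2e ≈ AI electricity × weighted emission factor
const co2eKg = 10_000 * factor; // ≈ 3,800 kg for 10 MWh
```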

See the live CO₂ counters

GPU-hours

GPU-hours approximate aggregate accelerator time consumed by AI workloads. We combine model counts, training runs, and inference volumes with typical device-hours.

Formula (simplified)

GPU-hours ≈ Σ (number of devices × utilization × hours)

Device mix (e.g., A- and H-series accelerators) and utilization vary; we use cautious medians.
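The sum over workload classes can be sketched as below; the fleet sizes and utilizations are hypothetical medians, not measured figures:

```typescript
// One class of accelerators contributing to the GPU-hours total.
interface Fleet {
  devices: number;     // accelerators in this class
  utilization: number; // fraction of time busy
  hours: number;       // period length
}

// GPU-hours ≈ Σ (devices × utilization × hours) over workload classes.
function gpuHours(fleets: Fleet[]): number {
  return fleets.reduce((sum, f) => sum + f.devices * f.utilization * f.hours, 0);
}

const dailyGpuHours = gpuHours([
  { devices: 10_000, utilization: 0.8, hours: 24 }, // training fleet
  { devices: 50_000, utilization: 0.4, hours: 24 }, // inference fleet
]); // ≈ 672,000 GPU-hours per day
```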

Updates

Server snapshots (via Incremental Static Regeneration, ISR) refresh periodically; client-side counters animate every few seconds. The methodology text is reviewed and updated as new public data emerges.

Limitations

  • Uncertainty: public data is partial; we report indicative estimates rather than exact measurements.
  • System boundaries: some upstream/downstream impacts may be outside scope depending on data availability.
  • Temporal drift: newer disclosures can shift baselines; we aim to update promptly.
  • Comparability: different operators report with different scopes; we harmonize where feasible.

Ethics & transparency

We aim to inform debate with clear, sourced numbers while avoiding sensationalism. We welcome corrections and additional sources.

Contact us with corrections or sources at contact@theaimeters.com.
