Free Sustainability Tool

Free AI Carbon Footprint Calculator

Your AI carbon footprint is the total energy and CO₂ emissions produced by the prompts you send to large language models. This free calculator estimates the kWh and grams of CO₂ generated by your usage of GPT-4, Claude, Gemini, and Llama - per day, per month, and per year.

6 popular models · Live kWh + CO₂ · Real-world equivalents · No signup

Step 1

Pick your model and usage

Flagship multimodal frontier model. Roughly 5x the energy of GPT-3.5.

Average daily prompts you send to this model.

Default 1,000 tokens (~750 words). Energy scales linearly with token volume.

Per query
2.9 Wh · 1.7 g CO₂

Step 2

Your AI footprint

Per day
145.0 Wh
87 g CO₂
Per month
4.35 kWh
2.61 kg CO₂
Per year
52.93 kWh
31.76 kg CO₂

That's equivalent to (per year)

Miles driven (gas car)
79 miles
EPA: 400 g CO₂ per mile in an average passenger vehicle
Phone charges
3,863
EPA: ~8.22 g CO₂ per smartphone charge
Trees needed to offset
1.5 trees
USDA: ~21 kg CO₂ absorbed per mature tree per year
LED bulb hours
5,293 hrs
DOE: 10 W LED uses 0.01 kWh per hour
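The equivalences above are straightforward unit conversions. This sketch reproduces them from the yearly totals using the factors cited on this page (EPA, USDA, DOE); the variable names are illustrative, not part of the calculator.

```python
# Convert an annual AI footprint into the real-world equivalents shown above.
# Factors are the ones cited on this page (EPA, USDA, DOE).
YEARLY_KWH = 52.93
YEARLY_KG_CO2 = 31.76

G_CO2_PER_MILE = 400          # EPA: average gas passenger vehicle
G_CO2_PER_PHONE_CHARGE = 8.22 # EPA: one smartphone charge
KG_CO2_PER_TREE_YEAR = 21     # USDA: mature tree, per year
LED_KWH_PER_HOUR = 0.01       # DOE: 10 W LED bulb

miles = YEARLY_KG_CO2 * 1000 / G_CO2_PER_MILE
charges = YEARLY_KG_CO2 * 1000 / G_CO2_PER_PHONE_CHARGE
trees = YEARLY_KG_CO2 / KG_CO2_PER_TREE_YEAR
led_hours = YEARLY_KWH / LED_KWH_PER_HOUR

print(f"{miles:.0f} miles, {charges:,.0f} charges, "
      f"{trees:.1f} trees, {led_hours:,.0f} LED-hours")
```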

Behind the numbers

How is this calculated?

This calculator uses published per-query energy and CO₂ estimates from peer-reviewed and industry research, scaled by your chosen model, prompts per day, and average tokens per prompt. We use a baseline of 1,000 tokens per query (~750 words) and scale linearly from there - a 2,000-token prompt produces roughly twice the energy of a 1,000-token one.

Per-query baselines

  • GPT-4 / GPT-4o: ~0.0029 kWh, ~1.74 g CO₂ per query
  • GPT-3.5 Turbo: ~0.0006 kWh, ~0.4 g CO₂ per query
  • Claude (Sonnet / Opus): ~0.003 kWh, ~1.8 g CO₂ per query
  • Gemini Pro: ~0.0025 kWh, ~1.5 g CO₂ per query
  • Llama 3 8B (hosted): ~0.001 kWh, ~0.6 g CO₂ per query
  • Llama 3 70B (self-hosted): ~0.005 kWh, ~3.0 g CO₂ per query
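The core math is simple enough to sketch in a few lines. This is an illustrative reimplementation using the baselines listed above, assuming a 30-day month; the dictionary keys are labels of our choosing, not official model identifiers.

```python
# Sketch of the calculator's core math: per-query baseline × prompts/day,
# scaled linearly by tokens relative to the 1,000-token default.
BASELINES = {                    # per 1,000-token query: (kWh, g CO2)
    "gpt-4": (0.0029, 1.74),
    "gpt-3.5-turbo": (0.0006, 0.4),
    "claude": (0.003, 1.8),
    "gemini-pro": (0.0025, 1.5),
    "llama-3-8b": (0.001, 0.6),
    "llama-3-70b": (0.005, 3.0),
}

def footprint(model, prompts_per_day, tokens_per_prompt=1000):
    kwh, g = BASELINES[model]
    scale = tokens_per_prompt / 1000       # linear token scaling
    day_kwh = kwh * scale * prompts_per_day
    day_g = g * scale * prompts_per_day
    return {
        "day": (day_kwh, day_g),           # kWh, g CO2
        "month": (day_kwh * 30, day_g * 30),
        "year": (day_kwh * 365, day_g * 365),
    }
```

With 50 GPT-4 prompts per day at the default 1,000 tokens, this reproduces the figures above: ~0.145 kWh and 87 g CO₂ per day, ~52.93 kWh and ~31.76 kg CO₂ per year.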

Sources

  • Stanford AI Index 2024 - per-query energy comparisons across model classes.
  • MIT Technology Review (December 2023) - the “Making an image with generative AI uses as much energy as charging your phone” coverage of frontier-model inference costs.
  • Sasha Luccioni, Yacine Jernite, Emma Strubell - “Power Hungry Processing” (Hugging Face, 2023).
  • Schwartz et al. - “Green AI” (2019, Allen Institute for AI).
  • EPA Greenhouse Gas Equivalencies Calculator - real-world CO₂ comparisons (miles driven, phone charges).

Real footprints can vary by 2-5x based on data center efficiency (PUE), the carbon intensity of the local grid, request batching, and prompt length. Treat these numbers as a useful relative comparison, not a precise audit. Sources current as of April 2026.

Good news, delivered

AI is also helping cut emissions

AI Wins covers the positive side of AI - including efficiency breakthroughs, climate research, and energy-grid optimization stories. Subscribe to get a daily digest of AI good news.

Read AI Wins

FAQ

Common questions about AI emissions

How much CO2 does ChatGPT produce per query?

A single ChatGPT (GPT-4 / GPT-4o) query is estimated at roughly 1.7 grams of CO2 and 2.9 watt-hours of electricity, based on MIT Technology Review and Hugging Face emissions research. That is about 10 times the energy of a Google search and roughly 5 times that of the older GPT-3.5 model. Estimates vary by data center efficiency, prompt length, and the carbon intensity of the local power grid.

Which AI model has the lowest carbon footprint?

Smaller hosted models like Llama 3 8B and GPT-3.5 Turbo have the lowest per-query footprint, typically under 1 gram of CO2 per request. Among frontier models, Gemini Pro tends to come out slightly ahead because Google's TPU infrastructure runs on a cleaner grid mix. The single biggest carbon-reduction lever is using a smaller model when the task does not need GPT-4 class reasoning.

How can I reduce my AI carbon footprint?

Use smaller models for simple tasks (Haiku, GPT-4o-mini, Llama 8B), cache responses for repeat queries, batch requests, write tighter prompts, and avoid unnecessary streaming or retries. If you self-host, run inference in regions with cleaner grids (Iceland, Quebec, Pacific Northwest) and keep your GPUs well-utilized. Calling a hosted API is usually greener than self-hosting at small scale because providers batch across many users.
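Two of those tactics - routing simple prompts to a small model and caching repeat queries - can be combined in a few lines. This is a hypothetical sketch: the model labels come from this page's baselines, and `call_llm` is a stand-in for your provider's SDK, not a real API.

```python
# Hypothetical sketch: route simple prompts to a small model, cache repeats.
# `call_llm` is a placeholder, not a real provider API.
from functools import lru_cache

SMALL_MODEL = "llama-3-8b"   # ~0.6 g CO2 per query on this page's baselines
LARGE_MODEL = "gpt-4"        # ~1.74 g CO2 per query

def call_llm(model: str, prompt: str) -> str:
    # Placeholder: swap in your provider's SDK call here.
    return f"[{model}] answer to: {prompt[:40]}"

def pick_model(prompt: str) -> str:
    # Crude routing heuristic: long or reasoning-heavy prompts get the big model.
    heavy = any(w in prompt.lower() for w in ("prove", "analyze", "plan"))
    return LARGE_MODEL if heavy or len(prompt) > 2000 else SMALL_MODEL

@lru_cache(maxsize=4096)
def cached_answer(model: str, prompt: str) -> str:
    # Identical (model, prompt) pairs hit the cache instead of the API.
    return call_llm(model, prompt)

def ask(prompt: str) -> str:
    return cached_answer(pick_model(prompt), prompt)
```

The exact routing heuristic matters less than having one: any rule that keeps trivial queries off the frontier model cuts the per-query footprint by the ratio of the two baselines.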

Are these estimates accurate?

These are mid-range estimates pulled from public research (Stanford AI Index, MIT Tech Review, Hugging Face / Sasha Luccioni's work). Real footprints vary by 2-5x depending on data center PUE, grid carbon intensity, request batching, and prompt size. The numbers here are useful for relative comparisons and rough budgeting, but providers do not publish exact per-query emissions and the true value for your specific usage will differ.

Does running AI locally use more or less energy?

For most individuals and small teams, self-hosting uses more energy per query than calling a hosted API. Hosted providers batch requests across thousands of users, which keeps their GPUs at high utilization. A laptop or desktop running a single 70B model query uses the GPU for the same number of seconds, but produces only one answer instead of dozens. Self-hosting becomes greener at scale, when you can fully saturate your hardware.
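The batching effect is easy to see with back-of-envelope arithmetic. The wattage, duration, and batch size below are assumed round numbers for illustration, not measurements of any particular GPU or provider.

```python
# Back-of-envelope: a GPU drawing 400 W for 5 s spends the same energy
# whether it answers one query or a batch of 32. (Assumed numbers.)
GPU_WATTS = 400
SECONDS = 5
energy_wh = GPU_WATTS * SECONDS / 3600   # total energy for the run, in Wh

solo = energy_wh / 1       # self-hosted: one answer per run
batched = energy_wh / 32   # hosted provider batching 32 users per run

print(f"per-query: solo {solo:.3f} Wh vs batched {batched:.3f} Wh")
```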
