AI Carbon Methodology
In 2026, the primary environmental cost of AI has shifted from training to inference, the generation of responses at serving time. Our model uses Watt-hours per Query (Wh/q) to provide real-time impact estimates.
Operational Benchmarks
We categorize LLM tasks into three distinct energy tiers based on parameter count and internal processing steps:
- General Chat (0.3 Wh/q): Standard queries using optimized, small-parameter models like GPT-4o mini.
- Coding & Reasoning (2.5 Wh/q): Tasks involving high context-window utilization and multi-step logic.
- Advanced Reasoning (12.0 Wh/q): Reflects models utilizing internal Chain-of-Thought processing, which generates significantly more internal tokens.
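The tier lookup above can be sketched as a simple table. The tier names and Wh/q values come from the benchmarks; the dictionary and function names are illustrative, not part of the methodology:

```python
# Per-query energy by task tier (Wh/q), per the operational benchmarks.
ENERGY_TIERS_WH_PER_QUERY = {
    "general_chat": 0.3,        # optimized, small-parameter models
    "coding_reasoning": 2.5,    # high context-window use, multi-step logic
    "advanced_reasoning": 12.0, # internal Chain-of-Thought processing
}

def energy_for_queries(tier: str, num_queries: int) -> float:
    """Total IT energy in Wh for num_queries queries of a given tier."""
    return ENERGY_TIERS_WH_PER_QUERY[tier] * num_queries

print(energy_for_queries("coding_reasoning", 100))  # 100 coding queries -> 250.0 Wh
```

This keeps the per-tier constants in one place so the estimates can be revised as benchmarks change.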
Carbon Conversion Logic
We apply a standard PUE (Power Usage Effectiveness) of 1.15 for hyperscale AI data centers, which scales per-query IT energy up to total facility energy. The final carbon output is the product of that facility energy (Wh/q x PUE) and the specific Carbon Intensity of your selected compute region.
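The conversion logic above can be sketched as follows. The PUE of 1.15 and the 12.0 Wh/q advanced-reasoning figure come from this methodology; the regional carbon intensity value is a placeholder assumption, as real values vary by grid:

```python
# Carbon conversion sketch: gCO2e = IT energy (Wh) x PUE x regional intensity.
PUE = 1.15  # Power Usage Effectiveness for hyperscale AI data centers

def carbon_grams(energy_wh: float, intensity_g_per_kwh: float) -> float:
    """Grams of CO2e for a given IT energy draw and regional grid intensity."""
    facility_kwh = energy_wh * PUE / 1000.0  # scale IT energy to facility kWh
    return facility_kwh * intensity_g_per_kwh

# Example: one 12.0 Wh/q advanced-reasoning query in a region
# at an assumed 400 gCO2e/kWh grid intensity.
print(round(carbon_grams(12.0, 400.0), 2))  # -> 5.52 gCO2e
```

Keeping the intensity as a function argument lets the same logic serve any compute region without hard-coding grid data.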