Technology

AI’s Power Demand Was Set to Explode — Then DeepSeek Changed Everything. Or Did It?

Published January 31, 2025

The future of artificial intelligence (AI) was expected to be energy-intensive. As systems like ChatGPT, Gemini, and Claude evolved, their energy demands began to escalate. This surge posed a challenge for power supply, particularly in the U.S., where projections indicated that AI might consume up to 25% of the country’s electricity by 2030. To manage this anticipated demand, plans for new fossil fuel and nuclear power plants were initiated.

Then everything changed with the arrival of DeepSeek, a Chinese AI company that claimed to have developed a far more efficient way to train and run AI models.

DeepSeek’s claimed breakthrough was achieving results comparable to those of its Western competitors while using 10 to 40 times less energy. The revelation sent shockwaves through financial markets, causing utility stocks to plummet and handing chipmakers like Nvidia substantial losses.

Did DeepSeek just pull the plug on AI’s energy crisis?

This raised an important question: have we vastly overestimated AI’s energy requirements? Historically, scaling AI looked like a brute-force exercise: larger models were assumed to demand proportionally more energy. Data centers already account for about 4.4% of U.S. electricity usage, and that share was projected to climb sharply.

Countless analysts had anticipated a significant jump in energy consumption, predicting that AI usage might add hundreds of terawatt-hours to electricity demand, enough to power millions of homes. In response, utilities and major tech companies raced to secure energy contracts and initiate projects for new power plants.

Then came DeepSeek’s announcement. If companies could run AI dramatically more efficiently, the urgent buildout of new power generation might not be needed after all.

DeepSeek claimed to have trained its AI model for just $5.6 million, a fraction of the exorbitant costs reported by other companies like OpenAI, Google, and Anthropic. Although skeptics question the accuracy of DeepSeek’s claims, their models have been generating a buzz in China for some time.

Where Does the Innovation Come From?

When DeepSeek launched DeepSeek-V2 in May 2024, the model delivered high performance at low cost, setting off a price war among prominent Chinese tech firms, including ByteDance and Tencent. While those rivals struggled with profit margins, DeepSeek reportedly remained profitable.

The success lies in DeepSeek's efficiency-first strategy. Rather than relying on numerous high-powered GPUs for AI training, DeepSeek optimized its model to run on older, less powerful Nvidia chips, which are the only ones legally accessible to Chinese companies due to U.S. trade restrictions.

This approach challenged long-standing assumptions about AI training. It suggested that AI could be run at far lower energy cost, rather than on an uncontrolled upward consumption curve.

However, while this presents a potential decline in energy demand per model, it does not imply that overall demand for AI will decrease. In fact, it might lead to the opposite.

A classic paradox

This scenario exemplifies the Jevons paradox, in which efficiency gains in the use of a resource lead to greater overall consumption of it. As AI becomes cheaper, its usage may proliferate, much as adding a lane to a highway can worsen congestion by inviting more traffic.

Because AI scales easily, greater efficiency can broaden access, attracting more users and, in turn, more queries. Computing history offers a precedent: as chips grew dramatically more efficient, total energy use still soared because computers became ubiquitous.
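The Jevons-paradox arithmetic can be made concrete with a back-of-the-envelope sketch. All the numbers below are illustrative assumptions, not figures from DeepSeek or any utility: a hypothetical 10x drop in energy per query still raises total consumption if cheaper AI drives usage up even faster.

```python
def total_energy_wh(queries: float, wh_per_query: float) -> float:
    """Total energy in watt-hours for a given query volume."""
    return queries * wh_per_query

# Baseline (assumed): 1 billion queries at a hypothetical 3 Wh each.
before = total_energy_wh(1e9, 3.0)

# After an assumed 10x efficiency gain, suppose usage grows 20x
# because lower cost broadens access.
after = total_energy_wh(20e9, 0.3)

print(before)  # 3 billion Wh
print(after)   # 6 billion Wh: per-query energy fell 10x, total demand doubled
```

Under these assumptions, efficiency per query improves by an order of magnitude while aggregate demand still doubles, which is exactly the dynamic the paradox describes.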

This uncertainty persists, yet energy companies are steadfast in their commitment to AI-oriented power strategies. Tech giants like Microsoft, Meta, and Oracle continue to build expansive data centers and are not slowing their investments.

Good or Bad News For the Planet?

Even as DeepSeek’s claims raise doubts about demand forecasts, utility companies are still moving forward. Many had planned extensive investments in new power plants and infrastructure based on AI energy-use projections that now look inflated and face reevaluation.

Some analysts have pointed towards an AI bubble, drawing comparisons to the dot-com boom of 25 years ago when telecom firms invested heavily amid unrealistic demand expectations. Today’s utilities, chipmakers, and data developers are witnessing the same rush to accommodate AI growth, with DeepSeek's efficiency disrupting the narrative.

It remains uncertain what the future holds. AI will undoubtedly demand energy; the question is how much. The story of AI’s energy consumption is not ending, only entering a new chapter. DeepSeek may have shaken up the AI power landscape, but whether that marks the start of a transformation or merely a temporary disruption remains to be determined.

AI, energy, DeepSeek, Jevons, market