
Report: ChatGPT consumes less energy than household appliances

A new study by the nonprofit institute Epoch AI challenges the widespread belief that ChatGPT consumes an outsized amount of energy. According to the analysis, OpenAI’s popular chatbot is significantly more energy-efficient than previously thought.

The study found that an average query to ChatGPT using the latest GPT-4o model consumes only about 0.3 watt-hours of energy. That is ten times lower than the previously cited estimate of 3 watt-hours per query, a figure widely quoted in the media and often compared to the energy of ten Google searches.

“Energy consumption is actually insignificant compared to using regular household appliances, heating or cooling a home, or taking a car trip,” explained Joshua Yu, the Epoch AI data analyst who conducted the study. According to him, earlier estimates relied on outdated data and assumed that OpenAI runs its models on older, less efficient chips.
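To put the 0.3 watt-hour figure in perspective, here is a quick back-of-envelope comparison. Only the two per-query figures come from the report; the usage pattern and appliance wattages below are illustrative assumptions, not numbers from the study.

```python
# Back-of-envelope scale comparison for the 0.3 Wh/query estimate.
# The two per-query figures come from the report; everything else
# (queries per day, appliance wattages) is an illustrative assumption.

NEW_ESTIMATE_WH = 0.3   # Epoch AI's estimate for a GPT-4o query
OLD_ESTIMATE_WH = 3.0   # previously cited figure

queries_per_day = 20                                     # hypothetical heavy user
chatgpt_wh_per_day = queries_per_day * NEW_ESTIMATE_WH   # 6.0 Wh

# Rough household reference points (assumed wattages)
led_bulb_w = 10          # typical LED bulb
microwave_w = 1_000      # typical microwave oven

minutes_of_bulb = chatgpt_wh_per_day / led_bulb_w * 60        # ~36 minutes
minutes_of_microwave = chatgpt_wh_per_day / microwave_w * 60  # ~0.4 minutes

print(f"{queries_per_day} queries/day ~ {chatgpt_wh_per_day:.1f} Wh")
print(f"  ~ {minutes_of_bulb:.0f} min of a {led_bulb_w} W LED bulb")
print(f"  ~ {minutes_of_microwave:.1f} min of a {microwave_w} W microwave")
print(f"Old estimate was {OLD_ESTIMATE_WH / NEW_ESTIMATE_WH:.0f}x higher per query")
```

Under these assumptions, even a heavy day of ChatGPT use stays well below the energy of running a small appliance for a few minutes, which is the comparison Yu is drawing.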

The discussion about AI energy consumption remains relevant against the backdrop of the rapid expansion of AI companies’ infrastructure. Just a week ago, more than 100 organizations published an open letter calling on the AI industry and regulators to ensure that new data centers do not deplete natural resources or force utilities to rely on non-renewable energy sources.

This research could become an important argument in the debate over artificial intelligence’s environmental impact, showing that modern AI systems can be more energy-efficient than commonly assumed. However, the industry’s overall environmental footprint remains under close scrutiny from environmentalists and regulators.

Author: AIvengo
I have been working with machine learning and artificial intelligence for five years, and this field never ceases to amaze, inspire, and interest me.
