
xAI supercomputer emits hazardous substances

Elon Musk’s xAI plans to continue using 15 gas turbines to power its “Colossus” supercomputer in Memphis, Tennessee, despite serious environmental concerns. According to documents obtained by The Commercial Appeal, the company has applied to the Shelby County Health Department for permission to operate the turbines continuously from June 2025 to June 2030.

Environmentalists are alarmed because the roughly 20-year-old turbines emit hazardous pollutants, including formaldehyde, in quantities exceeding the EPA’s annual limit of 10 tons for a single source. According to the operating-permit data, each turbine emits 11.51 tons of hazardous substances per year, significantly exceeding permissible standards.

Of particular concern is that about 22,000 people live within five miles of the facility. According to Eric Hilt, a representative of the Southern Environmental Law Center, the turbines have been operating since summer 2024 without public notification or oversight, and the submitted permit applications do not account for these emissions.

“This is another example of a company not being transparent with the community and local authorities,” Hilt stated in an interview with The Commercial Appeal.

The Health Department reported that the permits have not yet been approved, and “there are no established timelines for their approval.” This situation raises important questions about the balance between artificial intelligence technology development and environmental safety, as well as the need for stricter control over technology companies’ activities.

Author: AIvengo
I have been working with machine learning and artificial intelligence for five years, and this field never ceases to amaze, inspire, and interest me.