
How OpenAI turned into corporate evil: the subpoena scandal

Do you know what’s going on in the world of artificial intelligence? While everyone admires OpenAI’s latest achievements, the company is quietly turning into the very corporate evil it supposedly set out to fight. And here’s a fresh example for you – a story that blew up on Twitter.

So, there’s a guy named Nathan Calvin. An ordinary lawyer at Encode, a tiny non-profit with just three people on staff. What they do is try to make the artificial intelligence industry at least a little safer and more transparent. In particular, they promoted a California bill that was supposed to make the AI giants play by fair rules: transparency, model safety, whistleblower protection. Sounds reasonable, right?

But here’s the problem – OpenAI didn’t like it at all. And this is where it gets interesting. The company that positions itself as the savior of humanity decided to play dirty. Nathan suddenly received a subpoena demanding his personal correspondence with California legislators, students, and former OpenAI employees. Everything there is.

And now the funniest part – this subpoena is supposedly tied to OpenAI’s lawsuit against Elon Musk! Yes, the same Musk they accuse of organizing some kind of conspiracy against them in early 2025. Using that case as cover, OpenAI can now intimidate anyone it finds inconvenient. What Nathan’s correspondence about the bill has to do with the Musk case remains a complete mystery. Even the judge couldn’t stand it and criticized OpenAI for abusing the process and applying excessive pressure.

Picture it: three lawyers against a corporate machine with endless resources. This isn’t a legal process – it’s plain intimidation. And OpenAI understands that perfectly. Why argue the merits when you can simply crush your opponent with sheer weight? While the thread racks up 6 million views and goes viral across the internet, the company itself stays silent. Awkward, right?

The irony is that the bill was signed anyway a couple of weeks ago. But OpenAI has taught everyone else a lesson: want to criticize us – get ready for a legal war. So much for the mission of creating safe artificial intelligence for the benefit of all humanity. When a corporation starts strangling the very people trying to make it safer, it’s time to ask who the real threat is.

Author: AIvengo
I have been working with machine learning and artificial intelligence for 5 years. And this field never ceases to amaze, inspire and interest me.
Latest News
UBTech will send Walker S2 robots to serve on China's border for $37 million

Chinese company UBTech has won a $37 million contract and will send its Walker S2 humanoid robots to serve on China's border with Vietnam. The South China Morning Post reports that the robots will interact with tourists and staff, perform logistics operations, inspect cargo and patrol the area. And, characteristically, they can swap their own batteries.

Anthropic accidentally revealed an internal document about Claude's "soul"

Anthropic accidentally revealed the "soul" of its artificial intelligence to a user. And this is not a metaphor – it's a very specific internal document.

Jensen Huang ordered Nvidia employees to use AI everywhere

Jensen Huang has announced a total mobilization around artificial intelligence inside Nvidia. And this is no longer a recommendation – it's a requirement.

AI chatbots generate content that exacerbates eating disorders

A joint study by Stanford University and the Center for Democracy and Technology paints a disturbing picture: AI chatbots pose a serious risk to people with eating disorders. The researchers warn that the models hand out harmful dieting advice, suggest ways to hide the disorder, and generate "inspiring" weight-loss content that makes the problem worse.

OpenAGI released the Lux model that outperforms Google and OpenAI

Startup OpenAGI has released Lux, a model for computer control, and claims it is a breakthrough. According to benchmarks, the model outperforms rivals from Google, OpenAI and Anthropic by a whole generation. It's also faster: about 1 second per step versus 3 seconds for competitors. And roughly 10 times cheaper per token processed.