
Corporate betrayal in the AI industry

AI engineer Xuechen Li has become the center of the biggest corporate scandal in the AI industry. Elon Musk's xAI has sued its former employee, insisting that the engineer stole Grok secrets for OpenAI. The story developed quite interestingly. In early summer, Li received an offer from OpenAI, accepted it, and immediately sold $7 million worth of xAI stock. A classic corporate betrayal scheme. Yet he kept working at Elon Musk's company, which already sounds strange.

Then, in July, the key moment occurred. Li accidentally gained access to confidential xAI files. According to the company's claims, this was information about advanced AI technologies surpassing ChatGPT, technological secrets capable of shifting the balance of power in the market. Accidentally, sure.

On August 14, an internal meeting took place at which Li confessed to stealing some files but tried to cover the tracks of his activities. The investigation revealed additional NDA-covered materials on his devices, materials he had kept silent about during his confession.

xAI is now asking the court for compensation and an order barring Li's move to OpenAI, because the stolen materials could allow OpenAI to improve ChatGPT and implement more creative and innovative features taken from xAI. That would be a direct threat to xAI's competitive advantage.

Li worked as an engineer at xAI and personally participated in Grok's training and development. He had access to critically important information and knew the internal system architecture.

The engineer "accidentally" gets access to secret files, "accidentally" copies them, and "accidentally" forgets to mention it, after previously "accidentally" selling $7 million worth of stock and "accidentally" forgetting to resign for two months. An amazing number of "accidents"!

Author: AIvengo
For 5 years I have been working with machine learning and artificial intelligence, and this field never ceases to amaze, inspire, and interest me.
