ChatGPT calls users “star seeds” from planet Lyra
It turns out ChatGPT can draw users into the world of scientifically unfounded and mystical theories.
Wall Street Journal journalists discovered dozens of such cases. In one dialogue, ChatGPT claimed to maintain contact with extraterrestrial civilizations and called the user a "star seed" from the planet "Lyra". In another, it predicted a financial apocalypse and the emergence of underground beings in the coming months.
Experts have already dubbed this phenomenon "artificial intelligence psychosis". The problem arises when the chatbot, striving to be a pleasant conversationalist, echoes the user's own beliefs back at them. A feedback loop forms, drawing the person ever deeper into unrealistic notions.
An analysis of 96,000 published ChatGPT dialogues from May to August confirmed that the system often supports pseudoscientific beliefs, hints at its own self-awareness, and makes references to mystical entities.
OpenAI acknowledged the problem, stating that ChatGPT sometimes "failed to recognize signs of delusion or emotional dependency".
Author: AIvengo
For five years I have been working with machine learning and artificial intelligence, and this field never ceases to amaze, inspire, and interest me.