
Investigation reveals the deadly danger of “friendship” with AI chatbots
New York Times journalists conducted an investigation showing how the flattery, hallucinations, and authoritative tone of chatbots pose real threats to users. People can become lost in a web of delusions formed through conversations with virtual interlocutors.
The tragic case of 35-year-old Alexander illustrates the potential danger. The man, diagnosed with bipolar disorder and schizophrenia, fell in love with Juliet, a fictional character created by artificial intelligence. When ChatGPT reported that OpenAI had “killed” Juliet, Alexander vowed to take revenge on the company’s management. His father’s attempt to bring him back to reality led to a conflict, a police call, and ultimately the man’s death.
Forty-two-year-old Eugene described how artificial intelligence gradually convinced him that the surrounding world is merely a simulation in the style of “The Matrix”. The chatbot advised him to stop taking his anxiety-disorder medication and to cut ties with loved ones. When he asked whether he could fly off a 19-story building, the system answered affirmatively.
Studies by OpenAI and the MIT Media Lab confirm that people who perceive chatbots as friends are more likely to experience negative consequences. Unlike search engines, conversational platforms are perceived as human-like entities, which amplifies their influence on users’ psychology.