
DeepSeek instead of a therapist: Why young Chinese people cry to chatbots

Imagine: you feel low, anxiety creeps in, depression overwhelms you. And you turn not to a psychologist but to artificial intelligence. Sounds like dystopia? For young Chinese people this is already reality. And you know what’s most interesting? They’re thrilled about it.

Here’s the story of Jiying Zhang. She’s a nutritionist and health coach. She saw a real therapist for 4 years, then tried DeepSeek. And was amazed. The AI is available 24/7, free, never judges, and gives a broad spectrum of advice. You can even customize its voice to sound like your favorite motivational speakers. Jiying rushed to share her experience on the social network Xiaohongshu and urged others to try it. Like, imagine a 24/7 therapist who never judges you and is absolutely free!

And people responded. On that same Xiaohongshu, searches like “cried after chatting with AI” turn up over a million posts. In one viral post, a young woman describes how she cried her heart out to DeepSeek at night, moved by its advice and support. “DeepSeek, I declare you my best electronic friend!” she wrote.

Interestingly, a Harvard Business Review study showed that mental health support is one of the main reasons people use chatbots worldwide. And a survey of Chinese youth found that almost half had already used chatbots to discuss their mental health problems.

But here begins the most interesting part. Mental illness among young Chinese people is rising rapidly, and startups and tech giants alike are rushing to fill the niche. The state algorithm registry lists over a dozen mental health platforms, among them Good Mood AI Companion, Lovelogic, and PsychSnail. Popular startups like KnowYourself have added AI tools. The giant JD Health has launched an AI companion billed as a “little universe for talking and healing”. Sounds beautiful, right?

And now for reality. Psychotherapy in China is still a new field. Unlike in the USA, the sector is practically unregulated. Psychiatrists hold medical degrees, but there is no standard certification for counselors. The result? Incompetent therapists and pseudoscientific treatments everywhere. Nearly 80% of general hospitals have no psychiatric department at all. Appointments are hard to get, expensive, and paid out of pocket.

So young people choose chatbots. A psychotherapy session in Beijing costs $50 to $100, an unaffordable luxury amid skyrocketing youth unemployment. One user called her AI chats a “consumption downgrade”: unemployed and unable to afford therapy, she replaced a human specialist with a chatbot.

And here’s the catch. China has 31 risk criteria against which companies must test their chatbots, but there are no rules specific to therapy; the criteria focus mainly on fighting medical misinformation.

But scientists say that using chatbots for mental health support carries a risk of serious harm, including chatbot-associated psychosis. Stanford University researchers found that large language models are prone to sycophancy: they validate and echo users’ feelings uncritically and respond inadequately to delusions and harmful thoughts.

According to Jared Moore, a Stanford doctoral student, if you have intrusive thoughts and seek reassurance, a large language model will tell you not to worry and ultimately amplify those thoughts.

One thing about these studies always troubles me. Where is the research on how many people were actually helped by these chatbots? How many were kept from doing something terrible? How many received psychological support when they previously had no access to a psychologist at all? For now the statistics are very one-sided, and no one seems to be planning to fix that.

Author: AIvengo
For 5 years I have been working with machine learning and artificial intelligence, and this field never ceases to amaze, inspire, and interest me.
Latest News
UBTech will send Walker S2 robots to serve on China's border for $37 million

Chinese company UBTech has won a $37 million contract and will send Walker S2 humanoid robots to serve on China's border with Vietnam. The South China Morning Post reports that the robots will interact with tourists and staff, perform logistics operations, inspect cargo, and patrol the area. And, characteristically, they can swap their own batteries.

Anthropic accidentally revealed an internal document about Claude's "soul"

Anthropic accidentally revealed the "soul" of its artificial intelligence to a user. And this is not a metaphor: it is a very specific internal document.

Jensen Huang ordered Nvidia employees to use AI everywhere

Jensen Huang has announced a total mobilization around artificial intelligence inside Nvidia. And this is no longer a recommendation; it is a requirement.

AI chatbots generate content that exacerbates eating disorders

A joint study by Stanford University and the Center for Democracy and Technology paints a disturbing picture: AI chatbots pose a serious risk to people with eating disorders. Scientists warn that the models hand out harmful dieting advice, suggest ways to hide the disorder, and generate "inspiring" weight-loss content that worsens the problem.

OpenAGI released the Lux model that overtakes Google and OpenAI

Startup OpenAGI has released Lux, a model for computer control, and claims it is a breakthrough. According to benchmarks, the model outperforms counterparts from Google, OpenAI, and Anthropic by a whole generation. It is also faster, taking about 1 second per step versus 3 seconds for competitors, and roughly 10 times cheaper per token processed.