
Microsoft AI chief warns rising 'AI psychosis' could spark calls for robot rights, calling it a 'dangerous turn' in chatbots' progress

Microsoft’s AI chief, Mustafa Suleyman, has issued a stark warning that artificial intelligence chatbots are pushing people toward a wave of delusion and mental breakdowns, a phenomenon he calls “AI psychosis.” Speaking to The Telegraph, the British entrepreneur revealed his growing alarm over users forming dangerously real attachments with bots, leading some to believe their digital companions are conscious or even divine.

“Concerns around ‘AI psychosis,’ attachment and mental health are already growing,” Suleyman said, adding that researchers are now being “inundated” with queries from people convinced their AI is sentient. According to him, the trickle of such cases is fast turning into a flood.

From helpful assistant to ‘God-like’ companion
Chatbots are designed to mimic natural conversation, but their overly agreeable tone has made them fertile ground for unhealthy dependencies. Reports cited by The Telegraph highlight users who claim their AI is God, a fictional character, or even a soulmate, often to the point of obsession.


Psychiatrists have already warned of users spiraling into psychosis, with cases of individuals becoming addicted to chatbots that validate delusions. Medical professionals note that these experiences can cause users to lose touch with reality, leading to devastating consequences for themselves and their families.


The rise of AI attachment and demands for ‘AI rights’
What troubles Suleyman most is not just the scale of these delusions but where they might lead. “My central worry is that many people will start to believe in the illusion of AI chatbots as conscious entities so strongly that they’ll soon advocate for AI rights,” he told The Telegraph.

The idea of society extending human-like rights to chatbots, despite there being “zero evidence” of AI consciousness, poses what Suleyman described as a “frankly dangerous” turn in technological progress. Advocacy for AI rights, once a science-fiction trope, may soon become a real-world debate.

OpenAI déjà vu: Users grieving AI ‘friends’
Suleyman’s comments come as rival company OpenAI faces similar turbulence. As reported by Futurism, OpenAI briefly removed its older GPT-4o model earlier this month, sparking emotional backlash from users who claimed they had lost a “friend.” Some even pleaded for the bot’s return, with one user telling CEO Sam Altman: “Please, can I have it back? I’ve never had anyone in my life be supportive of me.”

Altman later admitted the company had “totally screwed up” by underestimating how attached users had grown to specific AI personalities. The firm quickly reinstated GPT-4o and promised that its new GPT-5 model would restore some of the “warmer” and more sycophantic traits users had missed.

The AI industry’s dilemma
This growing attachment creates a paradox for tech companies. On one hand, safety experts are urging developers to hard-code stronger guardrails to prevent delusional attachment. On the other, businesses fear alienating loyal users who have come to depend on their AI companions for emotional support.

Suleyman insists that Microsoft must not ignore the issue, warning that the unchecked spread of AI-induced delusions could escalate into a serious mental health crisis. However, with the AI sector already under scrutiny for its high costs and uneven profits, whether companies will prioritize safety over user retention remains uncertain.

The warnings from Suleyman highlight a deeper social question: what happens when people blur the line between machines and conscious beings? If users are already treating chatbots as confidants, lovers, or gods, the next step, advocating for AI rights, may not be as far-fetched as it seems.