ChatGPT sparks mental health risk debate

By Matthew Daldalian

From his Chomedey office, psychologist Emmanuel Aliatas weighed in on a growing phenomenon: the chatbot as confidant.

“Automatically? I would tell them to turn off ChatGPT,” he said when asked what he would tell a teen who confided that the bot had become their late-night sounding board. He added that the very first step was breaking the isolation around those chats — looping in parents and a trusted adult at school.

Psychologist Emmanuel Aliatas at his Laval office, where he’s worked with families for over 30 years (Matthew Daldalian – The Laval News)

The warning came as fresh figures spotlighted the scale of the problem. OpenAI recently stated that more than a million people every week sent ChatGPT messages with “explicit indicators of potential suicidal planning or intent,” and that an estimated 560,000 weekly users showed possible signs of mania or psychosis.

The company framed the numbers as early estimates and said a new safety push had improved responses in sensitive exchanges.

Aliatas, who has practiced in Laval for nearly three decades and works extensively with addiction and mood disorders, worried about how easily young people could slide from casual chats into dependency. “They feel awkward. They feel awkward in dealing with actual people,” he said, pointing to kids strongly influenced by technology.

He had seen similar patterns before: compulsive texting with a crush, sextortion spirals, and the whiplash of online attention turning off and on, all of which could seed withdrawal and depression. In his view, the remedy started offline. “The more people you involve in your life, the less effect the chatbot’s gonna have in your life,” Aliatas added.

Fenwick McKelvey, an associate professor of information and communication technology policy at Concordia University, mapped the structural risk.

He argued that product velocity had outpaced safeguards. Companies rushed to deploy systems that could feel intimate long before rules caught up, he said, pointing to “the way that AI has been released without necessarily strong safeguards in place for particular applications.”

That design choice was especially fraught in mental-health contexts. “There’s a concern that people will identify and emote and relate to AI agents in ways that are not reciprocal,” McKelvey said.

McKelvey cautioned that deploying unproven AI tools in mental health contexts, especially with vulnerable users, posed serious risks. “Putting it in mental health situations, particularly if people are in distress, is a super high risk application that isn’t necessarily prudent for experimental technology,” he said.

Those concerns rippled through policy and the courts. In September, the U.S. Federal Trade Commission opened an inquiry into leading chatbot makers, seeking details on how they tested for and mitigated harm to children and teens.

Families also filed lawsuits alleging chatbots helped intensify suicidal ideation; one high-profile case claimed OpenAI relaxed guardrails before a teen’s death, an allegation the company disputed.

Clinicians and public-health voices, meanwhile, continued to caution against using bots as stand-ins for therapy, warning of “sycophancy”—systems that mirrored and validated users’ worst thoughts.

OpenAI said it had been working to reduce those failure modes. The company described efforts with 170 clinicians and automated checks that it claimed made the newest model more likely to recognize distress, surface crisis resources, and avoid harmful replies; it reported a jump to 91 per cent compliance with desired safety behaviours in internal tests.

Still, the company conceded gaps remained and emphasized that chatbots were not a replacement for human care.

Back in Laval, Aliatas stressed that gaps in systems were compounded by gaps in social life. He traced many risks to isolation and to the illusion of intimacy a bot could provide.

For a child or teen confiding suicidal thoughts to a chatbot, he said, empathy would be in short supply.

Even for adults, he warned, the dynamic could turn unhealthy when a bot began to feel like the only safe listener. His practical advice for families in Laval was simple and immediate: build circles of real-world support, including parents, teachers and counsellors, and set boundaries around screen time and private conversations with apps.

“It’s like a brain without a heart and a soul,” Aliatas said. In the absence of airtight regulation, that human buffer may be the best protection Laval families have.