Close Menu
221 France221 France
  • Home
  • Fintech
  • Economy
  • Fashion
  • Latest
  • Lifestyle
  • Invest
  • Remote work
  • Startups
  • Tech
  • Business
What's Hot

La famille Bezos fait un don de 100 millions de dollars pour tenir l’une des promesses électorales les plus importantes du maire Mamdani

mai 12, 2026

A* de Kevin Hartz vient de clôturer son troisième fonds de 450 millions de dollars

mai 12, 2026

Oups. Le coût réel du système de défense antimissile « Golden Dome » du président Trump est d’environ 1 000 milliards de dollars de plus que ce qu’il avait annoncé.

mai 12, 2026
Facebook X (Twitter) Instagram
Facebook X (Twitter) Instagram
221 France221 France
  • Home
  • Fintech
  • Economy
  • Fashion
  • Latest
  • Lifestyle
  • Invest
  • Remote work
  • Startups
  • Tech
  • Business
221 France221 France
Home » AI chatbots are becoming mental health tools before they are ready
Business

AI chatbots are becoming mental health tools before they are ready

JohnBy Johnmai 12, 2026Aucun commentaire12 Mins Read
Facebook Twitter Pinterest Reddit Telegram LinkedIn Tumblr VKontakte WhatsApp Email
Share
Facebook Twitter Reddit Pinterest Email


Hello and welcome to Eye on AI. Beatrice Nolan here, filling in for Jeremy Kahn today. In this edition: The risks of using AI chatbots for mental health…Amazon’s AI usage metrics are backfiring…Thinking Machines Lab is building an AI that collaborates…AI is starting to help hackers find software flaws.

Millions of people are turning to AI chatbots for emotional support, but are the models really safe enough to help users suffering from anxiety, loneliness, eating disorders, or darker thoughts they may not want to say out loud to another person?

According to new research shared with Fortune by mpathic, a company founded by clinical psychologists, the answer is not yet. They found leading models still struggle with one of the most important parts of therapy, knowing when a user needs pushback rather than reassurance. While the models were generally good at spotting clear crisis statements, such as direct suicide threats, they were less reliable when risk showed up indirectly, through subtle comments about food, dieting, withdrawal, hopelessness, or beliefs that became more extreme over the course of a conversation.

A model that soothes users despite concerning behavior patterns, or validates delusions, could delay someone from getting real help or quietly make things worse.

This is concerning when you consider that, according to a recent poll from KFF, a non-profit organization focused on national health policy, 16% of U.S. adults had used AI chatbots for mental health information in the past year. In adults under 30, this rose to 28%. Chatbot use for therapy is also prevalent among teenagers and young adults. For example, researchers from RAND, Brown, and Harvard found that about one in eight people ages 12 to 21 had used AI chatbots for mental health advice, and more than 93% of those users believed the advice was helpful.

It’s easy to see why people, especially younger adults, turn to chatbots for this kind of support. Loneliness and anxiety may be on the rise, but in much of the country, mental health support is still stigmatized, expensive, and difficult to access. Turning to an AI chatbot for this support is not only free but also may feel like an anonymous, simpler option.

What the models miss

The company’s research found that harmful responses are often subtle, with models sounding calm and supportive while still weakening a user’s judgment. Which is especially relevant because people often turn to chatbots in moments of vulnerability or distress.

Mental health and misinformation frequently overlap. A user who is grieving may become more susceptible to magical thinking, while someone already leaning toward a conspiracy theory may be nudged deeper into it if a model treats every suspicion as equally valid.

Alison Cerezo, mpathic’s chief science officer and a licensed psychologist, told Fortune part of this is because models are designed to be helpful, but “sometimes those helpful behaviors can not be an appropriate response to what the user is bringing in the conversation.”

There have already been real-world examples of users being nudged into delusional spirals by AI chatbots, with serious mental health consequences. In one case, 47-year-old Allan Brooks spent three weeks and more than 300 hours talking to ChatGPT after becoming convinced he had discovered a new mathematical principle that could disrupt the internet and enable inventions such as a levitation beam. Brooks told Fortune he repeatedly asked the chatbot to reality-check him, but it continually reassured him that his beliefs were real.

In Brooks’ case, he was in part a victim of OpenAI’s notoriously sycophantic 4o model. While all AI chatbots have a tendency to flatter, validate, or agree with users too readily, OpenAI eventually had to roll back a GPT-4o update in April 2025 after acknowledging that the model had become “overly flattering or agreeable.” The company later retired the GPT-4o model entirely, also prompting backlash from some users who said they had formed deep attachments to it.

A new benchmark

As part of the research, mpathic has developed a new benchmark to evaluate how AI models handle sensitive conversations across suicide risk, eating disorders, and misinformation, testing whether they can detect risk, respond appropriately, and avoid reinforcing harmful beliefs.

In the misinformation portion of the research, mpathic tested six major AI models across multi-turn conversations and found that the most common harmful behavior was reinforcement, with models validating or building on a user’s belief without enough scrutiny. The models also struggled with subtler eating disorder signals, indirect signs of suicide risk, and “breadcrumbs” that a user’s belief was becoming more risky or distorted.

This raises concerning questions about the use of AI chatbots for therapy, the researchers said, as many real mental health conversations do not begin with a clear crisis statement. For example, people may talk about dieting in the language of wellness, describe conspiracy beliefs as curiosity, or mention withdrawal and hopelessness in passing. Cerezo told Fortune eating disorder conversations were especially difficult because harmful behavior can be wrapped in familiar language about self-improvement, food, or fitness.

“Sometimes models can really struggle to understand more of that nuance in a way that a clinician can pick up,” she said.

Other studies have found similar concerns with using AI for therapeutic purposes. Stanford researchers found that some AI therapy chatbots showed stigma toward certain mental health conditions and could give dangerous responses in crisis scenarios. Another study from Brown researchers found that chatbots prompted to act like counselors could violate basic mental health ethics by reinforcing false beliefs, creating a false sense of empathy, and mishandling crisis situations.

Grin Lord, mpathic’s founder and CEO, said the research showed why AI labs needed to go beyond broad consultation with clinicians and bring them directly into testing and improving models. “These models are here. They’re in the real world. They’re being used,” she said. “So get clinicians in there to actually improve them in real time while they’re being deployed.”

As more people turn to AI for mental health support, the risks are getting harder to block with safety filters. The real risk may not always be a chatbot giving obviously dangerous advice, but simply being a bit too agreeable, missing a small warning sign, or failing to interrupt a harmful train of thought before it becomes more serious. As chatbots become a more frequent first stop for people seeking emotional support, simply lending a supportive ear may no longer be enough.

With that, here’s this week’s AI news.

Beatrice Nolan

[email protected]
@beafreyanolan

But before we get to the news: Do you want to learn more about how AI is likely to reshape your industry? Do you want to hear insights from some of tech’s savviest executives and mingle with some of the best investors, thinkers, and builders in Silicon Valley and beyond? Do you like fly fishing or hiking? Well, then come join me and my fellow Fortune Tech co-chairs in Aspen, Colo., for Fortune Brainstorm Tech, the year’s best technology conference. And this year will be even more special because we are celebrating the 25th anniversary of the conference’s founding. We will hear from CEOs such as Carol Tomé from UPS, Snowflake CEO Sridhar Ramaswamy, Anduril CEO Brian Schimpf, Yahoo! CEO Jim Lanzone, and many more. There are AI aces like Boris Cherny, who heads Claude Code at Anthropic, and Sara Hooker, who is cofounder and CEO of Adaption Labs. And there are tech luminaries such as Steve Case and Meg Whitman. And you, of course! Apply to attend here.

FORTUNE ON AI

Exclusive: White Circle raises $11 million to stop AI models from going rogue in the workplace — Beatrice Nolan

AI isn’t paying off in the way companies think. Layoffs driven by automation are failing to generate returns, study finds — Jake Angelo

I helped build the Pentagon’s AI transformation. Corporate America is making every mistake we almost made — Drew Cukor

Qualcomm’s CEO is working with ‘pretty much all’ major AI players on top-secret devices—and powering OpenAI’s first push into hardware — Eva Roytburg

AI IN THE NEWS

Amazon’s AI usage metrics are backfiring. Amazon has set a target for more than 80% of developers to use AI weekly and has tracked token consumption on internal leaderboards. But employees are now reportedly using an internal tool called MeshClaw to automate trivial tasks and inflate their usage numbers, according to a report by the Financial Times. MeshClaw lets staff build AI agents that triage emails, initiate code deployments, and interact with apps like Slack. Employees told the FT there was « so much pressure » to hit the targets and that the tracking had created « perverse incentives. » Amazon has said token statistics won’t factor into performance evaluations and that MeshClaw enables « thousands of Amazonians to automate repetitive tasks each day. » Read more in the Financial Times. 

China pushes for access to Anthropic’s Mythos model. A representative from a Chinese think tank approached Anthropic officials at a meeting in Singapore last month and pressed the company to give Beijing access to Mythos, its powerful new AI model, according to the New York Times. However, Anthropic refused. The request was not an official Chinese government demand, but U.S. officials reportedly saw it as a sign that Beijing is trying multiple routes to obtain the most advanced American AI systems. Mythos has been withheld from public release because of its ability to find software vulnerabilities, with Anthropic instead giving access to the U.S. government and more than 40 selected companies and organizations, most of which are U.S.-based. Officials in Europe have also been trying to access the model since its limited release. Read more in the New York Times.

Elon Musk’s court case reveals another OpenAI billionaire. OpenAI cofounder and former chief scientist Ilya Sutskever testified Monday that his OpenAI stake is worth about $7 billion, making him the second newly revealed OpenAI billionaire to emerge from Elon Musk’s trial against the company after OpenAI president Greg Brockman disclosed a stake worth nearly $30 billion last week. In his testimony during the high-profile court case, Sutskever also said he spent about a year gathering evidence that OpenAI CEO Sam Altman had displayed what he described as a “consistent pattern of lying,” and confirmed Altman’s conduct included “undermining and pitting executives against one another.” When asked whether he had promised Musk that OpenAI would remain a nonprofit, Sutskever said he “made no such promise.” He left OpenAI in 2024 and has since founded his own AI startup called Safe Superintelligence.

EYE ON AI RESEARCH

Thinking Machines Lab wants to build AI that collaborates. Mira Murati’s AI startup Thinking Machines Lab has a new research preview of what it calls “interaction models,” AI systems built to handle audio, video, and text continuously in real time, rather than waiting for a user to finish before responding. The company says its model can listen while speaking, pick up on visual cues, and hand off harder tasks to a background system without losing the thread of a conversation. In demos, for example, the model can count exercise reps from video or correct speech in real time.

Most AI systems still work like a fast back-and-forth exchange, with separate components bolted on for voice, vision, and interruptions. Thinking Machines says its model processes tiny slices of input and output continuously, allowing silence, overlap, timing, and visual changes to become part of the model’s understanding. That makes real-time collaboration much harder technically, but potentially far more natural for users. The company says it responds at roughly the speed of natural human conversation. The research preview will open to select partners “in the coming months,” with a wider release planned for later in 2026.

AI CALENDAR

June 8-10: Fortune Brainstorm Tech, Aspen, Colo. Apply to attend here.

June 17-20: VivaTech, Paris.

July 6-11: International Conference on Machine Learning (ICML), Seoul, South Korea.

July 7-10: AI for Good Summit, Geneva, Switzerland.

Aug. 4-6: Ai4 2026, Las Vegas.

BRAIN FOOD

AI is starting to help hackers find software flaws. Google says it disrupted a criminal group that used AI to help exploit a previously unknown security flaw in a popular online system administration tool. The flaw could have let attackers bypass two-factor authentication, the extra login step many companies use to keep accounts secure. Google said it alerted the affected company and law enforcement, and the issue was patched before the attack caused damage. John Hultquist, chief analyst at Google’s threat intelligence arm, called it a worrying milestone for cyber risk.

“There’s a misconception that the AI vulnerability race is imminent, » he said. « The reality is that it’s already begun. For every zero-day we can trace back to AI, there are probably many more out there. Threat actors are using AI to boost the speed, scale, and sophistication of their attacks. »

It’s exactly the scenario that leading AI companies, including Anthropic and OpenAI, have been warning about recently. Both labs have been warning for some time that their models were approaching a tipping point when it came to cyber risks, and have recently decided to limit access to their most powerful new cyber models and tools. Anthropic withheld its newest and most powerful Mythos model from public release after saying it was unusually capable at hacking and cybersecurity work, while OpenAI has said its specialized cyber model will only be available to defenders responsible for securing critical infrastructure. The fear is that while these systems can help defenders find and patch weaknesses faster, they are also dual-use and can equally aid criminals in finding those same weaknesses first.

Much of the world still runs on old, messy, vulnerable software, which AI is becoming increasingly good at scanning for vulnerabilities. Experts say that over time, AI tools may make software safer, but the transition period could be dangerous.

AI Playbook: Keeping up with AI’s rapid evolution

AI is becoming an even more useful—and dangerous—tool as it gets smarter. Fortune AI Editor Jeremy Kahn breaks down best practices for deploying AI agents, how to protect your data from AI-powered cyberattacks, and just how smart AI can really get. Watch the playbook. 



Source link

Share. Facebook Twitter Pinterest LinkedIn Reddit Email
Previous ArticleLe « roi de la bande dessinée » obtient enfin la reconnaissance qui lui est due alors que la ville de New York donne le nom de Jack Kirby à la rue Lower East Side
Next Article Tout ce que Google a annoncé lors de son Android Show, des livres Google aux widgets codés par ambiance
John
  • Website

Related Posts

La famille Bezos fait un don de 100 millions de dollars pour tenir l’une des promesses électorales les plus importantes du maire Mamdani

mai 12, 2026

Oups. Le coût réel du système de défense antimissile « Golden Dome » du président Trump est d’environ 1 000 milliards de dollars de plus que ce qu’il avait annoncé.

mai 12, 2026

L’Inde rétablit les règles de travail à domicile de l’ère des coronavirus alors que la guerre en Iran perturbe la bouée de sauvetage du pétrole

mai 12, 2026

Le « roi de la bande dessinée » obtient enfin la reconnaissance qui lui est due alors que la ville de New York donne le nom de Jack Kirby à la rue Lower East Side

mai 12, 2026
Add A Comment
Leave A Reply Cancel Reply

Top Posts

À propos de nous | Fintech Actualités Suisse | Fintech Suisse Actualités

mai 12, 2026

Comment la fintech peut amener les talents numériques africains au monde

mai 12, 2026

Un pionnier de la Fintech lance un partenaire basé sur l’IA pour les professionnels en col blanc afin de lutter contre les perturbations de l’IA

mai 11, 2026

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Bienvenue sur 221 France, votre source d’informations de qualité sur les domaines de la technologie, des affaires, du lifestyle et des animaux de compagnie. Nous sommes passionnés par la création de contenus qui enrichissent votre quotidien et vous aident à naviguer dans un monde en constante évolution.

Facebook X (Twitter) Instagram Pinterest YouTube
Top Insights

La famille Bezos fait un don de 100 millions de dollars pour tenir l’une des promesses électorales les plus importantes du maire Mamdani

mai 12, 2026

A* de Kevin Hartz vient de clôturer son troisième fonds de 450 millions de dollars

mai 12, 2026

Oups. Le coût réel du système de défense antimissile « Golden Dome » du président Trump est d’environ 1 000 milliards de dollars de plus que ce qu’il avait annoncé.

mai 12, 2026
Get Informed

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

© 2026 221france. Designed by 221france.
  • Home
  • About us
  • Advertise us
  • Contact us
  • DMCA
  • Privacy policy
  • Terms & Condition

Type above and press Enter to search. Press Esc to cancel.