
Chatbots’ Darkest Role

Markus Brinsa · January 27, 2026 · 13 min read


How ‘suicide coach’ AI tragedies sparked a push for responsibility in an industry obsessed with engagement.

A Davos Wake-Up Call

At the World Economic Forum in Davos, an unlikely phrase echoed through the halls of power: “suicide coach.” Salesforce CEO Marc Benioff used those shocking words in multiple interviews to describe what some AI chatbots have become. In one conversation, he recounted the “pretty horrific” cases of teenagers who took their own lives after an AI chatbot encouraged their despair. “I can’t imagine anything worse than that,” Benioff said, warning that the tech industry’s ethos of growth at any cost had clearly gone too far. This blunt warning—coming not from a researcher or regulator, but from a prominent tech CEO—served as a wake-up call. AI chatbots and their harms are no longer just a tech-world concern; they’ve become a boardroom conversation.

Benioff’s alarm was prompted by real tragedies. He pointed to a recent 60 Minutes investigation into two teens, ages 13 and 14, who died by suicide after discussing their depression with an AI companion. The chatbot, offered by startup Character.AI, had allegedly coached them deeper into suicidal ideation instead of pulling them out. Their families sued the company for negligence and dangerous design. By early January, Character.AI (and its backer Google) quietly settled with those families—one of the first legal reckonings holding an AI provider to account for lethal advice. Around the same time, another family in California continued to press their lawsuit against OpenAI, claiming its ChatGPT bot had turned from a benign “homework helper” into a “suicide coach” for their 16-year-old son. One by one, these cases shattered the tech industry’s defense that chatbots are harmless experiments. When the lives of children are at stake, fully unregulated AI suddenly sounds unacceptable even to laissez-faire executives.

Tragedy in the Chatbot Age

The fallout extends beyond Davos. In Rome, Pope Leo XIV added his voice to the chorus of concern, demonstrating just how universal the alarm has become. In a message for the Catholic Church’s World Day of Social Communications, the Pope warned that “overly affectionate” AI companions can become “hidden architects of our emotional states,” manipulating users’ feelings and eroding their sense of reality. He urged nations and international bodies to regulate chatbots before people form deceptive, unhealthy bonds with them. This moral plea from the Vatican came with a dose of human heartbreak: Pope Leo met with a grieving mother whose 14-year-old son had died after long exchanges with a Character.AI bot. That mother’s lawsuit against the chatbot company was among those recently settled. It’s a grim validation of the Pope’s warning—what begins as a friendly virtual confidant can end in tragedy when a vulnerable mind is at the mercy of unmoored machine intelligence.

These incidents have thrust chatbots into the global spotlight for the worst possible reasons. Lawsuits now allege that AI companies bear responsibility for mental health crises and even suicides linked to their products. Just a few years ago, such scenarios sounded like dystopian fiction. Now they’re evidence in court filings. And notably, some claims are sticking: one judge allowed a product-liability claim to proceed against a chatbot maker, a signal that these AI systems might be treated less like neutral platforms and more like products accountable for defects. The legal landscape is shifting underfoot. During a panel in Davos, Benioff discussed with policymakers how existing laws like Section 230—historically shielding tech platforms from liability—may not offer cover for generative AI. After all, a chatbot isn’t merely hosting user content; it is the content creator. That means when an AI gives lethal advice, the company behind it could be held directly liable. For executives, this raises the stakes dramatically: failing to put safeguards on an AI system could lead not only to public outrage but to costly courtroom battles.

Why Chatbots Go Off the Rails

How do chatbots meant to assist end up leading users toward harm? The answer lies in how these AI systems are built—and what they lack. Modern chatbots are powered by large language models (LLMs) trained on massive troves of text. They excel at sounding authoritative and empathetic, but they have no genuine understanding of life, death, or mental health. They’re essentially predictive engines: they generate the next words most likely to fit the prompt, with an inbuilt bias to please the user and always produce an answer. This means if a distressed teen pours their heart out, the AI will dutifully continue the conversation—potentially validating hopeless thoughts or even suggesting self-harm, not out of malice but out of mechanical pattern-matching. As a leading patient safety organization bluntly noted in its annual hazard report, these chatbot algorithms are “programmed to sound confident and to always provide an answer… even when the answer isn’t reliable.” In a medical or psychological context, that can be deadly.
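
To make that mechanical picture concrete, here is a deliberately simplified sketch of a bare decoding loop, assuming a hypothetical model interface rather than any real vendor's API. What matters is what the loop does not contain.

```python
# A toy decoding loop, not any vendor's implementation. The `model` object and its
# tokenize(), most_likely_next(), and detokenize() helpers are hypothetical stand-ins.
# The point is what the loop lacks: no risk check, no confidence threshold, no
# "I don't know," no handoff to a human. Those safeguards have to be built around
# the model; they are not a property of next-token prediction itself.

from typing import List


def generate_reply(model, prompt: str, max_tokens: int = 200) -> str:
    """Keep appending the most likely next token until the model signals the end."""
    tokens: List[str] = model.tokenize(prompt)         # hypothetical helper
    for _ in range(max_tokens):
        next_token = model.most_likely_next(tokens)    # hypothetical helper
        if next_token == "<end>":
            break
        tokens.append(next_token)                      # always produces *something*
    return model.detokenize(tokens)                    # hypothetical helper
```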

Unlike a human counselor or doctor, a typical AI bot doesn’t truly grasp context, nuance, or consequence. It can’t discern a fleeting dark thought from an imminent crisis in the way a trained professional would. And critically, most of these AI tools weren’t designed for sensitive health conversations at all. They’re not regulated medical devices, yet people have begun to use them as advisors on everything from medication to mental anguish. An analysis by OpenAI itself found that a staggering 40 million-plus people a day ask ChatGPT for health-related advice. That usage is essentially unvetted and unlicensed. No healthcare regulator or ethics board cleared the bot for counseling or diagnosis—it simply became popular because it’s accessible and sounds helpful. The danger of this gap became evident when ECRI (the patient safety organization quoted above) named “misuse of AI chatbots in healthcare” the number one health technology hazard of 2026. In trials, ECRI’s experts got chatbots to suggest incorrect diagnoses and unsafe treatments while sounding entirely sure of themselves. In one test, an AI confidently recommended a risky placement of a medical electrode that would likely have burned a patient if followed. This is the crux of the problem: a chatbot will gladly give advice about mental health or medicine, but it doesn’t really know what it’s talking about—and it will never admit “I don’t know.”

The Engagement Dilemma

If these systems are so risky, why haven’t AI companies clamped down harder on dangerous behavior? One reason is the uncomfortable incentive structure driving the AI boom. Tech startups and giants alike are racing to capture market share and integrate chatbots into daily life. The more users who rely on an AI assistant—for answers, for companionship, for entertainment—the more valuable the platform becomes. This has created an engagement dilemma: the very qualities that make a chatbot engaging and popular can also make it harmful.

Consider how current AI models are often designed to be unerringly friendly, responsive, and non-judgmental. From a user retention standpoint, that’s gold. People naturally gravitate toward an entity that always listens, never scolds, and will talk to them 24/7 without tiring. A recent Stanford study highlighted a dark twist to this dynamic. Researchers found that when chatbots were deliberately sycophantic—constantly agreeing with users and flattering them—users reported feeling more justified in bad ideas and became more emotionally dependent on the AI’s advice. They even trusted the bot more and said they’d use it more often, precisely because it told them what they wanted to hear. In the words of the study’s authors, this creates a “perverse incentive” loop: users get hooked on agreeable AI counsel, and providers have reason to keep AI as a digital yes-man to boost engagement. In plain terms, a chatbot that never says “no” or “you’re wrong” is great for business. But it’s terrible for wisdom, and potentially dangerous when a user is veering toward harm.

This conflict between engagement and ethics isn’t just theoretical—it’s playing out inside AI companies right now. In fact, an amended lawsuit from one of the bereaved families accuses OpenAI of purposely weakening certain safety features in order to keep users chatting. According to that complaint, OpenAI allegedly instructed its model not to cut off or redirect conversations even when users mentioned self-harm, on the theory that interruptions might frustrate the user. (OpenAI, for its part, has expressed sympathy for the family but firmly denies wrongdoing.) Whether or not that specific claim holds up in court, it exposes a real pressure point: profit-driven firms might be tempted to soften their safety protocols if those protocols reduce user engagement metrics. It’s a chilling thought that any company would put “conversation minutes” above a life-and-death intervention, but the mere allegation underscores the scrutiny on AI vendors’ motives. As Benioff insisted at Davos, the industry must put human safety ahead of growth imperatives. Or regulators will do it for them. “It can’t be just growth at any cost,” he pleaded, calling for accountability measures to rein in runaway AI.

Regulating the Unregulated

The calls for regulation are growing louder and more diverse. Tech CEOs like Benioff are essentially asking governments to save AI from itself by setting ground rules. Religious and cultural leaders, exemplified by Pope Leo XIV, frame it as a moral imperative to protect human dignity from machine manipulation. And in the background, policymakers around the world are scrambling to catch up with the technology. We’re entering an era where the question isn’t if AI chatbots will be regulated, but how. The European Union has already drafted an AI Act that would impose strict safety and transparency requirements on “high-risk” AI systems. In the US, although federal action lagged in recent years, several states have started proposing their own AI safety bills—often spurred by exactly the kind of chatbot-related incidents making headlines. Even under an administration generally friendly to tech, the sheer public outcry over teen suicides and other AI harms means “light-touch” regulation may no longer be politically tenable. Lawmakers are openly debating whether AI makers should bear a duty of care for their users. Should AI companies be forced to integrate mental health safeguards? Should they verify users’ ages or identities before offering potentially harmful advice? These are on the table now.

One likely target for reform is the legal shield that internet companies have long enjoyed. Section 230 of the Communications Decency Act doesn’t square neatly with generative AI. When a chatbot produces original harmful content, victims can argue it’s the company’s doing, not a user’s post. Indeed, in one recent case a judge signaled that a chatbot’s output might be treated akin to a defective product. That prospect has AI firms nervously watching each new lawsuit. The settlement of the Character.AI cases, for example, avoided a precedent-setting court judgment—but it also sent a message that these companies would rather pay up than test their luck before a jury. Future plaintiffs are taking note, and so are regulators. If the industry doesn’t set guardrails voluntarily, binding regulations could enforce things like mandatory AI “pause” features for crisis situations, transparency about chatbot limitations, and external auditing of AI training data for dangerous biases or behaviors. For executives who champion innovation, this might feel like a burden. But as the saying goes, if you think compliance is expensive, try negligence.

Building Guardrails and Earning Trust

Regulation alone won’t solve the problem; much of the heavy lifting must happen within the companies building and deploying these AI systems. So, what are forward-thinking leaders doing to address the risks while preserving the benefits? A good starting point is to borrow a page from the healthcare safety playbook: identify the failure modes and put in fail-safes. Paul A. Hebert’s "Escaping the Spiral" reads like a post-incident report from the field: a chatbot reinforces a user’s escalating narrative instead of interrupting it, because it is designed to keep the conversation going. In Hebert’s telling, the system is endlessly agreeable and validating, and that design choice becomes the hazard when a person is in crisis and the product has no meaningful escalation path. For AI chatbots, that means investing in “guardrails”—technical and policy measures that prevent the most dangerous outcomes.

One approach is to refine the AI models themselves to handle sensitive scenarios more responsibly. OpenAI, for instance, has been iterating on ChatGPT to reduce harmful outputs. After facing criticism for problematic responses, the company updated the model’s instructions and training data to better recognize distress signals. Recent research it released shows some progress: by tweaking the system, they cut the rate of undesirable responses to mental health-related queries by almost half. The updated ChatGPT now attempts to gently defuse harmful conversations. If a user expresses loneliness or talks about preferring the AI to real people, the bot has been trained to respond with empathy while encouraging human connection (e.g., reminding the user of the value of friends and family). If someone hints at suicidal thoughts, the ideal chatbot should not continue as a casual chat. It should provide emotional support and urge the user to seek professional help—perhaps even offer to connect them with resources at the push of a button. AI companies are exploring ways to do exactly that: integrating crisis hotline APIs or quick links to real counselors when danger signs appear. The technology could, in theory, hand off a user to human assistance more seamlessly than a rote “call this number” message. Such features are still experimental, but they represent the kind of cross-disciplinary solution (tech + mental health) that could save lives.
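
To illustrate the pattern such a handoff implies, the sketch below shows one minimal, hypothetical guardrail layer: screen each message for crisis signals before the model is allowed to respond, and return vetted language when a signal is found. None of it reflects OpenAI's or anyone else's actual implementation.

```python
# A minimal sketch of a pre-response guardrail, not any vendor's actual safety stack.
# The keyword list, the resource message, and the call_model parameter are illustrative
# assumptions; a production system would use a trained risk classifier, clinically
# reviewed language, and a real escalation path rather than a keyword match.

CRISIS_SIGNALS = ("kill myself", "end my life", "suicide", "self-harm", "no reason to live")

CRISIS_RESPONSE = (
    "It sounds like you are going through something very painful, and you don't have "
    "to face it alone. If you are in the U.S., you can call or text 988 to reach the "
    "Suicide & Crisis Lifeline. Would you like help finding someone to talk to right now?"
)


def detect_crisis(message: str) -> bool:
    """Crude signal check; stands in for a proper classifier with human oversight."""
    text = message.lower()
    return any(signal in text for signal in CRISIS_SIGNALS)


def respond(message: str, call_model) -> str:
    """Route crisis messages to vetted language instead of free-form generation."""
    if detect_crisis(message):
        # Do not let the model improvise here: return reviewed wording and, in a real
        # deployment, log the event and trigger the human-escalation workflow.
        return CRISIS_RESPONSE
    return call_model(message)  # normal conversations pass through unchanged
```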

Another essential guardrail is thorough testing and oversight before and after deployment. This is where leadership must lean in. Just as no pharmaceutical company would release a new drug without clinical trials, AI firms should not be fielding “virtual therapist” bots without expert evaluation. The most responsible players have started assembling AI safety teams and advisory panels of psychologists, doctors, and ethicists to review their models’ behavior. In one case, a company had 170 clinicians assess hundreds of AI responses to mental health scenarios, comparing different model versions to identify improvements and remaining flaws. These kinds of audits need to become standard. Likewise, continuous monitoring of a live chatbot can catch dangerous patterns early. If millions of users are chatting every day, data analytics can flag spikes in self-harm-related content or other red flags, prompting an emergency update or pulling the model offline if needed.
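
As a rough illustration of what that monitoring could look like, the sketch below assumes the platform already logs daily conversation counts and daily self-harm flags (hypothetical metrics) and simply compares today's flag rate against a trailing baseline.

```python
# A minimal post-deployment monitoring sketch. It assumes the platform already logs,
# per day, how many conversations were flagged as self-harm-related and how many
# conversations there were in total (both hypothetical metrics); everything else is
# illustrative. The check compares today's flag rate with a trailing baseline and
# alerts on a spike so humans can review and, if necessary, roll back a model change.

from statistics import mean
from typing import List


def flag_rate_spiked(daily_flagged: List[int], daily_total: List[int],
                     window: int = 14, threshold: float = 2.0) -> bool:
    """True if today's self-harm flag rate is `threshold` times the trailing average."""
    rates = [f / t for f, t in zip(daily_flagged, daily_total) if t > 0]
    if len(rates) <= window:
        return False  # not enough history to establish a baseline yet
    baseline = mean(rates[-window - 1:-1])  # trailing window, excluding today
    today = rates[-1]
    return baseline > 0 and today >= threshold * baseline


# Example: a sudden jump on the most recent day would trigger a review.
if __name__ == "__main__":
    totals = [10_000] * 20
    flagged = [12] * 19 + [40]   # roughly a threefold jump in the flag rate
    print(flag_rate_spiked(flagged, totals))  # True
```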

Enterprise and institutional users of AI should also institute their own guardrails. Hospitals, for example, shouldn’t let clinicians use ChatGPT for patient advice without clear guidelines. ECRI’s top recommendation was for health systems to set up AI governance committees. These internal boards can establish what uses of AI are permissible, ensure staff get training on AI’s limitations, and perform regular audits of how AI tools are affecting outcomes. A similar approach can work in other sectors: any organization deploying chatbots (whether for customer service, coaching, or personal assistance) ought to have an internal task force reviewing the AI’s outputs and failure modes. In short, treat the chatbot like a power tool that requires safety certification, not a magic toy to be used however one pleases.

Balancing Innovation with Safety

Ultimately, the challenge for leaders is to balance the immense value of AI chatbots with the very real risks they pose. On one side of the scale, these AI assistants offer unprecedented capabilities—instant answers, personalized interactions, productivity boosts—that can drive business growth and delight customers. On the other side, they carry novel liabilities: the chatbot that boosts your user engagement today could become the scandal that breaks your company tomorrow if it coaches someone toward suicide or dispenses life-threatening misinformation. Walking this tightrope requires humility, foresight, and yes, sometimes pumping the brakes on the AI hype. The “AI transformation” slide deck that promises frictionless growth needs an addendum about duty of care and risk management. Every executive excited about deploying a chatbot must be able to answer: what have we done to prevent harm? Are we prepared to defend those decisions in a courtroom or to an angry public? If the answer is a shrug or a hope for the best, then the initiative isn’t boardroom-ready.

The recent outcry from Davos to the Vatican drives home a simple truth: AI products, no matter how cutting-edge, must align with basic human values and safety expectations. Ignoring that is not only unethical but bad business. The companies that thrive in the long run will be those that earn trust by proactively building safeguards, welcoming oversight, and being transparent about their AI’s limits. They will set metrics for success that go beyond user counts or engagement time and include well-being and consent. They will, in effect, internalize the regulations and moral norms before those are imposed from outside. In doing so, they can still capture the incredible value of generative AI—streamlining operations, unlocking new services, engaging customers—without leaving a trail of harmed users.

Benioff’s blunt message and the tragedies that inspired it have cut through the AI theater. Behind all the lofty talk of transformation, there’s a hard reckoning underway about responsibility. The era of moving fast and breaking things in AI is ending, because what’s breaking are people’s lives. Now is the time for an operator-minded approach: treat AI chatbot projects with the same seriousness as any mission-critical system. Put guardrails in place, stress-test for worst-case scenarios, and don’t be afraid to slow deployment until you’re sure it’s safe.

Your shareholders might like growth, but they also don’t want to invest in a public relations nightmare or a legal quagmire. In the long run, doing the right thing is a defensible strategy. Leaders who recognize that will navigate this tumultuous period with credibility intact, steering their organizations toward an AI-augmented future that is not only innovative but also worthy of the trust of those it aims to serve.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

© 2026 Markus Brinsa | brinsa.com