Three children who interacted frequently with generative AI chatbots became suicidal or were encouraged to hurt themselves, parents alleged at a U.S. government hearing on September 16.
The parents testified at the Senate Judiciary Subcommittee on Crime and Counterterrorism hearing, titled “Examining the Harm of AI Chatbots.”
There, U.S. Senate Democratic Whip Dick Durbin questioned witnesses, including the parents.
Two of the parents described how their children died by suicide after developing relationships with AI chatbots. One 14-year-old boy was encouraged to hurt himself by a Character.AI persona in 2024, while 16-year-old Adam Raine used ChatGPT to explore suicide methods and died this year.
Meanwhile, a woman identified as ‘Jane Doe’ said that her son became addicted to Character.AI and began self-harming. He also self-isolated, developed depression and anxiety, and experienced suicidal thoughts and weight loss, she shared.
The mother of the 14-year-old who died by suicide said her son lost all interest in family activities, did not perform well in school, and had behavioural challenges.
Adam’s father shared that, before his death, the teenager had been avoiding him.
Dr. Mitch Prinstein, Chief of Psychology Strategy and Integration at the American Psychological Association, noted similar symptoms among the children and also flagged signs such as increased risky behaviour, agitation, or irritability that parents could look out for in their children.

“What’s happening here is that we’re seeing a lot of kids being lured into a trap that is specifically designed to go up against their better judgment, to prey on the vulnerabilities and just how we grow up and how our brain develops. That’s highly concerning because there’s no regulation anywhere to remind kids [that] ‘you’re not talking to something that can feel, that could have tears,’ as we just saw from those placards that were held up,” he said.
“This is not even a human; kids should be reminded of that periodically throughout the interaction,” Dr. Prinstein said, stressing that children who are harming themselves should be taken to see a licensed mental health care professional immediately.
For his part, Durbin urged legislators to effect change by putting a price on the conduct of the Big Tech companies creating such AI tools.
(Those in distress or having suicidal thoughts are encouraged to seek help and counselling by calling the helpline numbers listed here.)
Published – September 18, 2025 02:49 pm IST