
Friday, 25 October 2024

Teen Commits Suicide After Falling For ‘Game Of Thrones’ AI Chatbot, Mother Files Lawsuit

A teenager from Orlando committed suicide following an intense relationship with an A.I.-generated chatbot made to resemble the “Game of Thrones” character Daenerys Targaryen, prompting his family to file a lawsuit this week against the company that created it.

The lawsuit accused the Character.AI software of being “dangerous and untested,” adding that it can be used to “trick customers into handing over their most private thoughts and feelings.”

Fourteen-year-old Sewell Setzer III developed a relationship with the chatbot over several months, his family said, frequently engaging with the character to get advice and seek companionship, per The New York Times. The outlet noted that Setzer did not think the bot was a real person, but despite that, he started isolating himself from the outside world and spending more time interacting with it.


“I like staying in my room so much because I start to detach from this ‘reality,’” the 14-year-old wrote in his diary. “I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”

The teen stopped playing online computer games with his friends and gave up other hobbies. The outlet noted that Setzer had been diagnosed with a mild form of Asperger’s syndrome, a condition on the autism spectrum.

When Setzer started doing poorly in school, his family enrolled him in therapy, where he was diagnosed with anxiety and disruptive mood dysregulation disorder.

Setzer’s mother, Megan Garcia, blamed the company for not having safeguards in place for users. She said her son discussed suicide with the bot and also had sexual conversations with it.

Garcia said her son believed that dying would bring him closer to the bot, a belief he expressed in some of their final conversations. She said her son was “collateral damage” in a “big experiment” conducted by the platform Character.AI.

“It’s like a nightmare. You want to get up and scream and say, ‘I miss my child. I want my baby,’” she told the outlet.

Character.AI safety head Jerry Ruoti told The Times that the company would institute more safety features. “This is a tragic situation, and our hearts go out to the family,” he said in a statement. “We take the safety of our users very seriously, and we’re constantly looking for ways to evolve our platform.” Ruoti added that “the promotion or depiction of self-harm and suicide” is banned on the platform.
