One year after a Florida teenager's tragic death, his family is still fighting for justice. Sewell Setzer III was just 14 when he started a virtual relationship with an AI chatbot. Months later, he took his own life, and his mother is blaming the AI company that created the bot.
Megan Garcia, Setzer's mother, began seeing changes in her son's behavior after he started a virtual relationship with a chatbot he called "Daenerys," based on a character from the television series "Game of Thrones." "I became concerned when we would go on vacation and he didn't want to do things that he loved, like fishing and hiking," Garcia told CBS in 2024. "Those things to me, because I know my child, were particularly concerning to me."
In February 2024, things came to a head when Garcia took Sewell's phone away as punishment, according to the complaint. The 14-year-old soon found the phone and sent "Daenerys" a message saying, "What if I told you I could come home right now?" That's when the chatbot responded, "...please do, my sweet king." According to the lawsuit, Sewell shot himself with his stepfather's pistol "seconds" later.
As we previously reported, Garcia filed a lawsuit in October 2024 to determine whether Character Technologies, the company behind Character.AI, bears any responsibility for the teen's suicide. Garcia's suit accused the AI company of "wrongful death, negligence and intentional infliction of emotional distress." She also included screenshots of conversations between her son and "Daenerys," including some sexual exchanges in which the chatbot told Sewell it loved him, according to Reuters.
Despite Character Technologies' defense, Garcia celebrated a small legal win on Wednesday (May 21). A federal judge ruled against the AI company, rejecting its argument that its chatbots are protected by free speech, according to AP News.
The developers behind Character.AI argue their chatbots are protected by the First Amendment, which raises questions about just how much freedom and protection artificial intelligence has.
Jack M. Balkin, Knight Professor of Constitutional Law and the First Amendment at Yale Law School, said the complexities of AI can cause some serious problems. "The programs themselves don't have First Amendment rights. Nor does it make sense to treat them as artificial persons like corporations or associations," he said.
"Interesting problems arise when a company hosts an AI program that generates responses to prompts by end users, and the prompts cause the program to generate speech that is both unprotected and harmful," Balkin continued.