Expert Explains if AI as ‘Free Speech’ Can Be to Blame for This Florida Boy’s Tragic Death

A Florida mother is suing an AI company after her son committed suicide last year.

One year after a Florida teenager's tragic death, his family is still fighting for justice. Sewell Setzer III was just 14 when he started a virtual relationship with an AI chatbot. Months later, he took his own life, and his mother is blaming the AI company that created the bot.


Megan Garcia, Setzer's mother, began seeing changes in her son's behavior after he started a virtual relationship with a chatbot he called "Daenerys," based on a character from the television series "Game of Thrones." "I became concerned when we would go on vacation and he didn't want to do things that he loved, like fishing and hiking," Garcia told CBS in 2024. "Those things to me, because I know my child, were particularly concerning to me."

In February 2024, things came to a head when Garcia took Sewell's phone away as punishment, according to the complaint. The 14-year-old soon found the phone and sent "Daenerys" a message saying, "What if I told you I could come home right now?" The chatbot responded, "...please do, my sweet king." According to the lawsuit, Sewell shot himself with his stepfather's pistol "seconds" later.

As we previously reported, Garcia filed a lawsuit in October 2024 arguing that Character Technologies, the company behind Character.AI, bears responsibility for the teen's suicide. Garcia's suit accused the AI company of "wrongful death, negligence and intentional infliction of emotional distress." She also included screenshots of conversations between her son and "Daenerys," including some sexual exchanges in which the chatbot told Sewell it loved him, according to Reuters.

Despite Character Technologies' defense, Garcia celebrated a small legal win on Wednesday (May 21). A federal judge ruled against the AI company, which had argued its chatbots are protected by free speech, according to AP News.

The developers behind Character.AI argue their chatbots are protected by the First Amendment, a claim that raises questions about just how much freedom and protection artificial intelligence has.

Jack M. Balkin, a Knight Professor of Constitutional Law and the First Amendment at Yale Law School, said the complexities of AI can cause some serious problems. "The programs themselves don't have First Amendment rights. Nor does it make sense to treat them as artificial persons like corporations or associations," he said.

"Interesting problems arise when a company hosts an AI program that generates responses to prompts by end users, and the prompts cause the program to generate speech that is both unprotected and harmful," Balkin continued.
