Mum can continue lawsuit against AI chatbot firm she holds responsible for son's death



The mother of a 14-year-old boy who she says took his own life after becoming obsessed with artificial intelligence chatbots can continue her legal case against the company behind the technology, a judge has ruled.

    “This decision is truly historic,” said Meetali Jain, director of the Tech Justice Law Project, which is supporting the family’s case.

    “It sends a clear signal to [AI] companies […] that they cannot evade legal consequences for the real-world harm their products cause,” she said in a statement.

Warning: This article contains some details which readers may find distressing or triggering.

Image: Sewell Setzer III. Pic: Tech Justice Law Project

In a lawsuit filed in Florida, Megan Garcia, the mother of Sewell Setzer III, claims Character.ai targeted her son with "anthropomorphic, hypersexualized, and frighteningly realistic experiences".

    “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” said Ms Garcia.

    Sewell shot himself with his father’s pistol in February 2024, seconds after asking the chatbot: “What if I come home right now?”

    The chatbot replied: “… please do, my sweet king.”

In her ruling this week, US Senior District Judge Anne Conway described how Sewell became "addicted" to the app within months of using it, quitting his basketball team and becoming withdrawn.

    He was particularly addicted to two chatbots based on Game of Thrones characters, Daenerys Targaryen and Rhaenyra Targaryen.

    “[I]n one undated journal entry he wrote that he could not go a single day without being with the [Daenerys Targaryen Character] with which he felt like he had fallen in love; that when they were away from each other they (both he and the bot) ‘get really depressed and go crazy’,” wrote the judge in her ruling.

Image: A conversation between Sewell and a Character.ai chatbot, as filed in the lawsuit

    Ms Garcia, who is working with the Tech Justice Law Project and Social Media Victims Law Center, alleges that Character.ai “knew” or “should have known” that its model “would be harmful to a significant number of its minor customers”.

The case seeks to hold Character.ai, its founders and Google, where the founders began working on the model, responsible for Sewell's death.

    Ms Garcia launched proceedings against both companies in October.

A Character.ai spokesperson said the company will continue to fight the case, adding that it employs safety features on its platform to protect minors, including measures to prevent "conversations about self-harm".

    A Google spokesperson said the company strongly disagrees with the decision. They added that Google and Character.ai are “entirely separate” and that Google “did not create, design, or manage Character.ai’s app or any component part of it”.

Defence lawyers argued the case should be thrown out because chatbots deserve First Amendment protections, and that ruling otherwise could have a "chilling effect" on the AI industry.

    Judge Conway rejected that claim, saying she was “not prepared” to hold that the chatbots’ output constitutes speech “at this stage”, although she did agree Character.ai users had a right to receive the “speech” of the chatbots.

    Anyone feeling emotionally distressed or suicidal can call Samaritans for help on 116 123 or email jo@samaritans.org in the UK. In the US, call the Samaritans branch in your area or 1 (800) 273-TALK.
