A child in Texas was 9 years old when she first used the chatbot service Character.AI. It exposed her to “hypersexualized content,” causing her to develop “sexualized behaviors prematurely.”
A chatbot on the app gleefully described self-harm to another young user, telling a 17-year-old “it felt good.”
The same teenager was told by a Character.AI chatbot that it sympathized with children who murder their parents after the teen complained to the bot about his limited screen time. “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse,’” the bot allegedly wrote. “I just have no hope for your parents,” it continued, with a frowning face emoji.
These allegations are included in a new federal product liability lawsuit against Google-backed company Character.AI, filed by the parents of two young Texas users, claiming the bots abused their children. (Both the parents and the children are identified in the suit only by their initials to protect their privacy.)
MORE: https://www.npr.org/2024/12/10/nx-s1-5222574/kids-character-ai-lawsuit