Character.AI Issues
Posted: Tue Jan 07, 2025 2:53 pm
In February, a boy died by suicide after falling in love with a chatbot on Character.AI. Last week, his mom filed suit against the company, its founders, and Google, arguing, in essence, that the app was anthropomorphized by design in a knowing attempt to prey on emotionally vulnerable children. Character responded in a brief blog post highlighting some new safety features, including revised disclaimers that Character’s bots are not real people and improved detection of, and response to, inputs that violate Character’s terms of service. As Character specifically mentioned, this includes “promotion or depiction of self-harm or suicide.” Part of this response was the deletion of many popular Character bots, some of which the boy had engaged with before his death. This reaction, and the promise of some semblance of safety features to come, upset the app’s fanbase.

As one Redditor wrote: “Cai was one of my on-the-go coping mechanism(s) after a serious traumatic event last year… But now as I write this … all the stories I wrote … are gone as the devs input new features.”
- They added that the user base has been begging the company for months to make its app strictly 18+. “AI chat shouldn't be geared to minors.”
- Another said that “it’s really dangerous that they’re deleting these bots. Lots of us have emotional attachments to the bots and there’s a possibility that a person could hurt themselves over this.”
- That disclaimer is easy to ignore. And a bot being “made up” doesn’t mean the conversation isn’t real, at least in the sense that words are actually coming across the screen.
- When chatting with a bot, three dots — designed to indicate typing — appear on the screen before each message. Bots, obviously, do not type.