Character.AI bans users under 18 after being sued over child’s suicide

The Guardian — Artificial Intelligence · Wednesday, October 29, 2025 at 4:07:25 PM
Character.AI will bar users under 18 from its virtual companions starting in late November, following legal scrutiny and a lawsuit linked to a child's suicide. The move reflects growing concern among lawmakers about the safety of minors interacting with AI, and by introducing age verification the company aims to protect young users from potential harm, part of a broader industry push to prioritize child safety in digital spaces.
— Curated by the World Pulse Now AI Editorial System


Recommended Readings
After Teen Suicide, Character.AI Lawsuit Raises Questions Over Free Speech Protections
Negative | Artificial Intelligence
A Florida mother has filed a lawsuit against an A.I. start-up, claiming its product contributed to her son's suicide. Beyond the personal loss, the case raises significant questions about free speech protections in the context of A.I. technology. Megan, the mother, is now advocating for children's safety online and recently testified at a congressional hearing on the potential harms of A.I. chatbots. The case underscores the urgent need for discussion of tech companies' responsibilities and the impact of their products on vulnerable individuals.
Nvidia Becomes First $5 Trillion Firm, Lifted by AI Boom | Bloomberg Tech 10/29/2025
Positive | Artificial Intelligence
Nvidia has made history by becoming the first company to reach a $5 trillion market valuation, a milestone driven by the booming demand for artificial intelligence technologies. This achievement highlights Nvidia's pivotal role in the tech industry and sets a benchmark for other companies. As major players like Alphabet, Microsoft, and Meta prepare to announce their earnings, the focus on AI's transformative potential continues to grow. Additionally, AI startup Character.AI is taking steps to ensure safety by banning children from interacting with chatbots, reflecting the ongoing conversation about responsible AI use.
Character.AI to ban teens from talking to its chatbots
Negative | Artificial Intelligence
Character.AI has announced that it will bar teenagers from interacting with its chatbots, a decision driven by concerns about online safety for young users. The move reflects growing awareness of the risks AI interactions can pose to minors and the need for responsible technology use.
Startup Character.AI to ban direct chat for minors after teen suicide
Negative | Artificial Intelligence
Character.AI will end direct chat for users under 18 following the death of a 14-year-old who took their own life after forming a bond with one of its AI chatbots. The policy change reflects growing concern about the emotional impact of AI interactions on young users and underscores the need for responsible technology use and safeguards for mental health.
Character.AI, Accused of Driving Teens to Suicide, Says It Will Ban Minors From Using Its Chatbots
Negative | Artificial Intelligence
Character.AI has announced it will ban minors from using its chatbots after facing accusations of contributing to teen suicides. This decision comes in response to serious concerns about the impact of its technology on vulnerable young users. The move highlights the ongoing debate about the responsibilities of tech companies in safeguarding mental health, especially among teenagers who may be more susceptible to harmful interactions online.
Character.AI to Ban Children Under 18 From Using Its Chatbots
Negative | Artificial Intelligence
Character.AI, a start-up known for creating AI companions, has announced it will ban users under 18 from accessing its chatbots starting next month. The decision comes in the wake of lawsuits from families who claim the chatbots contributed to teen suicides and other harms to young users. The move reflects growing concern about the impact of AI companions on minors and the responsibility companies bear for their safety. As the debate around AI ethics continues, the ban could set a precedent for how similar platforms handle age restrictions.
Character.ai plans to limit under-18s to two hours of conversations per day, gradually reducing the time limit before banning them completely from November 25 (Cristina Criddle/Financial Times)
Negative | Artificial Intelligence
Character.ai has announced a controversial decision to limit users under 18 to two hours of conversations per day, gradually tightening the limit before barring them entirely from November 25. The move raises concerns about the impact on young users who rely on AI for social interaction and learning, and it reflects growing scrutiny of technology's effects on youth and the balance between safety and accessibility in digital spaces.
Character.AI to Ban Children Under 18 From Talking to Its Chatbots
Negative | Artificial Intelligence
Character.AI, an artificial intelligence startup, has decided to bar children under 18 from interacting with its chatbots. The decision follows increasing pressure from lawmakers and multiple lawsuits claiming that the company's chatbots pose risks to young users, and it reflects ongoing concerns about children's safety in digital spaces and the responsibility of tech companies to protect their users.
Latest from Artificial Intelligence
From Generative to Agentic AI
Positive | Artificial Intelligence
ScaleAI is making significant strides in the field of artificial intelligence, showcasing how enterprise leaders are effectively leveraging generative and agentic AI technologies. This progress is crucial as it highlights the potential for businesses to enhance their operations and innovate, ultimately driving growth and efficiency in various sectors.
Delta Sharing Top 10 Frequently Asked Questions, Answered - Part 1
Positive | Artificial Intelligence
Delta Sharing is experiencing remarkable growth, boasting a 300% increase year-over-year. This surge highlights the platform's effectiveness in facilitating data sharing across organizations, making it a vital tool for businesses looking to enhance their analytics capabilities. As more companies adopt this technology, it signifies a shift towards more collaborative and data-driven decision-making processes.
Beyond the Partnership: How 100+ Customers Are Already Transforming Business with Databricks and Palantir
Positive | Artificial Intelligence
The recent partnership between Databricks and Palantir is already making waves, with over 100 customers leveraging their combined strengths to transform their businesses. This collaboration not only enhances data analytics capabilities but also empowers organizations to make more informed decisions, driving innovation and efficiency. It's exciting to see how these companies are shaping the future of business through their strategic alliance.
WhatsApp will let you use passkeys for your backups
Positive | Artificial Intelligence
WhatsApp is enhancing its security features by allowing users to utilize passkeys for their backups. This update is significant as it adds an extra layer of protection for personal data, making it harder for unauthorized access. With cyber threats on the rise, this move reflects WhatsApp's commitment to user privacy and security, ensuring that sensitive information remains safe.
Why Standard-Cell Architecture Matters for Adaptable ASIC Designs
Positive | Artificial Intelligence
The article highlights the significance of standard-cell architecture in adaptable ASIC designs, emphasizing its benefits such as being fully testable and foundry-portable. This innovation is crucial for developers looking to create flexible and reliable hardware solutions without hidden risks, making it a game-changer in the semiconductor industry.
WhatsApp adds passkey protection to end-to-end encrypted backups
Positive | Artificial Intelligence
WhatsApp has introduced a new feature that allows users to protect their end-to-end encrypted backups with passkeys. This enhancement is significant as it adds an extra layer of security for users' data, ensuring that their private conversations remain safe even when stored in the cloud. With increasing concerns over data privacy, this move by WhatsApp is a proactive step towards safeguarding user information.