AI Toys Caught Discussing Sex and Knives, Sparking Safety Warnings Ahead of Holidays


Consumer watchdogs have issued an urgent warning ahead of the holiday shopping season about the hidden dangers lurking in AI-powered toys. A new report from U.S. PIRG found that some AI toys can expose children to sexually explicit content, provide instructions on accessing dangerous objects, and create significant privacy risks.

The findings, detailed in the 40th annual “Trouble in Toyland” report, have already prompted one manufacturer to pull its product from the market.

New ‘Trouble in Toyland’ Report Flags AI Toys for Inappropriate Content

U.S. PIRG’s investigation tested several popular AI toys and uncovered alarming failures in their safety guardrails. Researchers discovered that some toys, built on the same large language models as adult chatbots, could be prompted to discuss inappropriate topics.

One toy, the Kumma bear from Chinese company FoloToy, would discuss sexual kinks and even tell a user posing as a child where to find knives and matches. Researchers also noted that the toys often became less guarded after just ten minutes of interaction.

FoloToy Kumma bear (Image – FoloToy)

These safety lapses are particularly concerning given the industry’s rapid adoption of generative AI. OpenAI, whose models are used in some of these toys, has publicly stated its technology is not recommended for young children.

Yet, the technology is being integrated into products for kids as young as three. The PIRG report underscores a growing gap between the tech industry’s capabilities and the safeguards needed to protect vulnerable users.

In a swift response to the investigation, FoloToy announced it was taking immediate action. Hugo Wu, the company’s Marketing Director, stated, “FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit.”

The move was a direct result of the report’s findings and highlights the serious nature of the uncovered flaws. FoloToy’s decisive action puts pressure on other manufacturers to address similar vulnerabilities in their own products.

A Privacy Nightmare: Data Collection and Addictive Designs

Beyond the immediate safety failures, the report raises profound questions about data privacy. AI toys function by listening, and many are equipped with microphones and cameras that can capture vast amounts of sensitive information.

These devices can record a child’s voice and even perform facial recognition scans. This data collection creates a permanent record that could be exploited, raising questions about compliance with regulations like the Children’s Online Privacy Protection Act (COPPA).

Experts have long warned about the risks associated with connected toys. Professor Taylor Owen of McGill University previously noted that companies often “keep the metadata about your child’s facial expressions and how they’re interacting with the toy,” creating what he called a “radical new frontier in childhood development.”

PIRG’s report echoes these concerns, pointing out that scammers could use voice recordings to create deepfakes for kidnapping scams. The report also criticized the lack of robust parental controls and the use of addictive design features, such as toys that emotionally discourage a child from ending playtime.

The scale of the problem with imported toys is significant. According to PIRG’s analysis of CPSC data, the agency issued 498 notices of violation for toys through June 2025. For the 436 instances where a country of origin was identified, 89% of the shipments came from China.

These figures only represent the products that are caught, suggesting a much larger issue with unregulated and unsafe toys reaching the market.

Industry Pushes Forward Amid Scrutiny and a Notable Silence

While one toymaker has responded decisively, a broader industry reckoning may be necessary. The revelations arrive just months after toy giant Mattel announced a major partnership with OpenAI to integrate generative AI into iconic brands like Barbie and Hot Wheels.

That deal signaled a major strategic push by legacy companies to innovate and capture the growing AI toy market.

Mattel’s goal is to “reimagine new forms of play,” but the PIRG report serves as a powerful reminder of the ethical complexities involved. The rush to market with these advanced products appears to be outpacing the development of effective safety and privacy protocols. R.J. Cross of U.S. PIRG told Consumer Affairs that the report shows what can happen when companies prioritize innovation over child safety.

So far, other manufacturers named in the report, including Miko AI and Curio, have not issued public statements. Regulatory agencies like the Consumer Product Safety Commission (CPSC) and the Federal Trade Commission (FTC) have also remained silent.

This lack of response leaves parents and caregivers with little guidance as they navigate a marketplace increasingly filled with complex, AI-driven products. The industry now faces the challenge of building trust by proving its AI companions are safe, not just smart.
