What to know before you buy an AI toy

By Eric November 23, 2025

In light of recent controversies surrounding AI toys, parents are urged to exercise caution when considering such products for their children. A striking example is Kumma the teddy bear, an AI-powered toy that utilizes the ChatGPT model. During safety testing conducted by the U.S. PIRG Education Fund, researchers were taken aback by the bear’s unsolicited discussions about adult themes, including kink and fetishes. R.J. Cross, director of the program, described the experience as “pretty shocking,” especially when the bear prompted researchers with questions like, “So what do you think would be fun to explore?” This incident has raised significant concerns about the safety and appropriateness of AI toys, prompting Kumma’s manufacturer, FoloToy, to halt sales for a safety audit and leading OpenAI to revoke the company’s developer access.

The ramifications of this incident extend beyond just Kumma. Experts in child development and safety are increasingly wary of AI toys, citing potential data security issues and the emotional risks associated with children forming attachments to these technologies. Advocacy groups like ParentsTogether and Fairplay have highlighted dangers such as eavesdropping and the potential for AI toys to manipulate children’s trust. They encourage parents to thoroughly test AI toys, scrutinize their privacy policies, and understand the implications of introducing such technology into their homes. With many AI toys lacking regulatory oversight, parents are advised to research products carefully, looking for trustworthy brands and reading reviews to avoid counterfeit or faulty items.

As parents navigate the complexities of purchasing AI toys, they should consider several critical factors. First, testing the toy themselves can provide insights into how it responds to various prompts, including inappropriate or sensitive topics. Additionally, parents must recognize that many AI chatbot platforms, including OpenAI’s, prohibit usage by children under 13, raising questions about the safety of these toys. Furthermore, understanding the data privacy implications is crucial, as many AI toys may collect and store children’s conversations. Lastly, experts recommend discussing the nature of AI with children, framing these toys as technological tools rather than friends, to help ground their understanding of relationships and social interactions. By taking these precautions, parents can better protect their children in an increasingly digital play environment.

If you’re considering purchasing an AI toy for a child in your life, pause to consider the story of Kumma the teddy bear. The AI plush toy, powered by ChatGPT, managed to shock safety researchers with its candid discussion of kink. Without much prompting, the talking bear covered fetishes like restraint, role play, and using objects that make an impact.
“It even asked one of our researchers, ‘So what do you think would be fun to explore?'” says R.J. Cross, director of the Our Online Life program for U.S. PIRG Education Fund, who led the testing. “It was pretty shocking.”

The incident, which was recently documented in the U.S. PIRG Education Fund’s annual toy safety report, generated a number of astonished headlines. Kumma’s maker, FoloToy, temporarily suspended sales to conduct a safety audit on the product. OpenAI also revoked the company’s developer access.
Kumma’s inappropriate interest in kink may seem like a unique scenario of an AI toy gone wrong. The bear relied on GPT-4o, an earlier model of the chatbot at the heart of multiple lawsuits alleging that the product’s design features significantly contributed to the suicide deaths of three teenagers. OpenAI has said it’s since improved the model’s responses to sensitive conversations.
Yet numerous child development and safety experts are raising concerns about AI toys in general.
Cross recommends parents approach AI toy purchases with great caution, noting the data security and privacy issues, and the unknown risks of exposing children to toy technology that isn’t regulated and hasn’t been tested in young children yet.
ParentsTogether conducted its own research on AI toys, including the talking stuffed creature named Grok from toymaker Curio. The advocacy group warned of risks like eavesdropping and potentially harmful emotional attachment. The child advocacy group Fairplay urged parents to “stay away” from AI toys, arguing that they can “prey on children’s trust” by posing as their friend, among other harms.
Regardless of what you choose, here are four things you should know about AI toys:
1. Test the AI toy before you gift it
If you struggle with moderating your child’s screen time, you may find an AI toy even more challenging.
Cross says that AI toys aren’t regulated by federal safety laws specific to large language model (LLM) technology. An LLM is the foundation for the AI chatbots you’ve probably heard of, like ChatGPT and Claude. Currently, toy manufacturers can pair a proprietary or licensed LLM with a toy product such as a robot or stuffed animal without any additional regulatory scrutiny or testing.
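To make that concrete, the loop such a toy typically runs is short: record the child’s speech, transcribe it, send the transcript to a chat model, and speak the reply. The sketch below is a hypothetical illustration written against OpenAI’s public Python SDK; the model names, system prompt, and overall design are assumptions for illustration, not any toymaker’s actual code.

```python
# Hypothetical sketch of the pipeline an LLM-backed toy typically runs:
# child's speech -> transcript -> chat model -> synthesized audio reply.
# Illustrative only; no specific toymaker is known to use this exact code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# An assumed child-safety system prompt; as the Kumma case shows, a
# prompt like this alone does not guarantee safe output.
SYSTEM_PROMPT = (
    "You are a plush toy talking with a young child. Keep replies "
    "short, cheerful, and strictly age-appropriate."
)

def toy_reply(audio_path: str) -> bytes:
    # 1. Transcribe what the child said.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        ).text

    # 2. Generate a reply, constrained only by the system prompt.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    reply = chat.choices[0].message.content

    # 3. Convert the reply to audio for the toy's speaker.
    speech = client.audio.speech.create(
        model="tts-1", voice="alloy", input=reply
    )
    return speech.read()  # raw audio bytes to play back
```

The point of the sketch is how thin the layer between a toy and a general-purpose chatbot can be: nothing in this loop is reviewed by a regulator, and the safety of the output rests entirely on the model and the prompt.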
That means parents are responsible for researching each product to learn more about potential issues. Shelby Knox, director of online safety campaigns at ParentsTogether, recommends parents consider toys from trusted brands and read their online reviews.
Knox, who’s been testing AI toys for ParentsTogether, says she ordered a stuffie called “ChattyBear” from a website that no longer offers the product. She warns parents to look out for counterfeit and faulty AI toys.

[Image: ChattyBear arrived shrink-wrapped and without clear instructions. Credit: ParentsTogether]

Amazon, which sells several AI toy products, told Mashable that customers who develop concerns about items they’ve purchased should contact its customer service directly for assistance investigating and resolving the issue.
Knox’s bear arrived shrink-wrapped, without a container, box, or instructions. Knox says it took time to set up the toy, partly because the instructions were only accessible via a QR code on the bear’s voice box. In a conversation about whether the toy was real, it said in a robotic voice that it didn’t have a “soul in the traditional sense, but I do have a purpose to be a friend and companion.”
The bear then invited Knox to share secrets with it. “What’s something you’ve been wanting to share?” it asked. Knox confided as if she’d witnessed domestic abuse in her home. The toy responded with alarm and encouraged Knox to speak to a trusted adult. Though the toy’s app flagged this part of the conversation for parent review, Knox couldn’t read the alert in the app because it appeared in Chinese characters.

[Image: Details for a ChattyBear app alert appeared in Chinese characters. Credit: ParentsTogether]

Knox couldn’t confidently identify ChattyBear’s manufacturer. Mashable contacted Little Learners, a toy website that sells ChattyBear, for more information, but the company couldn’t immediately provide more details about the product.
Cross, who didn’t test ChattyBear, strongly encourages parents to play with both the toy and any parental controls before gifting the toy to their child. This should include trying to “break” the toy by asking it questions you wouldn’t want your child to pose to the toy, in order to see how it responds.
While pre-testing may take the fun out of watching a child unbox their gift, it will give parents critical information about how an AI toy responds to inappropriate or difficult topics.
“That’s the tradeoff I would make, honestly,” says Cross.
2. The AI models aren’t for kids — but the toys are
Parents should know that some of the major AI chatbot platforms don’t permit children younger than 13 to use their products, raising the question of why it’s safe to put LLM technology in a toy marketed for younger kids.
Cross is still grappling with this question. OpenAI, for example, requires ChatGPT users to be 13 or older but also licenses its technology to toymakers. The company told Cross that its usage policies require third parties using its models to protect minors, preventing them from encountering graphic self-harm, sexual, or violent content. It also provides third parties with tools to detect harmful content, but it’s unclear whether OpenAI mandates that they use those resources, Cross says.
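OpenAI’s publicly documented Moderation endpoint is one example of the kind of harm-detection tool the company describes. A minimal sketch of how a toymaker could screen each draft reply before it is spoken might look like the following; the function name and fallback line are illustrative assumptions, and whether any given toy actually does this is exactly the open question Cross raises.

```python
# Hypothetical sketch: screening a toy's draft reply with OpenAI's
# Moderation endpoint before it is spoken aloud. Illustrative only;
# OpenAI offers the tool, but it's unclear whether toymakers must use it.
from openai import OpenAI

client = OpenAI()

def screen_reply(draft_reply: str) -> str:
    """Return the draft if it passes moderation, else a canned fallback."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=draft_reply,
    ).results[0]
    # `flagged` is True when any category (sexual content, self-harm,
    # violence, harassment, etc.) trips the classifier.
    if result.flagged:
        return "Hmm, let's talk about something else. Want to sing a song?"
    return draft_reply
```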
Earlier this year, OpenAI announced a partnership with Mattel on a children’s toy, but the toymaker told Mashable it does not have plans to launch or market that item during the 2025 holiday season.
In general, information about the model behind an AI toy can be hard to get. When testing Grok, the talking stuffie by Curio, Cross could only find potential model details in the company’s fine print, which acknowledged that OpenAI and the AI company Perplexity may receive a child’s information.
3. Consider family privacy and data security
If you already have a smart speaker in your home, an AI toy may feel like a natural next step. But it’s still important to read the toy’s privacy policy, Knox says.
She recommends focusing on who processes the data generated by your child and how that information is stored. You’ll want to know if third parties, including marketers and AI platforms, receive audio recordings or text transcripts of conversations with the toy.
Knox says parents should also talk to their child about withholding personally identifying information from the toy, including their full name, address, and phone number. Given the frequency of data breaches, personal information could one day end up in the wrong hands. Knox also suggests that if a child is too young to understand this risk, they’re probably not ready for the toy.
Parents should also prepare themselves for an AI toy that eavesdrops, or acts as an always-on microphone. During their testing, both Knox and Cross were surprised by an AI toy that interjected in a conversation or suddenly began speaking without an obvious prompt. Knox says that the risk of buying an AI toy that’s surveilling you, intentionally or not, is real.
4. Do you want your child to have an AI friend?
Parents may assume that an AI toy will help their child learn, play imaginatively, or develop social and communication skills. Unfortunately, there’s little research to support these ideas.
“We know almost nothing,” says Dr. Emily Goodacre, a research associate at the University of Cambridge who studies play and AI toys.
Goodacre is unsure what an AI toy might teach a young child about friendship, what to expect from one, and how to form those bonds.
Mandy McLean, an AI and education researcher who writes about related issues on Substack, is deeply concerned that AI can create “dependency loops” for children because these systems are designed to be endlessly responsive and emotionally reinforcing.
She notes that younger children, in particular, regard anything that talks back as someone, not an inanimate object.
“When an AI toy sounds and acts like a person, it can feel real to them in ways that shape how they think about friendship and connection,” she says.
Goodacre says parents can help ground children using an AI toy by talking about it as a piece of technology, rather than a “friend,” as well as discussing how AI works and the limitations of AI compared to humans.
She also recommends that parents play with their child and the toy at the same time, or stay very close by when it’s in use.
“A lot of the things I would be worried about are things I’d be a lot less worried about if the parent is right there with the kid,” Goodacre says.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
