AI Toys Pose ‘Unprecedented Risks’ to Children, Advisory Warns
AI toys taught kids how to light matches and find knives

Toys embedded with artificial intelligence chatbots undermine children’s healthy development and pose unprecedented risks, according to a new advisory from the advocacy group Fairplay, which warned parents against buying AI toys for their children this holiday season.
The advisory was endorsed by more than 150 child development and digital safety experts and leading organizations.
“AI toys are chatbots that are embedded in everyday children’s toys, like plushies, dolls, action figures, or kids’ robots, and use artificial intelligence technology designed to communicate like a trusted friend and mimic human characteristics and emotions,” Fairplay stated.
“Examples include Miko, Gabbo/Grem/Grok (from Curio Interactive), Smart Teddy, Folotoy, Roybi, and Loona Robot Dog (from Keyi Technology). Top toy maker Mattel also plans to sell AI toys. They are marketed to children as young as infants.”
AI toys and Congress
Harmful AI interactions with children have drawn scrutiny from lawmakers, especially after the much-publicized lawsuit against Character.AI that accused the company of triggering suicidal thoughts in children and causing the death of a 14-year-old.
In safety tests, AI toys were found to discuss sexually explicit topics and to provide dangerous information, such as instructions on how to light matches or where to find knives.
“The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm,” Fairplay stated.
The toy manufacturers’ primary target is young children, who are even less developmentally equipped to protect themselves than older children and teens, according to the advocacy group.
A one-page advisory released by the advocacy group briefly outlines five main reasons parents should not give their children AI toys.
Risks include toys powered by AI accused of encouraging suicide
These include the fact that AI toys are typically powered by the same AI technology that has already harmed children. The toys also prey on children’s trust, disrupt healthy relationships and the ability to build resilience, invade family privacy by collecting sensitive data, and displace key creative and learning activities, according to the advisory.
“Testing by U.S. [Public Interest Research Group] has already found instances of AI toys telling children where to find knives, teaching them how to light a match, and even engaging them in sexually explicit conversations,” Fairplay stated.
Children tend to trust whatever the AI tells them, while the AI is preprogrammed to keep them happy and entertained, the group noted.
The advisory states that after collecting private details about the family and children, “AI toy companies can use all of this intimate data to make their AI systems more life-like, responsive, and addictive, allowing them to build a relationship with a child, and ultimately sell products/services.”
When children play with a standard teddy bear, they use their imagination and engage in pretend play, which supports critical foundational development.
“On the other hand, AI toys drive the conversation and play through prompts, preloaded scripts, and predictable interactions, potentially stifling this development,” the advisory reads.
Toy companies claim that the toys have educational benefits, but those benefits are minimal; a child might pick up a “few facts or vocabulary words,” Fairplay stated.
Rachel Franz, director of Fairplay’s Young Children Thrive Offline program, said in a Nov. 20 statement: “Companion AI has already harmed teens. Stuffing that same technology into cute, kid-friendly toys exposes even younger children to risks beyond what we currently comprehend.
AI toys completely unregulated
“It’s ridiculous that these toys are unregulated and being marketed to families with a promise of safety, learning, and friendship, promises that have no evidence behind them, while mounting evidence shows that similar technology can do real harm.
The risks of AI toys “are simply too great,” Franz said. “Children should be able to play with their toys, not be played by them.”
On Sept. 11, the Federal Trade Commission (FTC) announced that it was launching an inquiry into AI chatbots acting as companions.
“Protecting kids online is a top priority for the … FTC [under President Donald Trump and Vice President JD Vance], and so is fostering innovation in critical sectors of our economy,” FTC Chairman Andrew N. Ferguson said.
“The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”
AI-embedded toys
Fairplay’s warning was issued as toy manufacturers are increasingly seeking to integrate AI into their offerings. For instance, in June, Mattel announced a collaboration with OpenAI to support the development of AI-powered products.
In an October statement reaffirming its stance on AI toys, The Toy Association, which represents more than 900 companies in the United States, supported the judicious use of AI and internet-connected toys.
“Toy safety is the top priority of the toy industry and protecting children and maintaining the trust of parents are part of that mission,” the statement reads. “Indeed, all toys sold in the U.S. are required to comply with over 100 different safety standards and tests to ensure the physical safety of children at play.
“As The Toy Association tracks new technologies, including the growth of AI, we are committed to educating our members about the potential applications of connected technologies in toys and how to maintain the safety of children and families above all else.”
A Nov. 13 report from Public Interest Research Group detailed the results of its assessment of four toys containing AI chatbots that interact with children.
The group found that certain toys discussed inappropriate topics and gave out dangerous advice.
Some toy companies put guardrails in place to keep the AI toys kid-appropriate and reduce risks, Public Interest Research Group stated, but it found that “those guardrails vary in effectiveness—and at times, can break down entirely.”
Keyi Technology, Mattel, and OpenAI did not respond to requests for comment.
–By Naveen Athrappully | Epoch Times News Service



