TikTok flooded with AI-generated videos sexualising images of children, investigation reveals
TikTok is facing renewed scrutiny after a new investigation revealed that AI-generated videos depicting what appear to be underage girls in sexualised clothing and poses have amassed millions of likes on the platform, despite clear policies prohibiting such content.
According to CNN, the findings, published on Thursday by Spanish online safety non-profit group Maldita.es, identified more than a dozen accounts actively posting videos of AI-generated minors dressed in lingerie, tight clothing or school uniforms, and sometimes positioned suggestively. Collectively, the accounts have attracted hundreds of thousands of followers.
The report says several videos featured animations created through TikTok’s own “AI Alive” tool, while others appeared to have been produced using external artificial intelligence software. Many posts contained comments linking to Telegram chats advertising child pornography for sale.
Carlos Hernández-Echevarría, assistant director and head of public policy at Maldita.es, who led the research, said the organisation flagged 15 accounts and 60 videos to TikTok on Tuesday, 2 December, categorising them as involving “sexually suggestive behaviour by youth.”
Together, the accounts had nearly 300,000 followers, and their 3,900 videos had accrued more than 2 million likes. However, TikTok’s initial response raised concerns.
The company informed the organisation that 14 of the 15 accounts did not violate its policies, with only one account being “restricted.” Of the 60 videos reported, TikTok ruled that 46 did not breach platform rules, removing or restricting just 14. Following appeals, TikTok removed an additional three videos and restricted another.
Researchers noted that some of the videos left online included one showing an AI-generated child semi-nude in a shower, and others showing minors in lingerie or bikinis posed seductively.
“There is absolutely no way a human being sees this and does not understand what’s happening,” said Carlos Hernández-Echevarría. “The comments are extremely crude, full of the most appalling individuals making disturbing remarks.”
By Wednesday, at least one account and one video that TikTok’s content review process had previously cleared were no longer online. Hernández-Echevarría said TikTok did not explain why they were not taken down when first reported.
TikTok says it maintains a zero-tolerance policy on content involving the sexual exploitation or abuse of young people. Its community guidelines explicitly ban:
1. AI-generated images sexualising minors
2. Accounts that “focus on AI images of youth in clothing suited for adults”
3. Depictions or suggestions of sexual content involving a young person
The company says it uses automated detection systems, including vision, audio and text analysis, alongside human moderators. Between April and June 2025, TikTok says it removed more than 189 million videos and banned over 108 million accounts, claiming that 99% of content violating its nudity rules and 97% of content violating its AI rules were removed proactively.
A TikTok spokesperson declined to comment specifically on Maldita.es’ findings.
The report also highlights how some of the TikTok accounts directed users to private Telegram chats selling child sexual abuse material for between €50 and €150. Maldita.es did not proceed with any transactions and referred the accounts to Spanish police.
Telegram spokesperson Remi Vaughn told CNN that the company has a “strict zero-tolerance policy” for child sexual abuse material and uses scanning tools to detect illegal content in public spaces.
Vaughn said that more than 909,000 public groups and channels linked to such material had been removed in 2025. While Telegram cannot proactively scan private encrypted groups, the platform accepts reports from NGOs globally to enforce its policies.
Maldita.es’ findings come amid wider concerns over TikTok’s ability to protect young users. A separate investigation by UK non-profit Global Witness, published in October, found that TikTok’s search function had suggested “highly sexualised” content to users who reported being 13 years old and browsing in “restricted mode.”
As more countries introduce stringent online safety laws such as Australia’s recent ban on social media use for under-16s, pressure is mounting on major tech platforms to address the exploitation risks amplified by generative AI tools.
“The situation is not nuanced at all,” Hernández-Echevarría said. “No reasonable person would look at this content and fail to see the harm.”
SOURCE: CNN