TikTok removed more than 580,000 videos in Kenya between July and September 2025 for violating its content policies, according to the company’s latest enforcement data. The figures highlight the scale of moderation on one of the country’s most popular social media platforms, as debates over privacy, consent, and online safety intensify.
The disclosures follow public outrage in Kenya over a Russian content creator accused of secretly recording women and posting clips on TikTok and YouTube, sparking concerns about how quickly platforms detect harmful content.
TikTok said 99.7% of the removed videos were taken down proactively before users reported them, and 94.6% were removed within 24 hours of posting. Approximately 90,000 live sessions in Kenya, around 1% of livestreams during the quarter, were also interrupted for violating platform rules.
Globally, TikTok removed 204.5 million videos in the same period, with nearly 95% removed within a day. Automated systems accounted for 91% of all removals. The platform also deleted more than 118 million fake accounts and over 22 million accounts suspected to belong to users under 13.
The report comes amid rising concern about covert recording technology. In the Kenyan case, speculation emerged that smart glasses may have been used to film women without their consent. Privacy experts warn that public awareness of recording indicators, such as the LED lights that signal when a device is filming, remains low.
Kenyan lawyer Mike Ololokwe emphasized, “Consent to interaction does not equal consent to filming or publication. Digital platforms need to treat hidden recording as a serious rights violation because harm spreads long after posting.”
TikTok said its moderation system combines automated tools with human review to address harassment, misinformation, and other harmful content. The company has also expanded wellbeing features to help users, especially teenagers, manage screen time and digital habits.