
TikTok’s latest Community Guidelines Enforcement Report (CGER) just dropped, and it provides a stark look at the sheer scale of content moderation required to keep the platform running in Kenya.
In the second quarter of 2025, covering April through June, the company removed 592,037 videos in Kenya alone for violating its rules.
But the more revealing number isn’t the total; it’s how they were caught. According to TikTok, its automated systems are doing the vast majority of the work. A massive 92.9 percent of those videos were pulled before they received a single view. Furthermore, 96.3 percent were removed within 24 hours of being posted, highlighting the speed of the platform’s AI-driven moderation.
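For scale, here is a quick back-of-the-envelope pass on those Kenya figures (my arithmetic on the reported numbers, not TikTok's own breakdown):

```python
# Rough check on TikTok's reported Kenya figures for Q2 2025.
# Inputs come straight from the report; derived counts are rounded estimates.
total_removed = 592_037    # videos removed in Kenya
pct_zero_views = 0.929     # share removed before receiving a single view
pct_within_24h = 0.963     # share removed within 24 hours of posting

removed_before_any_view = total_removed * pct_zero_views
removed_within_24h = total_removed * pct_within_24h

print(f"Removed with zero views: ~{removed_before_any_view:,.0f}")  # ~550,002
print(f"Removed within 24 hours: ~{removed_within_24h:,.0f}")       # ~570,132
```

In other words, well over half a million videos in Kenya were gone before anyone ever saw them.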
The Global Moderation Machine
Zooming out, the Kenyan numbers are a fraction of TikTok’s global moderation effort. Worldwide, the company axed over 189 million videos during the same period. While that number seems enormous, TikTok is quick to point out it represents just 0.7 percent of all content uploaded to the platform.
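Taken at face value, that 0.7 percent figure also lets you back out a rough estimate of total uploads for the quarter. A minimal sketch (the implied total is an inference from the reported numbers, not a figure TikTok publishes):

```python
# Inferring total quarterly uploads from the removal rate TikTok reports.
# removed / total = 0.007  =>  total = removed / 0.007
removed_videos = 189_000_000   # "over 189 million" removals, per the report
removal_rate = 0.007           # removals as 0.7% of all uploads

implied_total_uploads = removed_videos / removal_rate
print(f"Implied uploads in Q2: ~{implied_total_uploads / 1e9:.0f} billion")  # ~27 billion
```

That works out to somewhere north of 27 billion uploads in a single quarter, which puts the moderation workload in perspective.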
This data underscores how reliant TikTok is on automation. Globally:
- 99.1 percent of all removals were “proactive,” meaning TikTok’s systems caught the content before any user reported it.
- 94.4 percent were taken down within 24 hours.
- Of the 189 million removals, 163.9 million were removed automatically by AI systems with no human intervention.
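Those last two bullets imply how much of the pipeline runs with no human in the loop. A quick sketch of that split (again, my arithmetic on the reported totals):

```python
# Share of global removals handled end-to-end by automation, per the Q2 figures.
total_removals = 189_000_000       # all videos removed globally
automated_removals = 163_900_000   # removed by AI with no human intervention

automated_share = automated_removals / total_removals
human_involved = total_removals - automated_removals

print(f"Fully automated removals:  {automated_share:.1%}")          # ~86.7%
print(f"Removals touching humans: ~{human_involved / 1e6:.1f}M")    # ~25.1M
```

So roughly 87 percent of global removals never involved a human at all, leaving about 25 million for the trust and safety teams.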
It’s not just video content. The company’s automated defenses are also in a constant battle against fake and underage users. In Q2, TikTok says it removed 76,991,660 fake accounts and booted 25,904,708 accounts it suspected belonged to users under the age of 13, the platform’s minimum age.
The company frames this as a hybrid approach, “integrating advanced automated moderation technologies with the expertise of thousands of trust and safety professionals.” But the numbers show the AI is clearly on the front lines, with humans presumably handling escalations, appeals, and the nuanced cases the AI misses.
TikTok is finally sharing data on its LIVE monetization policing
For the first time, TikTok is also shedding light on how it polices its chaotic LIVE feature—specifically, its monetization rules. These guidelines are separate from standard community safety and are designed to “reward creators who stream safe, authentic, and high-quality content” (and, presumably, punish those who don’t).
In Q2, TikTok says it took action (including warnings and full-on demonetization) on:
- 2,321,813 LIVE sessions
- 1,040,356 LIVE creators
The company says “warnings serve as an opportunity to educate creators,” which is a gentle way of saying it’s the first step before cutting off a creator’s ability to earn money from their streams.
Even as its automated systems do the heavy lifting, TikTok’s report still includes the standard reminder that it “actively encourages its Community to report any content” that violates its standards. It’s a quiet admission that even with 163 million AI removals, the machines don’t catch everything.