
When the United Kingdom rolled out its new Online Safety Act, politicians patted themselves on the back. Finally, a bold solution to the problem of children stumbling across adult content online: mandatory age verification. Show your government ID, scan your face, let an algorithm squint at your wrinkles and decide if you’re old enough. Simple, right?
Well, the internet doesn’t do “simple.” It does “predictably chaotic.”
Because the very thing critics have been warning about for years has now happened: websites that followed the rules bled out their audiences, while sketchier, non-compliant sites flourished like weeds.
According to The Washington Post, which analyzed traffic data from 90 major porn sites in the UK, the results are painfully clear:
- Compliant sites (the ones that set up ID checks and face scans) saw their British traffic collapse.
- Non-compliant sites (the ones that said “nah, we’re good”) watched their traffic double or triple.
"To evaluate the early effectiveness of the law’s rollout, The Post gathered U.K. visitor estimates over the past year for 90 of the largest porn sites as ranked by the market intelligence firm Similarweb. The Post then used a software tool known as a virtual private network, or VPN, to appear online as a U.K. user and check whether the sites verified a visitor’s age."
One non-compliant site logged more than 350,000 UK visits in a single month after ignoring the law. The moral of the story? On the internet, being responsible makes you less competitive.
"The analysis found that 14 sites didn’t do an age check, and that all 14 had seen major boosts in their traffic from U.K. users. One explicit site saw its U.K. visitor count double since last August, to more than 350,000 visits this month."
How We Got Here
The UK government imagined this would be like showing your ID at a nightclub: a quick check at the door, then party on. Instead, it turned into a bureaucratic nightmare. Sites that wanted to comply faced steep costs: imagine paying 10 to 25 cents per face scan across millions of users. Pornhub alone was looking at $13 million a day in potential charges just to prove its audience had wrinkles.
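The back-of-the-envelope maths is brutal. A minimal sketch, assuming a hypothetical 52 million age checks a day (roughly the volume it would take to hit the reported $13 million figure at the top 25-cent rate; the per-day check count is an illustrative assumption, not a reported statistic):

```python
def verification_cost(daily_checks: int, low_rate: float, high_rate: float) -> tuple[float, float]:
    """Return the (low, high) daily cost of per-visit age checks in dollars."""
    return daily_checks * low_rate, daily_checks * high_rate

# Assumed 52M checks/day at the quoted 10-25 cents per face scan.
low, high = verification_cost(52_000_000, 0.10, 0.25)
print(f"${low:,.0f} - ${high:,.0f} per day")  # $5,200,000 - $13,000,000 per day
```

Even at the cheap end, that is millions of dollars a day just to run the door check, before a single user has been retained.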
Meanwhile, the users? They simply left. A blog post from one adult platform put it bluntly: their three-day test of age verification showed 90% of users bailed the moment they saw the age wall. Some didn’t even bother trying to get in again.
"Preserving fair competition is one of the obligations of most states — but they simply don’t give a fuck about it. Right now, there are almost 3,000 (not an exaggeration) clones of our sites — not owned by us, but designed to look like our platforms, sometimes with a different makeover — stealing our content, and soon to be massively rewarded."
Where did they go? To the wild corners of the web. Non-compliant platforms happily gave tutorials on dodging the age gate, suggesting VPNs, Tor browsers, or just hopping onto one of thousands of cloned, unregulated, and often unsafe sites.
"Regulators have no clue where people will go — but what’s likely is that users will scatter across so many sites, apps, proxies, and channels that they’ll become untraceable, guaranteeing the failure of future regulations. And unlike today, many of those new destinations will be dangerous, unmoderated, and openly hostile to enforcement."
And to make it worse, UK officials themselves accidentally turbo-charged the workaround. When one politician went on national TV and warned people not to use VPNs to bypass the law… VPN downloads immediately spiked to the top of app store charts.
The UK isn’t alone. The US is rolling out its own wave of age-verification laws. Already, 25 states have passed similar rules since 2022. Some go beyond adult sites, demanding ID checks for social media, video sites, even message boards. The Supreme Court recently upheld Texas’s version, and Mississippi wants social media platforms to verify every user’s name and age. Some platforms, like Bluesky, have simply said, “You know what? Forget it,” and blocked users in those states entirely.
Civil liberties advocates are waving red flags. Not only do these laws create massive privacy risks (remember, someone has to store all those face scans and ID cards), but they also incentivize users to drift toward unmoderated, shadowy corners of the web. In other words: exactly the opposite of what regulators wanted.
Here’s where it gets interesting for us in Kenya. If this all sounds familiar, that’s because we’ve been having our own version of this debate.
- Earlier this year, proposals emerged for mandatory biometric data collection and age verification for online access.
- Around the same time, we saw the controversial Metered Internet Bill, which would’ve made browsing the web feel like loading airtime.
- And let’s not forget NACADA’s push to ban online alcohol sales in a bid to lock out minors, a move that would only drive deliveries to boda bodas and WhatsApp groups instead of solving the problem.
In every case, the theme is the same: good intentions meet bad design. Regulations that sound like “protecting children” or “ensuring safety” often end up being nothing more than regulatory theater — expensive, invasive, and ultimately counterproductive.
The Unintended Lesson
What the UK case proves — in hard numbers — is that laws like these don’t actually protect children. They don’t even reduce demand. They just shuffle users around the internet like pieces in a shell game, rewarding the sites least interested in following the rules. Meanwhile, the adults who do comply are handing over their most sensitive personal data (faces, IDs, bank details) to a patchwork of companies that, let’s be honest, aren’t immune to hacks or leaks.
So instead of creating a safer online environment, governments are:
- Driving users into sketchier corners of the internet.
- Creating juicy new databases of private information just waiting to be breached.
- And burdening smaller, independent platforms with compliance costs they can’t possibly pay.
All so that politicians can go on TV and say: “Look, we’re protecting children.”
The UK experiment should serve as a global warning, including here in Kenya. You can't regulate human behaviour out of existence with clumsy laws, especially not on the internet. Users will always find a way around it, and the path of least resistance is rarely the path regulators want.
Protecting children online is a noble goal. But that requires smarter approaches: better parental controls, improved digital literacy, and platforms held accountable for how they design and moderate. Not facial scans, not ID uploads, and certainly not regulations that punish the responsible while rewarding the reckless.
Because if the UK has shown us anything, it’s this: the more you squeeze, the more the internet slips through your fingers.