
The internet is a vast, wondrous place, a digital playground where we all learn, connect, and explore. Yet, for many, it can sometimes feel like a digital Wild West, full of hidden corners and unexpected dangers. Parents, guardians, and concerned citizens often worry about what lurks beyond the screen for their children, which is where the Communications Authority of Kenya (CA) steps in with a concrete answer: Industry Guidelines for Child Online Protection and Safety, adopted on April 29, 2025.
These are not mere suggestions; they represent a comprehensive rulebook for everyone from mobile carriers to game makers – and they kick in just six months later. That means by October 29, 2025, all affected companies must comply.
What exactly are these guidelines? In plain terms, the CA wants every tech and telecom player to put children’s safety front and center. The rules emphasize that children have a right to information and freedom of expression online, but within a safer, more controlled environment. In other words, kids shouldn’t be treated like mini-adults on the internet. Instead, everyone in the internet ecosystem is on the hook – phone companies, app developers, broadcasters, even device makers – to help protect children. The CA points out that safeguarding kids is a collective responsibility, so companies must publish clear child-protection policies and actually follow them.
Parents will be relieved that the rules stress education and creativity too, not just bans. The guidelines explicitly encourage “child-focused products and services” that inspire learning and play. This means apps and games should have safe modes or special kid-friendly versions – think the way YouTube Kids limits content. It’s a reminder that the internet can be a vibrant place for kids, not just a minefield. But to make sure that vibrancy doesn’t come with hidden dangers, the CA laid out some concrete requirements.
Who Has to Comply?
When I dug into the document, I found that basically anyone who provides anything tech-related is included. This isn’t just Facebook, WhatsApp or Safaricom – it’s everyone in the ICT value chain. For example:
- Licensed operators: If a company holds any Communications Authority licence (under the Kenya Information and Communications Act, 1998), it’s covered. So Kenyan telcos and ISPs fall under this.
- Device and software makers: Manufacturers of phones, tablets, computers – and the apps and services on them – must also comply. The rules explicitly mention hardware vendors and app developers, including those for schools.
- Online platforms and content services: Social media sites, streaming services, messaging apps – if they’re used by Kenyans, they need to adapt. They must embed the safety measures into agreements with any third parties who use their platforms.
- Broadcasters and media: Even TV and radio stations that provide content accessible to children must follow the general principles (and Kenya’s existing Broadcasting Code).
- Kids, parents, and educators: While not “regulated” entities, children are the protected class, and parents/teachers are explicitly mentioned as partners. The rules call for educating parents and kids about safe use and complaint channels.
In short, whether you’re making a parenting app or selling school tablets, these guidelines apply to you. The goal is to cast a wide net so children’s online experiences – in schools or homes – are all safer.
Key Requirements at a Glance
The 11-page guideline document is packed with specifics. Here are the highlights:
- Age verification. Services must actively check users’ ages or put kids on child-friendly modes. For online platforms, this could mean a signup process that flags under-18s. For mobile networks, it goes further: all SIM cards used by children must be registered per Kenya’s laws. (In practice, that already means any SIM needs to be linked to an ID number under the Kenya Information and Communications Act, 1998, and the SIM Registration Regulations, but the rules reinforce it specifically for kids.) In fact, every subscriber must declare who will use the SIM, so networks know if a child will be on it. This is meant to curb fake profiles and anonymous trolls, but it also raises privacy worries (more on that below).
- Privacy by design and default settings. Companies must bake in children’s privacy from the start. Practically, that means collecting only minimal personal data about kids and setting the strictest privacy settings by default. For instance, any new app or gadget should come pre-set so kids’ profiles aren’t public, location isn’t shared, and data collection is limited. Device makers have to activate heightened security features on gadgets likely to be used by children. And any customer terms must warn that uploading or sharing any Child Sexual Abuse Material (CSAM) is absolutely forbidden. All this aligns with Kenya’s Data Protection Act (2019), which the guidelines explicitly name.
- Child-focused content and products. The rules encourage positive content: educational games, local-language material, and creative tools for kids. The guidelines speak of increasing “affordable, productive and appropriate” products and services targeting children. This means innovators in edtech, e-books, or child-friendly streaming have a green light (and maybe a push) to keep developing safer options.
- Complaint and reporting mechanisms. Every service must tell users how to report problems involving kids online. The CA requires companies to publish clear complaint processes, train staff to handle them, and submit quarterly reports on all child-related complaints to the regulator. If a user – child or parent – isn’t satisfied, the CA encourages escalating the issue to the Authority itself. In short, the rules make complaints official: tech companies can’t brush off reports of cyberbullying, harassment, or unsafe content aimed at kids.
- Transparency and accountability. If any content is blocked or taken down because it’s unsafe for kids, companies must be open about it. The guidelines call for a deliberate increase in transparency about what’s removed, why, and how it affects children’s access to information. Providers must also collect and preserve evidence of child abuse or illegal content and share it with law enforcement. In other words, if our kids report online abuse, the platform should actively cooperate with police, not sweep it under the rug.
- Support for law enforcement. Relatedly, companies are required to back up police and security agencies investigating child abuse online. That includes things like logging evidence and assisting in probes, as long as it’s within Kenyan law. This dovetails with Kenya’s existing laws against child exploitation, reinforcing that online is not a lawless zone.
- Default safety on devices. Hardware makers must ship phones, tablets, and computers with safety features on by default, especially models likely used by kids. They also have to provide info (leaflets, tutorials) on how parents can enable parental controls and safe browsing modes built into the device.
- Regular self-assessment and publishing compliance. The CA will check up on all of this: companies are encouraged to self-audit, and the Authority will publish compliance stats quarterly. Remember, the clock is ticking: six months from April 29 is the cut-off. So by October 29, 2025, everyone needs these policies in place.
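To make the “age verification” and “privacy by design and default” requirements above a bit more concrete, here’s a minimal sketch of what a compliant signup flow could look like in code. To be clear, this is purely illustrative – the guidelines don’t prescribe any implementation, and every name here (the `ProfileSettings` fields, the `default_settings` helper) is hypothetical:

```python
from dataclasses import dataclass
from datetime import date

ADULT_AGE = 18  # age of majority under Kenya's Children Act 2022


@dataclass
class ProfileSettings:
    # Strictest-by-default values, in the spirit of "privacy by design":
    # private profile, no location sharing, minimal data collection.
    profile_public: bool = False
    location_sharing: bool = False
    data_collection: str = "minimal"
    parental_controls: bool = True


def is_minor(date_of_birth: date, today: date) -> bool:
    """Flag under-18 users at signup (the 'age verification' step)."""
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age < ADULT_AGE


def default_settings(date_of_birth: date, today: date) -> ProfileSettings:
    """Return locked-down defaults for minors; adults may relax theirs later."""
    if is_minor(date_of_birth, today):
        return ProfileSettings()  # strictest defaults, not user-changeable
    # Adults still start private by default but without forced parental controls.
    return ProfileSettings(parental_controls=False)


# A child born mid-2015, signing up on the compliance deadline:
child = default_settings(date(2015, 6, 1), date(2025, 10, 29))
print(child.profile_public, child.parental_controls)  # False True
```

The key design point the guidelines imply is that the safe values are the *defaults*, not opt-in extras – a child’s profile starts locked down without anyone having to find a settings menu.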
I’ve noted how these rules plug into Kenyan law. For example, the SIM registration rule is rooted in the Kenya Information and Communications Act (1998) and SIM Regulations of 2015. The guidelines demand alignment with those laws. Likewise, the Data Protection Act (2019) – Kenya’s main privacy law – is cited specifically for handling kids’ data. The guiding principle of “best interest of the child” is tied to the Children Act 2022. In short, this isn’t a legal vacuum. The new guidelines leverage existing statutes on privacy, child rights, and broadcasting to hold companies accountable.
Implementation Challenges and Concerns
All this sounds good on paper, but I can’t help worrying about the trade-offs. Take age verification. On one hand, verifying ages online could keep little ones out of adult chats. On the other, how do providers do it without Big Brother-level data collection? Kenya’s Office of the Data Protection Commissioner (ODPC) recently warned that any age-check must be proportionate and privacy-preserving, and must minimise data use. In other words, they’re aware of the risk that forcing a 12-year-old to scan their ID might expose them to identity theft or link them permanently to their internet history.