
As the world marked Safer Internet Day on February 10, the conversation moved beyond basic password hygiene to a much more complex challenge: the rise of artificial intelligence. In a recent deep dive into the state of the digital world, Safaricom shared insights into how AI is reshaping everything from network traffic patterns to the very nature of online scams.
Here is what you need to know about the evolving threat landscape and how to protect yourself.
The “always-on” AI era
The impact of AI is already visible in how Kenyans use the internet. According to Bernard Wachira, Team Lead for Core Network Support at Safaricom, AI-generated content is fundamentally changing data consumption.
Historically, internet traffic in Kenya followed a predictable curve, peaking between 9 PM and 10 PM before tapering off. However, the surge in AI content has created a “new normal” where high traffic levels persist well past midnight. This reflects a world that is “faster, smarter, and more complicated than ever before,” where algorithms can lead users down rabbit holes of both incredible and unsettling AI-generated media.
The new threat landscape beyond phishing
For years, the primary concern for mobile users was the “phishing website”: a fake page designed to steal credentials. The tactic is still in use, as highlighted by a recent INTERPOL operation that nabbed 651 people across Africa between December 2025 and January 2026, but Safaricom notes that AI has evolved the threat. We are moving away from simple malicious links toward sophisticated impersonations and “malicious content.”
Safaricom is currently fighting back by deploying specialized solutions designed to flag AI-generated malicious content. The company also maintains a dedicated team that works with law enforcement to broadcast alerts via SMS and social media when new threats emerge.
The rise of the deepfake
Perhaps the most concerning development in the AI era is the “deepfake.” These are AI-generated videos, images, or audio clips designed to mimic real people. While some are used for entertainment, others are weaponized to create false political narratives or commit fraud. Institutional responses remain inconsistent: despite growing deepfake concerns, the US Pentagon recently deployed Elon Musk’s Grok AI, even as the UK launched unprecedented regulatory action against X over Grok’s illegal deepfakes.
According to Safaricom’s analysis, there are a few key ways to spot a deepfake:
- Visual inconsistencies: Look for odd noise patterns or color differences between edited and unedited parts of an image.
- Time-based errors: In videos, look for a mismatch between the movement of a person’s mouth and the audio of their speech.
- Digital fingerprints: AI models like GANs (Generative Adversarial Networks) often leave subtle pixel-level “fingerprints.”
- Bot behavior: Malicious deepfakes are often spread by bot accounts that can be identified by their metadata and suspicious posting patterns.
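The last clue, bot behavior, is the one ordinary users can most easily reason about. As a rough illustration (the thresholds below are my own assumptions, not Safaricom’s), a crude red-flag counter over public account metadata might look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccountMetadata:
    created_at: datetime   # account creation time
    posts_last_24h: int    # posting volume
    followers: int
    following: int

def bot_suspicion_score(acct: AccountMetadata, now: datetime) -> int:
    """Count crude red flags; higher means more bot-like."""
    score = 0
    if (now - acct.created_at).days < 30:
        score += 1  # very new account
    if acct.posts_last_24h > 100:
        score += 1  # implausibly high posting rate for a human
    if acct.following >= 100 and acct.followers < acct.following // 10:
        score += 1  # follows many, followed back by few
    return score

# Example: a week-old account posting 400 times a day
now = datetime(2026, 2, 10, tzinfo=timezone.utc)
suspect = AccountMetadata(
    created_at=datetime(2026, 2, 3, tzinfo=timezone.utc),
    posts_last_24h=400, followers=12, following=900,
)
print(bot_suspicion_score(suspect, now))  # 3 red flags
```

Real platforms use far richer signals, but the intuition is the same: a sensational clip amplified mainly by accounts that trip several of these flags deserves extra skepticism.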
How to stay safe online in the age of AI
Safaricom emphasizes that while technology can help, “the buck stops with you.” And I agree. To stay ahead of bad actors, here are five essential practices to adopt:
1. Pause before you click: The era of “urgent” messages is being supercharged by AI. Always check for secure https:// links, keep an eye out for subtle misspellings in URLs, and be inherently skeptical of any message demanding immediate action.
2. Protect your accounts with MFA: Standard passwords are no longer enough. Safaricom recommends using a dedicated password manager to create unique credentials for every site and, most importantly, turning on multi-factor authentication (MFA) to provide a second layer of defense.
3. Minimize your digital footprint: AI models thrive on data. Be intentional about what you share publicly. By limiting the number of public photos, voice samples, and location tags you post, you reduce the “raw material” fraudsters can use to create deepfakes of you or your family.
4. Learn how deepfakes actually work: Deepfakes are one of the fastest-evolving AI threats. They can be generated from scratch or manipulated from existing footage to mimic real people. Common detection clues include:
- Visual inconsistencies such as odd lighting, mismatched skin tones, or unnatural blinking.
- Lip movements that do not perfectly match speech.
- Subtle distortions around the eyes, hairline, or background.
Advanced models like GANs and diffusion systems can leave technical artifacts in pixels, but everyday users are more likely to detect context issues than pixel-level anomalies.
Distribution patterns also matter. Malicious deepfakes often spread via newly created accounts, bot networks, or coordinated troll activity. If a sensational clip appears out of nowhere and is amplified by anonymous accounts, skepticism is rational.
5. Stay informed via trusted sources: The tech moves fast, and so do the threats. Safaricom suggests following specialized security platforms like Stay Safe Online, Dark Reading, and Krebs on Security to keep up with the latest emerging AI scams and the tools available to combat them.
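The checks in step 1 (https, subtle URL misspellings) can even be automated in a few lines. Here is a minimal sketch; the allow-list of trusted domains is a hypothetical example, and real lookalike detection handles homoglyphs and many other tricks, not just edit distance:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of domains you actually transact with.
TRUSTED_DOMAINS = {"safaricom.co.ke", "m-pesa.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def url_warnings(url: str) -> list[str]:
    """Flag non-https links and near-misses of trusted domains."""
    warnings = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        warnings.append("not served over https")
    host = (parsed.hostname or "").lower()
    for good in TRUSTED_DOMAINS:
        if host != good and edit_distance(host, good) <= 2:
            warnings.append(f"looks like a misspelling of {good}")
    return warnings

print(url_warnings("http://safarlcom.co.ke/login"))
```

Both warnings fire on that example: the link is plain http, and the host is one character away from the genuine domain.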
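The MFA codes recommended in step 2 are not magic: most authenticator apps generate time-based one-time passwords (TOTP) per RFC 6238, deriving a short code from a shared secret and the current time. A minimal sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = struct.pack(">Q", unix_time // step)  # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and never travels with your password, a phished password alone is not enough to log in, which is exactly why MFA blunts AI-supercharged phishing.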
As Bernard Wachira puts it, AI is a “double-edged sword.” The same tools used to protect us are being used by bad actors to refine their attacks. By maintaining a heightened sense of vigilance and following these five key steps, you can continue to enjoy the benefits of a faster, smarter internet without becoming the “weakest link” in the chain.