EU votes to kill Grok’s ability to bikini-fy or nudify real people

The European Parliament just found the perfect Trojan horse to justify a massive overreach into AI regulation, and they are riding it straight through the gates. Their weapon of choice? Elon Musk’s Grok.

Let me be perfectly clear: I have zero sympathy for the degenerates using AI to generate non-consensual explicit images. I tracked this disturbing behavior extensively back in May 2025 for Android Kenya, calling out the massive backlash against users weaponizing Grok to digitally undress women on X. It was vile. But here is the unpopular truth that no one wants to admit: the EU’s newly proposed ban on so-called “nudifier” apps is a heavy-handed, nanny-state reaction that punishes the canvas instead of the criminal.

On Wednesday, EU lawmakers in the Internal Market and Civil Liberties committees voted in a staggering 101–9 landslide to amend the Artificial Intelligence Act. Their target? Outright banning AI systems capable of generating explicit images of real people without structural, un-bypassable safeguards.

Killing the tool, sparing the user

This is a radical departure from traditional tech law. The EU is essentially abandoning the pursuit of the actual perpetrators—the users typing the prompts—because it’s too hard to track and nail them down. Instead, they are taking the lazy route: threatening to financially cripple platforms with fines of up to 7% of their global annual revenue if their models aren’t sufficiently sanitized.

Ban on nudifier apps

In their position, the MEPs want to introduce a new ban on so-called “nudifier” systems that use AI to create or manipulate images that are sexually explicit or intimate and resemble an identifiable real person without that person’s consent.

The ban would not apply to AI systems with effective safety measures preventing users from creating such images.

Think about the precedent this sets. If someone uses Photoshop to create a fake, explicit image, we don’t demand that Adobe remove the clone stamp tool or threaten them with bankruptcy. Yet, when an AI model does it, the mathematical model itself is put on trial.

Musk’s original approach, which was universally dragged by the media, actually held a grim logic. By putting Grok’s uncensored capabilities behind a strict paywall, xAI forced users to attach their credit cards, their identities, and their legal liability to their outputs. Musk was willing to hand the perpetrators over to the law. But the EU doesn’t want accountability; they want a sterilized internet.

Corporate flexibility for me, censorship for thee

What makes this amendment particularly cynical is the rest of the digital omnibus package it’s bundled with. While lawmakers are busy virtue-signaling over Grok, they quietly watered down the rules for corporate AI.

The very same committees agreed to push back the compliance deadlines for “high-risk” AI systems used in critical infrastructure and law enforcement to December 2027 and August 2028. They also gave providers until November 2026 to figure out watermarking. So, if an enterprise AI discriminates against you for a job, they get a multi-year grace period. But if an open-weight or uncensored chatbot can be tricked into making a spicy image, it’s banned immediately.

Civil liberties committee member Michael McNamara claimed this ban is “something that our citizens expect.” But do citizens actually expect lawmakers to lobotomize our most advanced open tools just because bad actors exist?

The full Parliament is expected to vote on this on March 26. If it passes, as it likely will, expect Grok to become just another boring, overly-sanitized chatbot terrified of its own shadow. The EU will cheer, but the era of truly uncensored AI is being regulated into the grave.

Hillary Keverenge

Making tech news helpful, and sometimes a little heated. Got any tips or suggestions? Send them to hillary@tech-ish.com.
