
While the United Kingdom is launching unprecedented regulatory action against Elon Musk’s AI chatbot Grok over non-consensual deepfakes, in the United States Defense Secretary Pete Hegseth announced that the same technology will be integrated into the Pentagon’s networks alongside Google’s generative AI systems, marking a significant expansion of military AI capabilities. The stark contrast between international alarm and U.S. military enthusiasm underscores a fundamental divide in how nations approach AI governance.
Hegseth stated that Grok will go live within the Defense Department this month and pledged to make “all appropriate data” from military IT systems available for “AI exploitation,” including information from classified intelligence databases. The defense secretary emphasized the urgency of military AI acceleration, declaring that the Pentagon needs “innovation to emerge from all sources and develop with speed and intent.”
“Very soon we will have the world’s leading AI models on every unclassified and classified network throughout our department,” Hegseth said, framing the initiative as central to his vision for an “AI-enabled fighting force.” He called for military AI systems that operate “without ideological constraints that restrict lawful military applications,” asserting that the Pentagon’s AI will not be “woke”—a direct reference to Musk’s positioning of Grok as an alternative to competitors like Google’s Gemini and OpenAI’s ChatGPT.
Global Regulatory Backlash
The chatbot, embedded within Musk’s X platform, has been weaponized to generate highly sexualized deepfake images of individuals—including minors—without consent, igniting outrage across multiple continents. Media regulator Ofcom launched a formal investigation into X, describing the deepfake allegations as “profoundly alarming.” UK Prime Minister Keir Starmer called the generated images “repugnant” and “illegal,” while Technology Secretary Liz Kendall has authorized Ofcom to consider all enforcement mechanisms available under the Online Safety Act—including an effective ban of X from UK networks.
“Platforms must safeguard individuals in the UK from content that is illegal within the UK,” Ofcom stated, emphasizing that the investigation will be treated as “a matter of the highest priority.” Should X fail to comply, Ofcom possesses the authority to impose fines up to 10 percent of the company’s global revenue or £18 million, whichever is higher, or seek court orders compelling internet service providers to block access entirely.
Other nations have moved more swiftly. Malaysia and Indonesia both restricted access to Grok over concerns about non-consensual explicit imagery. In response to international pressure, Grok limited image generation and editing capabilities to paying subscribers only—a measure Starmer dismissed as insufficient, stating it was an “insult to victims.”
Contrasting Approaches to Military AI Adoption
The United States government’s aggressive push to integrate Grok into classified Pentagon networks contrasts sharply with prior caution. While the Biden administration encouraged national security agencies to enhance their use of advanced AI systems, it simultaneously implemented restrictions prohibiting applications that could infringe on constitutional civil rights or automate nuclear weapons deployment. It remains unclear whether these safeguards remain in effect.
The defense secretary has indicated willingness to disregard AI models that fail to support military operations, suggesting he will prioritize operational capability over other considerations. This stance raises questions about how concerns regarding Grok’s deepfake vulnerabilities might be addressed within military contexts.
A Troubling Divergence
The simultaneous advancement of Grok within U.S. military infrastructure and escalating international regulatory scrutiny creates a striking tension. While Ofcom investigates whether X has violated its legal obligations to protect UK users from illegal content, the Pentagon is moving to expand access to the same technology across classified military networks. As investigations proceed and international bodies weigh enforcement options, the Pentagon’s expansion of Grok access presents a case study in divergent approaches to AI governance—one prioritizing rapid military integration, the other focused on protecting citizens from the misuse of generative technology.



