Grok generated 3 million images in 11 days
The facts of the French case against X are extraordinary. In January 2026, X’s AI chatbot Grok generated an estimated 3 million sexualised images in just 11 days, including approximately 23,000 that appeared to depict children, according to the Centre for Countering Digital Hate. Simple text prompts like ‘put her in a bikini’ or ‘remove her clothes’ produced realistic AI-generated images targeting celebrities, ordinary users, and minors.
French police raided X’s Paris offices in February 2026. In April, Musk and former X CEO Linda Yaccarino were summoned for voluntary interviews by prosecutors investigating alleged complicity in possessing and spreading child sexual abuse material, spreading sexually explicit deepfakes, denial of crimes against humanity, and manipulation of automated data processing systems.
The most damaging allegation is not negligence but intent. French prosecutors suspect Musk deliberately encouraged the deepfakes controversy to inflate the value of X and xAI ahead of a planned June 2026 stock market listing. Average daily Grok app downloads soared 72% during the controversy. The Paris prosecutor’s office has alerted both the US Department of Justice and the SEC.
Europe is enforcing while America retreats
The transatlantic split is now structural. The EU has ordered the preservation of all documents relating to Grok until the end of 2026 and could fine X up to 6% of annual global revenue. That follows a €120 million fine imposed in December 2025 for DSA transparency violations. Britain’s Information Commissioner has launched a formal investigation into Grok over personal data processing and harmful sexualised images.
The US has gone the other direction. The DOJ declined to facilitate French investigations, accusing France of using its justice system to regulate free expression contrary to the First Amendment. X’s Global Government Affairs team called the raid an ‘abusive act of law enforcement theater designed to achieve illegitimate political objectives.’
Tech policy analyst Mark Scott frames this clearly: ‘Where some Republicans and their corporate allies see attempts to censor American voices online, Europeans view efforts to boost accountability for some of the world’s largest companies.’
New Zealand has no framework and the evidence is already alarming
New Zealand has no Digital Services Act, no Online Safety Act, and no mandatory platform accountability regime. The Classification Office’s January 2026 survey of 1,000 adults found 66% have seen extreme or illegal content at some point, with 47% encountering it unintentionally in social media feeds. Of those exposed in the past year, 27% reported harm. Only 7% reported to Netsafe and 1% to Police. A striking 78% believe their exposure will increase.
University of Canterbury law senior lecturer Cassandra Mudgway and Victoria University AI senior lecturer Andrew Lensen wrote in January 2026: ‘Criminalisation holds individuals accountable after harm has already occurred. It does not hold companies accountable for designing and deploying the AI tools that produce these images in the first place.’ They describe New Zealand’s approach as reflecting ‘a broader political preference for light-touch AI regulation that assumes technological development will be accompanied by adequate self-restraint and good-faith governance. Clearly, this isn’t working.’
Why this is a board issue, not a PR issue
The practical risks for New Zealand businesses are not hypothetical. Any company running advertising on X carries brand adjacency risk to content that is now the subject of criminal prosecution. Any company using Grok-powered tools has exposure to the same liability questions European regulators are pursuing. Any company with EU customers or employees is already subject to the DSA’s requirements, regardless of where it is headquartered.
The French allegation that platform governance failures can be instruments of market manipulation is a new category of risk entirely. If proven, it means audit committees and institutional investors will need to price platform conduct into valuations.
New Zealand’s light-touch approach was defensible when platforms were neutral distribution channels. They are no longer neutral channels; they are AI-powered content generators with criminal liability exposure across multiple jurisdictions. Boards that treat this as a comms problem rather than a governance problem are betting that domestic regulation will remain permanently behind. Given that 78% of New Zealanders expect their exposure to harmful content to increase, that bet looks increasingly poor.
Sources
- Prosecutors suspect Elon Musk encouraged deepfakes row to inflate X value (2026-03-28)
- Paris’ cybercrime unit searches X office, Musk summoned (2026-04-20)
- French prosecutors summon Elon Musk over alleged child abuse images (2026-04-20)
- Explainer: French search the latest clash between social media platform X and European authorities (2026-04-20)
- X hits back after France summons Musk, raids offices in deepfake probe (2026-04-20)
- The Paris Raid on X Shows How Far the US and Europe Have Drifted on Tech Rules (2026-02-05)
- Online Exposure: Experiences of Extreme or Illegal Content in Aotearoa (2026-01-27)