AI-Generated Fake Images and Data Protection: What the Grok Case Reveals

Catarina Santos

AI-generated fake images raise serious data protection and safeguarding concerns. Catarina Santos explains why UK GDPR must apply to AI tools.

Recent reports have raised serious concerns after the AI chatbot Grok was used to generate fake images of women and girls appearing undressed, without their consent. The incident has drawn criticism from UK ministers and reignited debate about how generative AI tools can be misused.

While the images were artificially generated, the harm caused was real. From a data protection perspective, this case highlights significant risks around unlawful processing, safeguarding failures, and loss of control over personal data.

Why This Matters Now

Generative AI tools are becoming widely available and easy to use. Grok, developed by xAI and integrated into the X platform, allows users to generate images and text through prompts.

Although these tools offer innovation, they also create new risks. When AI can generate realistic images of identifiable individuals, the potential for abuse increases sharply.

This case has attracted attention from UK ministers, including Liz Kendall, who described the images as deeply disturbing. Her comments reflect growing concern that existing safeguards are not keeping pace with AI development.

What Happened

The reports focus on the use of Grok to generate sexualised images of women and girls. In some cases, the individuals depicted were real people whose images had been altered or reimagined by the AI.

Grok can produce images based on text prompts. Where users reference real individuals, the tool may draw on existing online images or patterns learned during training.

Although the final output is synthetic, it still relates to identifiable individuals. That distinction is critical under data protection law.

Why This Is a Data Protection Issue

Under UK GDPR, personal data includes any information that relates to an identified or identifiable person. Images from which a person can be identified clearly fall within this definition.

In this case, the AI-generated images relate to real individuals. That means data protection law may apply to how the images are created, processed, stored, and shared.

Several UK GDPR principles are engaged, including:

• Lawfulness, fairness, and transparency
• Purpose limitation
• Data minimisation
• Integrity and confidentiality

Where images are sexualised, this may also involve special category data. Processing this type of data requires an even higher legal threshold.

Consent would be difficult to rely on here. The individuals affected did not agree to their data being used in this way. Other lawful bases are also unlikely to apply, particularly where the processing causes distress or harm.

What Organisations Using AI Should Be Doing

This case shows why AI governance cannot be an afterthought.

Organisations using generative AI should:

• Carry out Data Protection Impact Assessments (DPIAs) for AI systems that process personal data
• Restrict prompts and outputs that reference real individuals (a minimal illustrative check is sketched after this list)
• Implement strong content moderation and misuse controls
• Monitor outputs and user behaviour
• Provide clear reporting routes for harmful content
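
To illustrate the second point, the sketch below shows one way a pre-generation prompt check might work. It is a minimal Python example, not any vendor's actual moderation API: the term lists, function names, and structure are assumptions made for illustration, and a production system would rely on trained classifiers and named-entity recognition rather than keyword matching.

# Minimal illustrative pre-generation prompt check (hypothetical, not a vendor API).
# Blocks image prompts that combine sexualisation terms with references to
# identifiable people or minors, and returns a reason for the refusal.
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModerationResult:
    allowed: bool
    reason: Optional[str] = None

# Illustrative keyword patterns only; real systems would use trained classifiers.
PERSON_TERMS = re.compile(r"\b(her|him|@\w+|this (girl|woman|boy|man))\b", re.I)
MINOR_TERMS = re.compile(r"\b(child|girl|boy|teen|minor|\d{1,2}[- ]year[- ]old)\b", re.I)
SEXUALISED_TERMS = re.compile(r"\b(undress(ed|ing)?|nude|naked|topless)\b", re.I)

def check_prompt(prompt: str) -> ModerationResult:
    """Refuse prompts that sexualise identifiable people, and especially minors."""
    if not SEXUALISED_TERMS.search(prompt):
        return ModerationResult(True)
    if MINOR_TERMS.search(prompt):
        return ModerationResult(False, "sexualised content referencing a minor")
    if PERSON_TERMS.search(prompt):
        return ModerationResult(False, "sexualised content referencing an identifiable person")
    return ModerationResult(True)

if __name__ == "__main__":
    for prompt in ["a mountain landscape at dusk", "undress the girl in this photo"]:
        result = check_prompt(prompt)
        status = "allowed" if result.allowed else f"blocked ({result.reason})"
        print(f"{prompt!r} -> {status}")

In practice, refusals like this would also be logged and fed into misuse monitoring and reporting routes, which connects to the remaining items in the list above.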

Staff should also understand that misuse of AI can create reportable data breaches. Our Data Protection Training supports teams in managing these risks.

Our View

This commentary comes from our Head Data Protection Consultant, Catarina Santos.

Like many people working in data protection, I see the benefits of modern digital tools every day. When used properly, they can improve services, widen access, and support innovation. However, recent revelations about the Grok AI image tool show what happens when powerful technology is released without proper safeguards, especially when children are the ones paying the price.

The statement from the Head of Hotline, Ngaire Alexander, is deeply troubling. Analysts have confirmed the existence of criminal imagery involving girls aged between 11 and 13, reportedly created using the Grok image tool and shared on dark web forums. While some of the initial images may be classified as Category C under UK law, the most alarming issue is how they are being used as a starting point to create far more extreme Category A content using other tools.

As Alexander rightly said, “the harms are rippling out”. That phrase matters, because this is not a single failure or a contained incident. It is a chain of harm.

From a UK GDPR perspective, children’s personal data requires special care and protection. This includes images, likenesses, and anything that allows a child to be identified or realistically represented.

Once an image exists, even a fake one, it can be copied, altered, escalated, and reused. All of this can happen completely outside the control of the child or their family.

That is exactly what we are seeing here. One tool produces a sexualised image. Another tool turns it into something far more extreme. The original system may not host the final content, but that does not remove responsibility. UK GDPR expects organisations to think ahead. Where risks are obvious, particularly risks to children, organisations are expected to anticipate misuse. When those risks are ignored, that is not neutral. It is negligent.

Safeguarding cannot be an afterthought. This case highlights a recurring problem. Safeguards are often added only after harm has already occurred, rather than being built into products from the start.

Children do not get a second chance at privacy. Once an image is created and shared, the damage is permanent. The emotional impact, fear, shame, and long-term consequences do not disappear because an image was generated rather than photographed.

From a safeguarding perspective, allowing a product to be released to the public when it can be used to create sexualised images of children is simply unacceptable.

As Alexander said clearly, “There is no excuse for releasing products to the global public which can be used to abuse and hurt people, especially children.” This is not anti-innovation. It is common sense.

Tools like generative AI are not automatically harmful. Many are impressive and, in the right hands, genuinely useful. However, capability without control is dangerous. Saying a tool can be used for good does not excuse weak age protections, ineffective safeguards, or ignoring known risks to children.

We would never accept this approach in education, healthcare, or social care. Digital products should not be treated differently.

Speaking as a data protection consultant, I find this deeply concerning. Not because technology exists, but because basic principles of safeguarding and UK GDPR appear to have been pushed aside.

Children should not be used as test cases for innovation. They should not be collateral damage. They should never be expected to carry lifelong consequences for someone else’s product decisions.

If a system cannot be confidently released without enabling harm to children, then it should not be released at all. This is not a radical position. It is the bare minimum.

FAQs

Does UK GDPR apply to AI-generated images?

Yes. If an image relates to an identifiable individual, it can be personal data, even if it is artificially generated.

Is consent required to use images in AI training?

In many cases, yes, particularly where images are sensitive or involve children.

What should organisations do if AI generates harmful content?

They should act immediately, assess whether a data breach has occurred, and report to the ICO if required.

Contact Us

If your organisation uses AI or plans to deploy generative tools, we can help you assess risk and stay compliant. Our Data Protection Support, GDPR Audits, and Training services make AI governance practical and manageable. Contact us today.

Source

The Guardian, report on Grok AI generating fake images and the UK government response.