Snapchat’s Generative AI Features: A Data Protection Perspective

Written by Data Protection People

Snapchat’s new generative AI features raise important data protection concerns. We explain what this means for user privacy, children’s data, and UK GDPR compliance.


Snapchat has introduced new generative AI features designed to enhance creativity across the platform. These tools allow users to generate images and creative content using AI-powered prompts.

While these features may appear playful, they raise important questions about how personal data is processed, reused, and controlled. This is particularly relevant given Snapchat’s large user base of children and young people.

Why This Matters Now

Generative AI is moving quickly from specialist tools into everyday platforms. When AI features are built into widely used social media apps, the scale and impact of data processing increase immediately.

These tools rely on user inputs such as text prompts, images, and interaction data. In many cases, this information qualifies as personal data under UK GDPR.

From a data protection perspective, this is not a minor feature update. It represents a fundamental change in how data is processed.

What Snapchat’s Generative AI Features Do

According to guidance from Snapchat, its generative AI features allow users to create AI-generated images and creative content using prompts.

To operate, these tools may process:

• Text prompts entered by users
• Images and visual content
• Interaction data linked to how features are used

Although the outputs are AI-generated, the inputs often come directly from users. This means personal data may be involved at multiple stages.

Automatic AI Settings After App Updates

A key concern with Snapchat’s generative AI features is how the relevant setting is enabled.

When users update the Snapchat app, the setting that allows Snap to use public content for generative AI purposes is switched on by default. As a result, users may unknowingly allow their images, videos, audio, and text to be used to develop and improve AI systems.

In practice, many users will only discover this setting if they actively search for it within their privacy controls. There is no guarantee that users fully understand what has been enabled or the consequences of leaving the setting on.

Screenshot: Snapchat’s Generative AI settings showing the option to allow use of public content, which is enabled automatically after app updates.

From a data protection perspective, this raises serious questions about fairness and transparency. UK GDPR requires organisations to be clear and upfront about how personal data is used, particularly where processing is optional or goes beyond what users would reasonably expect.

Automatically enabling AI-related data use places the burden on users to opt out, rather than asking them to opt in. This approach is difficult to justify, especially where children and young people are involved.

AI-Generated Images and Advertising Use

There is also a wider issue around how AI-generated content may be used beyond improving AI systems.

Snapchat’s terms indicate that AI-generated images informed by user data, including facial features, may be used in advertising or promotional contexts. While this does not involve publishing a user’s original photo, it may involve AI-generated images based on their likeness.

Many users would reasonably expect their content to remain within the app. Far fewer would expect their appearance to inform advertising content.

Under UK GDPR, processing must be fair and align with user expectations. Where use feels surprising or intrusive, transparency becomes critical.

Children’s Data and Higher Risk

Snapchat is widely used by children and teenagers. UK GDPR gives children’s personal data additional protection.

Generative AI tools that process images or creative inputs from children should be treated as high risk. Potential issues include loss of control over images, inappropriate outputs, and reuse of data in unexpected ways.

Where these risks exist, organisations are expected to carry out a Data Protection Impact Assessment before deployment.

Transparency, Control, and User Understanding

One of the biggest challenges with generative AI is explainability. Users may not understand what happens to their data once it is entered into an AI tool.

Organisations should clearly explain whether content is used to train AI models, how long it is retained, whether it is shared, and how users can exercise their rights.

Frequently Asked Questions

Is generative AI on Snapchat covered by UK GDPR?

Yes. If generative AI features process personal data, UK GDPR applies, even when outputs are AI-generated.

Does Snapchat need a DPIA for these features?

Yes, in many cases. Where AI processing is likely to pose high risk, particularly to children, a DPIA is expected.

Can AI-generated images still be personal data?

Yes. If an image relates to or is based on an identifiable person, it may still qualify as personal data.

What should parents be aware of?

Parents should understand how AI features work, what data is used, and what controls are available for children.

Our View

At Data Protection People, we see generative AI as a powerful tool, but power comes with responsibility.

When AI is introduced into platforms used heavily by children, privacy by design, transparency, and genuine user choice must come first.

Innovation works best when people trust how their data is handled.

Sources

Snapchat Help Centre, “Generative AI on Snapchat”

https://help.snapchat.com/hc/en-gb/articles/25494876770580-Generative-AI-on-Snapchat

404 Media, reporting on Snapchat’s use of AI-generated images in advertising contexts

https://www.404media.co/snapchat-reserves-the-right-to-use-ai-generated-images-of-your-face-in-ads/