Catarina Santos
Data Protection Consultant and Consultant Manager
Cate is a Data Protection Consultant and Head of Consultancy at Data Protection People. Over the past three years, she has been working closely with consultants to help them grow and reach their full potential, while supporting clients by making data protection simple, engaging, and easy to put into practice.
She is also one of the hosts of Data Protection Made Easy, the Data Protection People podcast, where she shares practical insights in a clear and relatable way.
Get to Know Catarina
Cate is a Senior Data Protection Consultant and Head of Consultancy at Data Protection People. She joined the team in 2023 on the Support Desk and quickly moved into consultancy, working her way up to a senior role.
She works closely with consultants, supporting them to grow in confidence and develop into strong professionals. With clients, Cate understands that data protection isn’t always the easiest topic, so she focuses on making it simple, practical and easy to relate to.
Most of her work is with housing associations and charities, where she has built real expertise. She’s also particularly interested in how children’s data is used, especially with the rise of AI and social media.
Originally from Lisbon, Cate swapped sunshine for life in Leeds (and a good Sunday roast - a fair trade, she says). Outside of work, she loves travelling, trying new foods (especially sushi, where budgets don’t apply), and meeting new people. A former swimmer for Sporting Clube de Portugal, she’s now back in the pool and rarely says no to anything new or fun.
Experience
Cate’s path into data protection started with a strong legal background and a willingness to try something new. She completed a Bachelor’s degree in Law and a Master’s in Law and Technology in Portugal, before beginning her career as a trainee lawyer in 2019. During that time, she worked across a range of areas including immigration, employment, civil, corporate and property law.
When COVID hit, Cate decided she wanted to move into something more fast-paced and forward-looking. That’s what led her to data protection.
She joined Data Protection People in 2023 as part of the Support Desk team. This gave her a great foundation, helping clients with everyday queries and getting a real feel for the challenges organisations face. From there, she moved into a junior consultancy role and has since progressed to Senior Consultant.
Today, Cate works with a wide range of clients, with a particular focus on housing associations and charities. She understands the pressures these organisations are under and supports them in a way that is practical and realistic, not overcomplicated.
She also has a strong interest in how children’s personal data is used, especially as things continue to change with AI and social media. It’s an area she cares about and keeps a close eye on.
Cate is known for being detail-focused and holding herself to high standards. Whether she’s working with a small organisation or a large one, her approach stays the same: clear advice, careful work, and making sure things are done properly.
Data protection doesn’t need to be complicated - it just needs to make sense. If you support your team, stay curious, and focus on doing things properly, the rest follows.
Catarina's Posts
Weaponised SARs
What Are Weaponised SARs? Key Insights from 180 Data Protection Professionals
On Friday 10 April, the Data Protection Made Easy podcast hosted a live discussion on one of the fastest-growing challenges in information rights: weaponised Subject Access Requests, often referred to as weaponised SARs.
Led by Catarina Santos and Caine Glancy, the session attracted 180 live participants, with a highly active chat and more questions than could be answered in a single session.
This signals a clear shift. Weaponised SARs are no longer a niche issue. They are a growing operational challenge affecting organisations across housing, healthcare, local authorities and the private sector.
Subject Access Requests are increasingly being used strategically. Rather than purely supporting transparency, they are now being submitted alongside complaints, grievances, legal disputes and disrepair claims.
This does not remove the legal right of access. It does mean organisations must work harder to define scope, manage intent and respond in a way that is both compliant and proportionate.
If your organisation is already dealing with increasingly complex requests, our SAR Support Service helps teams manage Subject Access Requests efficiently and with confidence. Many organisations also benefit from wider governance support through our Data Protection Support Service and Outsourced DPO service.
Why are weaponised SARs rising?
During the session, Catarina highlighted that this trend is becoming more frequent and more disruptive.
As she explained, “Unfortunately, it’s becoming more regular and is definitely something that organisations are seeing on a very regular basis.”
The core issue is a tension between legal rights and strategic use. Individuals have a right to access their personal data, but some requests are clearly being used to apply pressure or gain leverage.
Caine reinforced this by highlighting a common pattern seen across organisations: “They only ask if they think there is a smoking gun.”
This reflects a wider shift. Many SARs are no longer exploratory; they are targeted, often driven by disputes or a belief that key evidence exists within organisational records.
The role of AI in weaponised Subject Access Requests
Artificial intelligence is accelerating this trend.
Catarina explained how AI tools are shaping behaviour: “They are relying a lot on ChatGPT and other AI platforms… SARs are something that you should always submit.”
Caine added: “Practically everybody within the meeting today has probably received a request that looks like it’s come from an AI platform.”
This creates a new challenge. Requests now often appear legally confident, broad in scope and poorly understood by the requester.
As a result, organisations are dealing not only with the initial request, but also repeated AI-generated follow-ups and challenges.
A member of the community commented, “We are seeing data subjects use AI more and more to contradict our responses. It’s becoming a real issue.”
This is one reason why having a practical SAR process matters more than ever. A clear workflow, strong template letters and the right internal escalation points can reduce risk and improve consistency. For organisations that need extra support, our SAR Support Service is designed to help with scoping, review, redaction and response management.
Real challenges shared by the data protection community
The live chat reinforced just how widespread this issue has become.
A member of the community commented, “Weaponised suits our situation. Customers will send us a SAR to delay actions or find us in the wrong.”
Another added, “Most of our requests ‘scream’ ChatGPT now.”
Another highlighted the operational frustration, commenting, “We spend so much time responding, just for it to be put back through AI and asked again in a different way.”
A recurring theme was expectation versus reality. Many requesters expect full disclosure of documents, while organisations must apply the law correctly and proportionately.
Solicitors, tone and pressure tactics in SARs
Another key discussion point was the role of solicitors and representatives.
Catarina noted that tone is often used strategically: “The tone is definitely to create fear among the people managing these requests.”
This is often combined with misunderstandings about the scope of a SAR.
A member of the community commented, “The lawyers advising them are oblivious of the fact that documents do not form part of a DSAR response.”
Another added, “Just because they ask for something, data protection still applies.”
This highlights a critical point for organisations. A SAR is a right to personal data, not a blanket right to all documents, emails or internal records.
That distinction sits at the heart of good SAR handling. It also links closely with broader compliance and governance practice, which is where services such as our Data Protection Support Service and Outsourced DPO service can help organisations build stronger foundations.
Why clarifying a SAR request is essential
One of the most important takeaways from the session was the need to clarify scope early.
Catarina advised: “Don’t be scared to clarify the request.”
Broad requests such as “all my personal data” can quickly become disproportionate if not narrowed.
She also reinforced a key legal distinction: “The right is to personal data, nothing more, nothing less.”
Clarification helps reduce unnecessary workload, focus on relevant data, improve response accuracy and manage expectations early.
A member of the community commented, “‘Provide everything you have on me’ is exhausting.”
The growing pressure on data protection teams
The discussion also highlighted the strain on internal teams.
Caine explained: “A lot of people do SARs individually… that might not be feasible anymore.”
This was strongly reflected in the chat.
A member of the community commented, “I’m just one person.”
Another added, “I have a team of 11 and it’s still not enough.”
Another said, “Many of ours are overdue as we are overwhelmed.”
This demonstrates a clear gap between legal expectations and operational reality.
Where internal resource is stretched, it often makes sense to bring in specialist support for complex or high-volume cases. Our SAR Support Service is built for exactly this, helping organisations reduce pressure on internal teams while maintaining a defensible and structured response process.
ICO guidance, challenges and uncertainty
The session also explored frustrations around regulatory guidance.
Caine said: “What would really help is more detailed guidance.”
Catarina added: “It’s too broad… it’s hard to define what it means in practice.”
The community echoed this.
A member of the community commented, “I wish the ICO would issue clear guidance from experiences like this.”
Another said, “It’s hard to know whether the ICO has received a complaint or not.”
This lack of clarity leaves organisations making difficult judgement calls without consistent, practical support.
How organisations should respond to weaponised SARs
While there is no single solution, several practical steps emerged from the discussion.
Organisations should:
- build a practical SAR process that reflects real workflows
- use clear templates for acknowledgements, clarifications and responses
- clarify scope early to avoid unnecessary work
- document decisions and search methodologies
- apply the law confidently and proportionately
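To make those steps concrete, here is a minimal, hypothetical sketch in Python. The class name, fields and the simplified 30-day deadline are purely illustrative (UK GDPR actually allows one calendar month, extendable for complex requests), and this is not a description of any real DPP tooling:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class SARCase:
    received: date
    scope: str = "all personal data"               # broad until clarified
    clarified: bool = False
    decisions: list = field(default_factory=list)  # audit trail of choices made

    def deadline(self) -> date:
        # Simplified to 30 days for illustration; the real rule is
        # one calendar month from receipt, extendable in some cases.
        return self.received + timedelta(days=30)

    def record(self, decision: str) -> None:
        # Document scoping decisions and search methodology as you go,
        # so the response process remains defensible.
        self.decisions.append(decision)

case = SARCase(received=date(2025, 4, 10))
case.record("Clarification sent: narrowed to complaint correspondence 2024-2025")
case.scope = "complaint correspondence 2024-2025"
case.clarified = True
```

The point of the sketch is simply that each request carries its own deadline, scope and documented decision history, rather than living in someone's inbox.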
Caine summarised this well: “You’ve got to not be afraid to push back when things are getting too far.”
In practice, that often means having the right mix of process, confidence and support. Our SAR Support Service helps organisations manage difficult requests from initial scoping through to final response, while our Data Protection Support Service and Outsourced DPO service support wider compliance, governance and decision-making.
Why this conversation is not over: part two is coming soon
With 180 attendees and a highly engaged discussion, it became clear that one session was not enough.
Several topics require deeper exploration, including repeat SAR requests, metadata requests, grievance-led SARs, solicitor authority, search methodology and proportionality.
As Caine confirmed: “We’ll be picking apart some of these requests and taking it into a second session.”
That feels exactly right. Weaponised SARs are not a passing frustration. They reflect a broader shift in how data rights are being used, challenged and operationalised.
For anyone working in data protection, compliance, information governance or complaints handling, this is a conversation that is only becoming more important.
Need support with complex or weaponised SARs?
Weaponised SARs are not a temporary trend. They reflect a broader shift in how data rights are being used.
If your organisation is experiencing increasing SAR volumes, more complex or strategic requests, or growing pressure on internal teams, now is the time to review your approach.
Explore our SAR Support Service to see how we help organisations manage Subject Access Requests efficiently, accurately and with confidence.
You may also find it useful to explore our wider Data Protection Support Service and Outsourced DPO service for ongoing compliance support.
Frequently asked questions about weaponised SARs
What is a weaponised SAR?
A weaponised SAR is a Subject Access Request that appears to be used strategically, often alongside a complaint, grievance or dispute, rather than simply to understand how personal data is being processed.
Are weaponised SARs still valid?
Yes. A requester may still have a valid right of access even where the wider context is contentious. Organisations still need to assess the request properly, define scope and respond lawfully.
Can AI increase the number of SARs?
Yes. AI tools can make it easier for people to generate broad, legally worded requests and follow-up challenges, which can increase both the volume and complexity of SAR handling.
Do SARs give people the right to all documents?
No. A SAR is a right to personal data, not a blanket right to every document, email or report in which a person may appear.
Should organisations clarify broad SARs?
Yes. Clarifying a broad request can help narrow scope, reduce unnecessary work and ensure the response is more accurate and proportionate.
How can organisations manage complex SARs more effectively?
Organisations should use a practical SAR procedure, clear templates, documented search methods, confident decision-making and specialist support where internal capacity is limited.
The Growing Privacy Dilemma
End-to-end encryption, Instagram, and the growing privacy dilemma
Meta has announced that it will remove end-to-end encryption from Instagram direct messages, raising questions not just about privacy, but about safety, regulation, and where the balance should sit.
What is end-to-end encryption?
End-to-end encryption is a way of protecting communications so that only the sender and the recipient can read the content. In simple terms, messages are “locked” on the sender’s device and can only be “unlocked” on the recipient’s device. No one else – not even the platform providing the service – can access the content of those messages (in theory).
This is widely considered one of the strongest forms of privacy protection available online. It reduces the risk of data breaches, unauthorised access, and surveillance.
However, it also means that platforms themselves cannot monitor what is being shared.
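The “lock and unlock” idea can be illustrated with a short, deliberately toy Python sketch. This is not real cryptography (production messengers use vetted protocols such as the Signal protocol, with key exchange between devices); it only demonstrates the property that content is readable solely by holders of the shared key, while anyone relaying the ciphertext sees noise:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Expand the key into n pseudo-random bytes (toy construction,
    # NOT a secure cipher - for illustration only).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def lock(key: bytes, message: bytes) -> bytes:
    # "Lock" the message on the sender's device by XOR with the keystream.
    return bytes(m ^ k for m, k in zip(message, keystream(key, len(message))))

def unlock(key: bytes, ciphertext: bytes) -> bytes:
    # XOR is its own inverse, so the same key "unlocks" the message.
    return lock(key, ciphertext)

# A key known only to the two devices, never to the platform.
shared_key = secrets.token_bytes(32)
ciphertext = lock(shared_key, b"see you at 6")
assert unlock(shared_key, ciphertext) == b"see you at 6"
```

The platform in the middle stores and forwards `ciphertext`, but without `shared_key` it cannot recover the message, which is exactly why it also cannot moderate the content.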
What is happening with Instagram?
Meta has confirmed that it will discontinue end-to-end encrypted messages on Instagram from May 2026. The feature, which was introduced relatively recently, allowed users to send messages that even Meta could not read. Its removal means that Instagram messages will no longer have this level of protection.
The company has suggested that the feature had low usage.
At the same time, there is a wider context. Regulators and policymakers – particularly in the UK (notably through the Online Safety Act), the US and Europe – have been placing increasing pressure on platforms to improve child safety and prevent harm online.
End-to-end encryption has become a focal point in that debate, because it limits the ability of platforms to detect illegal or harmful content.
TikTok’s contrasting approach
Interestingly, not all platforms are moving in the same direction. As Mark and I mentioned on the podcast a couple of weeks ago, TikTok has publicly stated that it does not intend to introduce end-to-end encryption for direct messages.
Its reasoning is clear: encryption of this kind can make it harder to detect harmful behaviour, including abuse and exploitation. Without visibility of message content, both platforms and law enforcement may struggle to investigate concerns. In other words, TikTok is allegedly prioritising safety and oversight over maximum privacy in messaging.
The dilemma: privacy vs protection
This brings us to the core issue, and one that organisations, regulators, and society more broadly are still grappling with. As discussed on the podcast, we are facing a genuine dilemma:
On one hand, end-to-end encryption is a powerful safeguard, supporting confidentiality and reducing the risk of hacking and misuse of data. On the other, it can limit the detection of harmful or illegal activity, create challenges for safeguarding children and vulnerable users (who, as Charlotte mentioned on the podcast, can be anyone depending on the context), and, most significantly, reduce the ability of platforms to intervene proactively.
In the UK, this tension is reflected in legislation such as the Online Safety Act, which places duties on platforms to protect users – particularly children – from harm, while also raising concerns about how that can be achieved without weakening encryption.
From a data protection standpoint, this issue sits at the intersection of several key principles:
- Confidentiality and security (protecting personal data)
- Accountability (ensuring organisations can manage risks)
- Protection of vulnerable individuals, particularly children
There is, however, one thing that is clear: there is no one-size-fits-all answer.
Strong encryption aligns closely with UK GDPR principles around security and integrity, but organisations also have obligations to mitigate risks and prevent harm, particularly where children are concerned.
The shift by Meta – and the contrasting stance from TikTok – highlights that there is no settled industry position.
End-to-end encryption is often framed as a purely technical feature, but it is more than that: Meta’s decision to remove it from Instagram, alongside TikTok’s refusal to adopt it, shows just how complex that balance has become. The challenge ahead is not simply whether to use encryption, but how to reconcile two equally important goals:
protecting people’s privacy, and protecting people from harm.
Important links:
https://www.bbc.co.uk/news/articles/cly2m5e5ke4o
https://mashable.com/article/instagram-meta-end-to-end-encryption
AI-Generated Fake Images and Data Protection: What the Grok Case Reveals
Recent reports have raised serious concerns after the AI chatbot Grok was used to generate fake images of women and girls appearing undressed, without their consent. The incident has drawn criticism from UK ministers and reignited debate about how generative AI tools can be misused.
While the images were artificially generated, the harm caused was real. From a data protection perspective, this case highlights significant risks around unlawful processing, safeguarding failures, and loss of control over personal data.
Why This Matters Now
Generative AI tools are becoming widely available and easy to use. Grok, developed by xAI and integrated into the X platform, allows users to generate images and text through prompts.
Although these tools offer innovation, they also create new risks. When AI can generate realistic images of identifiable individuals, the potential for abuse increases sharply.
This case has attracted attention from UK ministers, including Liz Kendall, who described the images as deeply disturbing. Her comments reflect growing concern that existing safeguards are not keeping pace with AI development.
What Happened
The reports focus on the use of Grok to generate sexualised images of women and girls. In some cases, the individuals depicted were real people whose images had been altered or reimagined by the AI.
Grok can produce images based on text prompts. Where users reference real individuals, the tool may draw on existing online images or patterns learned during training.
Although the final output is synthetic, it still relates to identifiable individuals. That distinction is critical under data protection law.
Why This Is a Data Protection Issue
Under UK GDPR, personal data includes any information that relates to an identified or identifiable person. Images clearly fall within this definition.
In this case, the AI-generated images relate to real individuals. That means data protection law may apply to how the images are created, processed, stored, and shared.
Several UK GDPR principles are engaged, including:
• Lawfulness, fairness, and transparency
• Purpose limitation
• Data minimisation
• Integrity and confidentiality
Where images are sexualised, this may also involve special category data. Processing this type of data requires an even higher legal threshold.
Consent would be difficult to rely on here. The individuals affected did not agree to their data being used in this way. Other lawful bases are also unlikely to apply, particularly where the processing causes distress or harm.
What Organisations Using AI Should Be Doing
This case shows why AI governance cannot be an afterthought.
Organisations using generative AI should:
• Carry out DPIAs for AI systems that process personal data
• Restrict prompts and outputs that reference real individuals
• Implement strong content moderation and misuse controls
• Monitor outputs and user behaviour
• Provide clear reporting routes for harmful content
Staff should also understand that misuse of AI can create reportable data breaches. Our Data Protection Training supports teams in managing these risks.
Our View
The view below is shared by our Head Data Protection Consultant, Catarina Santos.
Like many people working in data protection, I see the benefits of modern digital tools every day. When used properly, they can improve services, widen access, and support innovation. However, recent revelations about the Grok AI image tool show what happens when powerful technology is released without proper safeguards, especially when children are the ones paying the price.
The statement from the Head of Hotline, Ngaire Alexander, is deeply troubling. Analysts have confirmed the existence of criminal imagery involving girls aged between 11 and 13, reportedly created using the Grok image tool and shared on dark web forums. While some of the initial images may fall under Category C under UK law, the most alarming issue is how they are being used as a starting point to create far more extreme Category A content using other tools.
As Alexander rightly said, “the harms are rippling out”. That phrase matters, because this is not a single failure or a contained incident. It is a chain of harm.
From a UK GDPR perspective, children’s personal data requires special care and protection. This includes images, likenesses, and anything that allows a child to be identified or realistically represented.
Once an image exists, even a fake one, it can be copied, altered, escalated, and reused. All of this can happen completely outside the control of the child or their family.
That is exactly what we are seeing here. One tool produces a sexualised image. Another tool turns it into something far more extreme. The original system may not host the final content, but that does not remove responsibility. UK GDPR expects organisations to think ahead. Where risks are obvious, particularly risks to children, organisations are expected to anticipate misuse. When those risks are ignored, that is not neutral. It is negligent.
Safeguarding cannot be an afterthought. This case highlights a recurring problem. Safeguards are often added only after harm has already occurred, rather than being built into products from the start.
Children do not get a second chance at privacy. Once an image is created and shared, the damage is permanent. The emotional impact, fear, shame, and long-term consequences do not disappear because an image was generated rather than photographed.
From a safeguarding perspective, allowing a product to be released to the public when it can be used to create sexualised images of children is simply unacceptable.
As Alexander said clearly, “There is no excuse for releasing products to the global public which can be used to abuse and hurt people, especially children.” This is not anti-innovation. It is common sense.
Tools like generative AI are not automatically harmful. Many are impressive and, in the right hands, genuinely useful. However, capability without control is dangerous. Saying a tool can be used for good does not excuse weak age protections, ineffective safeguards, or ignoring known risks to children.
We would never accept this approach in education, healthcare, or social care. Digital products should not be treated differently.
Speaking as a data protection consultant, I find this deeply concerning. Not because technology exists, but because basic principles of safeguarding and UK GDPR appear to have been pushed aside.
Children should not be used as test cases for innovation. They should not be collateral damage. They should never be expected to carry lifelong consequences for someone else’s product decisions.
If a system cannot be confidently released without enabling harm to children, then it should not be released at all. This is not a radical position. It is the bare minimum.
FAQs
Does UK GDPR apply to AI-generated images?
Yes. If an image relates to an identifiable individual, it can be personal data, even if it is artificially generated.
Is consent required to use images in AI training?
In many cases, yes. Particularly where images are sensitive or involve children.
What should organisations do if AI generates harmful content?
They should act immediately, assess whether a data breach has occurred, and report to the ICO if required.
Contact Us
If your organisation uses AI or plans to deploy generative tools, we can help you assess risk and stay compliant. Our Data Protection Support, GDPR Audits, and Training services make AI governance practical and manageable. Contact us today.
Source
The Guardian, report on Grok AI generating fake images and the UK government response.
GDPR Radio – Digital Omnibus, Personal Data and SAR Reform
Digital Omnibus, Personal Data Changes and What They Mean for You
Episode 227 of the Data Protection Made Easy Podcast, hosted by experts at Data Protection People. This episode was broadcast live via Microsoft Teams in front of an audience of listeners.
What We Covered in This Session
A Catch Up from Caine and Catarina
The episode opens with a look at what the team have been working on. Catarina reflects on a very busy week supporting a major client project alongside her team. Caine shares updates on ongoing STAIRs sessions for social housing providers and hints at an in person STAIRs event coming soon.
Both hosts also discuss their guest appearance on another organisation’s podcast where they explored how users understand privacy information, how organisations communicate their obligations and why cross functional training is so important.
The Digital Omnibus Package Explained
The main focus of the episode is the European Commission’s Digital Omnibus package, announced on 19 November. The discussion highlights several of the most significant proposals, including:
1. A New Approach to Personal Data
The proposal introduces a major shift. Information would be classed as personal data only if the controller has means reasonably likely to identify the individual.
The team explore:
- how this could narrow the scope of personal data
- what this means for indirect identifiers and pseudonymised data
- how case law from Europe is already pushing in this direction
- how this might affect UK organisations if mirrored in future reforms
2. Changes to Data Breach Reporting
Catarina outlines proposals that:
- raise the threshold so only high risk breaches need regulator notification
- extend the deadline from 72 to 96 hours
Caine questions whether reducing low risk reporting could hide patterns of poor practice and the group debate what this means for real world compliance.
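The practical effect of moving from 72 to 96 hours is easy to sketch. The function below is purely illustrative: it simplifies the legal test of when an organisation becomes “aware” of a breach, and the parameter names are ours, not drawn from the proposal text:

```python
from datetime import datetime, timedelta

def notification_deadline(aware_at: datetime, proposed_rules: bool = False) -> datetime:
    # Current GDPR: notify the regulator within 72 hours of becoming
    # aware of a reportable breach. The Omnibus proposal would extend
    # this window to 96 hours.
    hours = 96 if proposed_rules else 72
    return aware_at + timedelta(hours=hours)

aware = datetime(2025, 11, 19, 9, 0)
current = notification_deadline(aware)                       # 72-hour window
proposed = notification_deadline(aware, proposed_rules=True)  # 96-hour window
```

For a breach discovered at 09:00 on Wednesday, the current rules require notification by 09:00 on Saturday; the proposal would push that to 09:00 on Sunday, a full extra day for triage and investigation.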
3. Reforms to Cookie Rules
The Digital Omnibus seeks to simplify cookie requirements by reducing reliance on consent for low risk purposes such as security and aggregated analytics. The team draw comparisons with the UK DUA Act and consider how consent fatigue has shaped this direction.
Insights from Guest Contributor David Appleyard
David shares two important observations:
1. SAR Purpose Tests
Under the new proposals, organisations may reject or charge for a SAR if the purpose is not to access personal data, for example in an employment dispute. This could be a significant change for many organisations that currently process large volumes of tactical or grievance based SARs.
2. High Risk AI Processing
David explains that the EU is pushing back deadlines for identifying high risk AI processing due to a lack of clear guidance, with expectations now set for no later than December 2027.
CNIL Research on Selling Personal Data
Caine introduces a study from the CNIL which found that 65 percent of surveyed French citizens would sell their personal data for between 1 and 100 euros. The hosts explore:
- why people undervalue their own data
- how advertising, profiling and AI training increase the true value
- the growing need for public awareness and transparent communication
Looking Ahead
The session closes with a reminder that the next podcast will explore data retention, followed by an update that the team are working on the new in house DPP studio.
About the Data Protection Made Easy Community
Our podcast community is one of the most active privacy networks in the UK with more than 150 regular live attendees and over 1,600 subscribers across all audio platforms. Joining the community gives you access to:
- free weekly live sessions with the chance to ask questions
- practical guidance from experienced consultants
- early access to slides and resources
- networking with other privacy and security professionals
- invites to in person events, workshops and sector focused discussions
- exclusive content only available to our community members
Attending live offers clear benefits. You can join the conversation, shape the discussion, raise real world challenges and take part in polls, chat and Q and A. Many listeners tell us they get far more value from attending live than listening back later.
We also have a strong line up of sessions taking us through to the end of the year, covering topics such as data retention, AI risk, international transfers, STAIRs, marketing compliance and more.
If you are not yet part of the Data Protection Made Easy community, you can join for free and get involved straight away.
Navigating the Digital Omnibus: A UK GDPR Briefing for Busy Data Teams
On 19 November 2025, the European Commission published its Digital Omnibus package. This set of proposals would update several major EU digital laws, including the GDPR, ePrivacy framework, AI Act, Data Act and Data Governance Act. The goal is to simplify compliance and support innovation while maintaining the fundamental rights and protections established in EU law.
For UK organisations with customers in the EU or that transfer EU personal data, these proposals are strategically important. Although the Omnibus is an EU initiative, it will shape expectations in the wider regulatory environment. It may also influence the UK’s own reforms, including the Data Use and Access Act 2025.
Key Elements of the Digital Omnibus
The Digital Omnibus contains two draft regulations. One amends the AI Act and the other makes cross-cutting updates across digital and data laws. The proposals focus on three core areas: data protection, cybersecurity and breach reporting, and artificial intelligence.
AI Act Adjustments
The Omnibus introduces several changes intended to reduce the early compliance burden on organisations developing or deploying high-risk AI systems.
- A one-year extension to some high-risk AI compliance deadlines.
- Expansion of SME-friendly regimes to larger mid-sized organisations.
- Removal of some obligations, such as AI literacy requirements, for certain providers.
For example, deadlines linked to training and validation obligations would shift to late 2027 rather than 2026. This gives businesses more time to meet new technical standards and reduces early compliance pressure for AI developers working within EU markets.
A Narrower Definition of Personal Data
One of the most significant proposals is a revised definition of personal data. Under the current GDPR, any information that could directly or indirectly identify an individual is treated as personal. This includes names, emails, IP addresses, device IDs and pseudonymous data.
The Omnibus moves to a controller-centred test. Data will only be personal if the organisation processing it has means that are reasonably likely to be used to identify a person.
In practice this means:
- Highly pseudonymised data or indirect identifiers may fall outside scope if the controller cannot realistically link them to a person.
- Direct identifiers or data that the organisation could reasonably use to single someone out will remain personal.
- Judging identifiability becomes relative to each controller’s realistic capabilities.
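To illustrate how relative the new test is, an identifiability assessment could be recorded dataset by dataset. The sketch below is purely illustrative: the class, field names and criteria are our own assumptions for documentation purposes, not anything defined in the proposal.

```python
from dataclasses import dataclass

@dataclass
class DatasetAssessment:
    """Hypothetical record of a controller-centred identifiability check."""
    name: str
    has_direct_identifiers: bool        # e.g. names, emails, phone numbers
    controller_holds_linking_key: bool  # can THIS controller re-link pseudonyms?
    reasonably_likely_means: bool       # other means reasonably likely to be used

    def is_personal_data(self) -> bool:
        # Under the proposed test, data is personal only if this specific
        # controller has means reasonably likely to be used to identify someone.
        return (self.has_direct_identifiers
                or self.controller_holds_linking_key
                or self.reasonably_likely_means)

crm = DatasetAssessment("CRM export", True, True, True)
telemetry = DatasetAssessment("Pseudonymised telemetry", False, False, False)

assert crm.is_personal_data()
assert not telemetry.is_personal_data()
```

The same telemetry dataset could be personal data in the hands of a controller that holds the linking key, which is exactly why the proposal puts a premium on documenting each controller's realistic capabilities.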
This approach aligns with recent case law, including SRB v Edenred. It may reduce compliance obligations for analytics and telemetry datasets, but introduces subjectivity. Organisations will need strong documentation to justify how they assess identifiability.
Special Category Data: Direct Versus Inferred
The Omnibus narrows what is considered special category data under Article 9. Only data that directly reveals sensitive characteristics, such as health, religion or political opinions, would fall under the enhanced protections.
Inferences or predictions about sensitive traits, such as deducing health conditions through profiling, would not automatically count as special category data.
The proposals also allow limited exceptions for processing special category data to train or operate AI systems and for biometric data processed on user devices under strict conditions.
Right of Access (DSAR) Reform
The Omnibus provides controllers with stronger grounds to refuse or charge for requests that are abusive or manifestly excessive. This aims to reduce the burden of DSARs used strategically in litigation or to disrupt operations.
Although “abusive” is not tightly defined, the approach mirrors changes already seen in the UK under the Data Use and Access Act. UK organisations will still need clear internal criteria to avoid rejecting legitimate requests.
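As one way of turning "clear internal criteria" into something operational, here is a minimal, hypothetical triage sketch. The thresholds and labels are invented for illustration and are not legal advice; note that it flags requests for review rather than auto-refusing them.

```python
def triage_dsar(request_count_90_days: int,
                duplicates_open_request: bool) -> str:
    """Illustrative DSAR triage (hypothetical thresholds, not legal advice).

    Because 'abusive' and 'manifestly excessive' remain loosely defined,
    borderline requests are routed to senior review, never refused outright.
    """
    if duplicates_open_request:
        return "review: possible duplicate of an open request"
    if request_count_90_days > 5:  # hypothetical internal threshold
        return "review: high volume from one requester"
    return "process: respond within the statutory deadline"

assert triage_dsar(1, False).startswith("process")
assert triage_dsar(7, False).startswith("review")
assert triage_dsar(1, True).startswith("review")
```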
Breach Reporting Thresholds and Timescales
Under the existing GDPR, controllers must report any breach that poses a risk to individuals’ rights and freedoms within 72 hours. The Omnibus proposes raising this threshold so that only high risk breaches must be reported, and extends the reporting window to 96 hours.
The proposals also introduce an EU wide incident reporting portal operated by ENISA. This would consolidate reporting under GDPR, NIS2, DORA and other frameworks.
UK breach reporting rules remain unchanged. Notifications must still be made without undue delay and within 72 hours unless UK legislation is updated in future.
DPIAs, Automated Decisions and Cookies
The Omnibus includes further measures intended to simplify and standardise compliance:
- Harmonised DPIA and breach notification templates to be published by the EDPB.
- Relaxation of restrictions on automated decision making when contractually necessary.
- Broader exemptions under ePrivacy rules for analytics and security cookies.
- Requirement for browsers and operating systems to respect user privacy preference signals once standards are established.
These measures would reduce the volume of consent banners and bring greater technical consistency to DPIAs and cookie compliance. This direction is similar to recent UK guidance on consent and preference management.
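As an illustration of what honouring a preference signal looks like server-side, the sketch below checks the Global Privacy Control header (Sec-GPC), one existing signal of the kind the Omnibus would require browsers to support; the function names and fallback logic are our own assumptions.

```python
def honours_preference_signal(headers: dict[str, str]) -> bool:
    """True if the request carries an opt-out preference signal.

    Uses the Global Privacy Control request header (Sec-GPC: 1) as one
    existing example of a standardised privacy preference signal.
    """
    return headers.get("Sec-GPC") == "1"

def should_set_analytics_cookie(headers: dict[str, str],
                                has_consent: bool) -> bool:
    # Respect the browser-level signal first; only then fall back to
    # whatever consent has been recorded via a banner.
    if honours_preference_signal(headers):
        return False
    return has_consent

assert should_set_analytics_cookie({"Sec-GPC": "1"}, has_consent=True) is False
assert should_set_analytics_cookie({}, has_consent=True) is True
assert should_set_analytics_cookie({}, has_consent=False) is False
```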
Opportunities and Risks
The Omnibus aims to create clearer legal grounds for AI development and reduce administrative burden for organisations. Many businesses welcome the potential for fewer overlapping obligations and more predictable compliance requirements.
There are also trade-offs:
- Narrowing the definition of personal data could create inconsistent protections across sectors.
- Higher thresholds for breach reporting may reduce visibility of lower impact incidents.
- DSAR reforms risk uncertainty without robust internal guidance.
For UK organisations, divergence between EU and UK regimes is likely to increase. This will require more precise policy alignment, updated data sharing contracts and consistent governance.
What UK GDPR Teams Should Do Now
- Review data protection policies and contracts to reflect upcoming EU changes.
- Update data maps and inventories to assess whether datasets may fall outside scope under the new definition.
- Refine DSAR triage processes to identify abusive or excessive requests.
- Monitor breach handling procedures to ensure EU and UK requirements remain aligned.
- Keep track of regulatory developments from both the EU and UK.
Looking Ahead
The Digital Omnibus is still under negotiation by the European Parliament and Council. If adopted, it will represent a substantial shift in the EU digital regulatory landscape and highlight growing divergence from UK law following the DUAA.
Whether or not the UK adopts similar measures, any organisation operating across both jurisdictions will need to adjust its practices. Preparing early will reduce risk, support innovation and maintain compliance.
The Omnibus signals a wider regulatory trend. Policymakers are recalibrating privacy and digital governance for an AI-driven economy. While some protections may narrow, many proposals aim to reduce friction and bring clarity for businesses. UK organisations should begin planning now to remain compliant and competitive.
Sources
- European Commission: Digital Omnibus Press Release
- Digital Omnibus AI Regulation Proposal
- Digital Package Overview
- Case C-413/23 P: SRB v Edenred
- Skadden: First Impressions on Digital Omnibus
- Freshfields: Key Changes under the Digital Omnibus
Cookies in 2025 – Trick or Treat, Part Two
This Halloween special of the Data Protection Made Easy Podcast dives into two hot topics: consent or pay, and cookieless advertising. Watch or listen on demand below.
Recorded: Friday 7 November 2025
Hosts: Catarina Santos, joined by guests Oluwagbenga Onojobi (Gbenga) and Holly Miller, with a cameo from Phil Brining
In this 30-minute session we focus on the implications of consent or pay under UK GDPR and what the move to cookieless advertising means in practice. We also touch on recent regulatory opinions and enforcement trends. The aim is simple: to give you practical clarity that reduces risk without hurting conversions.
What we cover
- The implications of consent or pay under UK GDPR and related data protection principles
- How the transition to cookieless advertising affects the lawful use of personal data
- Recent regulatory opinions and enforcement trends in the adtech space
Key takeaways
- A clearer understanding of the data protection framework as it applies to modern advertising
- Insights into compliance risks and regulator expectations
- Discussion of the challenges organisations face when aligning commercial practices with data protection law
Your hosts
Catarina Santos, joined by guests Oluwagbenga Onojobi (Gbenga) and Holly Miller, with a cameo from Phil Brining.
Join the Data Protection Made Easy community
One of the UK’s largest data protection communities, with more than 1,500 subscribers and over 200 episodes on major audio platforms. Join for free to get weekly live invites, monthly newsletters, and first access to in-person events.
Missed Part One?
If you missed our first conversation on cookies, you can catch up on that episode, along with more than 200 others, on the Data Protection Made Easy Podcast.
Bristol City Council Faces Enforcement over SAR Failures
The Information Commissioner’s Office (ICO) has issued a formal enforcement notice to Bristol City Council after uncovering serious, ongoing failures in how the Council manages Subject Access Requests (SARs). This action follows years of complaints and evidence of systemic delays. The message from the ICO is clear: organisations that fail to take SAR compliance seriously will face enforcement.
SAR Failures at Bristol City Council
The ICO’s investigation revealed that Bristol City Council has struggled with a growing backlog of SARs since 2020. A Subject Access Request gives individuals the right to ask for a copy of their personal data and to understand how that data is used. Failing to respond in time undermines public trust and breaches data protection law.
Between April 2023 and January 2025, the ICO received 63 complaints from individuals waiting too long for responses. Many reported that the delays caused them harm and distress, leaving them unable to resolve personal matters or defend their rights. The ICO found that the Council had made limited progress despite repeated engagement and guidance. As a result, enforcement became the only option.
Why SARs Matter
SARs are not a formality. They are a cornerstone of data protection rights under the UK GDPR and Data Protection Act 2018. By making a SAR, an individual can see exactly what information an organisation holds about them, why it holds that data, and who it is shared with. For some, this is about transparency and reassurance. For others, especially vulnerable individuals, a SAR can directly affect access to housing, social services, or justice.
When organisations delay or ignore SARs, people lose trust and may face real-world consequences. The ICO has repeatedly emphasised that SAR compliance is fundamental. Sally-Anne Poole, Head of Investigations at the ICO, summarised the issue:
“Subject access requests are a fundamental right that allows people to know what information organisations hold about them and how it is being used. Despite our repeated engagement with Bristol City Council over a sustained period of time, limited progress has been made to clear a backlog of requests. Our investigation has found that the Council’s approach towards compliance demonstrates a poor organisational attitude towards data rights and compliance with the law.”
What the Council Must Do
The enforcement notice issued to Bristol City Council sets out a strict list of actions. These include:
- Contacting all individuals with overdue SARs to explain the delays and confirm when they can expect a response.
- Clearing the backlog by specific deadlines, ensuring that the oldest SARs (dating back to 2022) are completed within 30 days.
- Providing the ICO with weekly progress updates until the backlog is fully resolved.
- Publishing an action plan within 90 days that clearly sets out responsibilities, priorities and timelines.
- Making lasting organisational changes within 12 months to prevent SAR delays in future. This may require hiring more staff, investing in resources, and delivering staff training.
The ICO’s demands highlight that responding to SARs is not simply an administrative task. Councils and public bodies must show they can manage the process consistently, transparently, and within the one-month statutory deadline.
Lessons for Other Organisations
Bristol City Council’s enforcement notice should serve as a warning for all public authorities and organisations. The ICO expects SARs to be treated as a legal obligation, not an afterthought. Failing to respond on time risks enforcement, reputational damage, and potential fines.
Every organisation should ask itself some key questions:
Do we have a clear process for managing SARs from start to finish?
Do we have enough staff, technology and resources to respond within the legal timeframe?
Are we training employees so they understand SAR rights and know how to respond appropriately?
Can we evidence our compliance if the ICO asks?
If the answer to any of these questions is “no,” then urgent action is needed. The ICO has shown that it will not hesitate to escalate matters where organisations repeatedly fail to meet their obligations.
The Wider Context of SAR Compliance
SAR backlogs are not unique to Bristol. Many councils, charities, and businesses struggle with the volume and complexity of requests. However, the law is clear: SARs must be answered within one month unless an extension is justified. Even then, organisations must explain the reasons for any delay to the individual making the request.
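The one-month clock itself can be sketched as a simple deadline calculation. This is a deliberate simplification for illustration only; the precise counting rules (including what happens when the corresponding date does not exist in the following month) are set out in ICO guidance.

```python
from datetime import date
import calendar

def sar_deadline(received: date, months: int = 1) -> date:
    """Add calendar months to the receipt date (illustrative sketch).

    Default is the one-month statutory window; a justified extension can
    add up to two further months. If the target month is shorter, the
    deadline falls on its last day.
    """
    month_index = received.month - 1 + months
    year = received.year + month_index // 12
    month = month_index % 12 + 1
    day = min(received.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

assert sar_deadline(date(2025, 3, 15)) == date(2025, 4, 15)
# Received 31 January: February is shorter, so the deadline is its last day.
assert sar_deadline(date(2025, 1, 31)) == date(2025, 2, 28)
# With a justified extension (one month plus two further months):
assert sar_deadline(date(2025, 3, 15), months=3) == date(2025, 6, 15)
```

A calculation like this is the kind of thing a case management system should track automatically, alongside reminders well before the deadline.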
Technology can help reduce SAR risks. Case management systems, redaction tools, and specialist support can speed up responses and reduce errors. But technology alone is not enough. Organisations also need strong governance, clear policies, and a culture that treats data rights as a priority. Without these, the risk of enforcement grows.
Our View
At Data Protection People, we believe the Bristol City Council case highlights two critical points. First, SARs are central to data protection compliance and public trust. Second, enforcement action is not limited to fines; the ICO will impose detailed corrective measures when organisations fail repeatedly. Councils, businesses, and charities should take this case as a clear sign that SAR processes must be robust, well-staffed, and monitored closely.
We recommend that organisations run regular compliance checks, train staff to handle SARs effectively, and seek support where needed. By doing so, you protect both your organisation and the people whose data you process.
Contact Us
If your organisation is struggling with Subject Access Requests, we can help. Our SAR Support service provides expert assistance to manage requests on time and in line with the law. We also offer GDPR Audits to identify gaps, ongoing compliance support, and staff training to build confidence in handling SARs. Contact us today to protect your organisation and deliver on data rights.
10 Years of Data Protection People
Celebrating 10 Years of Data Protection People & 5 Years of the Data Protection Made Easy Podcast
Last week we marked not one but two major milestones: 10 years of Data Protection People and the 5th birthday of the Data Protection Made Easy Podcast. To celebrate, we hosted a special live session with Philip Brining, Caine Glancy, Catarina Santos, and returning host Joe Kirk. Together, we looked back at the Top 10 Most Streamed Episodes from the past five years, revisiting the conversations that have shaped our community.
Key Themes from the Session
- Subject Access Requests (SARs) – still one of the most complex and frequently discussed areas of data protection.
- Data Protection Impact Assessments (DPIAs) – exploring challenges around risk, practicality, and when a DPIA is truly needed.
- Legislative Changes – including Brexit, the Data Protection and Digital Information Bill, and the new DUA Act.
The team also reflected on why topics like ROPA and audits don’t always feature as highly among listeners, and why broad themes resonate more strongly than sector-specific discussions.
Insights from Our Community
Our special guest Joe Kirk shared valuable insights from moving into an in-house DPO role, including the importance of tackling cookie compliance and ensuring correct ICO registration. The panel also discussed the ICO’s new guidance on complaints handling and recognised legitimate interests, highlighting the practical steps organisations should take ahead of expected implementation in June 2026.
The Return of Weekly Podcasts
To celebrate our 10-year anniversary and the continued growth of our community, we are excited to announce that the Data Protection Made Easy Podcast is returning to a weekly schedule. Every Friday at lunchtime, we’ll be live with fresh discussions, community insights, and practical guidance for data protection professionals.
You can sign up on our Events Page to join future live sessions, or contact us here to subscribe and become part of the UK’s biggest data protection community.
Listen Back to the Anniversary Episode
If you missed it live, you can catch up now on Spotify using the player below:
Here’s to 10 years of making data protection easier, and 5 years of building a community where professionals can learn, share, and grow together. Thank you to everyone who has been part of the journey so far.