AI and Data Protection for UK Businesses
AI is rapidly transforming how UK businesses operate, but it brings new data protection and cyber security risks that cannot be ignored. This practical guide explores the real challenges of AI adoption, from shadow AI to data leakage, and outlines clear steps organisations can take to use AI safely, remain compliant with UK GDPR, and maintain control.
By Amber Sivill, Junior Data Protection Consultant at Data Protection People
AI is already in the workplace, whether leadership has approved it or not. UK data shows business use is rising: 26% of businesses reported using at least one AI technology in March 2026, and nearly half of employers who use or plan to use AI expect their business model to use or rely on it within three to five years. At the same time, wider workplace research suggests many employees are using their own tools without formal approval. For SMEs, that creates a familiar problem in a new form: productivity pressure on one side, data protection and cyber risk on the other.
From Data Protection People’s perspective, the answer is not a blanket ban but controlled adoption and oversight of AI tools. The Information Commissioner’s Office is clear that there is no AI exemption to data protection law, and the National Cyber Security Centre warns that AI systems introduce distinct security risks that must be designed for, monitored and managed. The practical goal is to let staff use AI where the benefit is real, while keeping personal data, confidential information and security controls intact.
Why this matters now
The real issue is not only formal AI projects, but also shadow AI. Microsoft found that 78% of AI users bring their own tools to work, and this is even more common in small and medium-sized companies. It is particularly problematic because a quick prompt can become a security incident if staff paste in names, emails, case notes, HR material, complaints, contracts or commercial information. Cross-border processing is often missed too. If personal data is sent, or simply made accessible, to a separate organisation outside the UK, the ICO treats that as a restricted transfer under UK GDPR. In parallel, the ICO has warned that wrongly relying on generative AI outputs as factually accurate information about individuals can lead to misinformation, reputational damage and other harms.
The ICO also notes that AI models can contain personal data and may embed training data in ways that could allow retrieval or disclosure. The NCSC adds that AI systems are exposed to both familiar cyber threats and AI specific threats such as prompt injection, data poisoning, and model inversion.
Ban or controlled adoption
An overarching ban has one advantage: it is simple to implement. But it is not realistic, and it can make the risk less visible by driving AI use underground. Controlled adoption is harder, but it is normally the better fit for UK SMEs because it accepts how work actually happens and gives you a route to govern it.
| Approach | Benefits | Risks | When appropriate |
|---|---|---|---|
| Ban | Clear message, lower immediate exposure in very high-risk areas | Workarounds, shadow AI, lost productivity, weak visibility | Highly sensitive processing, no approved secure tooling, active incident or regulatory concern |
| Controlled adoption | Better visibility, practical governance, safer productivity gains, staff trust | Needs policies, reviews, training, monitoring and resourcing | Most SMEs, where AI is already appearing in admin, marketing, IT or drafting work |
This is consistent with current evidence showing rising adoption, strong employee demand and the need for governance rather than denial.
What staff need to hear
Communication to staff has to be clear and easy to understand. Organisations should be able to tell individuals what the rules require, what they have to do and when to ask for guidance. That approach aligns with ICO expectations on accountability and NCSC guidance on awareness, secure use and human oversight. It is also crucial to support staff with regular, good-quality training.
Do
- Use only approved AI tools.
- Keep prompts generic where possible.
- Remove personal data and confidential detail unless the tool and use case have been approved.
- Check outputs before you use or share them.
- Escalate if you are unsure.
Do not
- Paste personal data, special category data, client files, HR records, passwords, source code or commercially sensitive material into public tools.
- Treat AI output as a fact without checking it.
- Use AI to make significant decisions about people without meaningful human review and approval.
- Buy or connect new AI tools without going through the approval route.
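The "do not paste personal data" rule can be backed up with a light technical guardrail. Below is a minimal Python sketch of a prompt scrubber that redacts obvious identifiers before text leaves the organisation. The `scrub_prompt` helper and its regex patterns are illustrative assumptions for this article, not a substitute for a proper data loss prevention tool; real deployments would catch far more than these three patterns.

```python
import re

# Illustrative patterns only: a real DLP tool covers many more identifier
# types (names, addresses, case references) and handles edge cases.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),  # UK National Insurance number
}

def scrub_prompt(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before a
    prompt is sent to an external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

A scrubber like this is a safety net, not a policy: staff still need to keep prompts generic in the first place.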
Controls and governance
For most organisations, the right control set is straightforward: keep an AI register, publish an AI policy, set an approval workflow, run DPIAs where risk justifies it, complete supplier due diligence, assess international transfers, and apply technical controls around access, logging and data loss prevention. ICO guidance is clear that a DPIA is required where new technology use is likely to result in high risk, and if in doubt, doing one is recommended. DSIT’s AI Management Essentials also directs SMEs towards an AI system record, an accessible AI policy, impact assessment, risk assessment and communication with employees.
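To make the AI register concrete, here is a minimal Python sketch of what one entry might capture. The field names and the `ready_for_approval` check are assumptions for this example, not a schema prescribed by the ICO or DSIT; the point is simply that each tool gets a recorded owner, purpose and risk status.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIRegisterEntry:
    """One row of a simple AI register (illustrative fields)."""
    tool: str
    supplier: str
    purpose: str
    processes_personal_data: bool
    dpia_completed: bool = False
    transfer_assessed: bool = False   # restricted transfer check done?
    status: str = "under review"      # e.g. approved / banned / under review
    next_review: Optional[date] = None

    def ready_for_approval(self) -> bool:
        """Tools touching personal data need a DPIA and a transfer
        assessment before they can be approved."""
        if not self.processes_personal_data:
            return True
        return self.dpia_completed and self.transfer_assessed
```

Even a spreadsheet with these columns is enough for most SMEs; the structure matters more than the tooling.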
Suggested AI policy headings
- Policy Statement
- Purpose and Scope
- Roles and Responsibilities
- Data Protection Considerations Around AI
- DPIAs
- Prior Consultation
- Privacy By Design and Default
- Data Protection Principles
- Rights
- Data Processors
- Restricted Transfers
- Cyber Security Risks
- Intellectual Property
- Accuracy of Output
- AI Dos and Don’ts
How to approve AI tools in practice
When someone in your organisation wants to use an AI tool, you do not need a complicated process, but you do need a consistent one.
Start with a simple question: will the tool involve personal data or sensitive information?
If the answer is no, carry out a basic check. Look at who provides the tool, whether it is secure, and whether it fits your business and the rules of your AI policy. If you are comfortable, you can allow a limited trial and keep it under review.
If the answer is yes, you need to slow things down and consider whether the processing can comply with the UK GDPR.
- Review how the tool uses data
- Check where the data is stored, especially if it leaves the UK
- Carry out a DPIA if there is any real risk
- Review the supplier and their terms
Once that is done, decide:
- If the risks are too high, do not use the tool or look for an alternative
- If the risks are manageable, approve it with conditions, for example limiting what data can be used and requiring human review
After approval, the job is not finished. You should monitor how the tool is used, review it periodically, and be prepared to stop using it if risks change.
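The approval flow above can be sketched as a single triage function. Everything here, including the inputs and decision strings, is illustrative; a real process would add detail such as supplier scoring, DPIA sign-off and named approvers.

```python
def triage_ai_tool(involves_personal_data: bool,
                   supplier_ok: bool,
                   uk_storage: bool,
                   dpia_ok: bool) -> str:
    """Return a decision following the triage steps above (illustrative)."""
    if not supplier_ok:
        return "reject: supplier review failed"
    if not involves_personal_data:
        # Basic checks only: provider, security, fit with the AI policy.
        return "approve: limited trial, keep under review"
    if not dpia_ok:
        return "reject: DPIA risks too high, look for an alternative"
    if not uk_storage:
        # Data leaving the UK is a restricted transfer under UK GDPR.
        return "approve with conditions: transfer safeguards required"
    return "approve with conditions: limit data, require human review"
```

Encoding the flow this way, even on paper, forces the same questions to be asked in the same order for every tool.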
Immediate next steps
- Identify which AI tools staff are already using.
- Approve a short list of safer tools and record them in your AI policy.
- Communicate the organisation’s stance on AI use to staff, along with the rules they must follow.
- Add AI to your DPIA and procurement workflow.
- Review supplier terms, retention and training arrangements.
- Check for restricted transfers and document the outcome.
- Train managers first, then wider staff.
- Decide who owns AI governance internally.
These are practical first steps for SMEs and align with current ICO, NCSC and DSIT guidance.
Reasonable enforcement
You cannot police every prompt, and you do not need to. Reasonable enforcement means proportionate controls and visible accountability. Use SSO and approved-tool access where you can, browser or network restrictions for clearly banned tools, sufficient logging to investigate incidents, targeted audits in high-risk teams, and a simple route for staff to ask before using a new tool. The NCSC specifically recommends monitoring and log data that lets you audit use, investigate compromise and manage security incidents, while DSIT’s work on hidden AI risks makes the same point from an organisational angle: successful AI governance is cultural as well as technical.
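As one concrete example of proportionate monitoring, a short script can count visits to unapproved AI domains in existing proxy logs. The log format assumed here ("user url" per line) and the domain list are made up for this sketch; real logs and blocklists will differ per organisation.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical blocklist: replace with the tools your policy actually bans.
UNAPPROVED_AI_DOMAINS = {"chat.example-ai.com", "free-llm.example.net"}

def flag_shadow_ai(log_lines):
    """Count hits per user to unapproved AI domains in simple
    'user url' proxy log lines (illustrative format)."""
    hits = Counter()
    for line in log_lines:
        user, url = line.split(maxsplit=1)
        host = urlparse(url.strip()).hostname
        if host in UNAPPROVED_AI_DOMAINS:
            hits[user] += 1
    return hits
```

The output is a conversation starter for targeted training or audits, not evidence for disciplinary action on its own.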
How Data Protection People supports clients
At Data Protection People, we are seeing AI move from a side conversation to a core compliance issue. We support clients with practical AI guidance, policy and framework design, DPIA and international transfer support, contract and supplier review, documentation templates, training and ongoing advisory support through our consultancy, toolkit and support services. Our wider view is simple: organisations should protect themselves first, but they should not pretend AI is going away. The sensible path is to embrace it with caution, good governance and clear boundaries.
We will also be discussing this on the Data Protection Made Easy podcast on Friday 24 April, where Caine Glancy and I, Amber Sivill, will dig into the topic. The podcast is hosted live every Friday at lunchtime and is designed for practical discussion, not theory, which is exactly what this subject needs. If you are reading this after 24 April 2026, you can listen to the full discussion on Spotify via the Data Protection Made Easy podcast page.
Key references
- ICO, Guidance on AI and Data Protection
- ICO, Tackling Misconceptions
- ICO, When do we need to do a DPIA?
- ICO, A Brief Guide to International Transfers
- NCSC, AI and Cyber Security
- NCSC, Secure Design for AI Systems
- DSIT, AI Management Essentials
- ONS, Business Insights and Impact on the UK Economy
- Microsoft, AI at Work Is Here. Now Comes the Hard Part
- Data Protection People, GDPR Support Desk