AI and Privacy: Why the Signal Founder Is Concerned
The founder of Signal warns that modern AI systems pose serious privacy risks. Learn what this means for UK GDPR compliance.
As artificial intelligence tools become more powerful, concerns about privacy are growing just as fast. The founder of Signal, the privacy-focused messaging app, has now turned his attention to what he sees as a fundamental problem with modern AI systems: the way they collect, process, and retain personal data.
This intervention is significant because it comes from a figure long associated with privacy-first design and secure communications. For organisations using AI, the message is clear: innovation cannot come at the cost of data protection principles.
Why This Matters Now
AI tools increasingly rely on vast datasets to function. These datasets often include personal data scraped from public sources or generated through user interaction. In many cases, individuals have little understanding of how their data is used or how long it is retained.
As AI becomes embedded in everyday products and services, privacy risks scale quickly. Decisions once made by humans are now automated. Outputs can be opaque, difficult to challenge, and hard to explain.
From a UK GDPR perspective, this creates clear tension with principles such as transparency, data minimisation, and accountability.
What the Signal Founder Is Saying
Moxie Marlinspike, the founder of Signal, has criticised the way many AI systems are built today. His concern is not about AI itself, but about the data practices behind it.
He argues that many AI models operate by collecting and centralising large amounts of data. This creates risk. Once data is stored, copied, or reused, it becomes difficult to control. Even well-intentioned systems can expose individuals to harm if safeguards are weak.
His comments reflect a broader concern shared by regulators and privacy professionals. AI systems often prioritise performance and scale over privacy by design.
Why This Is a Data Protection Issue
Under UK GDPR, organisations must have a lawful basis for processing personal data. They must also be clear about purpose, limit data collection, and protect individuals’ rights.
Many AI systems struggle to meet these requirements. Common issues include:
• Unclear or overly broad purposes for data use
• Excessive data collection to “train” models
• Limited transparency about how decisions are made
• Difficulty responding to rights requests, such as access or erasure
If an organisation cannot explain what data an AI system uses or why, compliance becomes difficult. Accountability does not disappear simply because processing is automated.
The Problem with Centralised AI Models
One of the key concerns raised is centralisation. Many AI systems rely on central servers that process and store user data at scale.
This approach increases risk. A single breach, misuse, or policy change can affect millions of people. It also concentrates power, leaving individuals with little control over how their data is used.
From a data protection standpoint, centralisation runs counter to privacy by design. UK GDPR encourages organisations to reduce risk at source, not simply manage it after the fact.
What Privacy-First AI Could Look Like
The alternative is not to reject AI, but to rethink how it is built. Privacy-first approaches include:
• Processing data locally rather than centrally
• Limiting data retention by default
• Avoiding unnecessary collection of personal data
• Designing systems that work without profiling individuals
These principles mirror long-standing data protection requirements. They are not new, but they are often ignored in the rush to deploy AI at scale.
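To make these principles a little more concrete, here is a minimal Python sketch of data minimisation and retention-by-default applied before text is passed to an AI model. The redaction patterns, function names, and the placeholder ask_model call are illustrative assumptions, not a complete or compliant solution.

```python
import re

# Illustrative redaction patterns only: real deployments need far more robust
# detection (names, addresses, identifiers), reviewed as part of a DPIA.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
UK_PHONE_PATTERN = re.compile(r"\b(?:\+?44|0)\d{9,10}\b")


def minimise(text: str) -> str:
    """Strip obvious personal identifiers before the text leaves the organisation."""
    text = EMAIL_PATTERN.sub("[email redacted]", text)
    text = UK_PHONE_PATTERN.sub("[phone redacted]", text)
    return text


def ask_model(prompt: str) -> str:
    """Placeholder for whichever AI service or locally hosted model is in use."""
    return f"(model response to: {prompt})"


def handle_request(raw_input: str) -> str:
    safe_input = minimise(raw_input)  # avoid unnecessary collection of personal data
    response = ask_model(safe_input)  # only the minimised text is processed
    return response                   # the raw input is never stored or logged


print(handle_request("Summarise the complaint from jane.doe@example.com (tel 07700900123)."))
```

Processing locally where feasible goes further still, because even the minimised text never leaves the organisation.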
What This Means for Organisations
Organisations using or considering AI should take this warning seriously. Regulators are increasingly clear that AI does not sit outside data protection law.
Practical steps include:
• Understanding what personal data AI systems use
• Carrying out Data Protection Impact Assessments for high-risk AI
• Ensuring transparency for users and customers
• Training staff on AI-related data protection risks
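As a rough illustration of the first two steps, the sketch below records what personal data an AI use case touches and applies a simplified screening rule for whether a DPIA is likely needed. The field names and trigger conditions are assumptions for illustration only; real screening should follow ICO guidance and involve your DPO.

```python
from dataclasses import dataclass, field


@dataclass
class AIProcessingRecord:
    """Illustrative inventory entry for one AI use case (field names are assumptions)."""
    system_name: str
    purpose: str
    lawful_basis: str
    personal_data_categories: list[str] = field(default_factory=list)
    involves_profiling: bool = False
    involves_special_category_data: bool = False
    retention_period_days: int | None = None

    def likely_needs_dpia(self) -> bool:
        # Simplified screening rule for illustration: real DPIA screening
        # must follow the ICO's criteria for high-risk processing.
        return (
            self.involves_profiling
            or self.involves_special_category_data
            or self.retention_period_days is None
        )


# Example: recording a hypothetical customer-support chatbot.
chatbot = AIProcessingRecord(
    system_name="Support chatbot",
    purpose="Answering customer queries",
    lawful_basis="Legitimate interests",
    personal_data_categories=["name", "contact details", "query history"],
    involves_profiling=False,
    retention_period_days=None,  # unknown retention is itself a red flag
)
print(chatbot.likely_needs_dpia())  # True
```

A record like this also supports the remaining steps: it forces the organisation to articulate purpose, lawful basis, and retention up front, which feeds directly into transparency information for users and training material for staff.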
Our Data Protection Support and Training services help organisations navigate these challenges without stalling innovation.
Our View
At Data Protection People, we see this intervention as timely and necessary. AI systems are shaping decisions that affect real people, often without meaningful oversight.
Privacy should not be treated as a technical inconvenience. It is a legal requirement and a trust issue. Organisations that embed data protection into AI design from the outset are far better placed to innovate responsibly.
FAQs
Does UK GDPR apply to AI systems?
Yes. If an AI system processes personal data, UK GDPR applies.
Is all AI high risk?
No. Risk depends on how the system is used and what data it processes.
What should organisations do first?
Start by understanding data flows and carrying out a risk assessment.
Contact Us
If your organisation is using AI or planning to do so, we can help you manage privacy risks and stay compliant. Our Data Protection Support, GDPR Audits, and Training services make AI governance clear and practical. Contact us today.
Source
Gizmodo, article on the Signal founder’s concerns about AI and privacy.