As a parent, I know how hard it is to keep children safe online. Even with filters, parental controls, and endless conversations about responsible use, it is difficult to monitor everything. The internet remains a gateway to knowledge, creativity and community, but also to spaces and content that can be deeply harmful to young minds. The tension between freedom and protection sits at the heart of today’s debate on online safety and age verification.

This year, the UK took a major step with the Online Safety Act (OSA), designed to make the internet a safer place for all. A key part of the act focuses on restricting access to adult or otherwise harmful content, requiring certain platforms to introduce age verification technologies. On paper, the idea seems simple: prevent minors from seeing what they should not. In practice, it raises complex questions about privacy, security and digital trust, issues that go well beyond the UK’s borders.
The risks of exposing personal data
At its core, age verification requires proof – proof of who we are and how old we are. But proof means data. Whether it is a photo of a passport, a driver’s licence or even a facial scan, this information must be collected, processed and stored somewhere. And wherever valuable personal data exists, so too does the risk of it being lost, stolen or misused.
We have already seen what can happen when digital identities are mishandled. Supply chain attacks such as Magecart, which targeted popular e-commerce websites, showed how cybercriminals can intercept sensitive information as it is being entered. It is not difficult to imagine similar tactics aimed at platforms holding identity documents or biometric data.
Regulations like the UK GDPR set out clear rules on how such data should be handled. But even the strongest compliance framework cannot eliminate all risk. For parents, that is a sobering thought: the idea that protecting our children from one danger could inadvertently expose them to another.
Privacy by design must lead the way
The Information Commissioner’s Office (ICO) and Ofcom have both made it clear that platforms must confirm age “without collecting or storing personal data, unless absolutely necessary.” That principle, known as privacy by design, must be the foundation of any effective solution.
It is reassuring to see strong regulatory oversight, but compliance alone will not guarantee public confidence. Many parents are understandably wary of sharing identification data with private companies they may know little about, particularly when those companies operate globally. For instance, a provider headquartered outside the UK might be subject to foreign data laws, potentially creating conflicts over privacy rights. This underlines why digital trust – the assurance that systems work securely and transparently – matters just as much as the technology itself.
Avoiding a new kind of surveillance
There is also the question of how verification works in practice. If platforms seek to confirm that credentials are genuine and not borrowed, they may be tempted to cross-reference data such as location or device information. But those measures risk crossing the line from verification into surveillance. Every additional check increases the amount of data gathered, and with it, the potential for misuse.
Alternative approaches, such as using existing trusted intermediaries like banks or mobile network providers, sound promising in theory. These organisations already verify identity for other services, so reusing those credentials might seem efficient. Yet the more those identifiers are shared and reused, the more they begin to form a digital trail, linking a person’s activity across platforms. What begins as age verification could easily turn into a broader form of digital tracking.
The need for smarter, privacy-preserving solutions
Other countries have faced similar dilemmas. When Australia explored age verification legislation earlier this year, the same concerns surfaced: how to verify responsibly without building vast repositories of personal data. The challenge is universal because it demands a careful balance of protection with privacy, and accountability with autonomy.
Technology can help. Solutions based on verifiable credentials and digital identity wallets, aligned with emerging frameworks such as eIDAS 2.0 in the EU, offer a potential way forward. These systems allow individuals to prove attributes like their age without disclosing unnecessary personal details. They place control back in the hands of the user, which is essential for building long-term trust.
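The principle behind selective disclosure can be sketched in a few lines. The toy below (illustrative only; real verifiable-credential systems such as SD-JWT use public-key signatures, whereas this sketch uses a shared HMAC key purely to keep it self-contained) shows the core idea: an issuer commits to salted hashes of all attributes, and the holder later reveals only an "age_over_18" flag – never the birthdate or name.

```python
import hashlib
import hmac
import json
import os

# Illustrative only: a real issuer would use an asymmetric signature,
# not an HMAC key shared with the verifier.
ISSUER_KEY = os.urandom(32)

def issue_credential(attributes: dict) -> dict:
    """Issuer: salt and hash each attribute, then sign the digest list."""
    salted = {k: (os.urandom(16).hex(), str(v)) for k, v in attributes.items()}
    digests = sorted(hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()
                     for salt, value in salted.values())
    signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(),
                         hashlib.sha256).hexdigest()
    return {"salted": salted, "digests": digests, "signature": signature}

def present(credential: dict, attribute: str) -> dict:
    """Holder: disclose one attribute and its salt; withhold all others."""
    salt, value = credential["salted"][attribute]
    return {"attribute": attribute, "salt": salt, "value": value,
            "digests": credential["digests"],
            "signature": credential["signature"]}

def verify(presentation: dict) -> bool:
    """Verifier: check the signature, then that the disclosed value is covered by it."""
    expected = hmac.new(ISSUER_KEY, json.dumps(presentation["digests"]).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, presentation["signature"]):
        return False
    digest = hashlib.sha256(
        f"{presentation['salt']}:{presentation['value']}".encode()).hexdigest()
    return digest in presentation["digests"]

cred = issue_credential({"name": "Alex", "birth_year": 1990, "age_over_18": True})
proof = present(cred, "age_over_18")
```

The verifier learns that a trusted issuer vouched for `age_over_18`, and nothing else – the name and birth year never leave the wallet.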
This is the kind of innovation we should be encouraging: privacy-preserving, interoperable and user-centric. It will require collaboration between government, regulators, technology providers and the wider digital trust community. But above all, it needs sustained commitment and investment to make it both scalable and secure.
Building a safer digital environment for all
As a parent, I want the same thing every parent does: for my children to explore the internet safely and confidently. I do not want them to be shielded from the digital world; I want them to experience it responsibly, supported by the right safeguards. The Online Safety Act is a step in that direction, but it cannot stand alone. Age verification, done poorly, risks undermining both privacy and trust. Done well, it could set a new benchmark for responsible digital citizenship.
This is not just a question of technology or compliance; it is about our collective responsibility to get this right. That means learning from past mistakes, adopting best practices and ensuring that privacy is never treated as an afterthought.
Protecting children online must not come at the expense of their, or anyone’s, digital rights. The answer lies in building systems that respect both safety and privacy, empowering users rather than exposing them. That is the kind of internet I want my children to grow up in: one that is built on trust.







