When both the left and the right are united in their disdain for a piece of legislation, you know something is up. Enter the Online Safety Act: a law so riddled with corruption, incompetence, and authoritarianism that you'd be forgiven for agreeing with Nigel Farage. So what exactly are the problems with this Tory-written, Labour-enabled piece of legislation?
A broken promise: does the Online Safety Act actually protect children?
While the Online Safety Act was sold as a child‑safety milestone, critics argue it is structurally incapable of delivering that outcome. Campaigners from organisations including Barnardo's, the Molly Rose Foundation and CARE UK warn that loopholes around algorithmic recommendations, autoplay, live‑streaming, and age verification mean the legislation “will not bring about the changes that children need and deserve”. Far from curtailing harmful exposure, the law risks being symbolic rather than effective.
Since enforcement began on 25 July, age verification—via ID scans, facial age estimation, or mobile verification—has triggered over five million age checks per day, mostly on porn sites. But this has in turn driven a rapid surge in VPN downloads as users seek to bypass the access controls, shifting minors toward less‑regulated corners of the internet and increasing, rather than reducing, their exposure to harm.
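To see why a VPN undercuts the scheme so easily, note that enforcement is keyed to the apparent geographic origin of each request. Here is a minimal, hypothetical sketch (the function names and toy geolocation table are illustrative assumptions, not any real site's code):

```python
# A minimal, hypothetical sketch of geo-triggered age gating and why a VPN
# defeats it. GEO_DB, country_of, and handle_request are illustrative
# assumptions, not any real site's or regulator's implementation.

GEO_DB = {"203.0.113.5": "GB", "198.51.100.7": "NL"}  # toy stand-in for a geolocation database

def country_of(ip: str) -> str:
    """Map an IP address to a country code; real sites use GeoIP services."""
    return GEO_DB.get(ip, "??")

def handle_request(ip: str, has_age_token: bool) -> str:
    # The gate keys off the *apparent* origin of the request, nothing else.
    if country_of(ip) == "GB" and not has_age_token:
        return "redirect to age-verification provider"
    return "serve content"

# A UK user is gated; the same user routed through a Dutch VPN exit node is
# not, because the site only ever sees the VPN's IP address.
print(handle_request("203.0.113.5", has_age_token=False))   # redirect to age-verification provider
print(handle_request("198.51.100.7", has_age_token=False))  # serve content
```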
The privacy trade‑off: undermining encryption and surveillance risks
Clause 122 of the Online Safety Act grants Ofcom the power to compel providers—including end‑to‑end encrypted messaging apps—to scan user communications for child sexual abuse material (CSAM). Experts warn this effectively undermines encryption. Former UK cyber‑security chief Ciaran Martin accused the government of “magical thinking”, arguing that such scanning cannot be done without weakening privacy protections and inviting mass surveillance. Similarly, Alan Woodward of the University of Surrey cautioned that the scanning powers could suffer mission creep, expanding into broader surveillance.
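To see why experts regard scanning and end-to-end encryption as incompatible, here is a minimal, hypothetical sketch of client-side scanning (the names and the use of a plain SHA-256 blocklist are simplifying assumptions; real proposals tend to use perceptual hashing). The crucial detail is that content is inspected in plaintext, on the device, before encryption ever takes place:

```python
# Hypothetical sketch of client-side scanning. All names here are illustrative
# assumptions, not Ofcom's or any real app's design.
import hashlib

HASH_BLOCKLIST = {"2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"}  # sha256(b"hello")

def report_to_authority(digest: str) -> None:
    print(f"match reported: {digest[:12]}...")  # stub for a real reporting channel

def send_message(plaintext: bytes, encrypt) -> bytes | None:
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in HASH_BLOCKLIST:      # content is inspected *before* encryption,
        report_to_authority(digest)   # so a third party learns about it regardless
        return None                   # of any end-to-end encryption downstream
    return encrypt(plaintext)         # encryption only covers what the scanner passed

# Demo with a trivial "encryption" stand-in:
ciphertext = send_message(b"hello", encrypt=lambda m: m[::-1])
print(ciphertext)  # None: the message never left the device unscanned
```

The design choice this illustrates is structural: once a scanner sits between the user and the encryption step, the "end-to-end" guarantee no longer holds, whatever the scanner is looking for today.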
Article 19 described the Act as “extremely complex and incoherent”, while the Open Rights Group labelled it a “censor’s charter”. Moreover, the Secretary of State has sweeping powers to direct Ofcom’s codes of practice, raising alarm over government encroachment on independent regulation and free speech rights.
Though the Act ultimately dropped requirements to remove ‘lawful but harmful’ speech, platforms must still engineer systems and defaults that limit users’ exposure to such content. Critics argue this effectively censors legal content—particularly controversial or political commentary—blurring free speech boundaries. In a climate where any criticism of Israel or Zionism is labelled ‘antisemitism’, it is easy to see who the first victims will be.
High-profile critics such as Rebecca MacKinnon of the Wikimedia Foundation, along with Wikimedia UK, assert that criminal penalties for non-compliance could also suppress public‑interest websites like Wikipedia, chilling open discourse.
Handing more power to big tech
Small and non‑commercial communities have already struggled under compliance costs. Sites like Microcosm and London Fixed Gear & Single Speed shut down or blocked UK users rather than face the Act’s extensive compliance burdens. Critics say such outcomes shrink spaces for non-commercial speech and reduce the web’s pluralism.
Porn sites have also hit back at the Online Safety Act. PornBiz (the owner of xvideos.com) wrote a scathing editorial in which it said age verification:
enforcement is meant to be such a burden on porn companies that it destroys them.
For merchant sites, AV could be handled relatively easily by banks and credit card companies — they already know who’s over 18. But of course not. Instead, regulators dump the burden onto the sites themselves — the ones with zero identity data, and outnumbering banks by a lot, multiplying the risks related to implementation. The logic? Force those least capable of verifying age to do it anyway — then blame them when it fails.
What’s going on is obvious: “protecting children” is a false pretense. AV is being used to attack porn and those who watch it. It was never about children. It was always driven by anti-porn crusaders and control-obsessed ideologues.
These same people pretend to stand on a moral high ground, while lying through their teeth about their true intentions. Citing “child protection” is an effective tactic to silence critics. If you oppose AV — even for sound, technical reasons — you’re immediately hit with emotional blackmail and cheap outrage.
This was written before the Peter Kyle/Nigel Farage furore – and its prediction has since come to fruition.
Moreover, the Act will hand more power and control to big tech companies. As the New Statesman reported:
“This is not good news,” said the owner of The Hamster Forum, “Home Of All Things Hamstery,” back in March. “I would probably need a lawyer and team of experts to be able to fully comply with everything… I am going to have to close the forum… I’m suggesting everyone joins Instagram and follows our account on Instagram instead.”
The forum’s owner was quoted £2400 a year to use an external age-verification service in compliance with the Act. This is a big chunk of the average part-time webmaster’s income, but nothing to corporate social media executives…
The only websites with the financial capacity to work around the government’s new regulations are the ones causing the problems in the first place.
Meanwhile, other platforms have chosen to age‑gate entire support or niche forums rather than risk a technical or regulatory breach, depriving vulnerable users of peer support and community.
Weak engagement with harm: evidence gap and poor tool uptake
An Institute of Economic Affairs analysis found the Online Safety Act lacked evidence linking online harms to the proposed remedies, raising questions about whether the measures were grounded in real‑world data or proportional to actual risk.
Academic research also shows that while over 80% of UK adults used post‑hoc tools like reporting, satisfaction was generally low and preventive safety features were often poorly understood or under‑utilised—particularly by lower digital‑literacy users. If existing toolsets struggle in real life, mandatory systems imposed by Ofcom may prove ineffective or ignored by users.
Michael Hobbs of isAI Tech points to alarming privacy risks, noting that vague guidance from Ofcom has allowed intrusive implementations such as document uploads and facial recognition. Once collected, this sensitive identity data becomes part of the UK’s digital “attack surface”—at risk of breach, misuse, or improper retention.
Drivel from Peter Kyle
Technology Secretary Peter Kyle has denounced the Online Safety Act’s opponents—claiming they side with “predators” or “extreme pornographers”—while critics say such sensational rhetoric sidesteps substantive discussion of the legislation’s flaws. Reform UK leader Nigel Farage has dubbed the Act “dystopian”, framing it as authoritarian overreach that mandates censorship and surveillance measures disguised as child protection.
Of course, as the Canary recently reported, Kyle is all too friendly with big tech. In 2024, he received a £66,000 donation from tech company Public Digital in the form of staff costs, as reported by The Stark Naked Brief.
Then, in July this year, Kyle awarded a £5m government contract to that very same company.
On top of that, Kyle’s office appointed Public Digital employee Emily Middleton to a government post on a public salary of over £128,000 per year – the same person whose services Public Digital had donated in the first place.
There’s more. Faculty AI handed the science minister £36,000 in May 2024. Then, in February this year, his department gave that very company a £2.3 million contract.
Historical parallels: illusion of protection from the Online Safety Act
Analysts compare the Online Safety Act to past UK initiatives like ContactPoint—a massive child‑data register that critics said created the illusion of safeguarding while in practice undermining privacy and failing to deliver better protection. The fear is that the Act—despite its fanfare—may reproduce that illusion, substituting bulk regulatory signalling for meaningful, context‑driven child protection strategies.
In theory, the Online Safety Act claims to make the UK the “safest place to be online”.
In practice, the Act will erect weak age barriers, erode privacy and encryption, burden smaller communities with costly compliance, and push users—especially minors—toward less regulated areas of the web. Experts across academia, non‑profit advocacy, civil liberties groups, and parts of the tech industry argue that the Act is a censor’s charter, a surveillance risk, and a costly burden—all while offering limited real protection.
If the UK genuinely wants to protect children online, critics contend that investments in digital literacy for young people and caregivers, contextual content moderation, innovative UI safety features, and community-led support ecosystems would be more effective than sweeping, one-size-fits-all regulation wielding fines and scan mandates.
Of course, the government knows all this – which is exactly why it has allowed this legislation to run its course. Labour has become an authoritarian horror – and the Online Safety Act, despite not even being the party’s law, is perfect for its own agenda.
Featured image via the Canary