When Chinese hackers breached U.S. telecommunications surveillance systems, they didn’t need to create new vulnerabilities—they simply exploited ones that already existed.
The state-sponsored group Salt Typhoon had infiltrated the very backdoors U.S. law enforcement uses to monitor criminal activity, transforming tools of legitimate surveillance into weapons of foreign espionage.
This breach exposed an uncomfortable truth: there’s no such thing as a secure backdoor.
Any vulnerability created for “good guys” can—and eventually will—be exploited by adversaries.
The breach arrives at a critical moment in the global encryption debate.
Today, nearly a third of the world’s population—some 2.5 billion people—rely on WhatsApp’s end-to-end encrypted messaging.
Another billion use Apple’s iMessage, with Facebook Messenger recently joining the encrypted ranks.
The widespread adoption of robust encryption has triggered growing opposition from law enforcement and government agencies worldwide, which face a real dilemma: as criminal activity shifts online, investigators are increasingly unable to access vital evidence in cases involving child exploitation and drug trafficking.
One solution they have floated is to build “secure” access points into encrypted systems.
The European Union’s Chat Control legislation, for instance, would require messaging apps to scan content for illegal material before it is encrypted.
British authorities, through the Online Safety Act, are pursuing similar capabilities.
“I’m not certain that we need to go all the way to having every platform that children use in their homes and bedrooms having a similar level of weapons-grade encryption. Why does a 13-year-old need that level of encryption?” argues Rick Jones of Britain’s National Crime Agency, speaking to The Economist.
Even privacy-conscious companies have wrestled with this pressure.
Apple, for instance, briefly considered implementing device-based content scanning before backing down in the face of intense criticism.
This ongoing battle between privacy and regulation plays out differently across platforms.
Telegram, for example, has long maintained a hardline stance against government cooperation.
However, following the arrest of CEO Pavel Durov in France over the platform’s alleged failure to assist in child exploitation investigations, Telegram recently shifted its approach.
The company’s new privacy policy now permits sharing users’ phone numbers and IP addresses with law enforcement, but only with valid court orders in criminal cases.
This move towards cooperation reflects a broader industry shift, though with important distinctions.
Even platforms committed to strong encryption, like WhatsApp, share metadata with law enforcement in response to valid legal requests, but cannot access message content due to end-to-end encryption.
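The distinction is worth making concrete. The sketch below, using the PyNaCl library, shows the basic shape of an end-to-end encrypted relay; the names are illustrative, and this is a toy model, not WhatsApp's actual Signal-protocol implementation.

```python
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; the server never
# sees the private halves.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The relay observes metadata (who is talking to whom, when, and how
# much) but holds only ciphertext it cannot decrypt.
metadata = {"from": "alice", "to": "bob", "bytes": len(ciphertext)}

# Only Bob, holding his private key, can recover the plaintext.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```

This is why a valid legal request can yield who messaged whom and when, but not what was said.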
Telegram, by contrast, isn't primarily a secure messaging app but a social media platform where messages are not end-to-end encrypted by default.
Users must explicitly enable “secret chats” to get end-to-end encryption, and even then Telegram uses its own MTProto protocol rather than the industry-standard Signal protocol.
This architecture raises the question of whether Telegram staff could access user messages and share them with authorities under pressure.
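The contrast with the end-to-end sketch above is structural. In a default “cloud chat” model, the operator encrypts stored messages with keys it holds itself; the sketch below is a deliberate simplification, not Telegram's actual MTProto implementation.

```python
from nacl.secret import SecretBox
from nacl.utils import random

# In a cloud-chat model, the operator generates and keeps the key.
server_key = random(SecretBox.KEY_SIZE)
server_box = SecretBox(server_key)

# Messages are encrypted at rest, but with the operator's own key...
stored = server_box.encrypt(b"meet at noon")

# ...so, unlike the end-to-end case, decryption needs nothing from the
# users at all.
assert server_box.decrypt(stored) == b"meet at noon"
```

Whether or not Telegram's storage works exactly this way, the point stands: a key held by the provider, unlike an end-to-end key, is available to be handed over.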
The implications of trying to create backdoors for “the good guys” extend far beyond individual privacy.
Client-side scanning technology, its proponents say, would be designed to detect only specific illegal content, but the same system could be repurposed to flag political dissent, religious material, or trade secrets.
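The mechanism is simple enough to sketch. In the toy version below, exact hashing stands in for the perceptual matching real proposals use, and every name is hypothetical: the client checks outgoing content against a provider-supplied list before encrypting, and nothing in the code constrains what that list contains.

```python
import hashlib

def scan_before_encrypt(plaintext: bytes, flagged_hashes: set[str]) -> bool:
    """Return True if the message matches the provider-supplied list."""
    digest = hashlib.sha256(plaintext).hexdigest()
    return digest in flagged_hashes

# Whoever controls this set controls what gets reported: CSAM hashes
# today, dissident pamphlets or trade secrets tomorrow.
blocklist = {hashlib.sha256(b"banned leaflet").hexdigest()}

message = b"banned leaflet"
if scan_before_encrypt(message, blocklist):
    # In a real deployment this match would be reported to the provider
    # or to authorities; here we only demonstrate the control flow.
    print("flagged before encryption")
```

Swap the contents of the blocklist and the identical code flags something else entirely; the mechanism has no opinion about what it is searching for.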
And these are not hypotheticals: authoritarian governments already demand access to communication systems for surveillance.
Once the infrastructure exists, history indicates that it will be exploited.
In a 2021 paper, “Bugs in Our Pockets,” fourteen leading cryptographers detailed why the idea of client-side scanning is “unworkable at best and dangerous at worst.”
They warned that embedding surveillance capabilities in every device would create a system of universal monitoring where the mere possibility of observation reshapes behaviour, stifling innovation, free expression, and democratic discourse.
The Salt Typhoon incident serves as a reminder: backdoors intended for the good guys inevitably become tools for the bad.
No matter how carefully designed or legally restricted, a security vulnerability is a vulnerability nonetheless.
It’s a fundamental principle of engineering that systems fail at their weakest points—and intentionally building weak points into our digital infrastructure invites disaster.
As former GCHQ official Ciaran Martin concluded after years of searching for technical compromises, “If no technical compromise can be found, then security must win and end-to-end encryption must continue and expand, legally unfettered, for the betterment of our digital homeland.”