Are we running out of time to save the internet? Image Credit: The Melting Watch, Salvador Dalí, 1954.

Alexander Vipond

One of the most hyped topics in quantum computing is the potential for quantum computers to break the internet’s security protocols. It has been surmised that by running two theoretical algorithms on a quantum computer, Grover’s and Shor’s algorithms (designed by Lov Grover and Peter Shor in 1996 and 1994 respectively), one could break the internet’s cryptographic standards. Grover’s algorithm could be used to subvert HTTPS (Hypertext Transfer Protocol Secure) connections, which authenticate websites as you browse the internet. Shor’s algorithm could break the RSA public key cryptosystem (named after Ron Rivest, Adi Shamir and Leonard Adleman), which secures everything from your online bankcard transactions to email and phone calls.
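To see why Shor’s algorithm is so threatening to RSA, it helps to know that it reduces factoring to period finding. The sketch below is a toy, classically simulated version of that reduction (function names are illustrative): the order-finding step, done here by brute force, is exactly the part a quantum computer would accelerate exponentially.

```python
from math import gcd

def classical_order(a, n):
    """Find the multiplicative order r of a mod n by brute force.
    This is the step Shor's algorithm performs exponentially faster
    on a quantum computer."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    """Recover a non-trivial factor of n from the order of a,
    following Shor's classical reduction."""
    if gcd(a, n) != 1:
        return gcd(a, n)     # the guess already shares a factor with n
    r = classical_order(a, n)
    if r % 2 == 1:
        return None          # odd order: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None          # trivial square root: retry with another a
    return gcd(y - 1, n)

print(shor_factor(15, 7))    # order of 7 mod 15 is 4, yielding the factor 3
```

For RSA-sized moduli the brute-force loop is hopeless, which is the whole point: only the quantum period-finding subroutine makes the reduction practical against real keys.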

However, this all requires a powerful quantum computer yet to be invented. How many qubits would be necessary to run these algorithms? What scientific breakthroughs are necessary? How long will it take to build one? Well, the US National Academies of Sciences, Engineering, and Medicine released a report titled “Quantum Computing: Progress and Prospects” which details not only the technical difficulties of racing to break the internet but also the human challenges of creating a secure post-quantum world.

The report presents two key findings.

One: Given the current state of quantum computing and recent rates of progress, it is highly unlikely that a quantum computer able to compromise RSA or any comparable discrete logarithm-based public key cryptosystem will be built within the next decade.

Two: Even if a quantum computer that can decrypt current cryptographic ciphers is more than a decade off, the hazard of such a machine is high enough – and the time frame for transitioning to a new security protocol sufficiently long and uncertain – that prioritising the development, standardisation and deployment of post-quantum cryptography is critical for minimising the chance of a security and privacy disaster.

This demonstrates the severity of the risk a powerful quantum computer would pose, whatever the timeline towards its realisation.

The National Institute of Standards and Technology (NIST) in the US runs a post-quantum cryptography project, which called for submissions of new post-quantum cryptosystems last year; 69 proposals passed the first round. NIST has proposed a timeline of 2022–2024 for producing a new draft standard for the world. This leaves only a few years to whittle down and test these cryptosystems to find a new standard.

The key issues are time and human cooperation. As Adi Shamir noted at the last RSA Conference cryptographers’ panel, transforming a new cryptosystem into a widely adopted standard takes about 15 years; this was the case for both RSA and elliptic curve cryptography. The delay is partly a function of the small size of the cryptography community, which numbers only in the thousands globally. That makes it difficult to test multiple cryptosystems effectively, and NIST has only three years to choose a successor standard for a post-quantum world. So it is highly likely NIST will rely on older, tested standards with increased bit sizes, while newer cryptosystems will take decades longer to be vetted.
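The “increase their bit size” advice has a simple arithmetic basis. Against symmetric ciphers, Grover’s algorithm offers only a quadratic speedup: brute-forcing a k-bit key takes on the order of 2^(k/2) quantum queries rather than 2^k classical guesses, so doubling the key length restores the original security margin. A minimal sketch of that arithmetic (assuming Grover’s search is the only quantum attack in play):

```python
def quantum_security_bits(key_bits):
    """Grover's search needs on the order of 2**(key_bits / 2) queries,
    so a k-bit symmetric key offers roughly k/2 bits of quantum security."""
    return key_bits // 2

# Doubling the key size restores the pre-quantum security level.
for bits in (128, 256):
    print(f"{bits}-bit key: ~{quantum_security_bits(bits)} bits of quantum security")
```

By this reasoning a 256-bit symmetric key retains roughly the quantum-era strength that a 128-bit key has today, whereas RSA and elliptic curve systems, facing Shor’s exponential speedup, cannot be rescued by larger keys alone.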

Newer cryptosystems may well benefit from this time lag: as quantum computers and quantum-resistant cryptosystems develop in tandem, researchers will gain an increasingly clear view of what quantum computers are actually capable of and can refine the new schemes accordingly. If the current transition is managed carefully, global standards are developed and adequate resources are provided for the switchover, it could be possible to move safely into a post-quantum world.

This does, however, rely on two large unknown variables. The first is the rate of scientific breakthroughs towards a quantum computer capable of attacking strong encryption. The second is the intent of the actor who procures the capability. If breakthroughs are made faster than the global community can adopt new standards, countries will be left exposed. As this type of research is often conducted in secret, the global community may not have easily identifiable progress markers to measure against. The second variable is more pernicious. If a company reaches the milestone first, it is likely to announce its victory and is unlikely to undermine the internet infrastructure that secures its profits. However, if a country reaches the milestone first, it may wish to attack and steal data for geopolitical advantage or commercial gain, and the world may not know until the first attack occurs.

This puts the race to break the internet into perspective. It is a decades-long systemic risk that intertwines human and technical problems in a game that represents the apex of trust, security and privacy in the world’s most important communications system.