Author: Gabriella Skoff

Quantum Computing

All Hype and No Game? Google, IBM, Preskill and Quantum Supremacy



Feature image: IBM’s System One quantum computer, via The New York Times.

Words are important. The language we use to describe something creates a discourse around it, endowing it with a life of its own and often producing meaning beyond the definition of the words themselves. Like many expressions in the emerging technology space (think: disruptive innovation, IoT or even artificial intelligence), the expression ‘quantum supremacy’ has been overused and misused to the point where the original relevance of the term has been buried under a mountain of hype. Has this expression been transformed into an empty buzzword through this process—like a game of ‘telephone’, its meaning distorted along the way? Now, with Google’s proclamation of achieving quantum supremacy officially published, criticism has been flooding in; not only with regard to the claim itself but, more fundamentally, with regard to the usefulness of ‘supremacy’ as a benchmark at all.

Just last week IBM, one of Google’s main competitors in the quantum space, posted a critique of Google’s shrouded announcement on their blog. The article, penned by Edwin Pednault, John Gunnels and Jay Gambetta—leaders and members of IBM’s quantum research and development team—urges that Google’s achievement should not be misconstrued ‘as proof that quantum computers are ‘supreme’ over classical computers.’ The team takes issue with the way in which ‘quantum supremacy’ has come to imply that quantum computers will one day reign ‘supreme’, replacing classical computers. They assert instead that the future of quantum computing will be intertwined with classical computing, with the two uniquely suited systems working in concert with one another.

IBM’s statement adds further complexity to the situation, contending that Google may not have achieved quantum supremacy at all, according to John Preskill’s original definition of the term: ‘to describe the point where quantum computers can do things that classical computers can’t, regardless of whether those tasks are useful.’ The point of contention: Google has stated that their Sycamore processor was able to compute the answer to a specific problem in about 200 seconds, a task they claim would take 10,000 years on the world’s most powerful supercomputer. IBM argues, however, that the comparison does not account for the unique capabilities of classical computing, which, if properly leveraged, could bring that number down to just 2.5 days. While that is a significant difference in estimation between Google and IBM, worthy of headlines in itself, it should be noted that neither team has actually run the calculation on a supercomputer yet.
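
To see what is at stake in the disagreement, a quick back-of-envelope calculation puts the two claims side by side (a minimal sketch in Python, using only the figures quoted above):

```python
# Rough speedup ratios implied by the two classical-runtime claims,
# measured against Sycamore's reported 200 seconds
SYCAMORE_SECONDS = 200
GOOGLE_CLASSICAL_YEARS = 10_000
IBM_CLASSICAL_DAYS = 2.5

YEAR_S = 365.25 * 24 * 3600
DAY_S = 24 * 3600

print(f"Google's claim: {GOOGLE_CLASSICAL_YEARS * YEAR_S / SYCAMORE_SECONDS:.0e}x speedup")
print(f"IBM's estimate: {IBM_CLASSICAL_DAYS * DAY_S / SYCAMORE_SECONDS:.0e}x speedup")
```

On Google’s numbers the quantum speedup is roughly a billion-fold; on IBM’s, it shrinks to roughly a thousand-fold, which is why the re-estimate matters so much.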

Whether the disparity in compute time between quantum and classical machines is measured in days or years, the fact remains that what Google demonstrated on their quantum processor can also be done on a classical computer, albeit much more slowly. Still, as Preskill commented in a recent interview on the subject for Quanta Magazine, Google’s achievement is significant in that it demonstrates the team understands the hardware they have built and that it is working. Stripped of the hype associated with the terminology, this may seem a far more lacklustre news item. Nonetheless, it is an important step forward in the development of useful quantum computing applications, which is where the real fruits of the industry’s labour will finally be tasted, across areas as diverse as healthcare, hacking, and honing.

As Project Q and others have explained in coverage of Google’s announcement, the terminology used can be misleading at best and simply a product of the media hype machine at worst. But hype has its value too. Hype can insulate against the arrival of a quantum winter—a scenario where interest and investment in quantum technologies drop off due to the technology’s failure to deliver on its promise. There is concern that a quantum winter could mean the technology never reaches the immense promise of its applications. However, private funds continue to flood the industry. According to research by Nature, ‘in 2017 and 2018, companies received at least $450 million in private funding—more than four times the $104 million disclosed over the previous two years.’ While hype contributes to potentially overstating the promise of quantum computing in the first place, it also plays a critical role in pushing the development of quantum computing forward by keeping the buzz and enthusiasm high.

It’s important to recognize the role of hype in technological progress. It is also important, however, to have access to a more nuanced understanding of the progress of quantum development, and to dive deeper than the terminology and the hype around it. Without a doubt, the terminology used to mark a turning point in the development of quantum computing is problematic, even according to its creator. This was also IBM’s central point: ‘we urge the community to treat claims that, for the first time, a quantum computer did something that a classical computer cannot with a large dose of scepticism due to the complicated nature of benchmarking an appropriate metric.’ Quantum supremacy, while a significant step in quantum’s development, is by definition an incredibly narrow benchmark with practically no real-world utility. It has value, however, in its ability to capture the imagination of society and keep people engaged in the progress of one of tomorrow’s most promising technologies.

Quantum Theory

Quantum Superposition Bridges the Classical World



Cracks have begun to show in one of quantum’s biggest controversies. The well-known Schrödinger’s cat thought experiment, which sought to illustrate the absurdity of applying quantum theory to the macro-physical world ruled by classical physics, has been challenged by a recent advancement in quantum physics. An international team, led by Markus Arndt of the University of Vienna, successfully placed a large molecule of 2,000 atoms—the biggest object yet—into a state of quantum superposition. The advancement shows that quantum effects can be translated into the classical world, establishing the foundations for scientists to continue to demonstrate how the gap between these seemingly disparate worlds might be reconciled.

Quantum theory tells us that particles in superposition can shift between a wave-like state and a particle state, meaning they can be in two places at once. Of course, from what is observable in the classical world, this cannot be true. If it were, our sense of what is “real” would be challenged, opening the door for a whole host of quantum weirdness that classical theory keeps at bay. Essentially, as Schrödinger tried to show with his thought experiment, if quantum mechanics holds on a macro-physical scale, it implies that human beings could also exist in two places at once. It does not take long for this idea to snowball into theories of time travel and multiple worlds, both of which find basis in quantum theory.
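
For readers who like to see the arithmetic, here is a minimal sketch of what “being in two places at once” means formally: a state holds an amplitude for each possibility, and measurement samples one definite outcome according to the squared amplitudes (the Born rule). The numbers below are illustrative, not taken from the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# An equal superposition of "here" (|0>) and "there" (|1>)
amplitudes = np.array([1, 1]) / np.sqrt(2)

# Born rule: measurement probabilities are squared amplitude magnitudes
probs = np.abs(amplitudes) ** 2          # [0.5, 0.5]

# Each measurement collapses the state to a single definite outcome
outcomes = rng.choice(["here", "there"], size=10, p=probs)
print(outcomes)
```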

On a fundamental level, the new work published in Nature illustrates that the multi-state paradox of quantum mechanics, known as superposition, functions on a larger scale than previously demonstrated. In theory, we already knew this to be true, but the experiment proves it at the largest scale yet; it had previously been demonstrated only with the smallest possible particles: atoms, photons and electrons. The set-up used by Arndt and his team is essentially a souped-up version of the double-slit experiment, first performed in 1801 and used regularly in quantum-mechanical experiments ever since to observe the effects of superposition.

The simple experiment involves particles of light (photons) beamed toward a barrier with two slits in it. On a screen behind the barrier, the effects of quantum superposition are displayed in the form of what is known as an interference pattern: a series of alternating bright and dark stripes, reproduced in the sketch below.

The resulting striped pattern is interesting because one might assume that a single beam of photons would produce a solitary line, indicating that the photons travel along a single path. Instead, the stripes show that all of the photon’s possible paths are taken and eventually interfere with each other, suggesting the particle in fact also acts as a wave. This reflects the probabilistic nature of quantum phenomena, challenging Einstein’s famous claim that “God does not play dice with the universe”.
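
The ideal pattern is easy to reproduce from the standard far-field formula, in which a two-slit interference term is modulated by a single-slit diffraction envelope. A minimal sketch, with assumed slit dimensions (none are given in the piece):

```python
import numpy as np
import matplotlib.pyplot as plt

# Idealised two-slit pattern (far field, small angles). All dimensions
# are assumptions for illustration.
wavelength = 500e-9    # green light, 500 nm
d, a = 50e-6, 10e-6    # slit separation and slit width
L = 1.0                # distance from slits to screen (m)

x = np.linspace(-0.05, 0.05, 2001)           # position on the screen (m)
beta = np.pi * d * x / (wavelength * L)      # two-slit interference term
alpha = np.pi * a * x / (wavelength * L)     # single-slit envelope term
intensity = np.cos(beta) ** 2 * np.sinc(alpha / np.pi) ** 2

plt.plot(x * 100, intensity)
plt.xlabel("position on screen (cm)")
plt.ylabel("relative intensity")
plt.title("Two-slit interference: bright and dark stripes")
plt.show()
```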

In order to pull their super-sized version of this experiment off, the international team had to not only create the perfect environment but also synthesize the massive molecule itself, to ensure it met the requirements for complex quantum activity to occur. The team built a custom interferometer—which, as the name suggests, is a tool that works by merging two or more sources of light in order to create an interference pattern—called the Long-Baseline Universal Matter-Wave Interferometer (LUMI). The team’s LUMI also beats a record: it is the longest interferometer ever built, with a baseline length of 2 metres. This specialised machine permitted the researchers to fire a beam of heavy molecules (some more than 25,000 times the mass of a hydrogen atom) at the multiple-slit apparatus and observe the resulting interference pattern, confirming the molecules’ state of superposition.
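
The scale of the challenge follows from the de Broglie relation λ = h/mv, which assigns a wavelength to any moving mass. A rough calculation for one of these molecules, assuming a beam speed of a few hundred metres per second (the article gives no figure), shows why such a long, exquisitely stable instrument is needed:

```python
# De Broglie wavelength of a ~25,000 amu molecule; the beam velocity
# is an assumption, since the article does not give one
h = 6.626e-34        # Planck's constant (J s)
amu = 1.661e-27      # atomic mass unit (kg)

mass = 25_000 * amu
velocity = 300       # m/s (assumed, typical for slow molecular beams)

wavelength = h / (mass * velocity)
print(f"de Broglie wavelength: {wavelength:.1e} m")  # ~5e-14 m, far smaller than the molecule itself
```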

With records being broken in the quantum space with what feels like near-weekly regularity, this advancement marks a unique turning point in the disagreement between quantum mechanics and general relativity: the two frameworks we use to understand the world around us have come as close to being bridged as ever before. While the success of this experiment does wedge the door open for a number of seemingly bizarre theories like time travel and multiple worlds, it is doubtful that human beings or planets will be time travelling through multiple realities any time soon, if ever. What this new, scalable research does do is push further the limits at which scientists can observe quantum superposition, enabling and encouraging future research to continue to explore those limits.

Quantum Applications, Quantum Computing

Transforming Drug Development: A Critical Role for Quantum Computing



Feature image via Nature

With news of Google’s possible achievement of quantum supremacy, quantum computing’s promise in a diversity of fields grows ever-more tangible. Drug discovery is just one of a number of areas in which quantum computing is expected to play a disruptive role. On average, it takes over ten years and billions of dollars to bring a potentially life-saving new drug to market. Quantum computers promise to revolutionize the currently expensive, difficult and lengthy process of drug discovery and development, by expanding the search for new chemicals to treat some of the world’s most deadly diseases, speeding up the creation of new drugs and cutting the costs of their development. At this prospective turning point in the advancement of quantum computing, Project Q takes stock of quantum applications in drug research and development.

Currently, researchers rely on computer models and simulations (M&S) to analyse how atoms and molecules behave, in order to develop drugs that will have optimal positive effects and minimal harmful ones. However, while of critical value to this process, today’s M&S tools quickly reach the limits of their utility in the complex and computationally intensive process of molecular simulation. The goal of molecular simulation is to find a compound’s most stable configuration, known as its ground state. In order to do this, researchers use M&S systems to simulate the interactions between each of that compound’s electrons, in each atom, in order to test how they will react to one another. This is a fairly straightforward task, as long as the molecules being tested are simple enough. However, even today’s most powerful supercomputers are only capable of simulating molecules of up to a few hundred atoms, limiting their calculations to only a small fraction of all chemicals that exist.
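
The bottleneck is easy to state in miniature. Finding a ground state amounts to finding the lowest eigenvalue of a Hamiltonian matrix, and the matrix doubles in size with every quantum degree of freedom added. A toy sketch, in which a random matrix stands in for a real molecular Hamiltonian:

```python
import numpy as np

# Toy "molecular simulation": the lowest eigenvalue of a Hamiltonian
# matrix is the ground-state energy. The point is the scaling: n quantum
# degrees of freedom need a 2^n x 2^n matrix.
rng = np.random.default_rng(1)

for n in (4, 8, 10):
    dim = 2 ** n
    m = rng.normal(size=(dim, dim))
    hamiltonian = (m + m.T) / 2                  # symmetric, so real eigenvalues
    ground_state_energy = np.linalg.eigvalsh(hamiltonian)[0]
    print(f"n = {n:2d}  matrix size = {dim}x{dim}  E0 = {ground_state_energy:8.2f}")
```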

For a whole host of larger molecules that could be used to make new, life-saving drugs, researchers currently have no better option than to approximate how a molecule may react and then test its behaviour in trials. This process is incredibly inefficient, and about ninety percent of drugs that do reach clinical trials fail during the first phase. Adding to this complexity, M&S methods are unable to calculate the quantum interactions that contribute to determining the characteristics of a molecule. A technological update in drug discovery is long overdue.

Ultimately, the main technological limitation facing drug research and development today is that classical computers struggle with what are known as optimization problems—finding the best solution by testing all feasible solutions—a process which is incredibly time- and energy-intensive. Quantum computers, in theory, are extremely good at optimization problems. This is due to their ability to leverage parallel states of quantum superposition, which enables them to model all possible outcomes of a problem at once, including the quantum interactions that happen at the particle level. Theoretically, as they reach their promised computational capacity, quantum computers should be able to rapidly process massive amounts of data.
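
A minimal illustration of optimization by exhaustive search, as described above: every candidate configuration is scored and the best one kept, with the candidate pool doubling for each added binary variable (the problem data here is invented):

```python
from itertools import product

# Exhaustive search: score every feasible n-bit configuration and keep
# the best. The search space doubles with each extra variable, which is
# what makes this approach so time- and energy-intensive classically.
def brute_force_best(n_vars, score):
    return max(product((0, 1), repeat=n_vars), key=score)

weights = [3, -1, 4, 1, -5, 9, 2, -6]                 # toy problem data
score = lambda bits: sum(w * b for w, b in zip(weights, bits))

best = brute_force_best(len(weights), score)          # checks 2^8 = 256 candidates
print(best, score(best))
```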

In 2017, IBM Q researchers achieved the most complex molecular simulation ever modelled on a quantum computer, proving the potential use-value for quantum computers in the pharmaceutical industry. The research suggests that if applied to drug discovery, quantum computers could model and test new drugs through molecular simulation far more comprehensively and much quicker than classical computers, effectively slashing the costs of novel drug research and development. Aside from empowering researchers to discover new treatments for a range of diseases, quantum computing could also help bring new drugs to trial more quickly and improve the safety of trials.

Already, innovators and researchers working on quantum applications in drug development are making waves in the pharmaceutical industry. Abhinav Kandala, part of the IBM Q team that simulated the largest molecule on a quantum computer back in 2017, has continued to push the boundaries of quantum computing in order to make it industry-ready, faster. His work focuses on a major challenge in quantum computing: improving accuracy. Quantum computers at their current stage are still drastically error-prone, hampering their utility for application in drug discovery and development. One of the MIT Technology Review’s 35 Innovators Under 35, Kandala has demonstrated how quantum errors can actually be harnessed in order to boost the accuracy of quantum computations, regardless of the number of qubits. General advancements in quantum computing like this could help to bring its benefits to industry sooner.
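
The article does not name the technique, but one error-mitigation idea published by Kandala and colleagues is zero-noise extrapolation: run the same circuit at several deliberately amplified noise levels, then extrapolate the measured value back to the zero-noise limit. A minimal sketch with simulated measurements:

```python
import numpy as np

# Zero-noise extrapolation, sketched with fake data: expectation values
# measured at amplified noise levels are extrapolated back to zero noise.
rng = np.random.default_rng(3)

true_value = 0.75                       # noiseless expectation (unknown in practice)
noise_scales = np.array([1.0, 1.5, 2.0, 2.5])

# Pretend each run decays linearly with noise, plus a little shot noise
measured = true_value - 0.12 * noise_scales + rng.normal(0, 0.005, 4)

# Fit a line and evaluate it at noise scale 0: the mitigated estimate
slope, intercept = np.polyfit(noise_scales, measured, 1)
print(f"raw (scale 1): {measured[0]:.3f}, mitigated: {intercept:.3f}")
```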

There are a number of young companies emerging in the pharmaceutical research space, looking at the computational boost and projected accuracy that quantum computing could lend to a range of challenges in diagnostics, personalised medicine and treatments. As quantum computers are not yet advanced enough to stand alone, most of these global start-ups rely on a blend of emerging and classical technologies. Especially prominent is the blended approach that combines machine learning and quantum computing, a topic we have previously explored here.

Another of the MIT Technology Review’s 35 Innovators Under 35, Noor Shaker leads a company that is harnessing these two emerging technologies in order to speed up the creation of new medicines. Her company, GTN LTD, is producing technology that layers the processing power of quantum computing with machine learning algorithms to sort through massive amounts of chemical data in search of new molecules that could be used in disease treatment and prevention. Using this method, GTN (a Syrian female-run company) hopes to build a critical bridge in healthcare that could help to lessen the gap in access to and quality of healthcare for people living in developing countries. GTN LTD’s application of these two technologies is just one example of the numerous ways in which they could be used to create and spread benefit across global healthcare systems.

Machine learning projects are already being implemented as part of a growing trend in digital healthcare, providing a helpful starting point for discussion of how other emerging technologies like quantum computing could also impact the sector. A recent article in Nature explores how even the most well-meaning of artificial intelligence (AI) applications in healthcare can lead to harmful outcomes for certain vulnerable sectors of society. The examples investigated by author Linda Nordling demonstrate the need to apply a careful social-impact and sustainability methodology throughout the process. As Nordling explains, many machine learning-based projects in healthcare can reinforce inequalities rather than help to level the playing field, if equity is not a factor that is thoroughly considered and addressed throughout the entire research and development process.

Of course, every technology is different. The challenges confronting AI applications in the healthcare sector may not translate directly to the risks that quantum computing could pose. However, there are certainly lessons to be learned. Every emerging technology has as much potential to lessen the gap between rich and poor as it has to widen it. Whether development points toward the helpful or the harmful hinges on many factors, including accountability and regulation. Fundamentally, the incorporation of a methodological focus on equity and inclusion, from the inception of an emerging technology through to its employment, is critical.

The application of quantum computing in drug discovery is no exception to this rule. The development of this emerging technology, both alone and in concert with other technologies, has the potential to make a significantly positive impact on society. With proper care taken to ensure ethical research, development and application, the trickle-down effects of the quantum revolution could improve the lives of many. It is thus imperative that we seek to understand the impacts that quantum computing could have in the pharmaceutical industry if we want to ensure its potential to help discover cures to intractable diseases like cancer and Alzheimer’s becomes a benefit that is distributed equitably across the globe. This is not a problem for quantum to solve, but for society.

Quantum Theory

Why Has Science Stopped Trying to Understand Quantum Theory?



Feature image via BBC Future

It is practically a truism that no one really understands quantum mechanics; yet quantum theory reigns as the dominant theory of the very small just as relativity does of the very large. This remains a paradox for a theory so fundamental that it provides the basis for our current understanding of phenomena such as atomic decay and why stars shine, as well as how lasers and transistors work, devices that are embedded in just about everything. Physicists use and rely on quantum theory to predict the outcomes of experiments, but they have stopped, as Dr Sean Carroll asserts in a recent op-ed for the New York Times, trying to understand how it works. While there are many contributing factors to this conundrum, a salient inhibitor is the siloed way in which we have come to think about the discipline of science and the way that the modern academic system reflects and perpetuates this way of thinking.

The barriers to understanding quantum begin with the fact that there are some truly sticky bits of theory that simply cannot be accounted for within our existing scientific frameworks. One such example is quantum’s measurement problem: a quantum waveform exists in all states of superposition until it is observed or measured, whereupon it collapses into a single state. The fundamental challenge posed by this problem is that science supposes the existence of a measurable, objective world. The centrality of the role of the observer in quantum interactions defies this assumption by asserting that reality is observer-dependent and therefore not fixed. This idea alone confronts science in a fundamental way, requiring an interpretation of reality that holds space for the “weird” and the “strange” of quantum mechanics—something that mathematics alone has not yet been able to provide.

This issue in quantum physics ignited a deep rift among the brightest minds of physics during the mid-1920s. Albert Einstein, representing the side of the argument which rejected the proposal that the quantum world could be characterized by probabilities rather than certainties, is famously quoted as claiming, “God does not play dice with the universe”. The interpretation of quantum mechanics that prevailed is the Copenhagen Interpretation, which asserts the rather less-than-satisfying conclusion that we simply cannot know more about quantum mechanics than what we can measure using equations. Understanding the theory was thus placed in the “too hard” basket.

Still, divergent theories, from quantum’s inception into the 1950s, have attempted to make sense of this phenomenon. These theories had an undeniably philosophical bent, and most of their proponents were shunned from science altogether. In 1957, for example, Hugh Everett constructed a theory to account for quantum superposition with his Many Worlds Interpretation (MWI). Essentially, Everett’s MWI proposes that for every state of an atom’s superposition in a quantum interaction, that atom simultaneously takes each potential path, creating multiple, coexistent physical realities for each state. Mainstream physicists ridiculed Everett for what they considered to be a scientifically blasphemous postulation, a fact which no doubt contributed to his transition from science to defence analytics shortly after he submitted his dissertation.

Scientists’ resistance toward a multidisciplinary understanding of a scientific problem, however, is a relatively new phenomenon. For centuries, the disciplines of science and philosophy were taken in unity. In fact, the term ‘scientist’ was not even coined until the 19th century. Before that, great names such as Galileo and Newton considered themselves ‘natural philosophers’ rather than ‘scientists’. Even Einstein, Heisenberg, Dirac and their cohort, the fathers of quantum mechanics, were schooled in European philosophy. This deep understanding of both the “soft” and “hard” sciences influenced their way of thinking, the types of questions they posed and ultimately the theories they constructed. As such, it enabled them to think beyond the boundaries of what was generally accepted knowledge at the time and allowed them to construct new ideas that came to be accepted as fact.

However, since an epistemological transformation in the 1600s and 1700s, which produced the distinction of “science” as the empirical investigation of phenomena, science and philosophy have become increasingly separate disciplines. While it has been a gradual process, this disciplinary divorce has become ingrained in society with the help of most knowledge institutions worldwide, culminating in the propagation of an isolationist understanding of these and other disciplines. This poses a significant challenge to the kind of fruitful multidisciplinary thinking that conceived nearly all of science’s greatest discoveries to date.

Beyond reifying the isolation of disciplines through course structures, universities also play a significant role in shaping academic discovery by prioritising certain areas of research over others. As Carroll elaborates:

“Few modern physics departments have researchers working to understand the foundations of quantum theory. On the contrary, students who demonstrate an interest in the topic are gently but firmly — maybe not so gently — steered away, sometimes with an admonishment to “Shut up and calculate!” Professors who become interested might see their grant money drying up, as their colleagues bemoan that they have lost interest in serious work.”

This situation is compounded by the fact that the metrics by which academic researchers are hired, retained and promoted have undergone a transformation over the last half-century. During this time, research culture has been impacted drastically by the dawn of the Internet, which has enabled an open and thriving digital research economy. At the same time, an associated shift in focus towards metrics of productivity, quantified largely through research output, has become dominant across knowledge institutions. These changes frame the pervasive expectation that academic researchers should devote the majority of their time to publishing on certain topics and in certain journals in order to remain relevant and successful. Among other challenges, this focus on publication as a distinct metric of prestige in the academic sciences has led many to game the system, with the resultant focus on quantity of output often detracting from its quality.

Beyond this, the phenomenon known in academia as “publish or perish” culture—that is, the pressure on academics to continuously publish work in order to sustain and further their career—has left academic scientists with little spare time for creative thinking. This modern academic environment has been lamented by Peter Higgs, the scientist who first predicted the Higgs boson and who doubts he could have achieved that breakthrough in today’s academic system:

“It’s difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964,” Higgs said. “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.”

Explorative and imaginative thought requires ample time and space, as well as the expectation that, by the nature of trying new things, the researcher will encounter far more twists, turns and dead-ends than solutions. While these qualities do not fit well into the “publish or perish” framework, it is well established that they are of critical value to innovation. Discovery demands that we challenge the very prejudices that have become ingrained in our conceptual structures. In order to do this, one must have the freedom and encouragement to shatter them, rather than be required to work within systems that reinforce them.

Artificial Intelligence, Quantum International Relations, Quantum Research

India Races Toward Quantum Amid Kashmir Crisis



Amid troubling news of serious human rights violations carried out in India-controlled Jammu and Kashmir—including a debilitating digital blockade lasting over two weeks—Indian Prime Minister Narendra Modi signed an agreement with France for a landmark technological collaboration in quantum and artificial intelligence (AI). The Indo-French collaboration between French company Atos and India’s Centre for Development of Advanced Computing (C-DAC) will establish a Quantum Computing Experience Centre at C-DAC’s headquarters in Pune, India and deliver an Atos Quantum Learning Machine. The high-technology partnership, which “advocate[s] a vision of digital technologies that empowers citizens, reduces inequalities, and promotes sustainable development”, sits against the controversial backdrop of India’s current actions in the Kashmir crisis and presents an interesting view into the intersection of international politics and quantum technologies.

During his first term, Narendra Modi began to position India as a global technology hub, putting its innovation sector on the map by embracing international investment and collaboration. The advancements made over the last five years as a result have helped to fuel India’s socioeconomic development and cement its place on the global stage as a major emerging economy with a vibrant technology sector. Now in his second term, Modi seeks to apply a digital tax to global technology giants like Google and Facebook for their activities in India. Though this policy shift has been identified as a potential barrier to Big Tech’s incentive to contribute to India’s start-up space, Modi has nevertheless continued to cultivate a tech-forward name for his government. His “New India” government focuses on sustainable development and emerging technologies, especially agricultural technology, AI and quantum.

Within this context, India’s national quantum technology research and development capacity has blossomed at a rapid pace, especially with regard to quantum mechanical theory and theoretical physics research and software development. However, unlike the top competitors in quantum computing such as China and the U.S., India lacks a strong quantum computing hardware industry, a challenge which could be exacerbated by Modi’s Big Tech taxation policy. In order to supplement research activities in its burgeoning quantum and AI sectors, Modi has instead turned toward collaboration with international governments as a vehicle to boost domestic technological development. For example, India’s recently established fund-to-fund partnership with Japan will support over 100 start-ups in AI and IoT. Likewise, the new Indo-French partnership is a critical piece of the puzzle for India, promising to help boost its national deficiency in applied quantum computing development and help India to become a leader in the quantum space.

With international partnerships playing such a key role in Modi’s plan for the development and growth of India’s quantum computing and AI industries, one might expect the country’s actions in state-controlled Jammu and Kashmir to damage its international standing. That expectation, however, is demonstrably negated by the signing of the Indo-French bilateral agreement. The agreement, which stipulates French alignment with India as a partner in sustainable development and emerging technologies, outlines the countries’ shared commitment to “an open, reliable, secure, stable and peaceful cyberspace”. It was signed even as India, the world leader in internet shutdowns, enacted a digital lockdown on Kashmir for the 51st time in 2019 alone. This record sits in stark contrast to the stated objectives of the partnership and demonstrates the separation of business from peace-building priorities on an international scale.

The Kashmir conflict, a turbulent territorial dispute between India, Pakistan and China, dates back to the partition of 1947 and has already incited four wars between India and Pakistan. Kashmir, dubbed one of the world’s most militarized zones, is of strategic value to both countries and is India’s only Muslim-majority region. The recent conflict was spurred by a series of brutal attacks and rebellions since February 2019, which ultimately led the Modi government to revoke India-controlled Kashmir’s “special status” of autonomy granted under Article 370 of the Indian constitution. Given this complex history and characterization, India’s fresh assault on the region has led many (including Pakistan’s own Prime Minister) to fear an escalation of violence that could result in a worst-case-scenario nuclear face-off between India and Pakistan.

Whether or not it is representative of the true feelings of Modi’s “New India”, Indian national media has expressed nearly unequivocal support for the revocation of Article 370. French comments, however, lean toward neutrality—tactfully holding the situation at arm’s length while urging bilateral negotiation between India and Pakistan. Whether or not the two countries come to a peaceful resolution, it appears that international investment in Indian quantum and AI development will not waver in the face of the Kashmir conflict. Ironically, as India sprints to catch up in the quantum race with the support of France and other international allies, the legacy of the last technological arms race, the nuclear one, looms heavy over the continent.

Quantum Internet, Quantum Research

Quantum Teleportation: Paving the Way for a Quantum Internet



Last week’s big quantum news centred on two proof-of-concept studies, both of which claim to have achieved, for the first time, quantum teleportation using a tripartite unit of quantum information called a qutrit. While quantum teleportation has been demonstrated previously, it has only been carried out with qubits, which are capable of storing less information than qutrits but are thought to be more stable. The novel feat was achieved independently by two teams, one led by Chinese physicist Guang-Can Guo at the University of Science and Technology of China (USTC) and the other an international collaboration headed by Anton Zeilinger of the Austrian Academy of Sciences and Jian-Wei Pan of USTC. While both teams have reported their results in preprint articles, the article by the Austrian-led team has been accepted for publication in Physical Review Letters.

Competition for credit aside, the teams’ findings ultimately support each other in substantiating an advancement in quantum teleportation theory: namely, that quantum networks should be capable of carrying far more information with less interference than previously thought. This advancement—like many in the world of quantum—is likely to be most exciting for physicists, its applied significance evading the grasp of those of us with less scientific minds. Nevertheless, the notion of quantum teleportation has once again grabbed headlines and imaginations, providing a good opportunity to explore the concept and the applied significance that advancements like this might eventually have on our world.

While it may sound flashy, quantum teleportation is an affair less akin to science fiction than one might imagine. On a basic level, quantum teleportation differs from ‘Star Trek teleportation’ because it is used to transmit information rather than macroscale physical objects, like human beings. This is possible because of quantum entanglement, a phenomenon of quantum physics that allows us to look at one particle or group of particles and know things about another, even if those particles are separated by vast distances. Quantum teleportation relies on entanglement to transfer information based on this shared state of being demonstrated by entangled particles. As such, quantum teleportation can be defined as “the instantaneous transfer of a state between particles separated by a long distance”.
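
For the curious, the standard qubit teleportation protocol is small enough to simulate directly: Alice entangles her unknown qubit with her half of a shared entangled pair, measures both, and sends two classical bits that tell Bob how to correct his half. A minimal state-vector sketch (illustrative only, not the qutrit experiments reported above):

```python
import numpy as np

rng = np.random.default_rng(7)

# Single-qubit gates
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def apply(gate, qubit, state, n=3):
    """Apply a single-qubit gate to one qubit of an n-qubit state vector."""
    ops = [gate if q == qubit else I for q in range(n)]
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

def cnot(control, target, state, n=3):
    """CNOT as a permutation of basis-state amplitudes."""
    new = state.copy()
    for i in range(2 ** n):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
            j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
            new[j] = state[i]
    return new

# The unknown state Alice wants to send: |psi> = a|0> + b|1>
a, b = 0.6, 0.8j
psi = np.array([a, b])

# Qubit 0 is Alice's; qubits 1 and 2 are a shared entangled Bell pair
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice entangles her qubit with her half of the pair, then measures both
state = apply(H, 0, cnot(0, 1, state))
outcome = rng.choice(8, p=np.abs(state) ** 2)
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Collapse the state onto the measured outcome
mask = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1 for i in range(8)])
state = np.where(mask, state, 0)
state /= np.linalg.norm(state)

# Bob corrects his qubit using the two *classical* bits Alice sends him
if m1:
    state = apply(X, 2, state)
if m0:
    state = apply(Z, 2, state)

# Bob's qubit is now exactly |psi>, though |psi> itself never travelled
base = m0 * 4 + m1 * 2
print(np.allclose(state[base:base + 2], psi))  # True
```

Note the catch built into the protocol: Bob cannot recover the state until Alice’s two classical bits arrive, which is why teleportation does not transmit usable information faster than light.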

Quantum teleportation holds the most obvious promise in the discipline of quantum communication, where its impact on secure communication was established as early as 1997. In 2017, Chinese scientists working with a team in Austria made waves with their announcement that they had achieved transnational quantum teleportation, establishing a quantum-secure connection for a video conference between the Chinese Academy of Sciences in Beijing and the Austrian Academy of Sciences in Vienna, some 7,600 kilometres apart. The experiment utilized China’s Micius satellite to transmit information securely using photons. Micius carries a highly sensitive photon receiver, capable of detecting the quantum states of single photons fired from the ground. These photons, beamed via Micius, acted as qubits, allowing researchers in both countries to access a shared quantum key and thus enabling them to participate in the quantum-encrypted video call. Critically, had the data been accessed by a third party, the code would have been scrambled, leaving evidence of tampering for researchers at both ends of the connection.
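
The report does not spell out the key-exchange protocol, but a textbook scheme consistent with its description (a shared key plus built-in tamper evidence) is BB84-style quantum key distribution. In this minimal intercept-resend simulation, an eavesdropper’s measurements corrupt roughly a quarter of the sifted key, betraying her presence:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Alice encodes random bits in randomly chosen bases (0 = rectilinear, 1 = diagonal)
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

def measure(bits, prep_bases, meas_bases):
    """Matching bases read the bit faithfully; mismatched bases give a coin flip."""
    coin_flips = rng.integers(0, 2, len(bits))
    return np.where(prep_bases == meas_bases, bits, coin_flips)

# Eve intercepts, measures in random bases, and re-sends what she saw
eve_bases = rng.integers(0, 2, n)
eve_bits = measure(alice_bits, alice_bases, eve_bases)

# Bob measures the (tampered) photons in his own random bases
bob_bases = rng.integers(0, 2, n)
bob_bits = measure(eve_bits, eve_bases, bob_bases)

# Sifting: Alice and Bob publicly compare bases and keep only the matches
keep = alice_bases == bob_bases
alice_key, bob_key = alice_bits[keep], bob_bits[keep]

# Comparing a public sample of the key reveals Eve: ~25% of bits disagree
sample = rng.random(len(alice_key)) < 0.5
error_rate = np.mean(alice_key[sample] != bob_key[sample])
print(f"sifted key: {keep.sum()} bits, sample error rate: {error_rate:.1%}")
```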

This experiment, facilitated by quantum teleportation, proved two fundamental and impactful theories in quantum physics: that quantum communication can provide a previously unfathomable level of security and that it is capable of doing so on a global scale. Given these capabilities and coupled with the new qutrit proof-of-concept work, the realm of applied possibilities for quantum teleportation is expanding.

Aside from ultra-secure, transcontinental video conferences, one very hyped application for quantum teleportation is in the development of a hyper-fast quantum internet. Because of the entangled state of the particles, the change in quantum state appears instantaneous in quantum teleportation; extracting usable information, however, still requires classical communication, which cannot exceed the speed of light. As such, even quantum information must travel through ground-based fibre-optic cables or via photon-sensitive space-based satellites, like China’s Micius. This infrastructure is both expensive and potentially expansive, posing a formidable challenge for a global roll-out of a quantum internet. Still, these early experiments have laid the groundwork for the development of a quantum-secure Wi-Fi by putting theory to the test and producing promising results.

Currently, a team of researchers at Delft University in the Netherlands is working to build a quantum network, using quantum teleportation as the mode of transport for information between linkage points. The project, which aims to connect four cities in the Netherlands, is scheduled for completion in 2020. In China too, researchers are constructing the backbone for a quantum network to connect Beijing and Shanghai. Aside from support from banks and other commercial entities, progress on the concept of both localised and international quantum networks has been spurred by pressing anxiety about global levels of cybersecurity.

A critical advantage of a future quantum internet is the enhanced security afforded by quantum teleportation—the ability to create an unhackable connection. This could have serious implications for national security and could present a potential solution for many of the foreign surveillance and interference challenges that countries face today. For example, it is now public knowledge in the U.S. that Russia has demonstrated the ability to directly interfere with most paperless voting systems. While states are currently reluctant to make changes to the U.S. vote-casting system, alternatives are slowly being considered—from regressive paper-ballot casting to progressive blockchain applications—in order to safeguard American votes against hacking efforts. Quantum teleportation could offer an interesting alternative in this space as the technology continues to develop.

Though quantum teleportation will not be transporting human beings between planets any time soon, it will play a key role in ushering in an internet revolution. While it remains to be seen exactly how that revolution will play out, it is clear that it will bring an unprecedented level of security and speed to global communications. It is also apparent that the level of interest in the secure and high-speed communications afforded through quantum teleportation is broad and deep, spanning both public and private sectors across the globe. Quantum teleportation has recently seen a number of experimental proofs, pushing the field of quantum communications to the fore of quantum development and promising to deliver a much sought-after security transformation within the decade.