Author: Gabriella Skoff

Artificial Intelligence, Quantum Applications, Quantum Computing

Hand-washing, social distancing and quantum computers? Lessons for tomorrow from today’s global pandemic



Feature image via NPR

Lately, I have begun to wonder how many times the word “unprecedented” can be used in the news before it loses meaning altogether. As society is inundated with stories, numbers and graphs detailing the spread and mortality rate of the novel coronavirus, it is important to remember that parallels can be drawn between previous global pandemics and the challenges we are now facing, and that lessons can be learned from them. In fact, health pandemics occur at a disturbingly regular rate. Most recently, the Ebola outbreak that began in 2013 affected large swathes of West Africa; in the late 80s and early 90s the ongoing AIDS epidemic hit the U.S. hard; and a little further back in history, the Spanish Flu of 1918 presents a similar case study to Coronavirus. Further back still, the Bubonic Plague wiped out between 30 and 60 percent of Europe’s population in the 1300s, and even biblical times tell stories of widespread plagues. Similarities exist as well between our current state (and our predicted future state) and wartime, when large-scale social and economic impacts have closed shutters, necessitated isolation, and strained resources and economies across the globe.

While a pandemic of this scale or severity may be nothing new, there is one important, unprecedented factor we are experiencing with COVID-19 today: the role that modern technology is playing in the fight against it. Never before has the world been equipped with the level of advanced technology currently at our disposal. With two of the world’s technological giants, China and the U.S., serving as epicentres of COVID-19’s infection and spread, it is interesting to see the ways in which technologies are being used to combat the spread of the virus, diagnose cases, control information, and develop treatments. While quantum technology is still too immature to contribute significantly to the current effort, there are signs pointing to how useful it will one day be in a similar scenario, which we can be sure will eventually arise again. This moment presents a stark opportunity for emerging technologies to take the stage and for the humans behind them to demonstrate how they can be used for the good of society.

China presents a COVID-19 case study characterised by stark contrasts. The country has been on the receiving end of both global vitriol and applause within the span of a week for its handling of the situation. Its approach to managing the outbreak has been characterised as draconian and a serious infringement on its population’s civil liberties and human rights, but apparently, it worked. The numbers coming out of China indicate that the country has achieved something few others have yet been able to: a dramatic decrease in cases, while nearly everywhere else cases continue to skyrocket. It appears China has gained the upper hand in the situation, a battle in which a slew of emerging technologies held the frontline. Notable among these are China’s use of robotics, drones, Big Data and artificial intelligence (AI).

China’s supercomputer Tianhe-1 has also been put to work. Using an AI system to compare hundreds of chest scans of potentially infected patients, Tianhe-1 was tested as an alternative to the scarce COVID-19 test kits in China. The results produced by the supercomputer are reportedly ready in seconds, not days, allowing for rapid diagnoses and enabling quicker action to be taken to isolate the ill. This function is of critical value in the case of a virus where many otherwise “healthy” people act as carriers, manifesting little to no symptoms but remaining highly contagious. According to the source, this technology has a success rate higher than 80% and is able to assist doctors coping with an inundated healthcare system by doing in just 10 seconds what a doctor can do in 15 minutes. This system is now employed in over 30 hospitals across China, where supercomputing power combined with an AI system and access to vast amounts of data are enabling it to improve every day.

In the U.S. as well, the most powerful supercomputer in the world, the IBM-built, 200-petaflop Summit, is being employed to help researchers find a treatment for Coronavirus. In February, researchers used Summit to screen a library of 8,000 drug compounds already approved for human use, searching for those most likely to be effective against COVID-19 by simulating reactions with the virus’s own component structure. As with China’s Tianhe-1, researchers used Summit to complete in two days a job that would otherwise have taken the team months using classical computers. Over those two days, Summit identified 77 small-molecule drug compounds from the original list of 8,000 for further research and trials. While Summit is not being used to discover a new drug to cure Coronavirus, it is hoped that this system can speed up trials for the use of existing, approved drugs to combat COVID-19.

These initiatives, empowered by humans but powered by modern supercomputing technology, endow us with the critical component of speed, giving the world’s researchers precious time to diagnose and potentially treat infected patients before healthcare systems become overwhelmed. Both the Chinese system powered by Tianhe-1 and IBM’s Clinical Development system have been made available free of charge during this time. These systems provide critical support by enabling exponentially quicker diagnoses and anti-viral identification; however, they cannot immediately provide what is most sought after: a cure.

Novel drug development is a time-consuming and expensive process, often requiring up to 10 years and billions of dollars. Project Q has previously investigated how quantum computing could be a critical tool in speeding this process up and making it cheaper. With news of the role supercomputers are playing in diagnosing COVID-19 and matching existing treatments to it, it becomes ever more apparent that quantum computers will be of great value in helping researchers discover new drugs to treat and protect against future viral outbreaks. Just as supercomputers dwarf classical computers in processing power, so too are quantum computers expected to provide an edge over supercomputers for certain classes of problems, rapidly processing vast amounts of information.

Quantum computers will be able to speed up the process of drug development by providing researchers with the ability to compare combinations of any number of compounds, much like what has been done with Summit, but much faster. Unlike Summit, which can only model molecules up to a certain level of complexity, quantum computers will be able to simulate a greater array of potentially life-saving compounds. Further, according to early experimental results from recent research by IBM, quantum computing will be able to do so with far greater speed and accuracy than is currently possible.

Paired with deep-learning AI technology, quantum computing could also help to accurately model exactly how an infectious disease like COVID-19 spreads. This capability requires vast amounts of data, which is something the U.S. government is currently calling on social media companies like Facebook to provide. During this pandemic, it has become clear how valuable accurate modelling of the virus’s spread has been to governments and health officials for creating visual representations of the “what ifs”. These models help us not only to understand projections for transmission, but also to simulate and demonstrate how well certain interventions might work. Machine learning algorithms paired with quantum computing power could eventually provide us with more accurate models, helping both to simulate various interventions and to help society understand our own role in prevention, as authorities have tried to communicate through the “flatten the curve” discourse around COVID-19.
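For readers curious what “flattening the curve” looks like in the simplest possible terms, below is a minimal sketch of a classical SIR (susceptible-infected-recovered) model in Python. It is purely illustrative: the parameter values are assumptions rather than fitted COVID-19 data, and it stands in for the far richer machine-learning and, eventually, quantum-assisted models discussed above.

```python
# Minimal SIR ("susceptible-infected-recovered") sketch of epidemic spread.
# Illustrative only: parameter values are assumptions, not fitted COVID-19 data,
# and this is a classical toy model, not the quantum/ML modelling discussed above.
import numpy as np

def sir(beta, gamma=0.1, days=200, i0=1e-4):
    """Integrate a simple SIR model with daily steps and return the infected fraction over time."""
    s, i, r = 1.0 - i0, i0, 0.0
    infected = []
    for _ in range(days):
        new_infections = beta * s * i      # contacts between susceptible and infected
        new_recoveries = gamma * i         # fixed daily recovery rate
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infected.append(i)
    return np.array(infected)

baseline = sir(beta=0.3)        # no intervention
distanced = sir(beta=0.15)      # contact rate halved by social distancing
print(f"Peak infected, baseline:   {baseline.max():.1%}")
print(f"Peak infected, distancing: {distanced.max():.1%}")
```

Lowering the contact rate in this toy model reduces and delays the peak of simultaneous infections, which is precisely the effect public-health authorities are after.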

We are learning that in the case of COVID-19, time is of the essence. The virus is unique in its ability to remain asymptomatic in many of those infected, creating perfect carriers in a highly globalised, mobile and crowded society. Specialists such as infectious disease expert Michael Osterholm, renowned virologist Robert Webster, Jeremy Konyndyk, the former director of USAID’s Office of U.S. Foreign Disaster Assistance, and even Bill Gates have warned society about the next global health pandemic for years. And yet, as is the case with global warming, preparing resources, the public and responders for a future assailant (no matter how deadly) has never risen to the top of any government’s list of priorities. Now facing this reality, governments are finding their hospitals poorly equipped, society failing to respond to emergency directives, and frontline workers bracing for disaster.

Modern computing technologies are helping to quicken the pace at which humans can cope with COVID-19 and ultimately empowering researchers to beat it in innovative ways. One example of this is the new public-private AI research consortium, c3.ai, which has vowed to put its mission of combining top researchers with a focus on using AI to battle insidious social problems to the job of combatting the spread of Coronavirus. Social networks and platforms, including Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube, have issued a joint statement on their efforts to combat the spread of misinformation about the pandemic. While these companies already have machine learning algorithms in place to moderate feeds, Facebook, Twitter and YouTube in particular have stated that they will be relying heavily on their AI algorithms at this time, having sent workers home in order to help keep them safe. This moment ostensibly defines a test for imperfect AI systems, which will be held almost entirely responsible for making human (or, it is hoped, superhuman) judgement calls in moderating information at a critical time.

There is certainly ample hope that these emerging technologies can be applied to great social gain. However, we should always be wary that letting modern technologies lead progress may come at a heavy and irreversible cost to society. China has made widespread use of high-surveillance technologies, including a color-coded health-risk rating system that tracks millions of citizens daily and controls the spaces they are allowed into, and hovering drones equipped with facial recognition technology. These technologies are used in a society where most citizens already accept that their technological benefits come with an increased level of surveillance and high levels of government control. In the U.S. and elsewhere, however, COVID-19 has brought this debate to the table. Emergency discussions between America’s Big Tech and the government are underway, in which anonymised personal data held by tech giants like Google and Facebook is being requested to help health officials track and map the spread of infection.

As ever, the line between technology as a help or a harm is complicated and blurred. The case of its use in the battle against COVID-19 is no exception. However, we have the opportunity to look at the choices we face in applying these technologies today in order to inform how we make our decisions tomorrow. If anything, the Coronavirus pandemic can show us that while history might not repeat itself, it does produce both positive and negative feedback loops, leaving us with the ability to make choices as a society that allow us to move forward rather than in circles. It is worth pointing out, as a stark reminder of our own humanity during this epidemic, that the most effective tools put forward for combatting the virus are decidedly low-tech: hand-washing and social distancing. So, for the moment, we should wash our hands, practice social distancing, be kind to others and, when the time comes, make the right choices about how we might best apply quantum computers for the public good.

Quantum Applications, Quantum International Relations, Quantum Internet

The quantum internet should be space-based—or should it?



Feature image via Vice

Satellites have become critical tools in infrastructure and defence. They control GPS systems, enable international communications, allow us to watch the news “live” and track and relay information about the weather and other natural events. They facilitate business and financial communications, as well as radio and telephone capabilities. Satellites have become utterly vital to state infrastructures, making them fundamental assets for competing global powers. Now, we learn that new satellite constellations are being presented as the best way forward to empower a global quantum internet. As the use-value of satellites broadens and we become ever-more dependent on the networks and systems they support, two critical threats loom large. One, the physical threat of space debris; and two, the threat posed by the increasing militarization of space. These challenges to the implementation of a space-based quantum internet have not yet surfaced in the developing debate but must be addressed as we stand on the brink of the quantum age.

New research conducted by a Louisiana State University team led by Sumeet Khatri suggests that satellite-based technology is the best way forward to build a global quantum internet. According to the researchers, a quantum-enabled satellite constellation would be the most cost-effective approach to realise the next big application in quantum communications. Khatri’s team suggests that the most effective and logistically coherent system for a space-based quantum internet would require a constellation of at least 400 satellites circling the globe in mid-Earth orbit, at an altitude of around 3,000 kilometres. On its own, 400 may not seem like a huge number of satellites; by way of comparison, however, GPS needs only 24 satellites to operate effectively. The nature of quantum entanglement, the essential property of non-locality utilized in quantum-satellite communications, is incredibly fragile, thus requiring a relatively high number of satellites and base stations to allow quantum information to travel without loss of signal.

Space Debris

Currently, there are around 2,000 active and 3,000 non-operational satellites orbiting Earth. Aside from these, National Geographic reports that there could be up to 500,000 pieces of what is known as space debris—man-made bits and pieces separated from rockets, space stations and satellites or simply left behind in space—littering Earth’s orbit. Space debris can vary in size, from a fleck of paint to an entire defunct satellite. It does not float idly through space, but rather, travels at a speed of about 17,500 miles per hour (approximately 28,163 km/h). At such great speeds, even a piece of debris as small as a pebble could cause serious damage in a collision with other Earth-orbiting objects such as space stations or satellites. As such, these collisions not only pose a risk to astronauts and space stations (powerfully depicted in the 2013 film, Gravity) but also to critical satellite-based communications infrastructures.

The challenge posed by space debris has not only made its way into popular culture but is also heavily monitored by NASA as a satellite and space-mission security issue. Today, NASA and the U.S. Department of Defence use ground-based telescopes and laser radars to monitor and report on the locations of more than 1,700 pieces of space debris in order to help prevent collisions with operating spacecraft and satellites. These efforts have so far proven sufficient: only a few collisions have occurred that have caused considerable damage to either spacecraft or satellites, but the potential for collision events is growing. At the same time, plans to launch more and more satellites are announced regularly by both state and non-state actors. While the situation is currently manageable, a predicted influx of over 50,000 satellites in orbit over the next decade would certainly tip the scales. A satellite-based quantum internet would, of course, add to this crowded milieu.

As a now poignant 1998 article for The Atlantic, The Danger of Space Junk, warned: “over time everything in Earth’s orbit will be ground into celestial scrap”, creating “a mausoleum of space technology”. Scientists now warn that if we do not manage existing space debris and ensure that future satellites and spacecraft are fitted with de-orbiting mechanisms, this reality will soon come to fruition. Most space-bound objects have no built-in function for de-orbiting and will continue to float (or rather, zoom) through the congested low- and mid-Earth orbits as they break into smaller and smaller fragments through degradation or collision with other orbiting objects. Each collision, no matter how small, exponentially compounds the problem.

This problem, which we are now beginning to witness, is known as the Kessler Syndrome, proposed by NASA’s Donald Kessler in 1978 in a co-authored quantitative study of the issue. The theory argues that the continued launching of satellites without a plan for de-orbiting will lead to an exponential increase in collision frequency, creating a “debris belt” in low-Earth orbit that could render future space exploration and the use of satellites impossibly risky, a setback that could last generations. This is a concerning prognosis for the maintenance of entire space-borne infrastructures, which, among other critical functions, transmit national secrets and protect society from incoming natural and man-made threats like hurricanes and missiles.

There are a variety of niche innovations underway that aim to confront this encroaching challenge, including Japan’s giant space whip (known as the electrodynamic tether, or EDT), which is intended to swat debris out of Earth’s orbit, causing it to incinerate as it falls toward Earth. The most effective approach, however, is for future satellites to be built with the ability to self-decommission once their tasks are complete, using their last bit of power to head back toward Earth, where they burn up in the atmosphere. This is a rapidly evolving space where new innovations are being applied and tested regularly. Projects like D-Orbit’s purpose-built decommissioning cubesat and the World Economic Forum’s 2019 project to create a space sustainability rating look hopeful. These kinds of conscious industry advances are necessary to ensure we avoid the Kessler Syndrome, so that we can continue to use space sustainably to host novel satellite applications like a quantum internet.

Security

As we have reported previously, space itself is no sanctuary from geopolitical rivalries. The implementation of a space-based global quantum internet will present a challenge for the grey area of international space development. Quantum satellites straddle the fine line between non-militarised and militarised infrastructure. Quantum technologies are heavily invested in by military and state apparatuses, especially in China and the U.S. For either of these countries, the large-scale deployment of quantum satellites could push us over that line and into an uncertain future of a highly militarised outer space. Already, U.S. President Donald Trump has initiated the development of a dedicated space arm of the U.S. defence forces with Space Force. In China, too, the space and military programs are the same entity. Satellites are the centrepieces of both U.S. and Chinese space-security programs, in both offensive and defensive capacities. Recently, Russia, France and Norway have also invested heavily in satellites for a variety of security motivations.

While the conversation around space-military fusion sounds like the stuff of futurist science fiction, it is very much a real and unfolding topic in the meta-geopolitical debate. Meta-geopolitics, untethered from traditional geographic constraints, refers to a new phase of international relations contextualised by the rise of border-defying security threats like terrorism, cyber warfare and espionage, and global warming. It also extends to outer space, where our ever-growing dependence on satellite-based infrastructure is at increasing risk of interference and jamming by state or non-state actors.

China has been a demonstrated leader in both satellite-based quantum communications and offensive space security since 2007, when the country tested its first anti-satellite missile, a move that thrust satellites to the top of military agendas, especially in the U.S. In 2016, China launched Micius, the first quantum satellite, which would soon facilitate ground-to-space quantum-secure communications across the globe. Since then, as we heard at last year’s Q Symposium from Jingyun Fan of the University of Science and Technology of China (watch his presentation here, from 1:10), China has been hard at work refining the quantum communications capabilities of Micius. Aside from China and Europe, satellite-enabled quantum communications efforts are also underway in North America and the Indo-Pacific, including in Australia. The development of this new wave of satellite technology is only just beginning in earnest and promises to see more and more purpose-built quantum satellites launched into Earth’s low and middle orbits in the coming years.

From a security perspective, achieving global quantum communications has long been a target, as it promises to enable hack-proof security for long-distance information transmission. While the achievements so far have been narrow, a space-based quantum internet is the next step in ensuring the tamper-proof transmission of vital information across the globe. It is easy to understand the benefit of these capabilities to any national or allied security apparatus. It is equally easy to see how the targeted destruction of quantum satellites could become an immensely effective tactic in war. Enter space as a new “operational domain” of war, as declared by NATO in November 2019. The inclusion of space as an operational domain acknowledges both the alliance’s critical reliance on satellite infrastructure and the growing threat posed by anti-satellite weaponry.

We are witnessing a rapid cluttering and securitization of outer space, a “frontier” that once seemed boundless, beyond human reach. Before state and non-state actors continue to dive head-long into this process, they should pause to consider the reality we are facing—a global, quantum space-based effort would put new pressure on an already saturated and precarious potential field of combat. Maintenance of the status quo will push the world into a grey area in both quantum and political science, where the path forward presents risks we are only just beginning to witness and understand. As our satellite capabilities expand and our tethered dependency on these orbital-objects grows, so too does the severity of the threat of their potential interference, blocking or destruction—by accident or by design.

Project Q, Quantum Applications, Quantum Internet

Project Q Interview: Stephanie Wehner on Building a Quantum Internet



Feature image via Quanta Magazine

Stephanie Wehner has an impressive resume, to say the least. The German physicist and computer scientist is currently leading Europe’s Quantum Internet Alliance on its mission to build a quantum internet. She is the Roadmap Leader of Quantum Internet and Networked Computing at QuTech, a research centre for quantum computing and the quantum internet at Delft University of Technology in the Netherlands, Co-Founder of QCRYPT (the largest annual international conference on quantum cryptography), and Coordinator for the Quantum Internet Alliance of the EU Flagship, as well as an Antoni van Leeuwenhoek Professor at QuTech, Delft University of Technology. And that is not even mentioning her previous accomplishments and accolades.

We recently sat down with Stephanie to discuss the project’s advancements, future use-values for a quantum internet and the challenging ethics of building a network that will enable un-hackable communications.

The following interview has been edited for clarity.

 

Why are you building a quantum internet?

We are working on building a quantum internet because you can do things with a quantum internet that you cannot do on the internet that you have today.

Of course, the most famous application of quantum communication is secure communications. That’s proof that you can have security that even a quantum computer can never break. But that’s not the only reason why we do it. There are a few other cool things you can do with a quantum internet. For example, if you can imagine that in some years we actually do have quantum computers, then people think the first application of such a quantum computer could be to simulate, say, a new material design. But maybe we will only have this technology here at QuTech and a few other places. One way for you to use such a quantum computer would be to send your material design to us, then we would do the simulation for you and then tell you what the result is. But maybe you don’t want to tell us your material design, given that it might be proprietary. And so the question is, can you perform such simulations and can you use the remote quantum computer in such a way that you don’t have to give away your proprietary design or any other secrets that you want to involve in this computation? And the quantum internet makes it possible to use a very simple quantum device, a quantum terminal, to access a remote quantum computer in such a way that this quantum computer cannot learn what you’re doing. So, it cannot learn what your proprietary material design is, it cannot even learn if you’re doing a simulation or factoring a number—it cannot tell the difference.

There are a few other nice applications. For example, one can synchronize clocks more accurately. One can keep data more efficiently in sync in the cloud. That’s maybe something that is not so obvious to you actually as a user, but you would certainly know if it goes wrong.

Let’s imagine an extreme example: let’s say that you have a million euros in the bank. And the data is, of course, stored somewhere. So, somewhere there’s a database that says that you own that one million euros. So, you can imagine that if you went to the A.T.M. to withdraw money, maybe the system crashes when you withdraw. And usually for redundancy purposes, of course, the data does not exist only in one location because, you know, if the computer burns down, then no one remembers who owns any money. It’s replicated in a few locations. But it might happen that if you don’t employ such consistency protocols, that if your system crashes during withdrawal, then computer one now says you own one million euros and computer two now says you own zero euros. So now the question is, who is correct? So, it’s a very important problem actually to keep data consistent in the cloud so that you don’t run into these kinds of issues.

I understand that one of the most important aspects of a quantum internet is that it will enable ultra-secure communications, which is obviously a huge benefit for state-actors, banks and big corporations. But what are some impacts a quantum internet might have on broader civil society?

I think keeping data consistent, for example, is not something that is only for big business. I think it’s very difficult to predict the future. The internet that we have today was originally meant to share some files around. And that’s great, but then you might also ask why would I, at home, ever share a file? At that point, in fact, people didn’t even have a personal computer at home, so, what are these files that you’re talking about?

So, we cannot predict all the applications that a quantum internet will have. People have used it also, for example, to cheat an online bridge game with entanglement. Which, of course, is a bit obscure but it may hint that there are many more things one can do with it. But I think if people don’t have access to it, then this will also not come.

A lot of the applications that we run on the internet today were not developed by people back in the 60s who wrote on a whiteboard and said, “these are all the applications, and now we’re going to build this thing”. Rather, there were people who were engaged with the technology, played around with it, wanted a social forum, and tried to see whether it could be possible.

To begin with, I know the quantum internet you’re building will have a very limited scope but do you envision this being something that will be accessible to everyone in the future?

I certainly hope so, absolutely. I think the question is just a little bit, when? So, we’re building a small demonstration network here in the Netherlands, where we also have an effort to make it accessible for people. But that will only happen in two or three years because it’s very difficult to have something stable enough that you can begin to do that.

We also already have a quantum internet simulator. It’s a little program that you can install on your computer and you can have something like a pretend quantum internet. And we are using it, actually for a Hackathon next week, together with RIPE NCC (RIPE is the regional internet registry in Europe), and this time it’s actually a pan-European version. So, there will be a few teams across Europe—one here, one at CERN, one in Dublin and a few other places across Europe. And they’re going to basically work together on our “pretend quantum internet” to explore a few things one can do with it.

Given the lessons that we’ve learned from the development of the classical Internet, what sort of legal or ethical challenges do you think future frameworks and regulations for a quantum internet might consider? And are these unique from the challenges that are posed by the classical internet?

To be honest, I think there are other people who are more capable of answering this question. I’m a researcher, I’m not a lawyer and I’m also not a specialist in ethics. Given this position, I can give you a few issues, even though maybe I am partially critical about them myself.

On the one hand, there’s a lot of discussion about standardising various technologies. Which, of course, is very important eventually. On the other hand, I’m also a little bit critical about this because if you start to write standards too early, you constrain the development. Another aspect is the impact of having fundamentally un-tappable communication. That is a question that is maybe not even totally unique to quantum networks. Of course, only quantum networks can deliver fundamentally un-tappable communication, but it also arises to a lesser extent with existing encryption technologies that people might be using.

So, is that a good thing or a bad thing? On the one hand, it’s a very good thing because one can protect government secrets and everyone’s secrets with absolute security. But of course, security always has two sides. If you have a mechanism to make something more secure, it can in principle be used by anyone. It can be used for good, but it can also be used for bad. So that is a little bit of a trade-off between these two things.

I am personally of the opinion that you cannot stop progress. So, you can say, “I’m going to forbid this.” But then people will do it anyway. It’s not possible to forbid technology.

The reason I think a lot of us maybe have some mixed feelings about this is the sense that it’s already super hard to realise that technology. It’s already so hard! So, putting some extra barriers is a very scary thing, right?

There’s been a lot of talk recently about Google’s claim to have achieved quantum supremacy, as you know. But, of course, the reality is that for the most part, quantum computers will work in concert with classical computers, not replace them. In what ways will the quantum internet interact with or rely on existing “classical” technologies?

So, maybe to talk about the term quantum supremacy? In quantum communication, quantum supremacy was achieved many years ago, because any QKD implementation basically shows quantum supremacy. So, a quantum internet is not supposed to replace the classical internet but rather to supplement it with some extra functionality that you otherwise don’t have. Because, if you say I’m watching a movie on Netflix, there’s no reason why we would send it via qubits. Maybe in the far future when everything is so far advanced, we could need to do everything in one system. But in my lifetime, I don’t expect this. In all known application protocols for quantum, whether it is secure communication or, say, secure quantum computing in the cloud or anything else, you need the quantum network, but you also need to send some classical data around.

So, do the networks overlap or do they sit separately?

That’s a good architecture question. Do they follow sort of the same pattern? They don’t need to follow the same pattern on the application level, not at all. On the elementary level—on the control level—whenever you have two quantum nodes that want to make quantum entanglement, for example, they also need to be able to talk to each other classically, to synchronize.

And is this done using hardware or software?

It’s done through hardware and software, actually, on what is called the physical layer. So next to a quantum channel you always have a classical control channel, but it is not visible to the user. But this sort of user-level classical communication could also be done by the standard internet, alongside the quantum topology.

What kind of support has this project received on a local, national and regional level as well as privately?

We have a lot of support from the Netherlands, actually, both through QuTech, which is a national Icon program from the Ministry of Economic Affairs in the Netherlands, and also NWO, which is like the Dutch equivalent of the U.S. NSF. We also have some research funding from the EU, both from the European Research Council and, to a lesser extent, from the quantum flagship, which is the EU initiative. We are also the coordinator of the European Quantum Internet Alliance, where we work together with some other nodes in Europe.

We also have various industry engagements; for example, we work together with KPN, which is the Dutch telecom. We also talk to a lot of parties in the classical domain. For example, The Hague Security Delta, which is sort of an umbrella organisation of 80 security companies in the Netherlands. That’s very convenient for us because we don’t have to talk to each of them individually, so that’s very valuable. We also talk to a few other private entities in the Netherlands. We also have relations with industry partners on the component level, for example, with Toptica, which makes laser systems. Then there’s OPNT, which does timing control, and JPE, which does stabilisation. So, this is the component level, where we work with a lot of industries to do specific things for our quantum network. At the other end of the spectrum, we also work with industry that is more interested in the use case. For example, with SAP, which is a German software company. With these companies, the interest is more about what you can do with the technology.

Another useful thing to mention is that there is also RIPE NCC, which is the regional internet registry of Europe. And that’s actually pretty cool for us because they’re an organisation that brings together all the large telecom operators and internet providers in Europe. They are responsible for managing the numbers on the internet and there cannot be a computer anywhere in Europe that does not have a number from RIPE. But they also do a lot of community development and education of their members.

I know you set a deadline of the end of 2020 to have this completed. How is your progress tracking now?

So, we will have one link by 2020 but we do not have the four nodes yet. We want to have three in 2021 and maybe all four in 2022.

Have there been any surprise challenges that have created this delay?

Of course, there are some technical challenges which took us longer. And, of course, there were also some mundane challenges. We have also decided we would like to deviate from the four-city plan because we would like to put one node somewhere we can physically access it. Previously, we had said we’re going to put it in Leiden, Amsterdam, Delft, The Hague. But then we were thinking that somewhere in the building in Leiden there could be a node, but that it would be in one of these KPN-style buildings where no one can go in. So, this is why we want to put one of the nodes somewhere where you can actually see it. That might happen either here [in Delft] or in The Hague or in Rotterdam, we haven’t quite decided yet. The idea is that you would really have a terminal where you can see the node, otherwise you just have to believe us, right? We tell you, we promise that the node’s over there!

Quantum Applications, Quantum International Relations, Quantum Internet

The ‘Who, What, Where, When and Why’ of a Quantum Internet



With all of the recent hype about quantum supremacy, it’s easy to forget that quantum supremacy in communications was demonstrated years ago. One of the most exciting developments on the horizon for quantum communications is a quantum internet, which will securely transmit quantum information. Like most things quantum, the label of “quantum internet” has been slapped onto a quantum technological application, establishing a concept that is easily consumable for the masses, which helps to create the hype that keeps funding for that application flowing. The reality, as is often the case, is much more complex.

In fact, just about the only thing that scientists agree on is that the term “quantum internet” does not have an agreed-upon definition. That is because the technology required to manifest this reality is still in its infancy. Scientists around the world are working hard to change that. Perhaps the most well-known is Stephanie Wehner of Delft University of Technology. In preparation for the release of Project Q’s interview with Wehner on this topic, we reflect on the current stage of global development of a quantum internet.

Since 2004, the superior security afforded by quantum communications has been demonstrated in practice through a method known as quantum key distribution (QKD). QKD is a system employed to produce and distribute a secret key that can then be used to encode classical information. This method has since been employed by a number of actors across both private and public sectors, including banks and national security networks. It does not, however, provide a secure link by which quantum information itself can be transmitted. Enter one important motivation for a quantum internet: to create a network of quantum nodes that enables the secure transfer of quantum information. Of course, there is a diversity of useful applications for such a network, and many more will develop as the technology matures. One needs only to recall the history of the classical internet, for which the first projected use-value was extremely narrow, to imagine the breadth and depth of applications that will surely follow once the technology is functional.
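As a rough illustration of what QKD involves, here is a minimal sketch of the sifting step of a BB84-style protocol in Python. It is a toy model under strong assumptions (a lossless channel and no eavesdropper) and omits the error-estimation and privacy-amplification stages that real QKD systems require; the names and parameters are ours, not those of any particular implementation.

```python
# Minimal sketch of BB84-style quantum key distribution (sifting step only).
# Illustrative assumptions: a lossless, eavesdropper-free channel; real QKD adds
# error estimation and privacy amplification on top of this.
import secrets

def bb84_sift(n=32):
    """Simulate Alice sending n qubits and Bob measuring each in a randomly chosen basis."""
    alice_bits = [secrets.randbelow(2) for _ in range(n)]
    alice_bases = [secrets.randbelow(2) for _ in range(n)]   # 0 = rectilinear, 1 = diagonal
    bob_bases = [secrets.randbelow(2) for _ in range(n)]

    sifted_key = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:
            # Matching bases: Bob's measurement reproduces Alice's bit.
            sifted_key.append(bit)
        # Mismatched bases give a random result and are discarded after basis comparison.
    return sifted_key

key = bb84_sift()
print(f"Sifted key ({len(key)} bits):", "".join(map(str, key)))
```

On average, half of the transmitted bits survive sifting, because the randomly chosen bases agree only half the time; an eavesdropper measuring in the wrong basis would disturb the surviving bits, which is what the omitted error-estimation step is there to detect.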

However, a salient challenge remains for researchers working on a quantum internet. Like the classical internet, a quantum internet requires a physical infrastructure in order to function. There has been a diversity of approaches to this complex problem, from diamonds to crystals and drones to satellites. For the most part, however, the emerging dominant systems rely heavily on land-based fibre-optic cables, with some major differences between them.

In 2016, China launched its quantum satellite, Micius, as part of its Quantum Experiments at Space Scale (QUESS) project. Within a year of the satellite’s launch, major goals paving the way for a quantum internet had been achieved by a multi-disciplinary, multi-institutional team from the Chinese Academy of Sciences, led by Professor Jian-Wei Pan. These ground-to-satellite quantum communication advances included the impressive feat of establishing quantum-secure communication via Micius over the longest distance yet between two points on the globe (Beijing and Vienna). China has also constructed the largest fibre-based quantum communication backbone, known as the Beijing-Shanghai quantum link, which stretches over 1,200 miles. However, while the link is already in use by some of China’s biggest banks to transfer sensitive data, it is not fully quantum-secure (more on that shortly).

While we have known for some time that quantum communication is theoretically possible, China has been the first country to focus its research apparatus on the challenge, building the first dedicated, large-scale infrastructure for the task. From a security perspective, this is a strategic move on China’s part. The focus on quantum communications is a pre-emptive defence mechanism to counter U.S. advances in quantum computing. Whatever the pace of development of quantum computers, which will be capable of hacking any classical communications, a quantum-secure network will act as a safeguard against prying eyes and ears. As a result, China continues to be a world leader in this space. However, Europe is hot on its heels and lining up to claim the next big development in quantum communications: creating a functioning quantum internet.

You may have heard of the work being done to build a quantum network in the Netherlands by a team of researchers at the Delft University of Technology. Much like China’s Beijing-Shanghai quantum link, the Delft team is constructing a link between four major cities in the Netherlands, stretching from Delft to Amsterdam.

The main difference between the Chinese quantum link and the one being built by Wehner and her team is that the Chinese infrastructure, while greatly improving upon most current cybersecurity capabilities, is still susceptible to hacking. Theoretically, a genuine quantum link will provide an un-hackable connection across large distances. The Chinese system relies on 32 nodes across the link to transport quantum information, which is carried in photons, or light particles. Each of these nodes is susceptible to hacking because they serve as points where the information must be decrypted and then re-encrypted before it continues its journey along the link. The system was constructed this way because quantum information carried in photons can only travel through about 100 miles of fibre-optic cable before the signal dims and data is lost.
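A back-of-the-envelope calculation makes clear why the nodes are spaced at roughly this interval. Assuming standard telecom fibre attenuation of about 0.2 dB/km (an illustrative assumption, not a figure from the Chinese link itself), the fraction of photons that survive a span of length $L$ is:

$$ T = 10^{-\alpha L/10}, \qquad \alpha \approx 0.2\ \text{dB/km}, $$

$$ L \approx 160\ \text{km}\ (\text{about } 100\ \text{miles}) \;\Rightarrow\; T = 10^{-(0.2)(160)/10} = 10^{-3.2} \approx 6 \times 10^{-4}. $$

In other words, fewer than one photon in a thousand survives a 100-mile span, and the loss compounds exponentially with distance, so the signal must be handed off at intermediate nodes long before it fades entirely.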

A solution to this problem, which Stephanie and her team have incorporated into their design from the outset, and which the Chinese team is beginning to work with as they improve their own link, is the use of quantum repeaters. This is how they work:

A quantum repeater essentially serves the same purpose as an ordinary relay node, except it works in a slightly different way. A network using quantum repeaters is shaped more like a family tree than a linear chain. In this family tree-shaped game of telephone, the quantum repeater is the parent who distributes identical pairs of quantum keys between two children, therefore doubling the possible distance between users. Moreover, these “parents” can also have their own “parents,” which can then double the key-sharing distance between the children at the bottom for every extra level created atop the family tree. This in effect increases the distance a quantum message can be sent without ever having to decrypt it.
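A small sketch in Python captures the distance-doubling arithmetic of this family tree. It is illustrative only: the 100 km elementary link length is an assumed figure, and the sketch ignores real-world complications such as entanglement-swapping failure rates and memory decoherence.

```python
# Sketch of the "family tree" distance-doubling idea behind quantum repeaters.
# Illustrative only: counts tree levels, ignoring swapping failure rates and
# memory decoherence, with an assumed 100 km elementary link.

def repeater_levels(target_km, elementary_link_km=100):
    """Return the levels of entanglement swapping needed to span target_km.

    Each level of 'parents' doubles the distance bridged by the level below it.
    """
    levels = 0
    reach = elementary_link_km
    while reach < target_km:
        reach *= 2        # one more level of swapping doubles the covered distance
        levels += 1
    return levels, reach

for distance in (200, 800, 3200):
    levels, reach = repeater_levels(distance)
    print(f"{distance:>5} km target -> {levels} swap levels, covering {reach} km")
```

Because each extra level of “parents” doubles the reach, the number of levels grows only logarithmically with distance, which is what makes continent-scale quantum links conceivable at all.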

An illustration of the type of quantum network being built by the Delft team.

Alongside their use of quantum repeaters, which provide an infrastructure to teleport the quantum entangled information across the link, the Delft team incorporates the use of quantum memories as an essential element in ensuring the information’s hyper-secure journey. Quantum memories store the entangled information in between the repeaters. They are critical because they enable the network to store the quantum information while the next entangled link is prepared, rather than measuring it and thus potentially destroying it. A system enabled by quantum repeaters and quantum memories eliminates the need to incorporate weak security points in the system where the quantum information is decrypted and then re-encrypted, or potentially destroyed.

Though significant challenges remain for researchers working to build a quantum internet, international efforts become more and more sophisticated with each passing day, bringing the world closer to potential quantum network connectivity. While it is being built to supplement certain capabilities of the classical internet, some believe that the quantum internet will eventually overtake the classical one. Most agree, however, that this will not happen in our lifetime. After all, as Wehner commented in a recent interview with Project Q for our upcoming publication, you don’t really need a quantum internet to watch Netflix.

Tune in next week to read our exclusive interview with Stephanie Wehner, where she updates us on the project’s advancements, answers questions about future use-values for a quantum internet and addresses the challenging ethics of building a network that will enable un-hackable communications.

Quantum Computing

All Hype and No Game? Google, IBM, Preskill and Quantum Supremacy



Feature image: IBM’s System One quantum computer, via The New York Times.

Words are important. The language we use to describe something creates a discourse around it, endowing it with a life of its own and often producing meaning beyond the definition of the words themselves. Like many expressions in the emerging technology space (think: disruptive innovation, IoT or even artificial intelligence), the expression ‘quantum supremacy’ has been overused and misused to the point where the original relevance of the term has been buried under a mountain of hype. Has this expression been transformed into an empty buzzword through this process, like a game of ‘telephone’, its meaning distorted along the way? Now, with Google’s proclamation of achieving quantum supremacy officially published, criticism has been flooding in, not only with regard to the claim itself but, more fundamentally, with regard to the usefulness of ‘supremacy’ as a benchmark at all.

Just last week IBM, one of Google’s main competitors in the quantum space, posted a critique of Google’s shrouded announcement on its blog. The article, penned by Edwin Pednault, John Gunnels and Jay Gambetta, leaders and members of IBM’s quantum research and development team, urges that Google’s achievement should not be misconstrued ‘as proof that quantum computers are “supreme” over classical computers.’ The team takes issue with the way in which ‘quantum supremacy’ has come to imply that quantum computers will one day reign ‘supreme’, replacing classical computers. They assert that the future of quantum computing will be intertwined with classical computing, with the two uniquely suited systems working in concert with one another.

IBM’s statement adds further complexity to the situation, arguing that Google may not have achieved quantum supremacy at all, according to John Preskill’s original definition of the term: ‘to describe the point where quantum computers can do things that classical computers can’t, regardless of whether those tasks are useful.’ The point of contention: Google has stated that its Sycamore processor was able to compute the answer to a specific problem in about 200 seconds, a task it claims would take 10,000 years to compute on the world’s most powerful supercomputer. IBM argues, however, that the comparison does not account for the unique capabilities of classical computing, which, if properly leveraged, could bring that number down to just 2.5 days. While that is a significant difference in estimation between Google and IBM, worthy of headlines itself, it should be noted that neither team has actually tested the calculation on a supercomputer yet.
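To put the two estimates side by side, the implied advantage of Sycamore over a classical supercomputer works out roughly as follows:

$$ \frac{10{,}000\ \text{years}}{200\ \text{s}} \approx \frac{3.2 \times 10^{11}\ \text{s}}{200\ \text{s}} \approx 1.6 \times 10^{9}, \qquad \frac{2.5\ \text{days}}{200\ \text{s}} = \frac{216{,}000\ \text{s}}{200\ \text{s}} = 1{,}080. $$

On Google’s figures the quantum processor is over a billion times faster; on IBM’s it is roughly a thousand times faster. Either way Sycamore wins the race, but the size of the gap, and therefore the rhetorical weight of ‘supremacy’, changes dramatically.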

Whether the disparity in time-to-compute between quantum and classical is measured in days or years, the fact remains that what Google demonstrated on its quantum processor can also be done on a classical computer, albeit much more slowly. Still, as Preskill commented in a recent interview on the subject for Quanta Magazine, Google’s achievement is significant in that it demonstrates the team understands the hardware they have built and that it is working. Stripped of the hype associated with the terminology, this may seem a far more lacklustre news item. Nonetheless, it is an important step forward in the development of useful quantum computing applications, which is where the real fruits of the industry’s labour will finally be tasted, across areas as diverse as healthcare, hacking, and honing.

As Project Q and others have explained in coverage of Google’s announcement, the terminology used can be misleading at best and simply a product of the media hype machine at worst. But hype has its value too. Hype can insulate against the arrival of a quantum winter—a scenario where interest and investment in quantum technologies drop off due to the technology’s failure to deliver on its promise. There is concern that a quantum winter could mean the technology never reaches the immense promise of its applications. However, private funds continue to flood the industry. According to research by Nature, ‘in 2017 and 2018, companies received at least $450 million in private funding—more than four times the $104 million disclosed over the previous two years.’ While hype contributes to potentially overstating the promise of quantum computing in the first place, it also plays a critical role in pushing the development of quantum computing forward by keeping the buzz and enthusiasm high.

It’s important to recognize the role of hype in technological progress. It is also important, however, to have access to a more nuanced understanding of the progress of quantum development, to dive deeper than the terminology and the hype around it. Without a doubt, the terminology used to mark this turning point in the development of quantum computing is problematic, even according to its creator. This was also IBM’s central point: ‘we urge the community to treat claims that, for the first time, a quantum computer did something that a classical computer cannot with a large dose of scepticism due to the complicated nature of benchmarking an appropriate metric.’ Quantum supremacy, while a significant step in quantum’s development, is by definition an incredibly narrow benchmark with practically no real-world utility. However, it has value in its ability to capture the imagination of society and keep people engaged in the progress of one of tomorrow’s most promising technologies.

Quantum Theory

Quantum Superposition Bridges the Classical World



Cracks have begun to show in one of quantum’s biggest controversies. The well-known Schrödinger’s cat thought experiment, which sought to illustrate the absurdity of applying quantum theory to the macro-physical world ruled by classical physics, has been challenged by a recent advancement in quantum physics. An international team led by Markus Arndt of the University of Vienna successfully placed a large molecule of 2,000 atoms, the biggest object yet, into a state of quantum superposition. The advancement shows that quantum effects can be translated into the classical world, establishing the foundations for scientists to continue to demonstrate how the gap between these seemingly disparate worlds might be reconciled.

Quantum theory tells us that particles in superposition can shift between a wave-like state and a particle-like state, meaning they can effectively be in two places at once. Of course, from what is observable in the classical world, this cannot be true. If it were, our understanding of what is “real” would be challenged, opening the door to a whole host of quantum weirdness that classical theory keeps at bay. Essentially, as Schrödinger tried to show with his thought experiment, if quantum mechanics were reflected on a macro-physical scale, it would signify that human beings could also exist in two places at once. It does not take long for this idea to snowball into theories of time travel and multiple worlds, both of which find some basis in quantum theory.

On a fundamental level, the new work published in Nature illustrates that the multi-state paradox of quantum mechanics, known as superposition, functions on a larger scale than previously demonstrated. In theory, we already knew this to be true, but the experiment proves it at the largest scale yet, superposition having previously been demonstrated only with the smallest possible particles: atoms, photons and electrons. The set-up used by Arndt and his team is essentially a souped-up version of the double-slit experiment, first performed in 1801 and used regularly ever since to observe the effects of superposition in quantum mechanics.

The simple experiment involves particles of light (photons) beamed toward a barrier with two slits in it. On a screen behind the barrier, the effects of quantum superposition are displayed in the form of what is known as an interference pattern: a series of alternating bright and dark fringes.

This striped pattern is interesting because one might assume that a single beam of photons would produce a single bright band on the screen, indicating that each photon followed a single path. Instead, the striped pattern shows that all of the photon’s possible paths are taken and interfere with each other, suggesting that the particle in fact also acts as a wave. This reflects the probabilistic nature of quantum phenomena, challenging Einstein’s famous claim that “God does not play dice with the universe”.
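In textbook terms, the fringes arise because the amplitudes for the two paths are added before the probability is taken, rather than the probabilities themselves:

$$ \psi(x) = \psi_1(x) + \psi_2(x), \qquad I(x) \propto |\psi_1 + \psi_2|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}\!\left[\psi_1^*(x)\,\psi_2(x)\right]. $$

The final cross term is the interference: it oscillates between positive and negative values across the screen, producing the bright and dark stripes, and it would vanish entirely if each particle travelled through only one slit.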

In order to pull off their super-sized version of this experiment, the international team had to not only create the perfect environment but also synthesize the massive molecules themselves, to ensure they met the requirements for complex quantum activity to occur. The team built a custom interferometer, a tool that, as the name suggests, works by merging two or more sources of light (or matter waves) to create an interference pattern, called the Long-Baseline Universal Matter-Wave Interferometer (LUMI). The team’s LUMI also breaks a record: it is the longest interferometer ever built, with a baseline of 2 metres. This specialised machine allowed the researchers to fire a beam of heavy molecules (some more than 25,000 times the mass of a hydrogen atom) at the multiple-slit apparatus and observe the resulting interference pattern, confirming the molecules’ state of superposition.
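The scale of the challenge can be seen from the de Broglie wavelength of such a molecule. Taking a mass of roughly 25,000 hydrogen masses and an illustrative beam velocity of about 300 m/s (our assumption for the sake of the estimate, not the paper’s reported value):

$$ \lambda = \frac{h}{mv} \approx \frac{6.6 \times 10^{-34}\ \text{J\,s}}{(4.2 \times 10^{-23}\ \text{kg})(300\ \text{m/s})} \approx 5 \times 10^{-14}\ \text{m}, $$

a wavelength of a few tens of femtometres, many thousands of times smaller than the molecule itself, which is why an interferometer as long and as finely controlled as LUMI is needed to resolve the fringes at all.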

With records being broken in the quantum space with what feels like near-weekly regularity, this advancement marks a notable turning point in the disagreement between quantum mechanics and general relativity: the two frameworks we use to understand the world around us have come as close to being bridged as ever before. While the success of this experiment does wedge the door open for a number of seemingly bizarre theories like time travel and multiple worlds, it is doubtful that human beings or planets will be time traveling through multiple realities any time soon, if ever. However, this new, scalable research pushes further the limits at which scientists can observe quantum superposition, enabling and encouraging future research to continue to explore those limits.

Quantum Applications, Quantum Computing

Transforming Drug Development: A Critical Role for Quantum Computing



Feature image via Nature

With news of Google’s possible achievement of quantum supremacy, quantum computing’s promise in a diversity of fields grows ever-more tangible. Drug discovery is just one of a number of areas in which quantum computing is expected to play a disruptive role. On average, it takes over ten years and billions of dollars to bring a potentially life-saving new drug to market. Quantum computers promise to revolutionize the currently expensive, difficult and lengthy process of drug discovery and development, by expanding the search for new chemicals to treat some of the world’s most deadly diseases, speeding up the creation of new drugs and cutting the costs of their development. At this prospective turning point in the advancement of quantum computing, Project Q takes stock of quantum applications in drug research and development.

Currently, researchers rely on computer modelling and simulation (M&S) to analyse how atoms and molecules behave, in order to develop drugs that will have optimal positive effects and minimal harmful ones. However, while of critical value to this process, today's M&S tools quickly reach the limits of their utility in the complex and computationally intensive task of molecular simulation. The goal of molecular simulation is to find a compound's most stable configuration, known as its ground state. To do this, researchers use M&S systems to simulate the interactions between the compound's electrons, across every atom, to test how they will react to one another. This is a fairly straightforward task as long as the molecules being tested are simple enough. However, even today's most powerful supercomputers can only simulate molecules of up to a few hundred atoms, limiting their calculations to only a small fraction of all chemicals that exist.
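To get a rough feel for why this wall appears, consider that the amount of information a classical machine must track grows exponentially with the number of interacting quantum degrees of freedom. The sketch below is a toy illustration of that scaling and of what a ground-state calculation looks like in miniature: it builds a random Hermitian matrix standing in for a molecular Hamiltonian (no real chemistry is involved) and reads off the lowest eigenvalue.

```python
import numpy as np

# Toy "molecule": a random Hermitian matrix standing in for a molecular
# Hamiltonian (purely illustrative -- not real chemistry). Its ground state
# corresponds to the lowest eigenvalue.
def random_hamiltonian(n_qubits, seed=0):
    rng = np.random.default_rng(seed)
    dim = 2 ** n_qubits                       # Hilbert-space dimension: 2^n
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (a + a.conj().T) / 2               # Hermitian, as a Hamiltonian must be

for n in (2, 4, 8, 10):
    h = random_hamiltonian(n)
    energies = np.linalg.eigvalsh(h)          # exact diagonalisation
    print(f"{n:2d} qubits -> matrix size {h.shape[0]}x{h.shape[0]}, "
          f"ground-state energy {energies[0]:.3f}")

# The matrix is 2^n by 2^n: the electrons in a molecule of a few hundred atoms
# correspond to far more than any classical computer can store, let alone diagonalise.
```

Doubling the system size squares the matrix, which is why exact classical simulation runs out of road so quickly.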

For a whole host of larger molecules that could be used to make new, life-saving drugs, researchers currently have no better option than to approximate how a molecule may react and then test its behaviour in trials. This process is incredibly inefficient, and about ninety percent of drugs that do reach clinical trials fail during the first phase. Adding to this complexity, M&S methods are unable to calculate the quantum interactions that help determine a molecule's characteristics. A technological update in drug discovery is long overdue.

Ultimately, the main technological limitation facing drug research and development today is that classical computers struggle with what are known as optimization problems—finding the best solution from among an enormous number of feasible ones—a process that is incredibly time- and energy-intensive when every candidate must be tested. Quantum computers, in theory, should excel at optimization problems. This is because they can exploit quantum superposition to explore many candidate solutions in parallel, including the quantum interactions that occur at the particle level. Theoretically, as they reach their promised computational capacity, quantum computers should be able to process vast amounts of data rapidly.
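The sketch below gives a crude sense of the classical bottleneck: brute-forcing a small, made-up binary optimization problem by testing every feasible solution. Each added variable doubles the search space; the example says nothing about how a quantum algorithm would attack the same problem.

```python
import itertools
import numpy as np

# Toy optimisation problem: choose a bit-string minimising a random quadratic
# cost. Brute force tests every feasible solution, so the work doubles with
# each extra variable -- the classical bottleneck described above.
rng = np.random.default_rng(1)
n_vars = 16                                   # already 65,536 candidate solutions
cost_matrix = rng.normal(size=(n_vars, n_vars))

def cost(bits):
    x = np.array(bits)
    return float(x @ cost_matrix @ x)

best_bits, best_cost = None, float("inf")
for bits in itertools.product((0, 1), repeat=n_vars):
    c = cost(bits)
    if c < best_cost:
        best_bits, best_cost = bits, c

print(f"searched {2 ** n_vars:,} candidates; best cost {best_cost:.3f}")
```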

In 2017, IBM Q researchers achieved the most complex molecular simulation ever modelled on a quantum computer, demonstrating the potential value of quantum computers to the pharmaceutical industry. The research suggests that, applied to drug discovery, quantum computers could model and test new drugs through molecular simulation far more comprehensively and far more quickly than classical computers, effectively slashing the costs of novel drug research and development. Aside from empowering researchers to discover new treatments for a range of diseases, quantum computing could also help bring new drugs to trial more quickly and improve the safety of trials.

Already, innovators and researchers working on quantum applications in drug development are making waves in the pharmaceutical industry. Abhinav Kandala, part of the IBM Q team that simulated the largest molecule on a quantum computer back in 2017, has continued to push the boundaries of quantum computing in order to make it more applicable to industry, faster. His work focuses on a major challenge in quantum computing: improving accuracy. Quantum computers at their current stage are still drastically error-prone, hampering their utility for drug discovery and development. One of the MIT Technology Review's 35 Innovators Under 35, Kandala has demonstrated how the effects of quantum errors can actually be characterised and exploited to boost the accuracy of quantum computations without requiring additional qubits. General advancements like this could help bring the benefits of quantum computing to industry sooner.
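The article does not spell out the technique, but one widely cited error-mitigation idea from this line of work is zero-noise extrapolation: deliberately re-running a computation at amplified noise levels and extrapolating the measured results back to the zero-noise limit. The sketch below illustrates only that general idea, with entirely synthetic numbers standing in for a quantum computer's output.

```python
import numpy as np

# Zero-noise extrapolation, in caricature: suppose the "true" (noiseless)
# expectation value of some observable is -1.0, and noise biases the measured
# value roughly linearly with noise strength. By deliberately running at
# stretched noise levels and fitting, we extrapolate back to zero noise.
rng = np.random.default_rng(2)
true_value = -1.0
noise_levels = np.array([1.0, 1.5, 2.0, 2.5])   # noise "stretch" factors

# Synthetic measurements: a linear bias plus a little shot noise (made-up model).
measured = true_value + 0.3 * noise_levels + rng.normal(scale=0.01, size=noise_levels.size)

slope, intercept = np.polyfit(noise_levels, measured, deg=1)
print(f"raw result at native noise:      {measured[0]:+.3f}")
print(f"extrapolated to zero noise:      {intercept:+.3f}")
print(f"true noiseless value (for ref.): {true_value:+.3f}")
```

The extrapolated value lands much closer to the noiseless answer than any single noisy run, which is the sense in which errors are "harnessed" rather than merely tolerated.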

There are a number of young companies emerging in the pharmaceutical research space, looking at the computational boost and projected accuracy that quantum computing could lend to a range of challenges in diagnostics, personalised medicine and treatments. As quantum computers are not yet advanced enough to stand alone, most of these global start-ups rely on a blend of emerging and classical technologies. Especially prominent is the blended approach of machine learning and quantum computing, a topic we have previously explored here.

Another of the MIT Technology Review's 35 Innovators Under 35, Noor Shaker leads a company that is harnessing these two emerging technologies to speed up the creation of new medicines. Her company, GTN LTD, is building technology that layers the processing power of quantum computing onto machine learning algorithms to sort through vast amounts of chemical data in search of new molecules that could be used in disease treatment and prevention. Using this method, GTN (a company founded and led by a Syrian woman) hopes to build a critical bridge in healthcare, helping to narrow the gap in access to and quality of healthcare for people living in developing countries. GTN's application of these two technologies is just one example of the numerous ways in which they could be used to create and spread benefit across global healthcare systems.

Machine learning projects are already being implemented as part of a growing trend in digital healthcare, providing a helpful starting point for discussion of how other emerging technologies like quantum computing could also impact the sector. A recent article in Nature explores how even the most well-meaning of artificial intelligence (AI) applications in healthcare can lead to harmful outcomes for certain vulnerable sectors of society. The examples investigated by author Linda Nordling demonstrate the need to apply a careful social-impact and sustainability methodology throughout the process. As Nordling explains, many machine learning-based projects in healthcare can reinforce inequalities rather than help to level the playing field, if equity is not a factor that is thoroughly considered and addressed throughout the entire research and development process.

Of course, every technology is different. The challenges confronting AI applications in the healthcare sector may not translate directly to the risks that quantum computing could pose. However, there are certainly lessons to be learned. For all emerging technologies, there is as much potential to narrow the gap between rich and poor as there is to widen it. Whether development bends toward the helpful or the harmful hinges on many factors, including accountability and regulation. Fundamentally, a methodological focus on equity and inclusion, from the inception of an emerging technology through to its deployment, is critical.

The application of quantum computing in drug discovery is no exception to this rule. The development of this emerging technology, both alone and in concert with other technologies, has the potential to make a significantly positive impact on society. With proper care taken to ensure ethical research, development and application, the trickle-down effects of the quantum revolution could improve the lives of many. It is thus imperative that we seek to understand the impacts that quantum computing could have in the pharmaceutical industry if we want to ensure its potential to help discover cures to intractable diseases like cancer and Alzheimer’s becomes a benefit that is distributed equitably across the globe. This is not a problem for quantum to solve, but for society.

Quantum Theory

Why Has Science Stopped Trying to Understand Quantum Theory?


1 Comment

Feature image via BBC Future

It is practically a truism that no one really understands quantum mechanics, yet quantum theory reigns as the dominant theory of the very small just as relativity does of the very large. This is a strange state of affairs for a theory so fundamental that it underpins our understanding of phenomena such as atomic decay and why stars shine, as well as how the lasers and transistors embedded in just about everything actually work. Physicists use and rely on quantum theory to predict the outcomes of experiments, but they have stopped, as Dr Sean Carroll asserts in a recent op-ed for the New York Times, trying to understand how it works. While there are many contributing factors to this conundrum, a salient inhibitor is the siloed way in which we have come to think about the discipline of science and the way the modern academic system reflects and perpetuates that thinking.

The barriers to understanding quantum begin with the fact that there are some truly sticky bits of theory that simply cannot be accounted for with our existing scientific frameworks. One such example is quantum's measurement problem: a quantum system's wavefunction exists in a superposition of states until it is observed or measured, at which point it collapses into a single outcome. The fundamental challenge posed by this problem is that science supposes the existence of a measurable, objective world. The centrality of the role of the observer in quantum interactions defies this assumption by asserting that reality is observer-dependent and therefore non-fixed. This idea alone confronts science in a fundamental way, requiring an interpretation of reality that holds space for the "weird" and the "strange" of quantum mechanics—something that mathematics alone has not yet been able to provide.
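For the more concretely minded, the measurement rule itself (the Born rule) is easy to state: a quantum state carries an amplitude for each possible outcome, the squared magnitude of each amplitude gives that outcome's probability, and after measurement the state is replaced by the outcome that occurred. A minimal sketch, with arbitrary amplitudes, follows.

```python
import numpy as np

# A single qubit in superposition: amplitudes for |0> and |1> (arbitrary values).
state = np.array([0.6, 0.8j])                  # must satisfy sum(|amp|^2) == 1
probabilities = np.abs(state) ** 2             # Born rule: probability = |amplitude|^2
print("P(0) =", probabilities[0], " P(1) =", probabilities[1])

# "Measurement": sample one outcome, then collapse the state onto it.
rng = np.random.default_rng()
outcome = rng.choice([0, 1], p=probabilities)
collapsed = np.zeros(2, dtype=complex)
collapsed[outcome] = 1.0                       # post-measurement state
print("outcome:", outcome, " state after collapse:", collapsed)
```

The mathematics works flawlessly; what it does not say is what, physically, happens at the moment of collapse, and that silence is the measurement problem.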

This issue in quantum physics ignited a deep rift across the brightest minds of physics during the mid 1920s. Albert Einstein, representing the side of the argument which rejected the proposal that the quantum world could be characterized by probabilities rather than certainties, is famously quoted as claiming, "God does not play dice with the universe". The interpretation that prevailed is the Copenhagen Interpretation, which asserts the rather less-than-satisfying conclusion that we simply cannot know more about quantum mechanics than what its equations predict and experiments can measure. Understanding the theory was thus placed in the "too hard" basket.

Still, divergent theories from quantum's inception into the 1950s attempted to make sense of this phenomenon. These theories had an undeniably philosophical bent, and their proponents were, by and large, shunned from mainstream science altogether. In 1957, for example, Hugh Everett constructed a theory to account for quantum superposition with his Many Worlds Interpretation (MWI). Essentially, Everett's MWI proposes that in a quantum interaction, each possible state of an atom's superposition is realised, with the atom taking every potential path and thereby creating multiple, coexistent physical realities. Mainstream physicists ridiculed Everett for what they considered a scientifically blasphemous postulation, a fact which no doubt contributed to his move from physics into defence analysis shortly after he submitted his dissertation.

Scientists' resistance toward a multidisciplinary understanding of a scientific problem, however, is a relatively new phenomenon. For centuries, the disciplines of science and philosophy were pursued as one. In fact, the term 'scientist' was not even coined until the 19th century. Before that, great names such as Galileo and Newton considered themselves 'natural philosophers' rather than 'scientists'. Even Einstein, Heisenberg, Dirac and their cohort, the fathers of quantum mechanics, were schooled in European philosophy. This deep understanding of both the "soft" and "hard" sciences influenced their way of thinking, the types of questions they posed and ultimately the theories they constructed. It enabled them to think beyond the boundaries of accepted knowledge at the time and to construct new ideas that came to be accepted as fact.

However, since an epistemological transformation in the 1600s and 1700s, which produced the distinction of "science" as the empirical investigation of phenomena, science and philosophy have become increasingly separate disciplines. While it has been a gradual process, this disciplinary divorce has become ingrained in society with the help of most knowledge institutions worldwide, culminating in an isolationist understanding of these and other disciplines. This poses a significant challenge to the kind of fruitful multidisciplinary thinking that has produced nearly all of science's greatest discoveries to date.

Beyond reifying the isolation of disciplines through course structures, universities also play a significant role in shaping academic discovery by prioritising certain areas of research over others. As Carroll elaborates:

“Few modern physics departments have researchers working to understand the foundations of quantum theory. On the contrary, students who demonstrate an interest in the topic are gently but firmly — maybe not so gently — steered away, sometimes with an admonishment to “Shut up and calculate!” Professors who become interested might see their grant money drying up, as their colleagues bemoan that they have lost interest in serious work.”

This situation is compounded by the fact that the metrics by which academic researchers are hired, retained and promoted have undergone a transformation over the last half-century. During this time, research culture has been drastically reshaped by the dawn of the Internet, which has enabled an open and thriving digital research economy. At the same time, an associated shift toward metrics of productivity, quantified largely through research output, has become dominant across knowledge institutions. These changes frame the pervasive expectation that academic researchers should devote the majority of their time to publishing on certain topics and in certain journals in order to remain relevant and successful. Among other challenges, this focus on publication as a metric of professional standing in the academic sciences has led many to game the system, with the resultant focus on quantity of output often detracting from its quality.

Beyond this, the phenomenon that has become known in academia as the "publish or perish" culture—the pressure on academics to publish continuously in order to sustain and further their careers—has left academic scientists with little spare time for creative thinking. This modern academic environment has been lamented by Peter Higgs, the physicist whose work predicted the Higgs boson and who doubts he could have achieved that breakthrough in today's academic system:

“It’s difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964,” Higgs said. “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.”

Exploratory and imaginative thought requires ample time and space, as well as an acceptance that, by the very nature of trying new things, a researcher will encounter far more twists, turns and dead-ends than solutions. While these qualities do not fit well into the "publish or perish" framework, they are well established as critical to innovation. Discovery demands that we challenge the very prejudices that have become ingrained in our conceptual structures. To do that, one must have the freedom and encouragement to shatter those structures, rather than be required to work within systems that reinforce them.

Artificial Intelligence, Quantum International Relations, Quantum Research

India Races Toward Quantum Amid Kashmir Crisis


No Comments

Amid troubling news of serious human rights violations carried out in India-controlled Jammu and Kashmir—including a debilitating digital blockade lasting over two weeks—Indian Prime Minister Narendra Modi signed an agreement with France for a landmark technological collaboration in quantum and artificial intelligence (AI). The Indo-French collaboration between French company Atos and India's Centre for Development of Advanced Computing (C-DAC) will establish a Quantum Computing Experience Centre at C-DAC's headquarters in Pune, India and deliver an Atos Quantum Learning Machine. The high-technology partnership, which "advocate[s] a vision of digital technologies that empowers citizens, reduces inequalities, and promotes sustainable development", sits against the controversial backdrop of India's current actions in the Kashmir crisis and offers an interesting view into the intersection of international politics and quantum technologies.

During his first term, Narendra Modi began to position India as a global technology hub, putting its innovation sector on the map by embracing international investment and collaboration. The advancements made over the last five years as a result of these activities have helped to fuel India's socioeconomic development and cement its place on the global stage as a major emerging economy with a vibrant technology sector. Now in his second term, Modi seeks to apply a digital tax to global technology giants like Google and Facebook on their activities in India. Though this policy shift has been identified as a potential brake on Big Tech's incentive to contribute to India's start-up space, Modi has nevertheless continued to cultivate a tech-forward name for his government. His "New India" agenda focuses on sustainable development and emerging technologies, especially agricultural technology, AI and quantum.

Within this context, India’s national quantum technology research and development capacity has blossomed at a rapid pace, especially with regard to quantum mechanical theory and theoretical physics research and software development. However, unlike the top competitors in quantum computing such as China and the U.S., India lacks a strong quantum computing hardware industry, a challenge which could be exacerbated by Modi’s Big Tech taxation policy. In order to supplement research activities in its burgeoning quantum and AI sectors, Modi has instead turned toward collaboration with international governments as a vehicle to boost domestic technological development. For example, India’s recently established fund-to-fund partnership with Japan will support over 100 start-ups in AI and IoT. Likewise, the new Indo-French partnership is a critical piece of the puzzle for India, promising to help boost its national deficiency in applied quantum computing development and help India to become a leader in the quantum space.

With international partnerships playing such a key role in Modi's plan for the development and growth of India's quantum computing and AI industries, one might expect the country's actions in India-controlled Jammu and Kashmir to damage its international standing. That expectation, however, is undercut by the signing of the Indo-French bilateral agreement itself. The agreement, which stipulates French alignment with India as a partner in sustainable development and emerging technologies, outlines the countries' shared commitment to "an open, reliable, secure, stable and peaceful cyberspace". It was signed even as India, the world leader in internet shutdowns, enacted a digital lockdown on Kashmir for the 51st time in 2019 alone. That record sits in stark contrast to the stated objectives of the partnership and demonstrates how readily business is separated from peace-building priorities on the international stage.

The Kashmir conflict, a turbulent territorial dispute between India, Pakistan and China, dates back to the partition of 1947 and has already incited four wars between India and Pakistan. Kashmir, dubbed one of the world’s most militarized zones, is of strategic value to both countries and is India’s only Muslim-majority region. The recent conflict was spurred by a series of brutal attacks and rebellions since February 2019, which ultimately led the Modi government to revoke India-controlled Kashmir’s “special status” of autonomy granted under Article 370 of the Indian constitution. Given this complex history and characterization, India’s fresh assault on the region has led many (including Pakistan’s own Prime Minister) to fear an escalation of violence that could result in a worst-case-scenario nuclear face-off between India and Pakistan.

Whether or not it is representative of the true feelings of Modi's "New India", Indian national media has expressed nearly unequivocal support for the revocation of Article 370. French comments, however, lean toward neutrality—tactfully holding the situation at arm's length while urging bilateral negotiation between India and Pakistan. Whether or not the two countries come to a peaceful resolution, it appears that international investment in Indian quantum and AI development will not waver in the face of the Kashmir conflict. Ironically, as India sprints to catch up in the quantum race with the support of France and other international allies, the legacy of the last great technological arms "race", the nuclear one, looms heavy over the subcontinent.

Quantum Internet, Quantum Research

Quantum Teleportation: Paving the Way for a Quantum Internet


2 Comments

Last week's big quantum news centred on two proof-of-concept studies, both of which claim to have achieved, for the first time, quantum teleportation using a three-level unit of quantum information called a qutrit. While quantum teleportation has been demonstrated previously, it has only been carried out with qubits, which can store less information than qutrits but are thought to be more stable. The feat was achieved independently by two teams, one led by Chinese physicist Guang-Can Guo at the University of Science and Technology of China (USTC) and the other an international collaboration headed by Anton Zeilinger of the Austrian Academy of Sciences and Jian-Wei Pan of USTC. While both teams have reported their results in preprint articles, the article by the Austrian-led team has been accepted for publication in Physical Review Letters.

Competition for credit aside, the teams' findings support each other in substantiating an advance in quantum teleportation: namely, that quantum networks should be capable of carrying far more information with less interference than previously thought. This advance—like many in the world of quantum—is likely to excite physicists most, while its applied significance remains harder to grasp for those of us with less scientific minds. Nevertheless, the notion of quantum teleportation has once again grabbed headlines and imaginations, providing a good opportunity to explore the concept and the applied significance that advances like this might eventually have on our world.

While it may sound flashy, quantum teleportation is an affair less akin to science fiction than one might imagine. On a basic level, quantum teleportation differs from 'Star Trek teleportation' because it transmits information rather than macroscale physical objects, like human beings. It is made possible by quantum entanglement, a phenomenon of quantum physics that allows us to look at one particle or group of particles and know things about another, even if the particles are separated by vast distances. Quantum teleportation relies on entanglement to transfer a quantum state between such particles. As such, it is often defined as "the instantaneous transfer of a state between particles separated by a long distance".
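The textbook qubit version of the protocol (the new results extend it to qutrits) can be simulated in a few dozen lines. The sketch below is a bare-bones state-vector simulation with an arbitrary input state; it shows how a pre-shared entangled pair plus two classical bits carries a quantum state from Alice to Bob.

```python
import numpy as np

# Bare-bones state-vector simulation of qubit teleportation.
# Qubit order: [Alice's message qubit, Alice's half of Bell pair, Bob's half].
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Arbitrary message state to teleport: a|0> + b|1>.
a, b = 0.6, 0.8
message = np.array([a, b])

# Shared Bell pair between qubits 1 (Alice) and 2 (Bob): (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(message, bell)                          # full 3-qubit state (8 amplitudes)

# Alice: CNOT (control = qubit 0, target = qubit 1), then Hadamard on qubit 0.
state = np.kron(CNOT, I) @ state
state = np.kron(H, np.kron(I, I)) @ state

# Alice measures qubits 0 and 1 (sample an outcome, then project).
probs = np.abs(state) ** 2
probs_01 = probs.reshape(2, 2, 2).sum(axis=2).ravel()   # marginal over Bob's qubit
m = np.random.default_rng().choice(4, p=probs_01)       # m encodes the bits (m0, m1)
m0, m1 = m >> 1, m & 1
bob = state.reshape(2, 2, 2)[m0, m1, :]
bob = bob / np.linalg.norm(bob)                          # Bob's qubit after projection

# Bob applies corrections based on the two classical bits Alice sends him.
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

print("original message:", message)
print("Bob's qubit:     ", np.round(bob, 6))             # matches, up to a global phase
```

Note that Bob cannot recover the state until Alice's two measurement bits arrive over an ordinary classical channel, which is why teleportation does not amount to faster-than-light communication.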

Quantum teleportation holds the most obvious promise in the discipline of quantum communication, where its impact on secure communication was established as early as 1997. In 2017, Chinese scientists working with a team in Austria made waves with their announcement that they had achieved intercontinental quantum teleportation, establishing a quantum-secure connection for a video conference between the Chinese Academy of Sciences in Beijing and the Austrian Academy of Sciences in Vienna, some 7,600 kilometres apart. The experiment utilized China's Micius satellite to transmit information securely using photons. Micius is a highly sensitive photon receiver, capable of detecting the quantum states of single photons fired from the ground. These photons, beamed via Micius, acted as qubits, allowing researchers in both countries to access a shared quantum key and thus enabling them to participate in the quantum-encrypted video call. Critically, had the transmission been intercepted by a third party, the key would have been scrambled, leaving evidence of tampering for researchers at both ends of the connection.
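To make "sharing a quantum key" slightly less abstract, the toy sketch below is a purely classical simulation of the textbook BB84 idea: random bits encoded in randomly chosen bases, with only matching-basis positions kept as key material. It is a caricature for intuition, not the specific protocol flown on Micius.

```python
import numpy as np

# Toy BB84 sketch (classical simulation). Alice encodes random bits in random
# bases; Bob measures in random bases. Where their bases happen to match, Bob's
# bit equals Alice's and it becomes shared key material; mismatched-basis
# positions are discarded. Real QKD adds error estimation to detect eavesdropping.
rng = np.random.default_rng(7)
n = 32

alice_bits = rng.integers(0, 2, n)     # the raw key bits
alice_bases = rng.integers(0, 2, n)    # 0 = rectilinear, 1 = diagonal (toy labels)
bob_bases = rng.integers(0, 2, n)

# If bases match, Bob reads Alice's bit; if not, his result is random.
bob_bits = np.where(alice_bases == bob_bases, alice_bits, rng.integers(0, 2, n))

keep = alice_bases == bob_bases        # announced publicly: bases only, never the bits
shared_key = alice_bits[keep]
assert np.array_equal(shared_key, bob_bits[keep])   # identical keys at both ends
print(f"kept {keep.sum()} of {n} bits as shared key:", shared_key.tolist())
```

In a genuine quantum implementation, an eavesdropper who measures the photons in flight disturbs them, raising the error rate between Alice's and Bob's kept bits and revealing the intrusion, which is the tamper evidence described above.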

This experiment, facilitated by quantum teleportation, proved two fundamental and impactful theories in quantum physics: that quantum communication can provide a previously unfathomable level of security and that it is capable of doing so on a global scale. Given these capabilities and coupled with the new qutrit proof-of-concept work, the realm of applied possibilities for quantum teleportation is expanding.

Aside from ultra-secure, transcontinental video conferences, one much-hyped application for quantum teleportation is the development of a hyper-fast quantum internet. Because the particles are entangled, the quantum state is transferred without traversing the space in between; completing a teleportation still requires a classical signal, however, so no usable information travels faster than light and the scheme must operate within the confines of classical communication infrastructure. As such, even quantum information must travel through ground-based fibre-optic cables or via photon-sensitive space-based satellites, like China's Micius. This infrastructure is both expensive and potentially expansive, posing a formidable challenge for a global roll-out of a quantum internet. Still, these early experiments have laid the groundwork for quantum-secure networking by putting theory to the test and producing promising results.

Currently, a team of researchers at Delft University of Technology in the Netherlands is working to build a quantum network, using quantum teleportation to move information between its nodes. The project, which aims to connect four cities in the Netherlands, is scheduled for completion in 2020. In China, too, researchers are constructing the backbone of a quantum network to connect Beijing and Shanghai. Alongside support from banks and other commercial entities, progress on both localised and international quantum networks has been spurred by pressing anxiety about the state of global cybersecurity.

A critical advantage of a future quantum internet is the enhanced security afforded by quantum teleportation—the ability to create an effectively unhackable connection, one on which any eavesdropping attempt leaves a trace. This could have serious implications for national security and could offer a potential solution to many of the foreign surveillance and interference challenges that countries face today. For example, it is now public knowledge in the U.S. that Russia has demonstrated the ability to interfere directly with most paperless voting systems. While states are currently reluctant to make changes to the U.S. vote-casting system, alternatives are slowly being considered—from regressive paper-ballot casting to progressive blockchain applications—in order to safeguard American votes against hacking efforts. Quantum teleportation could offer an interesting alternative in this space as the technology continues to develop.

Though quantum teleportation will not be transporting human beings between planets any time soon, it will play a key role in ushering in an internet revolution. While it remains to be seen exactly how that revolution will play out, it is clear that it will bring an unprecedented level of security and speed to global communications. It is also apparent that the level of interest in the secure and high-speed communications afforded through quantum teleportation is broad and deep, spanning both public and private sectors across the globe. Quantum teleportation has recently seen a number of experimental proofs, pushing the field of quantum communications to the fore of quantum development and promising to deliver a much sought-after security transformation within the decade.