Project Q, Quantum Applications, Quantum Internet

Project Q Interview: Stephanie Wehner on Building a Quantum Internet



Feature image via Quanta Magazine

Stephanie Wehner has an impressive resume, to say the least. The German physicist and computer scientist is currently leading Europe's Quantum Internet Alliance on its mission to build a quantum internet. She is the Roadmap Leader of Quantum Internet and Networked Computing at QuTech, a research centre for quantum computing and the quantum internet at Delft University of Technology in the Netherlands; co-founder of QCRYPT, the largest annual international conference on quantum cryptography; Coordinator of the Quantum Internet Alliance of the EU Flagship; and Antoni van Leeuwenhoek Professor at QuTech, Delft University of Technology. And that is not even mentioning her previous accomplishments and accolades.

We recently sat down with Stephanie to discuss the project’s advancements, future use-values for a quantum internet and the challenging ethics of building a network that will enable un-hackable communications.

The following interview has been edited for clarity.

 

Why are you building a quantum internet?

We are working on building a quantum internet because you can do things with a quantum internet that you cannot do on the internet that you have today.

Of course, the most famous application of quantum communication is secure communications. That’s proof that you can have security that even a quantum computer can never break. But that’s not the only reason why we do it. There are a few other cool things you can do with a quantum internet. For example, if you can imagine that in some years we actually do have quantum computers, then people think the first application of such a quantum computer could be to simulate, say, a new material design. But maybe we will only have this technology here at QuTech and a few other places. One way for you to use such a quantum computer would be to send your material design to us, then we would do the simulation for you and then tell you what the result is. But maybe you don’t want to tell us your material design, given that it might be proprietary. And so the question is, can you perform such simulations and can you use the remote quantum computer in such a way that you don’t have to give away your proprietary design or any other secrets that you want to involve in this computation? And the quantum internet makes it possible to use a very simple quantum device, a quantum terminal, to access a remote quantum computer in such a way that this quantum computer cannot learn what you’re doing. So, it cannot learn what your proprietary material design is, it cannot even learn if you’re doing a simulation or factoring a number—it cannot tell the difference.

There are a few other nice applications. For example, one can synchronize clocks more accurately. One can keep data more efficiently in sync in the cloud. That’s maybe something that is not so obvious to you actually as a user, but you would certainly know if it goes wrong.

Let’s imagine an extreme example: let’s say that you have a million euros in the bank. And the data is, of course, stored somewhere. So, somewhere there’s a database that says that you own that one million euros. So, you can imagine that if you went to the A.T.M. to withdraw money, maybe the system crashes when you withdraw. And usually for redundancy purposes, of course, the data does not exist only in one location because, you know, if the computer burns down, then no one remembers who owns any money. It’s replicated in a few locations. But it might happen that if you don’t employ such consistency protocols, that if your system crashes during withdrawal, then computer one now says you own one million euros and computer two now says you own zero euros. So now the question is, who is correct? So, it’s a very important problem actually to keep data consistent in the cloud so that you don’t run into these kinds of issues.

I understand that one of the most important aspects of a quantum internet is that it will enable ultra-secure communications, which is obviously a huge benefit for state-actors, banks and big corporations. But what are some impacts a quantum internet might have on broader civil society?

I think keeping data consistent, for example, is not something that is only for big business. I think it's very difficult to predict the future. The internet that we have today was originally meant to share some files around. And that's great, but then you might also ask why would I, at home, ever share a file? At that point, in fact, people didn't even have a personal computer at home, so, what are these files that you're talking about?

So, we cannot predict all the applications that a quantum internet will have. People have used it also, for example, to cheat an online bridge game with entanglement. Which, of course, is a bit obscure but it may hint that there are many more things one can do with it. But I think if people don’t have access to it, then this will also not come.

A lot of the applications that we run on the internet today were not developed by people in the 60s who wrote on a whiteboard and said, "these are all the applications, and now we're going to build this thing". Rather, there were people who engaged with the technology, played around with it, wanted something like a social forum and tried to see whether it was possible.

To begin with, I know the quantum internet you’re building will have a very limited scope but do you envision this being something that will be accessible to everyone in the future?

I certainly hope so, absolutely. I think the question is just a little bit, when? So, we’re building a small demonstration network here in the Netherlands, where we also have an effort to make it accessible for people. But that will only happen in two or three years because it’s very difficult to have something stable enough that you can begin to do that.

We also already have a quantum internet simulator. It's a little program that you can install on your computer to get something like a pretend quantum internet. And we are actually using it for a hackathon next week, together with RIPE NCC (the regional internet registry for Europe), and this time it's a pan-European version. So, there will be a few teams—one here, one at CERN, one in Dublin and a few in other places across Europe. And they're going to basically work together on our "pretend quantum internet" to explore a few things one can do with it.

Given the lessons that we’ve learned from the development of the classical Internet, what sort of legal or ethical challenges do you think future frameworks and regulations for a quantum internet might consider? And are these unique from the challenges that are posed by the classical internet?

To be honest, I think there are other people who are more capable of answering this question. I’m a researcher, I’m not a lawyer and I’m also not a specialist in ethics. Given this position, I can give you a few issues, even though maybe I am partially critical about them myself.

On the one hand, there’s a lot of discussion about standardising various technologies. Which, of course, is very important eventually. On the other hand, I’m also a little bit critical about this because if you start to write standards too early, you constrain the development. Another aspect is the impact of having fundamentally un-tappable communication. That is a question that is maybe not even totally unique to quantum networks. Of course, only quantum networks can deliver fundamentally un-tappable communication, but it also arises to a lesser extent with existing encryption technologies that people might be using.

So, is that a good thing or a bad thing? On the one hand, it’s a very good thing because one can protect government secrets and everyone’s secrets with absolute security. But of course, security always has two sides. If you have a mechanism to make something more secure, it can in principle be used by anyone. It can be used for good, but it can also be used for bad. So that is a little bit of a trade-off between these two things.

I am personally of the opinion that you cannot stop progress. So, you can say, “I’m going to forbid this.” But then people will do it anyway. It’s not possible to forbid technology.

The reason I think a lot of us maybe have some mixed feelings about this is the sense that it’s already super hard to realise that technology. It’s already so hard! So, putting some extra barriers is a very scary thing, right?

There’s been a lot of talk recently about Google’s claim to have achieved quantum supremacy, as you know. But, of course, the reality is that for the most part, quantum computers will work in concert with classical computers, not replace them. In what ways will the quantum internet interact with or rely on existing “classical” technologies?

So, maybe first a word about the term quantum supremacy: in quantum communication, quantum supremacy was achieved many years ago, because any QKD implementation basically demonstrates it. A quantum internet is not supposed to replace the classical internet but rather to supplement it with some extra functionality that you otherwise don't have. If you say, I'm watching a movie on Netflix, there's no reason why we would send it via qubits. Maybe in the far future, when everything is much more advanced, we could do everything in one system. But in my lifetime, I don't expect this. In all known quantum application protocols, whether secure communication, secure quantum computing in the cloud or anything else, you need the quantum network, but you also need to send some classical data around.

So, do the networks overlap or do they sit separately?

That's a good architecture question. Do they follow sort of the same pattern? They don't need to follow the same pattern on the application level, not at all. On the elementary level—on the control level—whenever you have two quantum nodes that want to make quantum entanglement, for example, they also need to be able to talk to each other classically, to synchronize.

And is this done using hardware or software?

It's done through hardware and software, actually, on what is called the physical layer. So next to a quantum channel you always have a classical control channel, but it is not visible to the user. The classical user-level communication, on the other hand, could also be carried by the standard internet, alongside the quantum topology.

What kind of support has this project received on a local, national and regional level as well as privately?

We have a lot of support from the Netherlands, actually, both through QuTech, which is a national Icon program of the Ministry of Economic Affairs in the Netherlands, and also NWO, which is the Dutch equivalent of the U.S. NSF. We also have some research funding from the EU, both from the European Research Council and, to a lesser extent, from the Quantum Flagship, which is the EU initiative. We are also the coordinator of the European Quantum Internet Alliance, where we work together with other nodes in Europe.

We also have various industry engagements. For example, we work together with KPN, which is the Dutch telecom operator. We also talk to a lot of parties in the classical domain, for example The Hague Security Delta, which is sort of an umbrella organisation of 80 security companies in the Netherlands. That's very convenient for us because we don't have to talk to each of them individually, so that's very valuable for us. We also talk to a few other private entities in the Netherlands. We also have relations with industry partners on the component level, for example with Toptica, which makes laser systems. Then there's OPNT, which does timing control, and JPE, which does stabilisation. So, this is on the component level, where we work with a lot of industries to do specific things for our quantum network. At the other extreme of the spectrum, we also work with industry that is more interested in the use cases, for example with SAP, which is a German software company. With these companies, the interest is more about what you can do with the technology.

Another useful thing to mention is that there is also RIPE NCC, which is the regional internet registry of Europe. And that’s actually pretty cool for us because they’re an organisation that brings together all the large telecom operators and internet providers in Europe. They are responsible for managing the numbers on the internet and there cannot be a computer anywhere in Europe that does not have a number from RIPE. But they also do a lot of community development and education of their members.

I know you set a deadline of the end of 2020 to have this completed. How is your progress tracking now?

So, we will have one link by 2020 but we do not have the four nodes yet. We want to have three in 2021 and maybe all four in 2022.

Have there been any surprise challenges that have created this delay?

Of course, there are some technical challenges which took us longer. And, of course, there were also some mundane challenges. We have also decided we would like to deviate from the four-city plan because we would like to put one node somewhere we can physically access it. Previously, we had said we’re going to put it in Leiden, Amsterdam, Delft, The Hague. But then we were thinking that somewhere in the building in Leiden there could be a node, but that it would be in one of these KPN-style buildings where no one can go in. So, this is why we want to put one of the nodes somewhere where you can actually see it. That might happen either here [in Delft] or in The Hague or in Rotterdam, we haven’t quite decided yet. The idea is that you would really have a terminal where you can see the node, otherwise you just have to believe us, right? We tell you, we promise that the node’s over there!

Quantum Applications, Quantum International Relations, Quantum Internet

The ‘Who, What, Where, When and Why’ of a Quantum Internet



With all of the recent hype about quantum supremacy, it’s easy to forget that quantum supremacy in communications was demonstrated years ago. One of the most exciting developments on the horizon for quantum communications is a quantum internet, which will securely transmit quantum information. Like most things quantum, the label of “quantum internet” has been slapped on to a quantum technological application, establishing a concept that is easily consumable for the masses, which helps to create the hype that keeps funding for that application flowing. The reality, as is often the case, is much more complex.

In fact, just about the only thing that scientists agree on is that the term “quantum internet” does not have an agreed-upon definition. That is because the technology required to manifest this reality is still in its infancy. Scientists around the world are working hard to change that. Perhaps the most well-known is Stephanie Wehner of Delft University of Technology. In preparation for the release of Project Q’s interview with Wehner on this topic, we reflect on the current stage of global development of a quantum internet.

Since 2004, the security afforded by quantum communications has been proven superior by a method known as quantum key distribution (QKD). QKD is a system employed to produce and distribute a secret key that can then be used to encrypt classical information. This method has since been employed by a number of actors across both private and public sectors, including banks and national security networks. It does not, however, provide a secure link by which quantum information can be transmitted. Enter one important motivation for a quantum internet: to create a network of quantum nodes that enables the secure transfer of quantum information. Of course, there is a diversity of useful applications for such a network, and many more will develop as the technology matures. One need only recall the history of the classical internet, for which the first projected use-value was extremely narrow, to imagine the breadth and depth of applications that will surely follow once the technology is functional.

However, a salient challenge for researchers working on a quantum internet remains. Like the classical internet, a quantum internet requires a physical infrastructure in order to function. There has been a diversity of approaches to this complex problem, from diamonds to crystals and drones to satellites. For the most part, however, the emerging dominant systems rely heavily on land-based fibre-optic cables, with some major differences between them.

In 2016, China launched its quantum satellite, Micius, as part of its Quantum Experiments at Space Scale (QUESS) project. Within a year of the satellite's launch, major goals paving the way for a quantum internet had been achieved by a multi-disciplinary, multi-institutional team from the Chinese Academy of Sciences, led by Professor Jian-Wei Pan. These ground-to-satellite quantum communication advances included the impressive feat of establishing, via Micius, a quantum-secure communication link spanning the longest distance yet between two points on the globe (Beijing and Vienna). Recently, China has also constructed the largest fibre-based quantum communication backbone, known as the Beijing-Shanghai quantum link, which stretches over 1,200 miles. However, while the link is already in use by some of China's biggest banks to transfer sensitive data, it is not fully quantum-secure (more on that shortly).

While we have known for some time that quantum communication is theoretically possible, China has been the first country to focus its research apparatus on the challenge, building the first dedicated, large-scale infrastructure for the task. From a security perspective, this is a strategic move on China's part. The focus on quantum communications is a pre-emptive defence mechanism to combat U.S. advances in the quantum computing space. Regardless of the development of quantum computers, which will be capable of hacking any classical communications, a quantum-secure network will act as a safeguard against prying eyes and ears. As a result, China continues to be a world leader in this space. However, Europe is hot on its heels and lining up to take the cake for the next big development in quantum communications: creating a functioning quantum internet.

You may have heard of the work being done to build a quantum network in the Netherlands by a team of researchers at the Delft University of Technology. Much like China's Beijing-Shanghai quantum link, the network being built by the Delft team is a link between four major cities in the Netherlands, stretching from Delft to Amsterdam.

The main difference between the Chinese quantum link and the one being built by Wehner and her team is that the Chinese infrastructure, while greatly improving upon most current cybersecurity capabilities, is still susceptible to hacking. Theoretically, a genuine quantum link will provide an un-hackable connection across large distances. The Chinese system relies on 32 nodes across the link to transport quantum information, which is carried in photons, or light particles. Each of these nodes is susceptible to hacking because it serves as a point where the information must be decrypted and then re-encrypted before continuing its journey along the link. The system was constructed in this way because quantum information carried in photons can only travel through about 100 miles of fibre-optic cable before it begins to dim and lose data.

A solution to this problem, which Stephanie and her team have incorporated into their design from the outset, and which the Chinese team is beginning to work with as they improve their own link, is the use of quantum repeaters. This is how they work:

A quantum repeater essentially serves the same purpose as an ordinary relay node, except it works in a slightly different way. A network using quantum repeaters is shaped more like a family tree than a linear chain. In this family tree-shaped game of telephone, the quantum repeater is the parent who distributes identical pairs of quantum keys to two children, thereby doubling the possible distance between users. Moreover, these "parents" can also have their own "parents," which doubles the key-sharing distance between the children at the bottom for every extra level created atop the family tree. This in effect increases the distance a quantum message can be sent without ever having to decrypt it.
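
The arithmetic of that doubling is easy to sketch. The short Python snippet below is a minimal illustration only, assuming an elementary link of roughly 100 miles (the fibre limit quoted above); the constant and function names are illustrative assumptions, not details of the Delft or Chinese designs.

    # Minimal sketch: each level of the repeater "family tree" doubles the
    # distance over which the end users can share a key. The 100-mile
    # elementary link length is an assumption taken from the figure above.
    ELEMENTARY_LINK_MILES = 100

    def levels_needed(total_distance_miles: float) -> int:
        """Return how many repeater levels are needed to span the distance."""
        reach = ELEMENTARY_LINK_MILES
        levels = 0
        while reach < total_distance_miles:
            reach *= 2      # one extra level of "parents" doubles the reach
            levels += 1
        return levels

    if __name__ == "__main__":
        for distance in (100, 400, 1200):   # 1,200 miles ~ the Beijing-Shanghai span
            print(distance, "miles ->", levels_needed(distance), "repeater levels")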

An illustration of the type of quantum network being built by the Delft team.

Alongside their use of quantum repeaters, which provide an infrastructure to teleport the entangled quantum information across the link, the Delft team incorporates quantum memories as an essential element in ensuring the information's hyper-secure journey. Quantum memories store the entangled information in between the repeaters. They are critical because they enable the network to store the quantum information while the next entangled link is prepared, rather than measuring it and thus potentially destroying it. A system built on quantum repeaters and quantum memories eliminates the need for weak security points where the quantum information is decrypted and then re-encrypted, or potentially destroyed.

Though significant challenges remain for researchers working to build a quantum internet, international efforts become more and more sophisticated with each passing day, bringing the world closer to potential quantum network connectivity. While it is being built to supplement certain capabilities of the classical internet, some believe that eventually the quantum internet will even overtake the classical one. Most agree, however, that this will not be a reality within our lifetime. After all, as Wehner commented in a recent interview with Project Q for our upcoming publication, you don't really need a quantum internet to watch Netflix.

Tune in next week to read our exclusive interview with Stephanie Wehner, where she updates us on the project’s advancements, answers questions about future use-values for a quantum internet and addresses the challenging ethics of building a network that will enable un-hackable communications.

Quantum Computing

All Hype and No Game? Google, IBM, Preskill and Quantum Supremacy



Feature image: IBM’s System One quantum computer, via The New York Times.

Words are important. The language we use to describe something creates a discourse around it, endowing it with a life of its own and often producing meaning beyond the definition of the words themselves. Like many expressions in the emerging technology space (think: disruptive innovation, IoT or even artificial intelligence), the expression 'quantum supremacy' has been over-used and misused to the point where the original relevance of the term has been buried under a mountain of hype. Has this expression been transformed into an empty buzzword through this process—like a game of 'telephone', its meaning distorted along the way? Now, with Google's proclamation of achieving quantum supremacy officially published, criticism has been flooding in, not only with regard to the claim itself but, more fundamentally, about the usefulness of 'supremacy' as a benchmark at all.

Just last week IBM, one of Google's main competitors in the quantum space, posted a critique of Google's shrouded announcement on its blog. The article, penned by Edwin Pednault, John Gunnels and Jay Gambetta—leaders and members of IBM's quantum research and development team—urges that Google's achievement should not be misconstrued 'as proof that quantum computers are "supreme" over classical computers.' The team takes issue with the way in which 'quantum supremacy' has come to imply that quantum computers will one day reign 'supreme', replacing classical computers. They assert that the future of quantum computing will be intertwined with classical computing, with the two uniquely suited systems working in concert with one another.

IBM's statement adds further complexity to the situation, arguing that Google may not have achieved quantum supremacy at all according to John Preskill's original definition of the term, coined 'to describe the point where quantum computers can do things that classical computers can't, regardless of whether those tasks are useful.' The point of contention: Google has stated that its Sycamore processor was able to compute the answer to a specific problem in about 200 seconds, a task it claims would take 10,000 years to compute on the world's most powerful supercomputer. IBM argues, however, that the comparison does not account for any of the unique capabilities of classical computing, which, if properly leveraged, could bring that number down to just 2.5 days. While that is a significant difference in estimation between Google and IBM, worthy of headlines itself, it should be noted that neither team has actually tested the calculation on a supercomputer yet.

Whether or not the disparity in time to compute between quantum and classical is measured in days or years, the fact remains that what Google demonstrated on their quantum processor can also be done on a classical computer, albeit much more slowly. Still, as Preskill commented in a recent interview for Quanta Magazine on the subject, Google’s achievement is significant in that it demonstrates the team understands the hardware they have built and that it is working. When stripped of the hype associated with the terminology, this may seem a far more lacklustre news item. Nonetheless, it is an important step forward in the development of useful quantum computing applications, which is where the real fruits of the industry’s labour will finally be tasted, across areas as diverse as healthcare, hacking, and honing.

As Project Q and others have explained in coverage of Google’s announcement, the terminology used can be misleading at best and simply a product of the media hype machine at worst. But hype has its value too. Hype can insulate against the arrival of a quantum winter—a scenario where interest and investment in quantum technologies drop off due to the technology’s failure to deliver on its promise. There is concern that a quantum winter could mean the technology never reaches the immense promise of its applications. However, private funds continue to flood the industry. According to research by Nature, ‘in 2017 and 2018, companies received at least $450 million in private funding—more than four times the $104 million disclosed over the previous two years.’ While hype contributes to potentially overstating the promise of quantum computing in the first place, it also plays a critical role in pushing the development of quantum computing forward by keeping the buzz and enthusiasm high.

It's important to recognize the role of hype in technological progress. It is also important, however, to have access to a more nuanced understanding of the progress of quantum development, to dive deeper than the terminology and the hype around it. Without a doubt, the terminology used to designate a turning point in the development of quantum computing is problematic, even according to its creator. This was also the central point of IBM's argument: 'we urge the community to treat claims that, for the first time, a quantum computer did something that a classical computer cannot with a large dose of scepticism due to the complicated nature of benchmarking an appropriate metric.' Quantum supremacy, while a significant step in quantum's development, is by definition an incredibly narrow benchmark with practically no real-world utility. However, it has value in its ability to capture the imagination of society and keep people engaged in the progress of one of tomorrow's most promising technologies.

Quantum Theory

Quantum Superposition Bridges the Classic World



Cracks have begun to show in one of quantum's biggest controversies. The well-known Schrödinger's cat thought experiment, which sought to illustrate the absurdity of applying quantum theory to the macro-physical world ruled by classical physics, has been challenged by a recent advancement in quantum physics. An international team, led by Markus Arndt of the University of Vienna, successfully placed a large molecule of 2,000 atoms—the biggest object yet—into a state of quantum superposition. The advancement shows that quantum effects can be translated into the classical world, establishing the foundations for scientists to continue to demonstrate how the gap between these seemingly disparate worlds might be reconciled.

Quantum theory tells us that particles in superposition can shift between a wave-like state and a particle-like state, meaning they can be in two places at once. Of course, from what is observable in the classical world, this cannot be true. If it were, our understanding of what is "real" would be challenged, opening the door for a whole host of quantum weirdness that classical theory keeps at bay. Essentially, as Schrödinger tried to show with his thought experiment, if quantum mechanics held on a macro-physical scale, it would mean that human beings could also exist in two places at once. It does not take long for this information to snowball into theories of time travel and multiple worlds, both of which find a basis in quantum theory.

On a fundamental level, the new work published in Nature illustrates that the multi-state paradox of quantum mechanics, known as superposition, functions on a larger scale than previously demonstrated. In theory, we already knew this to be true, but the experiment proves it at the largest scale yet; previously, it had only been demonstrated using the smallest particles: atoms, photons and electrons. The set-up used by Arndt and his team is essentially a souped-up version of the double-slit experiment, which dates back to 1801 and has been used regularly in quantum mechanical experiments to observe the effects of superposition.

The simple experiment involves particles of light (photons) beamed toward a barrier with two slits in it. On a screen behind the barrier, the effects of quantum superposition are displayed in the form of what is known as an interference pattern. It looks something like this:

The striped pattern that results is interesting, as one might assume that a single beam of photons would produce a pattern of a solitary line, indicating that they fell along a single path. However, the striped pattern that is produced shows that all of the photon's possible paths are taken and eventually interfere with each other, suggesting that the particle in fact also acts as a wave. This illustrates the probabilistic nature of quantum phenomena, challenging Einstein's famous claim that "God does not play dice with the universe".
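
For readers who want to see where the stripes come from, the snippet below is a minimal sketch of the textbook two-slit intensity formula, in which the brightness on the screen varies as cos²(πd·sinθ/λ). The wavelength and slit separation are arbitrary illustrative values, not parameters from the Vienna experiment.

    # Minimal sketch of the textbook two-slit interference pattern:
    # intensity varies as cos^2(pi * d * sin(theta) / wavelength), so the
    # superposed paths produce bright and dark stripes rather than one line.
    # The wavelength and slit separation are arbitrary illustrative values.
    import math

    WAVELENGTH = 500e-9        # 500 nm light (assumed)
    SLIT_SEPARATION = 50e-6    # 50 micrometre slit spacing (assumed)

    def relative_intensity(theta_radians: float) -> float:
        """Two-slit intensity, ignoring the single-slit diffraction envelope."""
        phase = math.pi * SLIT_SEPARATION * math.sin(theta_radians) / WAVELENGTH
        return math.cos(phase) ** 2

    if __name__ == "__main__":
        # Crude text rendering of the stripes over a small angular range.
        for step in range(-30, 31):
            theta = step * 1e-3  # angle in radians (milliradian steps)
            bar = "#" * int(40 * relative_intensity(theta))
            print(f"{step:4d} mrad |{bar}")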

In order to pull off their super-sized version of this experiment, the international team had to not only create the perfect environment but also synthesize the massive molecule itself, to ensure it met the requirements for complex quantum activity to occur. The team built a custom interferometer—which, as the name suggests, is a tool that works by merging two or more sources of light in order to create an interference pattern—called the Long-Baseline Universal Matter-Wave Interferometer (LUMI). The team's LUMI also beats a record: it is the longest interferometer ever built, with a baseline length of 2 metres. This specialised machine permitted the researchers to fire the beam of heavy molecules (some more than 25,000 times the mass of a hydrogen atom) at the multiple-slit apparatus and observe the resulting interference pattern, confirming the molecule's state of superposition.

With records being broken in the quantum space with what feels like near-weekly regularity, this advancement marks a particularly notable turning point in the disagreement between quantum mechanics and general relativity. These two frameworks we use to understand the world around us have come as close to being bridged as ever before. While the success of this experiment does serve to wedge the door open for a number of seemingly bizarre theories like time travel and multiple worlds, it is doubtful that human beings or planets will be time travelling through multiple realities any time soon, if ever. However, this new, scalable research pushes further the limits of scale at which scientists can probe quantum superposition, enabling and encouraging future research to continue to explore these limits.

Quantum Applications, Quantum Computing

Transforming Drug Development: A Critical Role for Quantum Computing



Feature image via Nature

With news of Google’s possible achievement of quantum supremacy, quantum computing’s promise in a diversity of fields grows ever-more tangible. Drug discovery is just one of a number of areas in which quantum computing is expected to play a disruptive role. On average, it takes over ten years and billions of dollars to bring a potentially life-saving new drug to market. Quantum computers promise to revolutionize the currently expensive, difficult and lengthy process of drug discovery and development, by expanding the search for new chemicals to treat some of the world’s most deadly diseases, speeding up the creation of new drugs and cutting the costs of their development. At this prospective turning point in the advancement of quantum computing, Project Q takes stock of quantum applications in drug research and development.

Currently, researchers rely on computer models and simulations (M&S) to analyse how atoms and molecules behave, in order to develop drugs that will have optimal positive effects and minimal harmful ones. However, while of critical value to this process, today's M&S tools quickly reach their limits of utility in the complex and computationally intensive process of molecular simulation. The goal of molecular simulation is to find a compound's most stable configuration, known as its ground state. To do this, researchers use M&S systems to simulate the interactions between the compound's electrons, in each atom, to test how they will react to one another. This is a fairly straightforward task, as long as the molecules being tested are simple enough. However, even today's most powerful supercomputers are only capable of simulating molecules of up to a few hundred atoms, limiting their calculations to only a small fraction of all chemicals that exist.
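
As a very rough intuition for what "finding the ground state" means computationally, the sketch below represents a toy Hamiltonian as a small matrix and takes its lowest eigenvalue. The matrix entries are made-up illustrative numbers, not data for any real molecule; real compounds require exponentially larger matrices, which is exactly where classical simulation breaks down.

    # Toy illustration of "finding the ground state": represent a (made-up)
    # molecular Hamiltonian as a small Hermitian matrix and take its lowest
    # eigenvalue and eigenvector. Real compounds need exponentially larger
    # matrices, which is where classical simulation runs out of steam.
    import numpy as np

    # Two-level toy Hamiltonian; entries are arbitrary illustrative numbers.
    hamiltonian = np.array([[0.0, -0.5],
                            [-0.5, 1.0]])

    eigenvalues, eigenvectors = np.linalg.eigh(hamiltonian)

    ground_energy = eigenvalues[0]      # lowest energy = most stable configuration
    ground_state = eigenvectors[:, 0]   # the corresponding state vector

    print("Ground-state energy:", ground_energy)
    print("Ground-state vector:", ground_state)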

For a whole host of larger molecules that could be used to make new, life-saving drugs, researchers currently have no better option than to approximate how a molecule may react and then test its behaviour in trials. This process is incredibly inefficient and about ninety percent of drugs that do reach clinical trials fail during the first phase. Adding to this complexity, M&S methods are unable to calculate the quantum interactions that contribute to determining the characteristics of a molecule. A technological update in drug discovery is long-overdue.

Ultimately, the main technological limitation facing drug research and development today is that classical computers lack efficacy in what is known as optimization problems—finding the best solution by testing all feasible solutions—a process which is incredibly time and energy intensive. Quantum computers, in theory, are extremely good at optimization problems. This is due to their ability to leverage parallel states of quantum superposition, which enables them to model all possible outcomes of a problem at once, including the quantum interactions that happen on a particle-level. Theoretically, as they reach their promised computational capacity, quantum computers should be able to rapidly process mass amounts of data.
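
To make concrete why "testing all feasible solutions" is so expensive classically, the sketch below brute-forces a toy problem over n binary choices, so the number of candidates grows as 2^n. The cost function is an arbitrary stand-in, not a model of any real chemistry.

    # Toy illustration of why brute-force optimisation is so costly on
    # classical hardware: over n binary choices there are 2**n candidates.
    # The cost function is an arbitrary stand-in, not a chemistry model.
    from itertools import product

    def cost(bits):
        """Arbitrary illustrative cost: penalise equal neighbouring bits."""
        return sum(1 for a, b in zip(bits, bits[1:]) if a == b)

    def brute_force_minimum(n):
        """Check every one of the 2**n candidates and return the best."""
        return min(((bits, cost(bits)) for bits in product((0, 1), repeat=n)),
                   key=lambda pair: pair[1])

    if __name__ == "__main__":
        for n in (4, 12, 20):
            best_bits, best_cost = brute_force_minimum(n)
            print(f"n={n}: searched {2**n} candidates, best cost {best_cost}")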

In 2017, IBM Q researchers achieved the most complex molecular simulation ever modelled on a quantum computer, proving the potential use-value for quantum computers in the pharmaceutical industry. The research suggests that if applied to drug discovery, quantum computers could model and test new drugs through molecular simulation far more comprehensively and much quicker than classical computers, effectively slashing the costs of novel drug research and development. Aside from empowering researchers to discover new treatments for a range of diseases, quantum computing could also help bring new drugs to trial more quickly and improve the safety of trials.

Already, innovators and researchers working on quantum applications in drug development are making waves in the pharmaceutical industry. Abhinav Kandala, part of the IBM Q team that simulated the largest molecule on a quantum computer back in 2017, has continued to push the boundaries of quantum computing in order to make it more applicable to industry, faster. His work focuses on a major challenge in quantum computing: improving accuracy. Quantum computers are still drastically error-prone in their current stage, hampering their utility for application in drug discovery and development. One of the MIT Technology Review’s 35 Innovators Under 35, Kandala has demonstrated how quantum errors can actually be harnessed in order to boost accuracy in quantum computations, regardless of the number of qubits. General advancements in quantum computing like this could help to bring the benefits of quantum computing to industry sooner.

There are a number of young companies emerging in the pharmaceutical research space, looking at the computational boost and projected accuracy that quantum computing could lend to a range of challenges in diagnostics, personalised medicine and treatments. As quantum computers are not yet advanced enough to stand alone, most of these global start-ups rely on a blend of emerging and classical technologies. Especially prominent is the blended approach combining machine learning and quantum computing, a topic we have previously explored here.

Another of the MIT Technology Review’s 35 Innovators Under 35, Noor Shaker leads a company that is harnessing these two e(merging)-technologies in order to speed up the creation of new medicines. Her company, GTN LTD, is producing technology that layers the processing power of quantum computing with machine learning algorithms to sort through mass amounts of chemical data in search of new molecules that could be used in disease treatment and prevention. Using this method, GTN (a Syrian female-run company) hopes to build a critical bridge in healthcare that could help to lessen the gap in access and quality of healthcare for people living in developing countries. GTN LTD’s application of these two technologies is just one example of the numerous ways in which they could be used to create and spread benefit across global healthcare systems.

Machine learning projects are already being implemented as part of a growing trend in digital healthcare, providing a helpful starting point for discussion of how other emerging technologies like quantum computing could also impact the sector. A recent article in Nature explores how even the most well-meaning of artificial intelligence (AI) applications in healthcare can lead to harmful outcomes for certain vulnerable sectors of society. The examples investigated by author Linda Nordling demonstrate the need to apply a careful social-impact and sustainability methodology throughout the process. As Nordling explains, many machine learning-based projects in healthcare can reinforce inequalities rather than help to level the playing field, if equity is not a factor that is thoroughly considered and addressed throughout the entire research and development process.

Of course, every technology is different. The challenges confronting AI applications in the healthcare sector may not translate directly to the risks that quantum computing could pose. However, there are certainly lessons to be learned. For all emerging technologies, there is equal potential to help lessen the gap between the rich and the poor as there is to widen it. The direction of development toward the helpful or the harmful hinges on many factors, including accountability and regulation. Fundamentally, the incorporation of a methodological focus on equity and inclusion, from the inception to the employment of an emerging technology, is critical.

The application of quantum computing in drug discovery is no exception to this rule. The development of this emerging technology, both alone and in concert with other technologies, has the potential to make a significantly positive impact on society. With proper care taken to ensure ethical research, development and application, the trickle-down effects of the quantum revolution could improve the lives of many. It is thus imperative that we seek to understand the impacts that quantum computing could have in the pharmaceutical industry if we want to ensure its potential to help discover cures to intractable diseases like cancer and Alzheimer’s becomes a benefit that is distributed equitably across the globe. This is not a problem for quantum to solve, but for society.

Uncategorized

Creating Space for Informed Democracy



Nicolas Cage replaces Tom Hiddleston in a deepfake of Thor Ragnarok. Image Credit: ABC News

“I have a foreboding of an America in my children’s and grandchildren’s time – when the United States is a service and information economy; when awesome technological powers are in the hands of very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish what feels good and what’s true we slide without noticing, back into superstition and darkness.

We’ve arranged a global civilisation in which the most crucial elements – transportation, communications and all other industries, agriculture, medicine, education, entertainment, protecting the environment; even the key democratic institution of voting profoundly depend on science and technology. We have also arranged things so that almost no one understands science and technology. This is a prescription for disaster. We might get away with it for a while, but sooner or later this combustible mixture of ignorance and power is going to blow up in our faces.”

So wrote the famed scientist Carl Sagan in 1995. Almost a quarter of a century later, here we are in the fallout of his foresight. Around the world, the open information systems we rely on in democracies have been degraded by the pace of scientific and technological acceleration, challenged by globalisation and weaponised to erode the space for informed societal debate.

So, what can Australia do? If the 2016 US elections were the canary in the coalmine that revealed systemic weaknesses in democracy’s information systems, what can be done to repair and renew them?

Democracies’ information systems support spaces for informed debate and disagreement to drive decisions that positively advance democracy, from policy issues and voting in elections, to corruption investigations and the exploration of new governance concepts.

The openness of democracies' information systems leaves them vulnerable to information attacks which can create feedback loops of self-reinforcing and damaging expectations that undermine rules, institutions and society. The primary motivation of information attacks is to exacerbate pre-existing divisions, sow mistrust and flood the space for informed debate so that it becomes a mechanism for dysfunction and damage to society. Examples range from the Russian Internet Research Agency's inflaming of racial tensions to the Chinese government's use of fake social media accounts to attack Hong Kong protesters.

As Bruce Schneier and Henry Farrell have opined, this is Democracy’s Dilemma: the open forms of input and exchange that it relies on can be weaponized to inject falsehood and misinformation that erode democratic debate.

We need to map and evaluate these new vulnerabilities to ensure democracy's core functions are resilient in a world that will only become more interconnected with the fourth industrial revolution.

Injecting falsehood and misinformation into democracy is not a new vulnerability. However, the range of methods used to mount attacks against open information systems has widened. The weaponisation of social media, automation, machine learning, the internet of things and soon quantum computation (which recently may have achieved supremacy) is making, and will continue to make, attacks cheaper, easier to scale and more deniable.

When citizens make political choices in a democracy, they rely on synthesising information from different people and sources to come to a decision. Online, that information does not always flow freely and cannot be counted on to be authentic.

If the space for informed debate is compromised or undermined by attack, whether it be a parliament, newspaper, departmental inquiry, court of law or public submissions process, three things occur:

The first is the destabilisation of common political grounds for disagreement. If climate change isn’t real, smoking doesn’t cause cancer and vaccines do not ensure children survive preventable illnesses, factually informed premises for debate are lost. This inhibits the ability to solve these environmental and public health challenges, by inducing false definitional doubt and semantic paralysis.

The second is that information attacks, which rely on manipulation, exaggeration and disinformation, require a more nuanced response, different from warfare's blunt concepts of deterrence, defence and counter-attack. The resilience and quick restoration of the space for informed debate is far more important. It lessens the damage to other societal decisions affected by the disruption and re-establishes the integrity of the flows from which to gather information for a response. This does not rule out counter-attack as an option, but in an age when no commonly agreed form of cyber-deterrence exists, the creativity democratic debate allows in finding a long-term solution that neutralises attackers should remain paramount.

The third is more subtle. It may be that the structure of the network itself can skew decision-making, corrupting the process leading to a decision. As a recent study from researchers at MIT, the University of Pennsylvania and the University of Houston revealed, social media networks can bias collective decision-making by margins of greater than 20 percent. The team conducted 100 online experiments with more than 2,520 human subjects and modelled real-world gerrymandering on platforms such as Twitter to test the effects of information gerrymandering. When two parties are of equal size and each player has the same amount of influence in the network, a small number of "zealots" in the network can use their influence to disproportionately sway collective decision-making. The researchers found that social media platforms are particularly vulnerable because they allow users to block dissenting voices and create filter bubbles, while providing adversarial actors with the anonymity to exploit people through precision-tailored messages based on their profiles. This demonstrates that new online platforms may not be suitable for the high-quality informed debate democracy requires.

In addition to these issues there are aspects of this problem which complicate the response.

It is necessary to acknowledge that internal threats are just as dangerous as external ones. In the 2019 Indian general election, the majority of disinformation that invaded public media streams was generated by cyber armies of volunteers supporting the major parties, including the victors, the Bharatiya Janata Party (BJP). These armies spread false stories through popular Indian media such as WhatsApp. In one instance, the "Telangana manifesto", which falsely purported to be a document demanding a Muslim-only Congress, was spread as a strategy to exacerbate Hindu-Muslim tensions, benefitting the BJP's Hindu nationalist platform. This reveals that internal checks and balances to restrain political parties from engaging in information attacks that undermine their own democracy for political gain are just as important as protections against external threats.

The second aspect is the complexity of the optimisation problem faced by global social media platforms. When building a piece of hardware like a television, it is possible to design each unit to a country or region's safety standards, such as the use of different power plugs.


Image credit: Alex Stamos

When designing a global social media platform, however, the trade-offs between options become nonlinear and unpredictable at scale. The diagram above shows the trade-offs of democratic values versus economic drivers, which social media platforms decide for hundreds of democratic and non-democratic jurisdictions every day. These decisions currently exist beyond democracies' capacity and power to decide. However, they are not immune to public outcry, as seen after the recent mass shootings in New Zealand, which forced Facebook to change its livestreaming rules.

In a world where information attacks are cheap and virality trumps veracity, what potential solutions to improve democracy's resilience can Australia consider?

Including information attacks that compromise democracy in Australia's cybersecurity policy and legal frameworks is a necessity. Government guidance on measures to prevent, identify, expose and mitigate information attacks requires investment, as do updatable education programs for government and citizens on how to spot and expose information attacks to enhance societal resilience. This is a basic function of maintaining trust in information.

Delineating responsibility is also key. In the last Federal election, the Australian Electoral Commission was tasked with identifying disinformation in the media, despite not being equipped with the capability, budget or enforcement powers to police and investigate all media and social media platforms. The breaches it did identify were treated as breaches of electoral advertising rules, and punishments for malicious actors were negligible. Establishing and equipping a specialist independent law enforcement team to intervene and use digital forensics to trace and arrest offenders could raise the cost of undermining democracy significantly. However, defining the boundaries of what constitutes new offences while balancing freedom of speech would require considerable thought and technical understanding by the legal community.

We must also invest in thinking about the policy implications of new technology issues for democracy. From combatting synthetic media such as voice cloning and human image synthesis (so-called deepfakes), which can be used to sow mistrust in information attacks, to the conceptual trade-offs and power imbalances between large global tech companies and democracies, the Australian government needs an independent multidisciplinary unit that can consider the operational and strategic implications of these issues. The United States once had an Office of Technology Assessment, which assessed technological advances and translated their implications for policymakers. A similar model that considers whole-of-society effects could be useful.

In order to face significant societal headwinds such as climate change, geopolitical competition and economic reform, Australia needs spaces where its citizens can safely disagree, test solutions and evolve policy through informed, authentic fact-based communication. Acknowledging the limits of online spaces, testing new ones and protecting information flows from attacks designed to undermine democracy will be crucial to the country’s success.

Quantum Computing

Google Achieves Quantum Supremacy?



Feature image via Inverse

By James Der Derian and Gabriella Skoff

Fittingly, the first news flash came from a Google Scholar alert, by way of a post on the NASA website: Google had achieved 'quantum supremacy' with its 53-qubit Sycamore processor. NASA then pulled the article – probably for lack of proper vetting – but the quantum resonance machine plus a few conspiracy theories were already off and running, including a claim from Andrew Yang that classical encryption was in dire jeopardy (as too, perhaps, are his hopes of winning the Democratic presidential nomination).

First coined in 2011 at the 25th Solvay Conference by Caltech theoretical physicist John Preskill, quantum supremacy signifies a critical step in the development of quantum computing, whereby a quantum computer is capable of outperforming a classical computer at a given task. This long-awaited moment among quantum researchers and enthusiasts would mark a major breakthrough in quantum computing that is likely to accelerate research and development of quantum computers. This could lend a significant boost, helping quantum computers to achieve their promise sooner. It might even give quantum sceptics a moment’s pause. It does not mean that all your emails are now readable by Google (though the NSA may be another story).

From the outset, the concept of quantum supremacy has carried a lot of semiotic baggage.  For some, supremacy suggests a competitive race to the finish line or to the top of the charts, as when the Supremes took Motown in the 1960s. For others, the term carries the taint of another kind of race, as when white supremacists’ chants asserted racial superiority in the streets of Charlottesville, Virginia.  

It is not difficult to see why the term quantum supremacy continues to mislead today. Specifically, it signifies a very narrow benchmark of performance, demonstrating that quantum computers will be vastly better at some tasks than classical computers. However, until they scale up in qubits, achieve functioning levels of error-correction and most importantly, become more competitive in cost, it is highly unlikely that quantum computers will challenge the hegemony of classical computers in the near or mid-term. The areas in which they could eventually prove exponentially superior to classical computers include optimization, simulation, sensing and yes, encryption/decryption. If and when the relative utility of quantum computers improves, we can then begin to assess what a quantum advantage will be over classical computers. 

Although the definition of quantum supremacy comes in the neutral gray of science, Preskill's early parsing of the achievement as either 'really, really hard' (perhaps possible to achieve in a few decades) or 'ridiculously hard' (unlikely to occur even within the century) contributed an almost biblical hermeneutic to the eventuality, of a holy grail or deus ex machina that would forever change not only how we compute but also how we would soon live in a quantum age. The flurry of claims and counter-claims over the past week has only added to the super-naturalisation of quantum computing. It might be worth taking a step back, to consider what we know and what we might yet find out.

When Project Q visited Google’s quantum computer team, led by physicist John Martinis of the University of California, Santa Barbara, he told us of the team’s plans to submit its prototype device to a civilian body for testing. Last November, Google reported that ‘Bristlecone’, its 72-qubit superconducting quantum device, would be connected to NASA Ames for testing of quantum supremacy against NASA’s powerful petaflop-scale Pleiades supercomputer (in the top 25 of the world).

After a few months of silence there were unconfirmed reports that Bristlecone proved too error-prone. The team decided to downshift their efforts, instead using 'Sycamore', a 54-qubit chip device. According to the original source, 'Sycamore' passed the necessary two-qubit error rate (below 0.5%) and was able to perform (minus one failed qubit) a superposition calculation of random circuits involving 2⁵⁰ or 2⁶⁰ complex numbers in three minutes and twenty seconds—one that would take today's most advanced supercomputer, known as 'Summit', around 10,000 years to perform. Following the precise meaning of the term, and once peer-reviewed (likely in the next month), Google will be able to claim that Sycamore achieved quantum supremacy over Summit. Score one for deciduous trees over geological features! But stay tuned for Bristlecone (a tree which grows on summits!).


Project Q

Project Q funding renewed as quantum supremacy is announced


No Comments

Project Q has been awarded $US400,000 to complete research into the social, strategic and ethical implications of quantum technologies. Since its inception in 2015, Project Q has received $US1.2 million from the Carnegie Corporation of New York to lead world-first multidisciplinary research into the risks and benefits of quantum innovation. Now heading into its third phase, Project Q’s research is more important than ever.

“When we started Project Q the quantum revolution was generally thought to be decades away. Since then we’ve seen the pace of quantum innovation accelerate exponentially,” said Professor James Der Derian, Director of the Centre for International Security Studies and Chief Investigator of Project Q. “Just this month news leaked that Google had achieved ‘quantum supremacy’ – meaning their quantum computer surpassed the world’s most powerful supercomputers on a particular task.”

Over the past six years, Project Q has grown to become the world’s leading social sciences research project into quantum technology. Noting the novelty of the topic, as well as the traditional separation between the natural and social sciences, Professor Der Derian expressed appreciation for the foresight and support of the Carnegie Corporation of New York for a multidisciplinary investigation such as Project Q.

“One of the great achievements of Project Q is the amazing multinational network of academics, policymakers and industry experts we have brought together to inform our research,” said Der Derian. “Over 220 people have participated in the project, sharing their experience and insights, and helping us make an incredibly complex issue accessible to a broad audience.”

Project Q has made its research available to the general public through an extensive, open-source multimedia library of recorded interviews, lectures and panel discussions, featuring the biggest names in quantum physics and the social sciences.

“Our emphasis on multimedia sets Project Q apart from traditional research projects,” Professor Der Derian said. “It means that when the grant comes to an end we will have produced not only research articles, but an interactive e-book and a feature length documentary about the quantum race.”

As the third and final phase of Project Q gets underway, the project is going global. “Building on our networks within the University of Sydney, including the Sydney Nanoscience Institute and the new Sydney Quantum Academy, we are now expanding and taking Project Q on the road. We’re planning a series of boot camps, workshops and conferences in the United States, Canada, the UK and eventually Armenia, whose President is a former theoretical physicist and advocate of what he calls ‘quantum politics’.”

Whether it’s in the field of technology, politics or international relations, the quantum future is coming faster than we thought. Project Q is preparing for this exciting new world.

Quantum Theory

Why Has Science Stopped Trying to Understand Quantum Theory?


1 Comment

Feature image via BBC Future

It is practically a truism that no one really understands quantum mechanics; yet quantum theory reigns as the dominant theory of the very small, just as relativity does of the very large. This is a paradox for a theory so fundamental that it underpins our current understanding of phenomena such as atomic decay and why stars shine, as well as how lasers and transistors work, which are embedded in just about everything. Physicists use and rely on quantum theory to predict the outcomes of experiments, but they have stopped, as Dr Sean Carroll asserts in a recent op-ed for the New York Times, trying to understand how it works. While there are many contributing factors to this conundrum, a salient inhibitor is the siloed way in which we have come to think about the discipline of science and the way that the modern academic system reflects and perpetuates this way of thinking.

The barriers to understanding quantum theory begin with the fact that there are some truly sticky bits of theory that simply cannot be accounted for within our existing scientific frameworks. One such example is the measurement problem: a quantum system exists in a superposition of states until it is observed or measured, whereupon it collapses into a single definite state. The fundamental challenge posed by this problem is that science presupposes the existence of a measurable, objective world. The centrality of the observer in quantum interactions defies this assumption by asserting that reality is observer-dependent and therefore not fixed. This idea alone confronts science in a fundamental way, requiring an interpretation of reality that holds space for the “weird” and the “strange” of quantum mechanics—something that mathematics alone has not yet been able to provide.
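To make the idea of collapse a little more concrete, the toy sketch below (ours, and deliberately simplified) classically simulates the measurement statistics of a qubit in an equal superposition using the standard textbook Born rule; it illustrates the prediction of the mathematics rather than endorsing any particular interpretation.

```python
# Toy illustration of quantum measurement statistics, simulated classically
# with NumPy via the Born rule (a sketch of the textbook maths, not a
# simulation of any real device).
import numpy as np

rng = np.random.default_rng(seed=0)

# Equal superposition (|0> + |1>) / sqrt(2): two amplitudes describe the state at once.
state = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: outcome probabilities are the squared magnitudes of the amplitudes.
probabilities = np.abs(state) ** 2          # -> [0.5, 0.5]

# Each measurement "collapses" the superposition to a single definite outcome, 0 or 1.
outcomes = rng.choice([0, 1], size=1000, p=probabilities)
print("measured 0:", int((outcomes == 0).sum()), "times;",
      "measured 1:", int((outcomes == 1).sum()), "times")
```

Which outcome appears on any single run is irreducibly random; the theory predicts only the statistics over many repetitions, which is precisely what makes the observer’s role so philosophically vexing.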

This issue ignited a deep rift among the brightest minds of physics during the mid-1920s. Albert Einstein, representing the side that rejected the proposal that the quantum world could be characterized by probabilities rather than certainties, is famously quoted as claiming, “God does not play dice with the universe”. The interpretation of quantum mechanics that prevailed is the Copenhagen Interpretation, which asserts the rather less-than-satisfying conclusion that we simply cannot know more about quantum mechanics than what we can measure using its equations. Understanding the theory was thus placed in the “too hard” basket.

Still, divergent theories from quantum theory’s inception into the 1950s attempted to make sense of this phenomenon. These theories had an undeniably philosophical bent, and most of their proponents were shunned from science altogether. In 1957, for example, Hugh Everett constructed a theory to account for quantum superposition with his Many Worlds Interpretation (MWI). Essentially, Everett’s MWI proposes that for every state in a quantum superposition, the system simultaneously takes each potential path, creating multiple, coexistent physical realities. Mainstream physicists ridiculed Everett for what they considered a scientifically blasphemous postulation, a fact which no doubt contributed to his move from science to defence analysis shortly after he submitted his dissertation.

Scientists’ resistance toward a multidisciplinary understanding of a scientific problem, however, is a relatively new phenomenon. For centuries, science and philosophy were pursued as one. In fact, the term ‘scientist’ was not even coined until the 19th century; before that, great names such as Galileo and Newton considered themselves ‘natural philosophers’ rather than ‘scientists’. Even Einstein, Heisenberg, Dirac and their cohort, the fathers of quantum mechanics, were schooled in European philosophy. This deep grounding in both the “soft” and “hard” sciences influenced their way of thinking, the types of questions they posed and ultimately the theories they constructed. It enabled them to think beyond the boundaries of what was generally accepted at the time and to construct new ideas that came to be accepted as fact.

However, since an epistemological transformation in the seventeenth and eighteenth centuries, which produced the distinction of “science” as the empirical investigation of phenomena, science and philosophy have become increasingly separate disciplines. Though gradual, this disciplinary divorce has become ingrained in society with the help of most knowledge institutions worldwide, propagating an isolationist understanding of these and other disciplines. This poses a significant challenge to the kind of fruitful multidisciplinary thinking that has produced nearly all of science’s greatest discoveries to date.

Beyond reifying the isolation of disciplines through course structures, universities also play a significant role in shaping academic discovery by prioritising certain areas of research over others. As Carroll elaborates:

“Few modern physics departments have researchers working to understand the foundations of quantum theory. On the contrary, students who demonstrate an interest in the topic are gently but firmly — maybe not so gently — steered away, sometimes with an admonishment to “Shut up and calculate!” Professors who become interested might see their grant money drying up, as their colleagues bemoan that they have lost interest in serious work.”

This situation is compounded by the fact that the metrics by which academic researchers are hired, retained and promoted have undergone a transformation over the last half-century. During this time, research culture has been reshaped by the dawn of the Internet, which has enabled an open and thriving digital research economy. At the same time, an associated shift toward metrics of productivity, quantified largely through research output, has become dominant across knowledge institutions. These changes frame the pervasive expectation that academic researchers should devote the majority of their time to publishing on certain topics and in certain journals in order to remain relevant and successful. Among other challenges, this focus on publication as the chief metric of standing in the academic sciences has led many to game the system, with the resultant focus on quantity of output often detracting from its quality.

This phenomenon, known in academia as the “publish or perish” culture—that is, the pressure on academics to continuously publish work in order to sustain and further their careers—has left academic scientists with little spare time for creative thinking. This environment has been lamented by Peter Higgs, the physicist who predicted the Higgs boson, who doubts he could have achieved that breakthrough in today’s academic system:

“It’s difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964,” Higgs said. “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.”

Exploratory and imaginative thought requires ample time and space, as well as an acceptance that, by the very nature of trying new things, the researcher will encounter far more twists, turns and dead-ends than solutions. While these qualities do not fit well into the “publish or perish” framework, it is well established that they are of critical value to innovation. Discovery demands that we challenge the very prejudices that have become ingrained in our conceptual structures. To do this, one must have the freedom and encouragement to shatter them, rather than be required to work within systems that reinforce them.

Artificial Intelligence, Quantum International Relations, Quantum Research

India Races Toward Quantum Amid Kashmir Crisis


No Comments

Amid troubling news of serious human rights violations carried out in India-controlled Jammu and Kashmir—including a debilitating digital blockade lasting over two weeks—Indian Prime Minister Narendra Modi signed an agreement with France for a landmark technological collaboration in quantum and artificial intelligence (AI). The Indo-French collaboration between French company Atos and India’s Centre for Development of Advanced Computing (C-DAC) will establish a Quantum Computing Experience Centre at C-DAC’s headquarters in Pune, India and deliver an Atos Quantum Learning Machine. The high technology partnership, which “advocate[s] a vision of digital technologies that empowers citizens, reduces inequalities, and promotes sustainable development”, sits against the controversial backdrop of India’s current actions in the Kashmir crisis and presents an interesting view into the intersection of international politics and quantum technologies.

During his first term, Narendra Modi began to position India as a global technology hub, putting its innovation sector on the map by embracing international investment and collaboration. The advancements made over the last five years as a result have helped to fuel India’s socioeconomic development and cement its place on the global stage as a major emerging economy with a vibrant technology sector. Now in his second term, Modi seeks to apply a digital tax to global technology giants like Google and Facebook on their activities in India. Though this policy shift has been identified as a potential disincentive for Big Tech to contribute to India’s start-up space, Modi has nevertheless continued to cultivate a tech-forward name for his government. His “New India” government focuses on sustainable development and emerging technologies, especially agricultural technology, AI and quantum.

Within this context, India’s national quantum technology research and development capacity has blossomed at a rapid pace, especially with regard to quantum mechanical theory and theoretical physics research and software development. However, unlike the top competitors in quantum computing such as China and the U.S., India lacks a strong quantum computing hardware industry, a challenge which could be exacerbated by Modi’s Big Tech taxation policy. In order to supplement research activities in its burgeoning quantum and AI sectors, Modi has instead turned toward collaboration with international governments as a vehicle to boost domestic technological development. For example, India’s recently established fund-to-fund partnership with Japan will support over 100 start-ups in AI and IoT. Likewise, the new Indo-French partnership is a critical piece of the puzzle for India, promising to help boost its national deficiency in applied quantum computing development and help India to become a leader in the quantum space.

With international partnerships playing such a key role in Modi’s plan for the development and growth of India’s quantum computing and AI industries, one might expect that the country’s actions in India-controlled Jammu and Kashmir would damage its international standing. That expectation, however, is plainly contradicted by the signing of the Indo-French bilateral agreement. The agreement, which stipulates French alignment with India as a partner in sustainable development and emerging technologies, outlines the countries’ shared commitment to “an open, reliable, secure, stable and peaceful cyberspace”. It was signed even as India, the world leader in internet shutdowns, enacted a digital lockdown on Kashmir for the 51st time in 2019 alone. This stands in stark contrast to the stated objectives of the partnership and demonstrates the separation of business from peace-building priorities on an international scale.

The Kashmir conflict, a turbulent territorial dispute between India, Pakistan and China, dates back to the partition of 1947 and has already incited four wars between India and Pakistan. Kashmir, dubbed one of the world’s most militarized zones, is of strategic value to both countries and is India’s only Muslim-majority region. The recent conflict was spurred by a series of brutal attacks and rebellions since February 2019, which ultimately led the Modi government to revoke India-controlled Kashmir’s “special status” of autonomy granted under Article 370 of the Indian constitution. Given this complex history and characterization, India’s fresh assault on the region has led many (including Pakistan’s own Prime Minister) to fear an escalation of violence that could result in a worst-case-scenario nuclear face-off between India and Pakistan.

Whether or not it is representative of the true feelings of Modi’s “New India”, Indian national media has been almost unequivocally supportive of the revocation of Article 370. French comments, by contrast, lean toward neutrality, tactfully holding the situation at arm’s length while urging bilateral negotiation between India and Pakistan. Whether or not the two countries come to a peaceful resolution, it appears that international investment in Indian quantum and AI development will not waver in the face of the Kashmir conflict. Ironically, as India sprints to catch up in the quantum race with the support of France and other international allies, the legacy of the last technological arms “race”, the nuclear one, looms heavy over the continent.