Quantum Applications, Quantum Computing

Transforming Drug Development: A Critical Role for Quantum Computing



Feature image via Nature

With news of Google’s possible achievement of quantum supremacy, quantum computing’s promise in a diversity of fields grows ever-more tangible. Drug discovery is just one of a number of areas in which quantum computing is expected to play a disruptive role. On average, it takes over ten years and billions of dollars to bring a potentially life-saving new drug to market. Quantum computers promise to revolutionize the currently expensive, difficult and lengthy process of drug discovery and development, by expanding the search for new chemicals to treat some of the world’s most deadly diseases, speeding up the creation of new drugs and cutting the costs of their development. At this prospective turning point in the advancement of quantum computing, Project Q takes stock of quantum applications in drug research and development.

Currently, researchers rely on computer models and simulations (M&S) to analyse how atoms and molecules behave, in order to develop drugs that will have optimal positive effects and minimal harmful ones. However, while of critical value to this process, today’s M&S tools quickly reach the limits of their utility in the complex and computationally intensive process of molecular simulation. The goal of molecular simulation is to find a compound’s most stable configuration, known as its ground state. To do this, researchers use M&S systems to simulate the interactions between each of that compound’s electrons, in each atom, in order to test how they will react to one another. This is a fairly straightforward task, as long as the molecules being tested are simple enough. However, even today’s most powerful supercomputers are only capable of simulating molecules of up to a few hundred atoms, limiting their calculations to only a small fraction of all chemicals that exist.
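To make the scaling problem concrete, here is a minimal sketch (illustrative only, not a real chemistry code) of how a classical solver finds a ground state: it diagonalises the system’s Hamiltonian matrix and takes the lowest eigenvalue. Because the state space of n two-level particles has 2^n dimensions, the matrix, and therefore the cost, grows exponentially with the size of the system:

```python
import numpy as np

def ground_state_energy(hamiltonian):
    """Lowest eigenvalue of a Hermitian matrix = the ground-state energy."""
    return np.linalg.eigvalsh(hamiltonian).min()

# Toy illustration: the state of n two-level particles lives in a
# 2**n-dimensional space, so the matrix a classical solver must
# diagonalise doubles in each dimension with every particle added.
rng = np.random.default_rng(0)
for n in (2, 4, 8):                    # number of two-level particles
    dim = 2 ** n                       # state-space dimension
    m = rng.standard_normal((dim, dim))
    h = (m + m.T) / 2                  # symmetrise: a valid (real) Hamiltonian
    print(f"n={n}: {dim}x{dim} matrix, ground state = {ground_state_energy(h):.3f}")
```

At a few dozen particles the matrix no longer fits in any classical memory, which is the wall the article describes.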

For a whole host of larger molecules that could be used to make new, life-saving drugs, researchers currently have no better option than to approximate how a molecule may react and then test its behaviour in trials. This process is incredibly inefficient: about ninety percent of drugs that do reach clinical trials fail during the first phase. Adding to this complexity, M&S methods are unable to calculate the quantum interactions that help determine the characteristics of a molecule. A technological update in drug discovery is long overdue.

Ultimately, the main technological limitation facing drug research and development today is that classical computers struggle with what are known as optimization problems—finding the best solution by testing all feasible solutions—a process which is incredibly time and energy intensive. Quantum computers, in theory, are extremely good at optimization problems. This is due to their ability to leverage parallel states of quantum superposition, which enables them to model all possible outcomes of a problem at once, including the quantum interactions that happen at the particle level. Theoretically, as they reach their promised computational capacity, quantum computers should be able to rapidly process massive amounts of data.
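A toy example of why exhaustive optimization is so expensive: the hypothetical `brute_force_best` function below scores every n-bit candidate (a stand-in for, say, candidate molecular configurations) and keeps the best, which forces it to test all 2^n possibilities:

```python
from itertools import product

def brute_force_best(n, score):
    """Exhaustively search all 2**n bit strings for the highest score."""
    best, best_score, tested = None, float("-inf"), 0
    for bits in product((0, 1), repeat=n):
        tested += 1
        s = score(bits)
        if s > best_score:
            best, best_score = bits, s
    return best, best_score, tested

# With a trivial scoring rule (count the 1s), the all-ones string wins,
# but only after testing every one of the 2**10 = 1024 candidates.
best, _, tested = brute_force_best(10, lambda bits: sum(bits))
print(best, tested)
```

Each extra bit doubles the search, so at the scale of real chemical spaces this classical approach becomes hopeless; the article’s claim is that quantum hardware may eventually attack such searches more efficiently.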

In 2017, IBM Q researchers achieved the most complex molecular simulation ever modelled on a quantum computer, demonstrating the potential value of quantum computers to the pharmaceutical industry. The research suggests that, if applied to drug discovery, quantum computers could model and test new drugs through molecular simulation far more comprehensively and far more quickly than classical computers, effectively slashing the costs of novel drug research and development. Aside from empowering researchers to discover new treatments for a range of diseases, quantum computing could also help bring new drugs to trial more quickly and improve the safety of trials.

Already, innovators and researchers working on quantum applications in drug development are making waves in the pharmaceutical industry. Abhinav Kandala, part of the IBM Q team that simulated the largest molecule on a quantum computer back in 2017, has continued to push the boundaries of quantum computing in order to make it applicable to industry faster. His work focuses on a major challenge in quantum computing: improving accuracy. Quantum computers at their current stage are still drastically error-prone, hampering their utility for application in drug discovery and development. One of the MIT Technology Review’s 35 Innovators Under 35, Kandala has demonstrated how quantum errors can actually be harnessed to boost accuracy in quantum computations, regardless of the number of qubits. General advancements like this could help to bring the benefits of quantum computing to industry sooner.

There are a number of young companies emerging in the pharmaceutical research space, looking to the computational boost and projected accuracy that quantum computing could lend to a range of challenges in diagnostics, personalised medicine and treatments. As quantum computers are not yet advanced enough to stand alone, most of these global start-ups rely on a blend of emerging and classical technologies. Especially prominent is the blended technological approach to machine learning and quantum computing, a topic we have previously explored here.

Another of the MIT Technology Review’s 35 Innovators Under 35, Noor Shaker leads a company that is harnessing these two emerging technologies in order to speed up the creation of new medicines. Her company, GTN LTD, is producing technology that layers the processing power of quantum computing with machine learning algorithms to sort through massive amounts of chemical data in search of new molecules that could be used in disease treatment and prevention. Using this method, GTN (a Syrian female-run company) hopes to build a critical bridge in healthcare that could help to lessen the gap in access to and quality of healthcare for people living in developing countries. GTN LTD’s application of these two technologies is just one example of the numerous ways in which they could be used to create and spread benefit across global healthcare systems.

Machine learning projects are already being implemented as part of a growing trend in digital healthcare, providing a helpful starting point for discussion of how other emerging technologies like quantum computing could also impact the sector. A recent article in Nature explores how even the most well-meaning of artificial intelligence (AI) applications in healthcare can lead to harmful outcomes for certain vulnerable sectors of society. The examples investigated by author Linda Nordling demonstrate the need to apply a careful social-impact and sustainability methodology throughout the process. As Nordling explains, many machine learning-based projects in healthcare can reinforce inequalities rather than help to level the playing field, if equity is not a factor that is thoroughly considered and addressed throughout the entire research and development process.

Of course, every technology is different. The challenges confronting AI applications in the healthcare sector may not translate directly to the risks that quantum computing could pose. However, there are certainly lessons to be learned. For all emerging technologies, there is as much potential to narrow the gap between the rich and the poor as there is to widen it. Whether development bends toward the helpful or the harmful hinges on many factors, including accountability and regulation. Fundamentally, the incorporation of a methodological focus on equity and inclusion, from the inception to the employment of an emerging technology, is critical.

The application of quantum computing in drug discovery is no exception to this rule. The development of this emerging technology, both alone and in concert with other technologies, has the potential to make a significantly positive impact on society. With proper care taken to ensure ethical research, development and application, the trickle-down effects of the quantum revolution could improve the lives of many. It is thus imperative that we seek to understand the impacts that quantum computing could have in the pharmaceutical industry if we want to ensure its potential to help discover cures to intractable diseases like cancer and Alzheimer’s becomes a benefit that is distributed equitably across the globe. This is not a problem for quantum to solve, but for society.

Uncategorized

Creating Space for Informed Democracy



Nicolas Cage replaces Tom Hiddleston in a deepfake of Thor: Ragnarok. Image Credit: ABC News

“I have a foreboding of an America in my children’s and grandchildren’s time – when the United States is a service and information economy; when awesome technological powers are in the hands of very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish what feels good and what’s true we slide without noticing, back into superstition and darkness.

We’ve arranged a global civilisation in which the most crucial elements – transportation, communications and all other industries, agriculture, medicine, education, entertainment, protecting the environment; even the key democratic institution of voting profoundly depend on science and technology. We have also arranged things so that almost no one understands science and technology. This is a prescription for disaster. We might get away with it for a while, but sooner or later this combustible mixture of ignorance and power is going to blow up in our faces.”

So wrote the famed scientist Carl Sagan in 1995. Almost a quarter of a century later, here we are in the fallout of his foresight. Around the world, the open information systems we rely on in democracies have been degraded by the pace of scientific and technological acceleration, challenged by globalisation and weaponised to erode the space for informed societal debate.

So, what can Australia do? If the 2016 US elections were the canary in the coalmine that revealed systemic weaknesses in democracy’s information systems, what can be done to repair and renew them?

Democracies’ information systems support spaces for informed debate and disagreement to drive decisions that positively advance democracy, from policy issues and voting in elections, to corruption investigations and the exploration of new governance concepts.

The openness of democracies’ information systems leaves them vulnerable to information attacks, which can create feedback loops of self-reinforcing and damaging expectations that undermine rules, institutions and society. The primary motivation of information attacks is to exacerbate pre-existing divisions, sow mistrust and flood the space for informed debate so that it becomes a mechanism for dysfunction and damage to society. Examples range from the Russian Internet Research Agency’s inflaming of racial tensions to the Chinese government’s use of fake social media accounts to attack Hong Kong protesters.

As Bruce Schneier and Henry Farrell have opined, this is Democracy’s Dilemma: the open forms of input and exchange that it relies on can be weaponized to inject falsehood and misinformation that erode democratic debate.

We need to map and evaluate these new vulnerabilities to ensure democracy’s core functions are resilient in a world that will only become more interconnected with the fourth industrial revolution.

Injecting falsehood and misinformation into democracy is not a new vulnerability. However, the range of methods used to mount attacks against open information systems has widened. The weaponisation of social media, automation, machine learning, the internet of things and, soon, quantum computation (which recently may have achieved supremacy) continues to make attacks cheaper, easier to scale and more deniable.

When citizens make political choices in a democracy, they rely on synthesising information from different people and sources to come to a decision. Online, that information does not always flow freely and cannot be counted on to be authentic.

If the space for informed debate is compromised or undermined by attack, whether it be a parliament, newspaper, departmental inquiry, court of law or public submissions process, three things occur:

The first is the destabilisation of common political grounds for disagreement. If climate change isn’t real, smoking doesn’t cause cancer and vaccines don’t protect children from preventable illnesses, then factually informed premises for debate are lost. This inhibits the ability to solve these environmental and public health challenges, by inducing false definitional doubt and semantic paralysis.

The second is that information attacks, which rely on manipulation, exaggeration and disinformation, require a more nuanced response than warfare’s blunt concepts of deterrence, defence and counter-attack. The resilience and quick restoration of the space for informed debate is far more important. It lessens the damage to other societal decisions affected by the disruption and re-establishes the integrity of the flows from which to gather information for a response. This does not rule out counter-attack as an option, but in an age when no commonly agreed form of cyber-deterrence exists, the creativity democratic debate allows in finding a long-term solution that neutralises attackers should remain paramount.

The third is more subtle. It may be that the structure of the network itself can skew decision-making, corrupting the process leading to a decision. As a recent study from researchers at MIT, the University of Pennsylvania and the University of Houston revealed, social media networks can skew bias in collective decision-making by margins of greater than 20 percent. The team conducted more than 100 online experiments with 2,520 human subjects and modelled real-world gerrymandering on platforms such as Twitter to test the effects of information gerrymandering. Even when two parties are of equal size and each player has the same amount of influence in the network, a small number of “zealots” can use their influence to disproportionately sway collective decision-making. The researchers found that social media platforms are particularly vulnerable, because they allow users to block dissenting voices and create filter bubbles, while providing adversarial actors with the anonymity to exploit people through precision-tailored messages based on their profiles. This demonstrates that new online platforms may not be suitable for the high-quality informed debate democracy requires.
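The zealot effect can be illustrated with a minimal sketch (our own toy model, not the researchers’ actual experiment): voters start evenly split and repeatedly adopt the majority view among the accounts they follow, while a couple of zealots never update and are followed by everyone.

```python
import random

def simulate(n_voters=100, n_zealots=2, rounds=20, seed=1):
    """Toy influence network: voters adopt the majority view they see."""
    rng = random.Random(seed)
    opinions = [i % 2 for i in range(n_voters)]       # perfect 50/50 split
    zealot_views = [1] * n_zealots                    # zealots back party 1, never change
    # each voter follows 4 random peers, plus every zealot
    following = [rng.sample(range(n_voters), 4) for _ in range(n_voters)]
    for _ in range(rounds):
        opinions = [
            1 if sum(opinions[p] for p in following[v]) + sum(zealot_views)
                 > (4 + n_zealots) / 2 else 0
            for v in range(n_voters)
        ]
    return sum(opinions) / n_voters                   # share backing party 1

# Despite the even start, the two fixed voices drag the network toward party 1.
print(f"Share backing party 1 after influence: {simulate():.2f}")
```

Because every voter counts the zealots’ fixed votes alongside a handful of genuinely shifting peers, a tiny committed minority ends up deciding a nominally balanced contest, which is the qualitative point of the study.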

In addition to these issues there are aspects of this problem which complicate the response.

It is necessary to acknowledge that internal threats are just as dangerous as external ones. In the 2019 Indian general election, the majority of disinformation that invaded public media streams was generated by cyber armies of volunteers supporting the major parties, including the victors, the Bharatiya Janata Party (BJP). These armies spread false stories through popular Indian media channels such as WhatsApp. In one instance, the “Telangana manifesto”, which falsely purported to be a Congress document making promises exclusively to Muslims, was spread as a strategy to exacerbate Hindu-Muslim tensions, benefitting the BJP’s Hindu nationalist platform. This reveals that internal checks and balances restraining political parties from engaging in information attacks that undermine their own democracy for political gain are just as important as preventing external threats.

The second aspect is the complexity of the optimisation problem faced by global social media platforms. When building a piece of hardware like a television, it is possible to design it to meet each country or region’s safety standards, such as the use of different power plugs.


Image credit: Alex Stamos

When designing a global social media platform, however, the trade-offs between options become nonlinear and unpredictable at scale. The diagram above shows the trade-offs between democratic values and economic drivers, which social media platforms decide for hundreds of democratic and non-democratic jurisdictions every day. These decisions currently exist beyond democracies’ capacity and power to decide. However, they are not immune to public outcry, as seen after the recent mass shootings in New Zealand, which forced Facebook to change its livestreaming rules.

In a world where information attacks are cheap and virality trumps veracity, what potential solutions to improve democracy’s resilience can Australia consider?

Including information attacks that compromise democracy in Australia’s cybersecurity policy and legal frameworks is a necessity. Government guidance on measures to prevent, identify, expose and mitigate information attacks requires investment, as does updatable education programs for government and citizens on how to spot and expose information attacks to enhance societal resilience. This is a basic function of maintaining trust in information.

Delineating responsibility is also key. In the last Federal election, the Australian Electoral Commission was tasked with identifying disinformation in the media, despite not being equipped with the capability, budget or enforcement powers to police and investigate all media and social media platforms. The breaches it did identify fell under electoral advertising rules, and punishments for malicious actors were negligible. Establishing and equipping a specialist independent law enforcement team to intervene and use digital forensics to trace and arrest offenders could raise the cost of undermining democracy significantly. However, defining the boundaries of what constitutes new offences while balancing freedom of speech would require considerable thought and technical understanding from the legal community.

We must also invest in thinking about the policy implications of new technology issues for democracy. From combatting synthetic media such as voice cloning and human image synthesis (so-called deepfakes), which can be used to sow mistrust in information attacks, to the conceptual trade-offs and power imbalances between large global tech companies and democracies, the Australian government needs an independent multidisciplinary unit that can consider the operational and strategic implications of these issues. The United States once had an Office of Technology Assessment, which assessed technological advances and translated their implications for policymakers. A similar model that considers whole-of-society effects could be useful.

In order to face significant societal headwinds such as climate change, geopolitical competition and economic reform, Australia needs spaces where its citizens can safely disagree, test solutions and evolve policy through informed, authentic fact-based communication. Acknowledging the limits of online spaces, testing new ones and protecting information flows from attacks designed to undermine democracy will be crucial to the country’s success.

Quantum Computing

Google Achieves Quantum Supremacy?



Feature image via Inverse

By James Der Derian and Gabriella Skoff

Fittingly, the first news flash came from a Google Scholar alert, by way of a post on the NASA website: Google had achieved ‘quantum supremacy’ with its 53-qubit Sycamore processor. NASA then pulled the article – probably for lack of proper vetting – but the quantum resonance machine plus a few conspiracy theories were already off and running, including a claim from Andrew Yang that classical encryption was in dire jeopardy (as are his hopes of winning the Democratic presidential nomination).

First coined in 2011 at the 25th Solvay Conference by Caltech theoretical physicist John Preskill, quantum supremacy signifies a critical step in the development of quantum computing: the point at which a quantum computer is capable of outperforming a classical computer at a given task. This long-awaited moment among quantum researchers and enthusiasts would mark a major breakthrough, one likely to accelerate the research and development of quantum computers and help them achieve their promise sooner. It might even give quantum sceptics a moment’s pause. It does not mean that all your emails are now readable by Google (though the NSA may be another story).

From the outset, the concept of quantum supremacy has carried a lot of semiotic baggage.  For some, supremacy suggests a competitive race to the finish line or to the top of the charts, as when the Supremes took Motown in the 1960s. For others, the term carries the taint of another kind of race, as when white supremacists’ chants asserted racial superiority in the streets of Charlottesville, Virginia.  

It is not difficult to see why the term quantum supremacy continues to mislead today. Specifically, it signifies a very narrow benchmark of performance, demonstrating that quantum computers will be vastly better at some tasks than classical computers. However, until they scale up in qubits, achieve functioning levels of error-correction and most importantly, become more competitive in cost, it is highly unlikely that quantum computers will challenge the hegemony of classical computers in the near or mid-term. The areas in which they could eventually prove exponentially superior to classical computers include optimization, simulation, sensing and yes, encryption/decryption. If and when the relative utility of quantum computers improves, we can then begin to assess what a quantum advantage will be over classical computers. 

Although the definition of quantum supremacy comes in the neutral gray of science, Preskill’s early parsing of the achievement as either ‘really, really hard’ (perhaps possible to achieve in a few decades) or ‘ridiculously hard’ (unlikely to occur even within the century) contributed an almost biblical hermeneutic to the eventuality: a holy grail or deus ex machina that would forever change not only how we compute but also how we would soon live in a quantum age. The flurry of claims and counter-claims over the past week only added to the super-naturalisation of quantum computing. It might be worth taking a step back, to consider what we know and what we might yet find out.

When Project Q visited Google’s quantum computer team, led by physicist John Martinis of the University of California, Santa Barbara, he told us of the team’s plans to submit its prototype device to a civilian body for testing. Last November, Google reported that ‘Bristlecone’, its 72-qubit superconducting quantum device, would be connected to NASA Ames for testing of quantum supremacy against NASA’s powerful petaflop-scale Pleiades supercomputer (among the top 25 in the world).

After a few months of silence there were unconfirmed reports that Bristlecone proved too error-prone. The team decided to downshift their efforts, instead using ‘Sycamore’, a 54-qubit chip device. According to the original source, ‘Sycamore’ passed the necessary two-qubit error rate (below 0.5%) and was able to perform (minus one failed qubit) a superposition calculation of random circuits involving 2⁵⁰ or 2⁶⁰ complex numbers in three minutes and twenty seconds—one that would take today’s most advanced supercomputer, known as ‘Summit’, around 10,000 years to perform. Following the precise meaning of the term, and once peer-reviewed (likely in the next month), Google will be able to claim that Sycamore achieved quantum supremacy over Summit. Score one for deciduous trees over geological features! But stay tuned for Bristlecone (a tree which grows on summits!).
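A quick back-of-envelope calculation shows why a state of this size strains even Summit-class machines: merely storing the full state vector of a 53-qubit device means holding 2^53 complex amplitudes, each taking 16 bytes in double precision (8 bytes real, 8 bytes imaginary).

```python
# Memory needed to store the full state vector of a 53-qubit device.
qubits = 53
amplitudes = 2 ** qubits                # one complex amplitude per basis state
bytes_needed = amplitudes * 16          # 16 bytes per double-precision complex
pebibytes = bytes_needed / 2 ** 50      # 1 PiB = 2**50 bytes
print(f"{amplitudes:.2e} amplitudes -> {pebibytes:.0f} PiB of memory")
# -> 9.01e+15 amplitudes -> 128 PiB of memory
```

That is roughly a hundred times the main memory of the largest supercomputers, which is why classical simulation of such circuits must resort to slow, clever workarounds rather than brute force.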

Further readings:

In the news:

Project Q

Project Q funding renewed as quantum supremacy is announced



Project Q has been awarded $US400,000 to complete research into the social, strategic and ethical implications of quantum technologies. Since its inception in 2015, Project Q has received $US1.2 million from the Carnegie Corporation of New York to lead world-first multidisciplinary research into the risks and benefits of quantum innovation. Now heading into its third phase, Project Q’s research is more important than ever.

“When we started Project Q the quantum revolution was generally thought to be decades away. Since then we’ve seen the pace of quantum innovation accelerate exponentially,” said Professor James Der Derian, Director of the Centre for International Security Studies and Chief Investigator of Project Q. “Just this month news leaked that Google had achieved ‘quantum supremacy’ – meaning their quantum computer surpassed the world’s most powerful supercomputers on a particular task.”

Over the past six years, Project Q has grown to become the world’s leading social sciences research project into quantum technology. Noting the novelty of the topic, as well as the traditional separation between the natural and social sciences, Professor Der Derian expressed appreciation for the foresight and support of the Carnegie Corporation of New York for a multidisciplinary investigation such as Project Q.

“One of the great achievements of Project Q is the amazing multinational network of academics, policymakers and industry experts we have brought together to inform our research,” said Der Derian. “Over 220 people have participated in the project, sharing their experience and insights, and helping us make an incredibly complex issue accessible to a broad audience.”

Project Q has made its research available to the general public through an extensive, open-source multimedia library of recorded interviews, lectures and panel discussions, featuring the biggest names in quantum physics and the social sciences.

“Our emphasis on multimedia sets Project Q apart from traditional research projects,” Professor Der Derian said. “It means that when the grant comes to an end we will have produced not only research articles, but an interactive e-book and a feature length documentary about the quantum race.”

As the third and final phase of Project Q gets underway, the project is going global. “Building on our networks within the University of Sydney, including the Sydney Nanoscience Institute and the new Sydney Quantum Academy, we are now expanding and taking Project Q on the road. We’re planning a series of boot camps, workshops and conferences in the United States, Canada, the UK and eventually Armenia, whose President is a former theoretical physicist and advocate of what he calls ‘quantum politics’.”

Whether it’s in the field of technology, politics, or international relations the quantum future is coming faster than we thought. Project Q is preparing for this exciting new world.

Quantum Theory

Why Has Science Stopped Trying to Understand Quantum Theory?



Feature image via BBC Future

It is practically a truism that no one really understands quantum mechanics; yet quantum theory reigns as the dominant theory of the very small just as relativity does of the very large. This remains a paradox for a theory so fundamental that it provides the basis for our current understanding of phenomena such as atom decay and why stars shine, as well as how lasers and transistors work, which are embedded in just about everything. Physicists use and rely on quantum theory to predict the outcomes of experiments but they have stopped, as Dr Sean Carroll asserts in a recent op-ed for the New York Times, trying to understand how it works. While there are many contributing factors to this conundrum, a salient inhibitor is the siloed way in which we have come to think about the discipline of science and the way that the modern academic system reflects and perpetuates this way of thinking.

The barriers to understanding quantum theory begin with the fact that there are some truly sticky bits of theory that simply cannot be accounted for within our existing scientific frameworks. One such example is quantum’s measurement problem: a quantum waveform exists in a superposition of all its possible states until it is observed or measured, whereupon it collapses into a single state. The fundamental challenge posed by this problem is that science supposes the existence of a measurable, objective world. The centrality of the role of the observer in quantum interactions defies this assumption by asserting that reality is observer-dependent and therefore non-fixed. This idea alone confronts science in a fundamental way, requiring an interpretation of reality that holds space for the “weird” and the “strange” of quantum mechanics—something that mathematics alone has not yet been able to provide.
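For readers who want the textbook symbols, the measurement problem can be stated in a few lines (a standard sketch for a single two-state system, not a full treatment):

```latex
% A single two-state system (a qubit) in superposition:
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1.
% Measurement yields outcome 0 with probability |\alpha|^2 and outcome 1
% with probability |\beta|^2, after which the state collapses to
% |0\rangle or |1\rangle. The formalism predicts the probabilities
% perfectly but is silent on why or how the collapse occurs --
% that silence is the measurement problem.
```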

This issue in quantum physics ignited a deep rift among the brightest minds of physics during the mid-1920s. Albert Einstein, representing the side of the argument which rejected the proposal that the quantum world could be characterized by probabilities rather than certainties, is famously quoted as claiming, “God does not play dice with the universe”. The interpretation of quantum mechanics that prevailed is the Copenhagen Interpretation, which asserts the rather less-than-satisfying conclusion that we simply cannot know more about quantum mechanics than what we can measure using equations. Understanding the theories was thus placed in the “too hard” basket.

Still, divergent theories from quantum theory’s inception into the 1950s have attempted to make sense of this phenomenon. These theories had an undeniably philosophical bent and resulted in the majority of their postulators being shunned from science altogether. In 1957, for example, Hugh Everett constructed a theory to account for quantum superposition with his Many Worlds Interpretation (MWI). Essentially, Everett’s MWI proposes that for every state of an atom’s superposition in a quantum interaction, that atom simultaneously takes each potential path, creating multiple, coexistent physical realities, one for each state. Mainstream physicists ridiculed Everett for what they considered to be a scientifically blasphemous postulation, a fact which no doubt contributed to his transition from science to defence analysis shortly after he submitted his dissertation.

Scientists’ resistance toward a multidisciplinary understanding of a scientific problem, however, is a relatively new phenomenon. For centuries, the disciplines of science and philosophy were taken in unity. In fact, the term ‘scientist’ was not even coined until the 19th century. Before that, great names such as Galileo and Newton considered themselves ‘natural philosophers’ rather than ‘scientists’. Even Einstein, Heisenberg, Dirac and their cohort, the fathers of quantum mechanics, were schooled in European philosophy. This deep understanding of both the “soft” and “hard” sciences influenced their way of thinking, the types of questions they posed and ultimately the theories they constructed. As such, it enabled them to think beyond the boundaries of what was generally accepted information at that time and allowed them to construct new ideas that came to be known as fact.

However, since an epistemological transformation in the 17th and 18th centuries, which produced the distinction of “science” as the empirical investigation of phenomena, science and philosophy have become increasingly separate disciplines. While it has been a gradual process, this disciplinary divorce has become ingrained in society with the help of most knowledge institutions worldwide, culminating in the propagation of an isolationist understanding of these and other disciplines. This poses a significant challenge to the kind of fruitful multidisciplinary thinking that conceived nearly all of science’s greatest discoveries to date.

Beyond reifying the isolation of disciplines through course structures, universities also play a significant role in shaping academic discovery by prioritising certain areas of research over others. As Carroll elaborates:

“Few modern physics departments have researchers working to understand the foundations of quantum theory. On the contrary, students who demonstrate an interest in the topic are gently but firmly — maybe not so gently — steered away, sometimes with an admonishment to “Shut up and calculate!” Professors who become interested might see their grant money drying up, as their colleagues bemoan that they have lost interest in serious work.”

This situation is compounded by the fact that the metrics by which academic researchers are hired, retained and promoted have undergone a transformation over the last half-century. During this time, research culture has been impacted drastically by the dawn of the Internet, which has enabled an open and thriving digital research economy. At the same time, an associated shift toward metrics of productivity, quantified largely through research output, has become dominant across knowledge institutions. These changes frame the pervasive expectation that academic researchers should devote the majority of their time to publishing on certain topics and in certain journals in order to remain relevant and successful. Among other challenges, this focus on publication as a distinct metric of prestige in the academic sciences has led many to game the system, with the resultant focus on quantity of output often detracting from its quality.

This phenomenon, known in academia as the “publish or perish” culture (the pressure on academics to continuously publish work in order to sustain and further their careers), has left academic scientists with little spare time for creative thinking. The modern academic environment has been lamented by Peter Higgs, the physicist who predicted the Higgs boson, who doubts he could have achieved that breakthrough in today’s academic system:

“It’s difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964,” Higgs said. “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.”

Explorative and imaginative thought requires ample time and space, as well as the expectation that, by the nature of trying new things, the researcher will encounter far more twists, turns and dead ends than solutions. While these qualities do not fit well into the “publish or perish” framework, it is well established that they are of critical value to innovation. Discovery demands that we challenge the very prejudices that have become ingrained in our conceptual structures. To do this, one must have the freedom and encouragement to shatter them, rather than be required to work within systems that reinforce them.

Artificial Intelligence, Quantum International Relations, Quantum Research

India Races Toward Quantum Amid Kashmir Crisis



Amid troubling news of serious human rights violations carried out in India-controlled Jammu and Kashmir, including a debilitating digital blockade lasting over two weeks, Indian Prime Minister Narendra Modi signed an agreement with France for a landmark technological collaboration in quantum and artificial intelligence (AI). The Indo-French collaboration between French company Atos and India’s Centre for Development of Advanced Computing (C-DAC) will establish a Quantum Computing Experience Centre at C-DAC’s headquarters in Pune, India and deliver an Atos Quantum Learning Machine. The high technology partnership, which “advocate[s] a vision of digital technologies that empowers citizens, reduces inequalities, and promotes sustainable development”, sits against the controversial backdrop of India’s current actions in the Kashmir crisis and presents an interesting view into the intersection of international politics and quantum technologies.

During his first term, Narendra Modi began to position India as a global technology hub, putting its innovation sector on the map by embracing international investment and collaboration. The advancements made over the last five years as a result of these activities have helped to fuel India’s socioeconomic development and cement its place on the global stage as a major emerging economy with a vibrant technology sector. Now in his second term, Modi seeks to apply a digital tax to global technology giants like Google and Facebook on their activities in India. Though this policy shift has been identified as a potential barrier to Big Tech’s incentive to contribute to India’s start-up space, Modi has nevertheless continued to cultivate a tech-forward name for his government. His “New India” government focuses on sustainable development and emerging technologies, especially agricultural technology, AI and quantum.

Within this context, India’s national quantum technology research and development capacity has blossomed at a rapid pace, especially with regard to quantum mechanical theory and theoretical physics research and software development. However, unlike the top competitors in quantum computing such as China and the U.S., India lacks a strong quantum computing hardware industry, a challenge which could be exacerbated by Modi’s Big Tech taxation policy. In order to supplement research activities in its burgeoning quantum and AI sectors, Modi has instead turned toward collaboration with international governments as a vehicle to boost domestic technological development. For example, India’s recently established fund-to-fund partnership with Japan will support over 100 start-ups in AI and IoT. Likewise, the new Indo-French partnership is a critical piece of the puzzle for India, promising to help boost its national deficiency in applied quantum computing development and help India to become a leader in the quantum space.

With international partnerships playing such a key role in Modi’s plan for the development and growth of India’s quantum computing and AI industries, there is a sense that the country’s actions in state-controlled Jammu and Kashmir are damaging its international reputation. This perspective, however, is demonstrably negated by the signing of the Indo-French bilateral agreement. The agreement, which stipulates French alignment with India as a partner in sustainable development and emerging technologies, outlines the countries’ shared commitment to “an open, reliable, secure, stable and peaceful cyberspace”. It was signed even as India, the world leader in internet shutdowns, enacted a digital lockdown on Kashmir for the 51st time in 2019 alone. This record sits in stark contrast to the stated objectives of the partnership and demonstrates the separation of business from peace-building priorities on an international scale.

The Kashmir conflict, a turbulent territorial dispute between India, Pakistan and China, dates back to the partition of 1947 and has already incited four wars between India and Pakistan. Kashmir, dubbed one of the world’s most militarized zones, is of strategic value to both countries and is India’s only Muslim-majority region. The recent conflict was spurred by a series of brutal attacks and rebellions since February 2019, which ultimately led the Modi government to revoke India-controlled Kashmir’s “special status” of autonomy granted under Article 370 of the Indian constitution. Given this complex history and characterization, India’s fresh assault on the region has led many (including Pakistan’s own Prime Minister) to fear an escalation of violence that could result in a worst-case-scenario nuclear face-off between India and Pakistan.

Whether or not it is representative of the true feelings of Modi’s “New India”, Indian national media has expressed nearly unequivocal support for the revocation of Article 370. French comments, however, lean toward neutrality, tactfully holding the situation at arm’s length while urging bilateral negotiation between India and Pakistan. Whether or not the two countries come to a peaceful resolution, it appears that international investment in Indian quantum and AI development shall not waver in the face of the Kashmir conflict. Ironically, as India sprints to catch up in the quantum race with the support of France and other international allies, the legacy of the last great technological arms race, the nuclear one, looms heavy over the continent.

Quantum Internet, Quantum Research

Quantum Teleportation: Paving the Way for a Quantum Internet



Last week’s big quantum news centred on two proof-of-concept studies, both of which claim to have achieved quantum teleportation, for the first time, using a three-level unit of quantum information called a qutrit. While quantum teleportation has been demonstrated previously, it has only been carried out with qubits, which can store less information than qutrits but are thought to be more stable. The novel feat was achieved independently by two teams, one led by Chinese physicist Guang-Can Guo at the University of Science and Technology of China (USTC) and the other, an international collaboration headed by Anton Zeilinger of the Austrian Academy of Sciences and Jian-Wei Pan of USTC. While both teams have reported their results in preprint articles, the article by the Austrian-led team has been accepted for publication in Physical Review Letters.
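For a rough sense of what a qutrit buys, the information ceiling of a d-level quantum system can be sketched in a few lines of Python (an illustration of ours, not drawn from either paper; the function name is invented). A single measurement on one qutrit can yield at most log2(3) ≈ 1.58 classical bits, versus exactly 1 bit for a qubit:

```python
import math

def bits_per_system(levels: int) -> float:
    """Maximum classical bits extractable from one d-level quantum system:
    log2 of the number of levels (the single-system Holevo limit)."""
    return math.log2(levels)

print(f"qubit:  {bits_per_system(2):.3f} bits")   # 1.000
print(f"qutrit: {bits_per_system(3):.3f} bits")   # ~1.585
```

That roughly 58 per cent gain per particle is why higher-dimensional teleportation is seen as a step toward denser quantum links.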

Competition for credit aside, the teams’ findings ultimately support each other in substantiating an advancement in quantum teleportation theory: namely, that quantum networks should be capable of carrying far more information with less interference than previously thought. This advancement, like many in the world of quantum, is most immediately exciting to physicists, while its applied significance can elude those of us with less scientific minds. Nevertheless, the notion of quantum teleportation has once again grabbed headlines and imaginations, providing a good opportunity to explore the concept and the applied significance that advancements like this might eventually have on our world.

While it may sound flashy, quantum teleportation is an affair less akin to science fiction than one might imagine. On a basic level, quantum teleportation differs from ‘Star Trek teleportation’ because it is used to transmit information rather than macroscale physical objects, like human beings. This is possible because of quantum entanglement, a phenomenon of quantum physics that allows us to look at one particle or group of particles and know things about another, even if those particles are separated by vast distances. Quantum teleportation relies on entanglement to transfer information based on this shared state of being demonstrated by entangled particles. As such, quantum teleportation can be defined as “the instantaneous transfer of a state between particles separated by a long distance”.
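The mechanics can be made concrete with a minimal state-vector simulation of the standard qubit teleportation protocol, sketched below in Python with NumPy. This is our own illustrative toy, not code from either study; the amplitudes 0.6 and 0.8 and all helper names are invented. Alice performs a Bell measurement on her two qubits, transmits the two resulting classical bits, and Bob applies the matching corrections, after which his qubit holds the original state.

```python
import numpy as np

rng = np.random.default_rng(0)

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]])                 # bit flip
Z = np.array([[1, 0], [0, -1]])                # phase flip

def op(gate, qubit):
    """Lift a single-qubit gate onto the 3-qubit register (qubit 0 most significant)."""
    mats = [gate if q == qubit else I for q in range(3)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def cnot(control, target):
    """Build the 8x8 CNOT matrix by permuting the computational basis states."""
    out = np.zeros((8, 8))
    for i in range(8):
        bits = [(i >> (2 - q)) & 1 for q in range(3)]
        if bits[control]:
            bits[target] ^= 1
        out[bits[0] << 2 | bits[1] << 1 | bits[2], i] = 1
    return out

# Qubit 0 holds the state to teleport; qubits 1 (Alice) and 2 (Bob) share a Bell pair.
psi = np.array([0.6, 0.8])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice's Bell measurement: CNOT then Hadamard, then read qubits 0 and 1.
state = op(H, 0) @ cnot(0, 1) @ state
outcome = rng.choice(8, p=np.abs(state) ** 2)
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Collapse to Bob's conditional state on qubit 2.
start = m0 << 2 | m1 << 1
bob = state[start:start + 2]
bob = bob / np.linalg.norm(bob)

# The classical channel: Bob corrects his qubit using Alice's two measured bits.
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

print(np.allclose(bob, psi))  # True: Bob's qubit now holds the original state
```

Note the final step: without Alice’s two classical bits, Bob’s qubit is a random mess to him, which is exactly why teleportation cannot be used to outrun light.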

Quantum teleportation holds the most obvious promise in the discipline of quantum communication, where its impact in secure communication was established as early as 1997. In 2017, Chinese scientists working with a team in Austria made waves with their announcement that they had achieved transnational quantum teleportation, establishing a quantum-secure connection for a video conference between the Chinese Academy of Sciences in Beijing and the Austrian Academy of Sciences in Vienna, some 7,600 kilometres away from each other. The experiment utilized China’s Micius satellite to transmit information securely using photons. Micius is a highly sensitive photon receiver, capable of detecting the quantum states of single photons fired from the ground. These photons, beamed via Micius, acted as qubits, allowing researchers in both countries to access a shared quantum key and thus enabling them to participate in the quantum-encrypted video call. Critically, should the data have been accessed by a third party, the code would be scrambled, leaving evidence of tampering for researchers at both ends of the connection.
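The tamper evidence described above can be illustrated with a toy intercept-resend simulation in the style of BB84 quantum key distribution (a simplified sketch of ours, not the actual Micius protocol; the function name and sample size are invented). If an eavesdropper measures each photon in a random basis and resends it, roughly a quarter of the sifted key bits disagree, which is how both ends detect the intrusion:

```python
import random

random.seed(1)

def bb84_qber(eavesdrop: bool, n: int = 20000) -> float:
    """Quantum bit error rate on the sifted key of a simulated BB84 exchange."""
    errors = sifted = 0
    for _ in range(n):
        bit, basis = random.randint(0, 1), random.randint(0, 1)
        bit_sent, basis_sent = bit, basis
        if eavesdrop:
            # Eve measures in a random basis and resends: choosing the wrong
            # basis randomises the bit she passes on to Bob.
            basis_sent = random.randint(0, 1)
            if basis_sent != basis:
                bit_sent = random.randint(0, 1)
        bob_basis = random.randint(0, 1)
        measured = bit_sent if bob_basis == basis_sent else random.randint(0, 1)
        if bob_basis == basis:        # sifting: keep only matching-basis rounds
            sifted += 1
            errors += measured != bit
    return errors / sifted

print(f"QBER without Eve: {bb84_qber(False):.3f}")  # 0.000
print(f"QBER with Eve:    {bb84_qber(True):.3f}")   # close to the ideal 0.25
```

Any error rate well above the channel’s natural noise floor tells the researchers at both ends that the key has been tampered with and should be discarded.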

This experiment, facilitated by quantum teleportation, proved two fundamental and impactful theories in quantum physics: that quantum communication can provide a previously unfathomable level of security and that it is capable of doing so on a global scale. Given these capabilities and coupled with the new qutrit proof-of-concept work, the realm of applied possibilities for quantum teleportation is expanding.

Aside from ultra-secure, transcontinental video conferences, one very hyped application for quantum teleportation is in the development of a hyper-fast quantum internet. Entanglement allows a quantum state to be reconstructed at the receiving end without the state itself crossing the intervening distance; however, each transfer also requires a classical signal to complete, so the protocol operates within the confines of classical communication and cannot carry information faster than light. As such, even quantum information must travel through ground-based fibre optic cables or via photon-sensitive space-based satellites, like China’s Micius. This infrastructure is both expensive and potentially expansive, posing a formidable challenge for a global roll-out of a quantum internet. Still, these early experiments have laid the groundwork for the development of quantum-secure networking by putting theory to the test and producing promising results.

Currently, a team of researchers at Delft University in the Netherlands is working to build a quantum network, using quantum teleportation as the mode of transport for information between linkage points. The project, which aims to connect four cities in the Netherlands, is scheduled for completion in 2020. In China too, researchers are constructing the backbone for a quantum network to connect Beijing and Shanghai. Alongside the support of private entities such as banks and other commercial players, progress on both localised and international quantum networks has been spurred by pressing anxiety about global levels of cybersecurity.

A critical advantage of a future quantum internet is the enhanced security afforded by quantum teleportation: the ability to create an effectively unhackable connection. This could have serious implications for national security and could present a potential solution for many foreign surveillance and interference challenges that countries face today. For example, it is now public knowledge in the U.S. that Russia has the demonstrated ability to directly interfere with most paperless voting systems. While states are currently hesitant to make changes to the current U.S. vote-casting system, alternatives are slowly being considered, from regressive paper-ballot casting to progressive blockchain applications, in order to safeguard American votes against hacking efforts. Quantum teleportation could offer an interesting alternative in this space as the technology continues to develop.

Though quantum teleportation will not be transporting human beings between planets any time soon, it will play a key role in ushering in an internet revolution. While it remains to be seen exactly how that revolution will play out, it is clear that it will bring an unprecedented level of security and speed to global communications. It is also apparent that the level of interest in the secure and high-speed communications afforded through quantum teleportation is broad and deep, spanning both public and private sectors across the globe. Quantum teleportation has recently seen a number of experimental proofs, pushing the field of quantum communications to the fore of quantum development and promising to deliver a much sought-after security transformation within the decade.

Uncategorized

Andrew Yang 2020: Growing American Faith in Techno-Realism



Feature image via The Verge

Gabriella Skoff

The U.S. 2020 Democratic Primary is well underway, exhibiting the greatest diversity and highest number of Democratic candidates in U.S. history. From career politicians to a self-help guru to a former tech executive and everything in between, this extraordinary breadth makes for an interesting array of policy positions, on topics both expected and fringe. While technological innovation, economic growth and security have all become inextricably linked to American politics in general, the Democratic Party in particular has worked to frame itself as the standard-bearer of technological innovation since at least the Kennedy years. This year, Democrats and Republicans have moved closer together on technology as regulating Big Tech has become a salient challenge in government. Many Democrats in this year’s Primary are also heralding the manufacture and export of green technology as part of their solution to climate change and domestic unemployment. But one candidate in America’s 2020 Democratic line-up is talking about innovation and technology in a very serious way: Andrew Yang.

Andrew Yang is a former tech entrepreneur best known as a champion of a Universal Basic Income (UBI) through his flagship proposal, the Freedom Dividend. He frames his argument for a UBI around the encroaching threat of mass job loss as automation continues to permeate high-employment sectors like retail and the foodservice industry. With close to no political experience on his resume, Yang builds nearly all of his policies on his knack for Silicon Valley hype and a focus on the promises and threats of technology. His “human-centred capitalism” platform boasts a number of compelling, futurist, tech-forward solutions, including what he refers to as a “new currency” called Digital Social Credit, which would see the creation of positive social value transformed into financial capital for the individual. Though many of Yang’s ideas may sound like sci-fi realities, he seems to view technology more reasonably as a useful tool to be harnessed in America’s path forward, making his message fall short of techno-utopianism and settle more comfortably into techno-realism.

Unsurprisingly, Yang is the only Democratic candidate to make specific mention of a quantum policy, which centres around two neat “steps”, according to Yang’s 2020 candidacy website:

First, and immediately, we need to invest in and develop new encryption standards and systems, and immediately shift to using these quantum computing-resistant standards to protect our most sensitive data. This won’t be easy or cheap, but it is necessary. Second, we must heavily invest in quantum computing technology so that we develop our own systems ahead of our geopolitical rivals.

While Yang’s quantum policy has been criticized by some as simplistic, or as simply riding a buzzword frenzy, he remains the only candidate to have one. The policy is neatly outlined as a two-step program on his website but, importantly, it seems to signal Yang’s commitment to government investment in quantum technologies in order to increase both defensive and offensive U.S. quantum capacities. This stance clearly signifies Yang’s view that an advantage over the rest of the world in quantum technology will be integral to the maintenance of American hegemony. Further, his position on foreign policy, which he characterizes as one of “restraint and judgement”, is well-suited to a quantum-enhanced security capacity that, in a modern show of military might, could demonstrate U.S. technological superiority to attack and defend without putting boots on the ground.

In these still early days of the race for the Democratic nomination, Yang’s policies, like most others’, lack the framing and focus that will be required to get the public onside. While Yang may be wearing rose-coloured glasses when he emphasizes the power of technology to solve social ills, this angle has made him stand out from an array of incredibly diverse Democratic candidates. His current popularity ratings place him in the top ten candidates, above even well-known career politicians like Cory Booker, who have had much louder voices thus far in the Democratic debates. It is clear that there is something about Yang’s message that resonates with Americans. Though his press time has been minimal, he has achieved some obvious success in delivering his message about the promises and perils of emerging technologies and how to harness the former and minimize the latter.

It is unlikely that Yang’s tech-forward platform will be enough to win him the nomination, but the fact that it has already taken him this far should tell us something. Like most populations throughout history, Americans are concerned about the influence and impact that emerging technologies are having on their daily lives and will continue to have on their futures. Americans seem to be interested in Yang because they want someone at the helm who has the prescience to control and channel these technologies in their best interest. At least in the U.S., however, running on a political platform focused almost exclusively on emerging technologies poses two major challenges: framing the message accessibly and sustaining a far-reaching, futurist vision.

First, a political message framed by tech-forward policy needs to be delivered in an accessible and pressing way. Yang’s framing of job loss due to automation is, in fact, an example of how this can be done effectively. Unemployment is a very real issue to both Democrats and Republicans, and Yang’s message activates and engages both parties while framing the issue in terms of emerging technology. Second, in order to harness the power of tomorrow’s technology (quantum computing, for example), one needs to have sights set on the future. And not just the near future, but a future well beyond the purview of one presidential term of four years, or even two terms of eight. This requires a bold vision that can border on futurist and invites the objection of the unknown: how can emerging technologies be channelled or regulated if we don’t really know what they will do? The reality is that most politicians speaking about technology are either unable to grasp and communicate the topic of emerging technologies or not looking far enough into the future to be able to create proactive policy in this area.

Getting the general population to care about far-sighted goals is perhaps the most formidable challenge in politics, and one that has rarely been met outside the force of dictatorships, benevolent or otherwise. Generally, most communities (bar Silicon Valley) are not going to be receptive to a platform built solely around emerging technology. In contrast, most people are concerned with matters they can relate to and issues that seem far more pressing, such as employment, infrastructure or education. Technology, of course, is increasingly an imposing factor in the conversation on these and most other topics of political debate, but it is often an intangible factor that requires a certain amount of faith. Like Church and State, technology has become inextricably and implicitly intertwined with politics, and some, like Yang, speak about it with as much fervour and conviction. A candidate with a strong ethos of technological innovation and regulation certainly has the best chance of creating a government centred on these values. This raises the question: is mixing technology and politics a good thing? Or, perhaps more realistically, like the relationship between Church and State, is it even avoidable?

Quantum International Relations

Where is the Middle East in the Quantum Race?



Gabriella Skoff

The quantum race, like the space race before it, is a technological marathon with major implications for international relations. This defining quality sets the stage upon which the race is run, presenting competitors with an opportunity to demonstrate their economic and technological prowess to the rest of the world. Perhaps even more so than in the case of the space race, the winner of the quantum race will establish its name as a world leader in the Digital Age by gaining an unparalleled strategic security advantage. With such high stakes, countries across the globe are investing in building their quantum capacities, if not to win the race, then at least to not be left behind. While China and the U.S. are currently frontrunners, countries from Europe to Latin America are joining the race, rapidly investing in quantum technology research and development. And yet, a region that has for so long dominated discussions about international relations and security appears to be missing in the line-up. Where is the Middle East in the quantum race?

For most nations in the Middle East, quantum investment is simply not a major priority. There is a complex set of reasons, unique to each country, for why this is the case. Many countries in the region have been plagued by war and instability over the last decade, producing both a deficit of government funds and an inadequate environment for exploration and innovation. As with the space race, the countries that have risen to the top of the quantum food chain have done so upon a backdrop of relative stability, growth and wealth. Several countries in the region, however, have begun to emerge as quantum competitors with increasing capacity and focus: Israel; the Gulf countries, led by the United Arab Emirates (UAE); and the Islamic Republic of Iran.

These countries share some of the conditions required for the establishment of international technological partnerships, investment and a privileged focus on innovation. The ambitions and specialities of their growing quantum programs, however, differ notably in relation to their specific geopolitical situations. Israel, a country known for its heavily U.S.-funded defence and arms expertise, is investing in quantum technologies largely for security applications. The Gulf countries, well-known for their economic reliance on oil production, are distinctly invested in the areas of innovation and capacity-building, especially in the UAE, where there is a growing focus on quantum applications in the energy industry. Iran’s quantum efforts, largely directed through well-established and internationally connected knowledge institutions, are also heavily influenced by the Joint Comprehensive Plan of Action (JCPOA, commonly known as the Iran nuclear deal).

While scant information is available about these countries’ nascent quantum efforts, their momentum is growing, slowly but surely. From a geopolitical perspective, the ability to compete on the global stage with quantum technology research and development would be a critical advantage for any nation in the Middle East.

ISRAEL

The quantum race began in earnest in the early 2000s, making Israel a latecomer to the game. Just last year, Israel’s Prime Minister, Benjamin Netanyahu, announced the country’s entrance into the quantum race with an investment of NIS 100 million (approximately USD 28.2 million) in a quantum technology research fund. According to The Jerusalem Post, the ongoing project is a collaboration between the Defense Ministry’s Administration for the Development of Weapons and Technological Infrastructure (MAFAT), the Higher Education Committee and the Israel Science Foundation. The project’s aim is to support Israel’s intelligence-gathering capacity, and as such it will likely focus on the areas of quantum communications and computing. This month, Israel’s Ben-Gurion University of the Negev (BGU) announced plans to pursue joint quantum research and development programs with the Israel Defense Force (IDF) and U.S. Defense Forces, along with other high-tech industry players both in Israel and abroad.

Significantly, Israel is the largest cumulative recipient of U.S. foreign aid since World War II and the vast majority of this funding, quoted at $134.7 billion total, goes directly to Israel’s defense programs. As such, Israel’s ability to support scientific research and development (R&D) with military funding has come to resemble the American system, where the Department of Defense (DoD) is one of the biggest funders of national scientific research and development. This has manifested as vast amounts of military investment in Israel’s high-technology sector. According to the OECD, the country’s gross domestic spending on R&D since 2015 has been the highest in the world. As a result, the science and technology sector in Israel is one of the most well-developed and profitable in the world.

Historically, Israel has prioritised the development of home-grown technological expertise and innovation through industry and its higher education system. The immense amount of military and defense funding, coupled with highly-skilled immigration booms and an economically and socially invested network of Jewish communities abroad, has enabled Israel to build a thriving national high-technology industry. Geographically surrounded by Muslim-majority countries in a deeply contentious region, Israel has maintained its space in the Middle East essentially by establishing its presence as a highly militarized nation with vital support from powerful friends. Within this context, and motivated by a lack of natural resources, Israel became the leader in high-technology military exports that it is today.

Given this context and history, Israel’s recent pivot in focus toward quantum technologies is no surprise. The country’s keen focus on security and defense applications currently dominates the scope of their quantum R&D programs, but civilian applications are promised as the next phase of development.

THE GULF NATIONS

While Israel may eventually corner the market in quantum computing and communications for military applications, the Gulf nations of Saudi Arabia, Qatar and the UAE have entered the race with a different aim. This year the Gulf nations launched quantum computing research groups in their respective countries with the goal of creating an ecosystem for capacity-building and, ultimately, knowledge production in quantum technologies. Already an authority in technological research in the Middle East and a rising star in research output worldwide, Saudi Arabia’s King Abdullah University of Science and Technology (KAUST) is well-positioned to become a regional leader in the quantum computing space. Further, Saudi Arabia’s Aramco brings existing experience with supercomputer systems, a considerable advantage in the quantum race.

Recently, the UAE has turned its focus toward forming vital partnerships with global tech giants and has been rewarded with agreements with some of the biggest names in quantum computing, D-Wave and Microsoft. Last year, the Dubai Electricity and Water Authority (DEWA) announced its participation in the Microsoft Quantum Computing Programme to develop quantum-based solutions for energy optimization. Notably, DEWA is the first organisation outside of the U.S. to participate in the program. This suggests the UAE is taking a forward-looking approach to shifting its economy, applying quantum innovation in the sustainable energy sector, a strategic pivot for a country where 30% of GDP is directly based on oil and gas output. Last year as well, the UAE’s Minister of State for Artificial Intelligence enacted a partnership with the Canadian quantum computing company D-Wave to bring the region its first quantum computer, which will be housed at the Museum of the Future in Dubai.

Broadly, these advancements are driven by the motive to ensure the ongoing production of innovation through knowledge development, a trend that is currently sweeping the Arab states. The pressing need to begin diversifying the economies of these oil-producing nations has also contributed to investment in new quantum programs. However, the Gulf countries lack the population sizes and national research budgets to compete with the rest of the world in the quantum race. For this reason, they are looking toward public-private partnerships as a way to develop their quantum computing sectors further, bringing vital knowledge and facilities to the region.

These recent developments appear to be at least in part a response to a widely referenced 2019 World Government Summit report, authored by Simone Vernacchia in partnership with PricewaterhouseCoopers (PwC). The report seems to have stoked a fire, urging the Gulf countries to begin investing in quantum technologies:

If they do not, they risk missing out on the many advantages that will be on offer across every sector, and they will face an increasing threat if they fail to plan for the next generation of cyber-security…. Building up knowledge and specific skills in the field, along with preparing defensive post-quantum computing cyber-strategies, can be considered urgent priorities.

The report makes clear the risks of missing the moment to join the quantum race, but it also points to a number of regional opportunities for quantum innovation within the existing oil production industry and national security apparatus, as well as through diversification into new industries. It is clear from the establishment of these early partnerships between the UAE, D-Wave and Microsoft that the report's warnings are not being taken lightly. Rather, the advice is being heeded as essential to the region's economic strength and continued security.

IRAN

A regional competitor to the UAE, Iran is also vying for a place in the quantum race, hoping to take the lead in quantum technological facilities and know-how in the Middle East. The country has had its sights set on quantum research as a game-changing industry since around 2015, with the signing of the Joint Comprehensive Plan of Action (JCPOA). The deal stipulates that, in exchange for Iran limiting its uranium-enrichment activities, sanctions imposed on the country by six of the world's biggest powers will be lifted. The agreement also opened a door for Iran to collaborate with Euratom, the European atomic energy community working in high-technology development for nuclear power. Given the tumultuous U.S. withdrawal from the JCPOA, Iran's recent quantum efforts have focused on collaboration with European countries and the continued development of its own national capacities.

Although several sources from 2017 state that Iran entered talks with European nations to collaborate on quantum technology, it is unclear whether any agreements have been actualised. The Atomic Energy Organisation of Iran (AEOI) appears to be the main authority involved in negotiations to build the country's quantum industry. While European collaboration remains a nebulous topic, the AEOI has been hard at work, announcing two recent victories: the creation of Iran's first laboratory equipped with quantum technology research facilities and the completion of its first photon-entanglement experiment.

Iran also boasts some of the most well-developed quantum information science programs in the region. As such, the quantum literacy of Iran's scientists and engineers is higher than in many countries that lack such long-standing research specialisations. The Islamic Republic's two leading quantum research groups, the Sharif University quantum information group and the Quantronics Lab at the Iran University of Science & Technology, are internationally renowned and well-equipped. These advantages were achieved either despite international sanctions or, more likely, after they were lifted through the JCPOA. Last year, the first National Conference on Quantum Technology was held in Tehran, and the International Iran Conference on Quantum Information, led by the Sharif group, is now in its sixth year. These conferences bring international knowledge of the latest quantum developments to the region, helping to put Iran on the map as a contender in the quantum race.

It is still early days for the Middle East in the quantum race. The growing quantum programs in the Gulf nations, Israel, and Iran have only been formally created within the past three years and as such their outputs and impacts remain minimal. Nevertheless, these countries, which have been lucky enough to prosper in relatively stable economic and political circumstances, have seized the valuable opportunity to participate in and even to help build what is promised as the next technological revolution. While only time can tell exactly how the quantum race will pan out, this regional competition will undoubtedly open up possibilities to shift existing power dynamics not just between these quantum-empowered Middle Eastern countries, but also on an international scale.


Could Quantum Computing Help Curb AI’s Carbon Footprint?



Feature image via MIT Technology Review.

Gabriella Skoff

For the first time, the environmental impact of training AI in natural language processing (NLP) has been quantified, and the results are jarring. The lifecycle of training an NLP deep learning algorithm has been calculated in a new paper to produce the equivalent of 626,000 pounds of carbon dioxide, nearly five times the lifetime emissions of the average American car, manufacturing included. The paper brings new life to a parallel aptly drawn by Karen Hao in her article on the research for the MIT Technology Review: the comparison of data to oil now extends beyond its value in today's society to encompass the massive costs the industry imposes on the natural environment. The team of researchers found that the highest-scoring deep learning models for NLP tasks were the most "computationally-hungry"; far more energy is demanded to train them due to their voracious appetite for data, the essential resource needed to create better AI. Data crunching is a space in which quantum computing is expected to lend a critical advantage to deep learning. Could it also help to curb AI's carbon footprint?
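As a rough check of that comparison, here is a minimal Python sketch. The car-lifetime figure of roughly 126,000 lbs of CO2 (fuel and manufacturing included) is an assumption, a commonly cited baseline rather than a number stated in this article:

```python
# Sanity-checking the "nearly five cars" comparison (rough figures).
NLP_TRAINING_LBS_CO2 = 626_000  # lifecycle emissions reported for training the model
CAR_LIFETIME_LBS_CO2 = 126_000  # assumed lifetime emissions of an average American car

ratio = NLP_TRAINING_LBS_CO2 / CAR_LIFETIME_LBS_CO2
print(f"Training one model is roughly {ratio:.1f} car lifetimes of CO2")
```

With these inputs the ratio comes out just under five, consistent with the paper's framing.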

The environmental impact of data is not often discussed, due in part to its lack of visibility. Fossil fuel plants can dominate the skyline, and the plumes of smoke billowing above them have come to symbolise a problematic issue for many. Documentaries have taught many of us that even cows have a surprisingly large impact on climate change due to their production of methane gas. Data centres, however, are far less visible polluters, though their impact is substantial. The global ICT system is estimated to require about 1,500 terawatt-hours of power per year. To put that into context, that's about 10% of the world's total electricity generation. Given that the majority of the world's energy is still produced by fossil fuels, the biggest contributor to climate change, this represents a serious challenge that few seem to be talking about.

As computers become more powerful, their power usage increases too. Supercomputers are known to be incredibly gluttonous when it comes to energy consumption. In 2013, China's Tianhe-2, a supercomputer with a peak performance of 33.9 petaflops, was one of the most energy-intensive computers in the world, drawing 17.8 megawatts of power. Tianhe-2's electricity consumption is roughly what would be required to power an entire town of around 36,000 people. While supercomputers today are used for anything from climate modelling to designing new nuclear weapons, many of the next generation of supercomputers are being tailor-made to train AI.
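A quick back-of-the-envelope Python sketch makes Tianhe-2's figures concrete. The per-resident power draw is simply what the town comparison implies, not an official statistic:

```python
# Back-of-the-envelope efficiency figures for Tianhe-2 (values from the text).
PEAK_FLOPS = 33.9e15      # 33.9 petaflops at peak
POWER_WATTS = 17.8e6      # 17.8 megawatts of power draw
TOWN_POPULATION = 36_000  # town the same power could supply

gflops_per_watt = PEAK_FLOPS / POWER_WATTS / 1e9
watts_per_person = POWER_WATTS / TOWN_POPULATION

print(f"Efficiency: {gflops_per_watt:.2f} GFLOPS per watt")
print(f"Implied average draw: {watts_per_person:.0f} W per resident")
```

The implied draw of roughly half a kilowatt per resident is a plausible all-in average, which is what makes the town comparison work.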

The U.S. Department of Energy Oak Ridge National Laboratory's (ORNL) Summit supercomputer is the first supercomputer built specifically for AI applications. Summit is capable of 200 petaflops at peak performance, returning the U.S. to the position of top player in the supercomputing world, a place only recently held by China. The U.S. aims to reach the next milestone, a supercomputer capable of an exaflop (a billion billion calculations per second), by 2023. The numbers speak for themselves. A future reliance on these supercomputers to train AI will result in exponentially greater energy usage, which in today's stubbornly fossil-fuel-reliant society would have a severely negative impact on the climate. While some are looking toward other power alternatives for training AI, perhaps quantum computers, which require far less power than supercomputers, could support a more energy-efficient transition for AI training.
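To put the exaflop target in perspective against Summit's quoted peak, a quick sketch using the figures from the text:

```python
# Comparing the exaflop milestone to Summit's quoted peak performance.
SUMMIT_PEAK_FLOPS = 200e15  # Summit: 200 petaflops at peak
EXAFLOP_FLOPS = 1e18        # one exaflop: a billion billion calculations per second

factor = EXAFLOP_FLOPS / SUMMIT_PEAK_FLOPS
print(f"An exascale machine would be {factor:.0f}x Summit's peak")
```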

Currently, quantum computers still use more power than classical computers because of their extreme cooling requirements. Most quantum computers rely on cryogenic refrigerators to operate, and these are immensely energy-inefficient; as such, the vast majority of a quantum computer's energy usage goes directly to the cooling infrastructure required to operate it. However, this refrigeration is critical to quantum computing's advantage: operating at temperatures near absolute zero allows quantum processors to be superconducting. This lets them process information using almost no power and generating almost no heat, so the processors themselves require only a fraction of the energy of a classical computer. According to ORNL, "quantum computers could reduce energy usage by more than 20 orders of magnitude [compared] to conventional [super]computers".
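To make "20 orders of magnitude" concrete, here is a purely illustrative Python sketch. The conventional per-operation energy is a placeholder of plausible scale, not a measured value:

```python
# Illustrating what a 20-order-of-magnitude energy reduction would mean.
CONVENTIONAL_J_PER_OP = 1e-9  # placeholder energy per logic operation (assumed)
REDUCTION_FACTOR = 1e20       # "more than 20 orders of magnitude"

quantum_j_per_op = CONVENTIONAL_J_PER_OP / REDUCTION_FACTOR

ops = 1e15  # a quadrillion operations
print(f"Conventional: {CONVENTIONAL_J_PER_OP * ops:.0e} J")
print(f"Quantum:      {quantum_j_per_op * ops:.0e} J")
```

Whatever baseline is assumed, dividing by 1e20 takes the processing energy from everyday scales down to the physically negligible; the dominant cost that remains is the cooling.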

Quantum is expected to lend an essential boost to AI and could be used for more effective training in deep learning tasks such as NLP in the future. While the environmental operating costs of quantum computers may be high due to their cooling requirements, novel cooling techniques are being explored that could one day offer solutions to quantum's power problem. As the AI industry continues to grow exponentially, it is imperative that its environmental impact be considered in order to direct a more responsible development of the sector. Even with the high level of operational energy usage factored in, quantum computers present a distinct energy-efficiency advantage over supercomputers and could be used to help curb the carbon footprint of training tomorrow's AI.