Quantum Applications, Quantum Computing, Quantum International Relations

Quantum Policy Priorities for the 46th Australian Parliament



Feature image from Quintessence Labs

Gabriella Skoff

Ahead of this weekend’s Australian election, Project Q presents our top four quantum-related policy priorities for the 46th Australian Parliament.

The Australian government has invested in the long-term development of quantum computing since 2000. A 2016 investment boost, to the tune of AU$70 million over five years from business, academia and the Turnbull government, has helped solidify Australia’s position as a real competitor in what has been dubbed the “quantum race”. By relying heavily on support from the private sector and brain-power from NSW knowledge institutions like the University of Sydney’s Nano Institute and UNSW’s Centre for Quantum Computation and Communication Technology (CQC2T), Australia has become recognized as a world leader in silicon-based quantum computing research. But while financial support for quantum computing has been strong, a comprehensive strategy that prioritizes benefits and minimizes harms from these technologies has not kept pace.

While Australia is not alone in this position, it is at risk of falling behind. Global quantum competitors are rapidly formalizing proactive policies in hopes of securing a position on the world stage as these technologies develop. In the U.S., the government is beginning to think systematically about quantum technology development and enacting policy to match this approach. China, meanwhile, has wasted no time in the coordination and execution of its national quantum policy. The E.U. has also advanced its quantum policy approach and remains the only region to make equity and ethics the backbone of these policies. Whichever party is victorious on Saturday, the 46th Australian Parliament will be presented with the challenge – and the opportunity – to introduce progressive tech policies that will not only boost industry and research, but will protect citizens and pave the way for other countries to follow suit.

For Australia to remain competitive in the quantum race and be prepared for the new reality that will form in the wake of its fulfilment, the 46th Australian Parliament should prioritize the following to create a comprehensive quantum policy:

Security

The coming age of quantum computing will bring a drastic transformation of cyber-security needs. Whether or not Australia wins the quantum race, whoever wins Saturday’s election will need to address the reality that a fully functioning quantum computer will bring with it the ability to break much of the encryption that protects today’s systems. The Australian Department of Defence is already investing in quantum cryptography, supporting Canberra-based quantum cybersecurity firm Quintessence Labs (QLabs) with AU$528,000 in funding for the further development of quantum key distribution (QKD) technology. This was the largest of eight Defence Department Innovation Hub grants in 2017. It is clear this need has already been recognized by the government in relation to national defence. What is not clear is how the Australian government will support the keepers of Australians’ most sensitive data—including healthcare services, banks and businesses—to adapt to these new challenges. A lack of quantum cryptography preparedness across these non-military sectors could have dire security consequences for Australia.
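To make the stakes concrete, the sketch below is a toy, purely classical simulation of the sifting step of the BB84 QKD protocol. It illustrates the idea only; it is not QLabs’ technology, and it omits the error-estimation and privacy-amplification steps a real system needs:

```python
import secrets

# Toy BB84 sifting sketch: Alice encodes random bits in random bases and Bob
# measures in his own random bases. Positions where the bases match form the
# shared key; an eavesdropper would disturb the qubits and reveal herself as
# an elevated error rate when a sample of the sifted key is compared.
N = 16
alice_bits  = [secrets.randbelow(2) for _ in range(N)]
alice_bases = [secrets.randbelow(2) for _ in range(N)]  # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(N)]

# Matching bases reproduce Alice's bit; mismatched bases give a random result
# (we model the quantum measurement statistics classically here).
bob_bits = [bit if a == b else secrets.randbelow(2)
            for bit, a, b in zip(alice_bits, alice_bases, bob_bases)]

sifted = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]
print("sifted key:", sifted)
```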

The Australian Government should consider emulating the forward-looking policy approach set out by the European Commission’s Joint Research Centre (JRC). The JRC stresses the importance of equipping both military and non-military service providers with a plan for implementing future, quantum-encrypted capabilities. The report urges: “Cryptography is indeed important for applications such as preventing interception of classified information, providing governmental services, protecting critical infrastructure, and in the military field. Banks and financial institutions, data centres providers, and players in the health sector can also be potential users. A home-grown industry mastering a technique that potentially guarantees future-proof communications security can hence be seen as an issue of national security.”

Chinese Collaboration

Chinese investment and influence play an important economic and cultural role in Australia. This is a relationship that most parties have vowed to protect, enshrining it in trade and regional-relations policy. In the quantum race, however, there are concerns that Chinese collaboration on Australian quantum projects could present a national security risk. According to the aforementioned JRC report, this challenge has been identified and is being addressed by the European Commission through its quantum technologies flagship initiative.

The depth and scale of this issue have also been reported by the Australian Strategic Policy Institute (ASPI) in its report, Picking Flowers, Making Honey. The title of the report is derived from a People’s Liberation Army description of Chinese-Western collaboration (especially in the Five Eyes countries) as “picking flowers in foreign lands to make honey in China”. The report details how the PLA strategically deploys military researchers to universities in Western countries, obscuring their affiliation, and then brings them back to China so they can use the knowledge and information gleaned from their collaborations to further China’s own national technology development efforts. ASPI reports that this practice essentially aids Chinese military development, especially in the emerging field of quantum computing.

While it remains unclear whether the Australian government and domestic research institutions are aware of this practice, no action against this strategic transfer of knowledge has so far been taken. In fact, “Among universities in Five Eyes countries, the University of New South Wales (UNSW) has published the most peer-reviewed literature in collaboration with PLA scientists.” This should concern the incoming Australian government, as UNSW is one of the leaders in Australian quantum computing development. If Australia seeks to remain a leader in quantum computing, this is an issue that must be tackled, albeit with a delicate approach that does not undermine the many positive effects of Chinese engagement in Australia. According to the ASPI report, many of those who participate in this practice presented false records, a challenge that could be tackled simply with greater scrutiny of the visa application process for incoming researchers collaborating on high-value projects.

Focus on the Development of Promising Environmental and Renewable Energy Applications

Climate change is a big-ticket item in the upcoming election, and one that may play a decisive role in inducing a change of government. Regardless, all parties have stated a commitment to investing in renewable energy. Quantum research outside of communication and computing presents promising potential for renewables. However, quantum applications in this space require an increased level of attention and support in order to develop. The Australian government should be investing in a far broader spectrum of emerging quantum applications, such as quantum dots, which could revolutionize the solar energy industry, and quantum tunnelling, which could help to capture and transfer wasted energy. These will be the green energies of tomorrow, presenting Australia with a unique opportunity to be a global leader in this space.

Quantum Business

Already, Australia has grown a number of powerful tech start-ups and attracted international funding, bolstering its position as a hub for quantum research. There is momentum building to make Sydney the destination for quantum investment and to cement Australia’s place as the Silicon Valley of quantum development in the Southern hemisphere. Further focus on supporting the growth and development of this ecosystem could create a competitive advantage for Australia, boosting business investment and drawing the brightest minds from all over the world to solve quantum’s biggest challenges.

This, in turn, could allow the Australian quantum industry to broaden the scope of its focus, expanding to the areas of research and development mentioned above. Government investment in building the desirability of Australia as a world-class quantum destination would not only help to attract critical private sector investment but could also serve to attract the talent that is now sorely needed.

All contenders on Saturday’s ballot claim varying levels of commitment to prioritizing issues that the coming age of quantum computing will impact, such as cyber-security, defence, innovation and science, business, energy and environment, healthcare and regional relations. Yet no party on the election ballot has explicitly mentioned a dedicated policy for the further development and adoption of quantum technologies. Australia now has the chance to produce an agile national quantum policy that could complement and support some of the most important policy agendas already being pursued. It is clear that quantum technologies will carry a number of social and economic benefits, which will require the keen attention of government representatives in order to realize their potential. As demonstrated by the actions of Australia’s global competitors in quantum development, this can be done in a number of ways. We recommend a human-centric approach that weighs the threats and benefits of quantum development with a critical eye and seeks not only to maximize the benefit of these technologies for all Australians, but also to set an example for other countries to follow.

Artificial Intelligence, Quantum Computing

When Quantum Meets AI: PERILS, as Two of our Future’s Most Powerful Technologies Collide



Part III of III

Gabriella Skoff

Quantum computing and artificial intelligence (AI) are disruptive technologies. They are set to transform and rupture both industry and societal networks as they challenge and displace current systems. The powerful evolution that will be ignited across disciplines if and when these technologies become fully functional and integrated is sure to challenge modern hegemony in a number of ways. Some of these changes, as we discussed in last week’s instalment, could endow us with the tools to make this world a better and fairer place for people around the globe. Of course, the duality of light and dark also characterizes emerging technologies, and some of these changes could impose powerfully deleterious effects on society. While we have discussed some of the potentially promising applications for these technologies in the future, we also caution that there is no way of knowing exactly how they will evolve, both together and separately. Industry and government cannot ensure that emerging technologies have only positive societal impacts. However, these forces certainly do have some power over the process to leverage benefits in a way that consciously produces positive impacts and opportunities to confront underlying societal challenges.

This final instalment of our quantum-AI series will caution against the type of thinking that views technological advancements as end-all-be-all solutions. Certainly, these emerging technologies present the potential for revolutionary social, economic and environmental benefits. However, if we focus only on the positive impacts these emerging technologies may have, we risk blinding ourselves to the underlying issues that they may confront but cannot solve. In this way, quantum-AI applications could produce a Band-Aid effect or even exacerbate certain challenges rather than “fix” them. Rather, with a human-centric approach and a forward-looking perspective, we hope these technologies can be channelled and regulated in ways that work to the benefit rather than the detriment of societies across the globe.

Towards a More Objective AI?

When we talk about a quantum boost to AI development, it is imperative that we consider the challenges facing machine learning today and how these might be affected by the accelerated development of AI technologies. In last week’s instalment, we explored how quantum-powered AI might enable machine learning algorithms to create more nuanced output from more complex data sets. While this may ultimately come to pass, it does not guarantee that enhancing AI’s capacity to process larger and more complex data sets will fix its bias problem. That is because this problem is not wholly a technological one, but mostly a human one.

A recent MIT Technology Review article dives into this issue, citing a new report by the AI Now Institute, which explores AI’s ongoing “diversity crisis”. The report presents some staggering figures with regard to gender and racial diversity in the AI industry and argues that there will be no solution to AI’s bias problem without a monumental shift in these demographics. Across both industry and academia, the percentage of Black and Latino contributors to the field of AI is no higher than 5% and the number of non-male contributors sits below 20%. With such a homogenous group of people writing AI algorithms, it is no wonder a bias exists in the technology.

Some powerful examples of the negative social impacts that have been produced through the application of AI are given in the aforementioned article: “Along the way, the technology has automated the biases of its creators to alarming effect: devaluing women’s résumés, perpetuating employment and housing discrimination, and enshrining racist policing practices and prison convictions.” Clearly, these biases are not harmless and have already produced negative social impacts across a number of sectors. This is a challenge that quantum-boosted AI will not be able to confront. If we believe that a higher-powered AI able to process vast amounts of data will alone solve AI’s bias problem, we are severely mistaken. Ignorance of these underlying, systemic issues will only serve to deepen divides and reaffirm inequalities.

This debate challenges the misconception that technology exists independent of society. Originating in the early nineties with the advent of the Internet, the idea that cyberspace would be an egalitarian realm free from notions of race, gender or identity is a belief still cemented in the human psyche. Though it is increasingly challenged today, the results of this belief are instituted throughout the culture and practices of the technology sector. Long-term systemic change is what will be required to displace this dynamic: from a shift in male-dominated workplace culture to hiring, retention and workplace support practices for racial and gender minorities. The journey to diminishing AI’s bias problem is inherently tied to the achievement of far more diversity and better representation in AI technology development.

Impacts in the Medical Sector

Quantum-powered AI technologies promise to revolutionize healthcare systems across the globe in a number of ways. But will the further automation of healthcare services be a benefit or a detriment to patients?

Recent years have seen healthcare systems across the globe move towards an approach that seeks to maximize efficiency. This results in patients receiving less of their doctor’s time, which is instead increasingly demanded by computerized reporting and diagnostic systems. Likewise, changes in insurance policies over the last decade have standardized less patient-doctor contact. A 2017 systematic review of 67 countries notes that already, “For half of the world’s population, primary care doctor visits last less than five minutes.” This, coupled with the boom of Big Pharma, has resulted in most visits to the doctor ending with a prescription in hand, at least in the United States.

Certainly, in the United States, the prescription of drugs is the most viable outcome from a GP visit where the patient is only allotted an average of 15 minutes of the doctor’s time. This reality is already a serious challenge that has no doubt created negative societal impacts. One such example is the vicious opioid crisis currently plaguing the country, due largely to the over-prescription of opiates. While this is only one example from one country, it illustrates that the situation that has arisen in primary healthcare across the globe is extremely complex and warrants specific consideration before current systems are advanced with new technologies.

Quantum-AI technologies promise to further increase the efficiency of these systems, but there is sufficient reason to pause and ask ourselves what role this may or may not play in actually improving health outcomes for patients. The increased automation that may result from the confluence of quantum computing and machine learning could serve to further entrench healthcare systems in this fast-food style of primary care. It is not difficult to imagine how this could encourage further reliance on automated diagnosis of medical conditions tied to an automated prescription of drugs. This would only move us further away from a holistic approach to healthcare, in which doctors spend more time trying to understand a patient’s specific circumstances in order to recommend not only drugs but also lifestyle changes in areas such as exercise, diet and sleep.

For future quantum-AI systems to function in harmony with human practitioners, a deep understanding of the intricate complexities of the current state of healthcare, and the directions in which it is moving, is entirely necessary. Likewise, other considerations of value when debating the role of these emerging technologies in healthcare systems include: the cost viability of implementing new quantum-AI systems uniformly across healthcare providers, the difficulty a GP may have in deciphering the diagnostic reasoning of AI systems, and the potential for privacy and ethical issues with regard to the storage of patient data, especially in the context of quantum encryption and hacking.

Impacts in Business and Government

Traditionally, the implementation of high technology in business has been slow to take hold. However, current trends point to a shift in this practice. According to Forbes: “For the first time ever, emerging technology investment has superseded customer understanding as the No. 1 thing firms want to do more of to be innovative.” As the pace of emerging technology adoption in business accelerates across sectors, there is a fundamental need to apply a human-centred policy approach to the implementation of quantum and AI technology in order to keep humans “in the loop”.

Technological innovation in and of itself is of little value to a business. Emerging technologies like quantum and AI do not exist in a bubble apart from society; they function within human-built and human-run systems and networks. As such, if an organisation is not flexible and adaptive enough to accommodate a mass technological shift, such as the incorporation of quantum-powered AI into financial modelling systems, the implementation may cause more problems than benefits. Again, while this challenge is specific to the implementation of new technologies, it is a people problem, not a technology problem. In an organisation that cannot adapt, such implementations could prove not only detrimental to the culture and wellbeing of employees but potentially fatal to the business itself.

As with business, government systems too will need flexibility and agility in order to adopt emerging quantum-AI capabilities. Government agencies are bureaucratic and hierarchical, which can make emerging technology implementation especially challenging and time-consuming. As we have previously covered, the United States DoD has been particularly slow on the uptake of quantum technologies. When businesses and governments alike begin to consider the implementation of quantum and AI technologies, they should do so with the foresight and human-centric approach necessary for such system changes to take hold and function effectively. While it is challenging to regulate emerging technologies before they are fully functional in applied settings, there is a need for strong, proactive government foresight into how quantum and AI will develop, both in tandem and in isolation from one another, as well as within existing structures and systems.

Whose Role is it?

In the case of the informed and proactive regulation of emerging technologies, responsibility and control fall largely upon the shoulders of two powers: industry and government. Due to the fundamental role that the private sector plays in the development of quantum and AI, the United States provides an interesting case study. As we have debated previously on the blog, the United States’ heavy reliance on Silicon Valley as its technology development machine, especially for defence purposes, comes with a host of specific challenges and advantages.

Unlike in China, where national technology investment strategies are extremely well-funded and seamlessly incorporate the power of an utterly nationalistic and committed private sector, the United States’ approach leaves far more room for dissonance. Notoriously anti-establishment and globalist, Silicon Valley produces companies like Google, Amazon and Microsoft, where grassroots individualism often manifests as a central business characteristic. In comparison to China’s strategy, this creates a contentious and sometimes tumultuous relationship of dependence between the United States Department of Defence (DoD) and these tech giants, which it is hoped will deliver on national defence contracts.

Those who fear this dynamic will harm United States security interests have dubbed it Silicon Valley’s “techno-moralist” problem. But while this dynamic may appear as a challenge for the DoD in the race to achieve dominance over quantum and AI development, it presents an opportunity for industry to help bring the crucial lens of ethics onto the issue, which could serve a critical role in helping to regulate the development and fusion of these two technologies.

From a top-down perspective, the United States government has recently taken interest in the development of what is commonly known as ethical AI. In February of this year, a resolution, “Supporting the development of guidelines for the ethical development of artificial intelligence,” was introduced in the House of Representatives. However, this high-level conversation around the ethics of AI has yet to extend to quantum. This is due to many factors, including the public’s lack of engagement with the topic of quantum technologies, which can be challenging to understand in the abstract. Nonetheless, the United States government’s policy formation around the prospective social impacts of AI applications lends hope to the development of a similar approach to quantum technologies.

The Defense Advanced Research Projects Agency (DARPA) is the agency of the United States’ DoD responsible for funding and coordinating the development of emerging technologies for military applications. It is also among the biggest funders of basic science in the U.S. While DARPA pours billions of U.S. dollars into the development of emerging technologies, funding for research into the social impact of these technologies is severely lacking. As we have urged previously, “social impact research has the potential to make systems run smoother, more ethically and sustainably, giving an advantage to a science and technology environment that takes a long view rather than a short-term perspective.” Further, the DoD’s reliance on the “techno-moralists” to deliver the technological advancements needed to maintain U.S. military supremacy also presents an important argument for a shift in this imbalance. Given the context of Silicon Valley’s tech culture, this is not a debate that will be won by DoD money or muscle. Rather, it is one that must appeal to a sense of ethics. This can only be done by government demonstrating a strong commitment to understanding the social impacts that the future applications of quantum-AI technologies will undoubtedly produce.


Artificial Intelligence, Quantum Computing

When Quantum Meets AI: PROMISES, as Two of our Future’s Most Powerful Technologies Collide



Part II of III

Gabriella Skoff

Project Q seeks to perforate the silos built up around disciplines. We believe that the nature of science and technology is inherently social, and we strive to explore the ways in which these disciplines are cross-pollinated and how they intertwine. Likewise, the intersection of artificial intelligence (AI) and quantum physics, stemming from the disciplines of computer science and physics, respectively, demonstrates the value of building bridges between academically separated fields. This fusion presents benefits for both ends of the spectrum, promising to have an impact on the ways in which we understand and apply both quantum technology and artificial intelligence.

The emerging, interdisciplinary field of quantum machine learning is in a truly nascent stage of development. However, as demonstrated by the research cited in the introduction to this series, researchers are not waiting around for the full realization of a quantum advantage to begin testing these experimental waters. The results of early studies, using relatively low-powered quantum computers and machine learning systems, certainly lend support to the hype promising that together these technologies will make a big impact on our future. That being said, while these technologies are expected to affect a wide range of industries, there is no way to be sure exactly how they will develop, both separately and in conjunction. The only certainty is that the fusion of quantum and AI will produce unexpected outcomes. Still, the big names in both quantum and AI development, including NASA, IBM and Microsoft, race ahead to develop these technologies for commercial and state-sponsored use.

Dual Promise

On a fundamental level, machine learning could contribute to the development of quantum computing itself. One emerging application for machine learning in quantum computing is in the formulation of complex quantum algorithms. This is a formidable challenge that could be confronted with the help of machine learning, allowing quantum algorithms to be designed and learned by quantum computers themselves rather than composed through the time-consuming effort of a small pool of quantum experts. Researchers working at the intersection of quantum and AI are already using a physics-oriented approach to intelligent machines to design complex, quantum-entangled experiments. Early examples of quantum experiments like this one, enabled by the power of machine learning, demonstrate the potential for AI to play a vital role in the development of quantum computing.

The promise of this application for machine learning in quantum computing is best summed up by Dr. Irfan Siddiqi of Berkeley’s Quantum Nanoelectronics Laboratory: “Can we teach a machine quantum mechanics? Can a machine learn the rules of quantum mechanics? The answer is, absolutely.” Siddiqi and his colleagues use recurrent neural networks (RNNs), a type of AI algorithm that mimics the biology of the human brain to make sense of large data sets by identifying patterns in the data itself. While it should be noted that an RNN cannot actually understand quantum computing in the way that a human brain can, it can produce computational results that serve as a valuable tool for researchers. In this way, machine learning could be of important value to quantum computing by making sense of the vast amounts of data produced by quantum computers. Some predict that AI technology will eventually become a standard feature of quantum hardware for this reason.

Perhaps the most anticipated manifestation of the amalgamation of these technologies, however, comes in the form of a quantum boost to AI. Scientists are beginning to explore the question of how quantum computers could be used for intelligent data analysis. According to Canadian quantum computing company Xanadu, the potential areas of impact for quantum computing in AI include optimization, linear algebra, sampling and kernel evaluations. These four specializations of quantum computing could accelerate the development of AI, making it more effective and efficient. Examples of use for these quantum capabilities in machine learning include, respectively: helping AI to evaluate hard-to-compute functions, exponentially speeding up the linear-algebra workloads at the heart of machine learning, producing data samples to train AI on, and augmenting certain functions of classical computers, such as kernel methods, which can be difficult to compute classically. Today’s quantum computers are well-adapted to enhance these specific functions in AI, especially in combination with classical computers, as they are mostly special-purpose devices at this stage of development.
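To ground the kernel-evaluation case, here is a minimal sketch using Xanadu’s open-source PennyLane library (our choice of toolkit for illustration; the article does not name one). It estimates the overlap between two quantum feature states, a quantity a classical support-vector machine can use as a kernel value:

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def overlap_circuit(x1, x2):
    # Embed x1, then undo the embedding of x2; the probability of reading
    # |00> equals |<phi(x2)|phi(x1)>|^2, which is a valid kernel value.
    qml.AngleEmbedding(x1, wires=range(n_qubits))
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(n_qubits))
    return qml.probs(wires=range(n_qubits))

def quantum_kernel(x1, x2):
    return overlap_circuit(x1, x2)[0]

# Two toy data points; in practice, the kernel matrix over a whole data set
# would be fed to a classical learner, e.g. scikit-learn's
# SVC(kernel="precomputed").
print(quantum_kernel(np.array([0.1, 0.7]), np.array([0.4, 0.2])))
```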

Toward a More Objective AI?

As previously discussed, new research into the feature-mapping capabilities of quantum computers suggests they could enhance the sophistication of the data mapping that AI is capable of. This illustrates the potential for AI powered by quantum computing to surpass its current levels of efficacy by identifying patterns in data that are not visible to classical computers. According to IBM researchers: “AI systems thrive when the machine learning algorithms used to train them are given massive amounts of data to ingest, classify and analyze. The more precisely that data can be classified according to specific characteristics, or features, the better the AI will perform.” A critical role that quantum computers could play in AI as their quantum volume increases is to enable the computation of far more complex and vastly larger data sets through feature mapping. In less technical and more theoretical terms, this quantum capability could encourage more objective and less biased AI output, depending on the breadth and quality of the data it is trained on.

The wisdom of the crowd theory, popularised in the Information Age by citizen-powered sites such as Wikipedia or Reddit, underpins these websites’ success in producing surprisingly accurate and up-to-date information, given that most contributors are not “experts”. As with machine learning, these websites are only as good as their data points, in quality and in number. The wisdom of the crowd theory dictates that the larger and more diverse the pool of contributors producing information for any given article, the higher the levels of accuracy tend to be.

Quantum computing could offer machine learning a similar advantage by enabling a higher level of computational power to deal with a larger and more diverse sample of data points. It can lend this advantage because quantum systems can theoretically inhabit a superposition of a vast number of states at once. The power of quantum computing lies in its ability to exploit this quantum space, opening up the possibility of computing at a far greater scale than conventional computers can offer. With regard to machine learning, this means that the patterns and features AI algorithms look for in a data set could be sorted much faster and more efficiently than is currently possible. While this theory has not been tested, quantum computing could perhaps help with AI’s ever-present bias problem by enabling machine learning to process and identify patterns in larger and more complex data sets. Further, there is hope that this quantum-endowed capability, if realized, could allow for a more nuanced reading of data and lead AI systems to new findings.

Impacts in the Medical Sector

One domain where the ability of AI to pick up on subtler patterns from larger data sets, thanks to a quantum boost, could have an especially positive impact is in the medical field. There is significant hope amongst scientists that quantum computing and AI systems will revolutionize the medical sector through drug discovery, disease screening and by supporting clinical diagnosis and decision-making.

Though a controversial topic of discussion, it has now been shown that AI can make more accurate medical diagnoses and predictions than some doctors. While some may fear that this will eventually lead to AI replacing doctors, the far more likely outcome is that AI will function in support of them. In fact, AI is well-suited to the role of diagnosis, a task that involves scouring information for patterns and drawing conclusions from them.

Already, computer systems play a distinct role in this practice. Diagnostic aids known as clinical decision support systems (CDSSs) analyse medical data to assist in clinical decision-making. AI systems powered by quantum computing could greatly enhance the efficacy and speed of such systems, providing vital support to physicians in diagnostics and leading to better health outcomes for patients.

Quantum-powered AI could also play a significant role in drug discovery. Currently, it takes on average 12 years and $2.7 billion USD to get a new drug to market. This is an issue currently being tackled by start-ups like GTN, a company that is working to combine the power of quantum computing and AI in order to cut those numbers in half. Co-founder and CEO of GTN, Noor Shaker, has vowed to solve this problem by combining the forces of these two technologies in order to improve access to new drugs, especially in developing countries.

In the case of GTN’s technology, quantum computers, which are able to conquer the complexity of chemistry problems, produce simulations of chemical interactions and feed that information into a machine learning algorithm. The algorithm can then analyse the data and search for patterns that will lead to an understanding of the compounds. According to a feature by MIT Technology Review: “Thanks to the combination of both technologies, the GTN LTD software detects new molecules that have the potential to behave like current drugs. The company’s algorithms simulate, filter and select molecules that were virtually invisible to convert them into different medications”.

Examples like this one demonstrate how the novel use of these two technologies could revolutionize the pharmaceutical industry, making the drug discovery process more efficient, quicker and cheaper. The knock-on effects of producing applications like this for quantum-AI technologies could have a significant impact on improving access to drugs and getting better drugs to market faster, making life better for those living in both developed and developing countries.

Impacts for Weather Modelling

The quantum-AI revolution is also set to make waves in precision weather-prediction capabilities. The accuracy of weather forecasting impacts a huge number of sectors across the globe. From business to agriculture to government, the ability to accurately predict changing weather patterns is of critical importance to society. Currently, the quality of weather forecasting is highly variable, dependent on the ability to collect and make sense of the mass of data produced by satellites. With the support of future quantum computers, AI systems could better analyse meteorological data in order to make clearer and more reliable weather predictions. This could lead to higher levels of disaster preparedness, especially in more precariously located developing countries, and this proliferation of more accurate information could help promote more sustainable farming practices.

A far greater challenge in weather forecasting is the long term. Even at short range, about 90% of five-day forecasts are accurate, compared to an accuracy rate of roughly 50% for ten-day forecasts. Looking further into the future, accurate weather forecasting is more or less impossible. Quantum and AI, however, could present a solution to this problem. By improving our ability to produce and make sense of vast amounts of data, quantum and AI systems could have a huge impact on our ability to create weather-forecast models far into the future. As we have previously covered, the improved ability to create accurate, long-term weather models could have a serious impact on our ability to verify the discourse on global warming, potentially prompting society to take action against climate change at a far more radical scale and pace than we are currently motivated to pursue. For many, something as intangible as climate change is only real once seen—perhaps the collision of quantum and AI in this space could endow us with that critical foresight before it is too late.

Impacts in the Financial Sector

In the banking sector, fraud detection techniques are already algorithmic. Classical computers, however, are slow at this task and require an abundance of power to perform the complex mathematics needed to detect fraud. Machine learning, driven by the imminent arrival of more powerful quantum computing, could pick up on patterns and spot fraudulent activity with far greater ease and efficacy than is currently possible using classical computing. Likewise, quantum algorithms based on AI systems are set to transform current trading practices through their joint optimization capacities. Quantum-powered AI systems could potentially play a strong role in identifying trading opportunities algorithmically, without the need for human input.
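Today’s classical baseline is straightforward to sketch. The toy example below flags anomalous transactions with scikit-learn’s IsolationForest on made-up features; any quantum speed-up would sit behind an interface much like this one:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic transaction features: [amount, hour-of-day, distance-from-home].
normal = rng.normal(loc=[50, 14, 5], scale=[20, 4, 3], size=(1000, 3))
fraud = rng.normal(loc=[900, 3, 400], scale=[100, 1, 50], size=(10, 3))
X = np.vstack([normal, fraud])

# An unsupervised anomaly detector flags transactions that sit far from the
# bulk of the data; a prediction of -1 marks a suspected outlier.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)
print("flagged:", int((labels == -1).sum()), "of", len(X))
```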

Of course, any time the need for human input is lost, panic will rise. The fear that AI technologies will cause mass job loss across a number of sectors should not be ignored, but it should not incite panic either. Understanding the duality of the potential powers and threats posed by the fusion of quantum and AI could enable proactive policy, should governments invest the effort in understanding the social impact that these emerging technologies are sure to have.

Join us next week, as we take a deeper look into the nuances of these promises and explore the ways in which improper harnessing of these technologies could produce negative effects in our world.

Artificial Intelligence, Quantum Computing

When Quantum Meets AI: Promises and Perils as Two of Our Future’s Most Powerful Technologies Begin to Collide



Image via GCN

Part I of III

Gabriella Skoff

The game-changing role that quantum computing is set to play in machine learning is a topic of conversation shrouded in much hype. Theoretically, quantum computing could increase algorithmic capacity to crunch large datasets, enhancing the performance of deep learning and accelerating the progress of artificial intelligence (AI) far more quickly than is currently possible using conventional computers. While scientists have theorized that the power of machine learning could be exponentially enhanced by quantum computing, it has generally been believed that current quantum technology has not yet reached the level of maturity needed to lend this essential boost. New research presented by a joint team from IBM Research, MIT and Oxford, however, offers some experimental proof to back this theory.

The recent study published in Nature, entitled “Supervised learning with quantum-enhanced feature spaces”, demonstrates that currently available quantum computers can enhance a certain type of machine learning known as feature mapping. While the researchers acknowledge that we still have far to go before we achieve quantum advantage for machine learning, they have high hopes that the feature-mapping method could ultimately enable computation of far more complex data sets than currently possible. This news indicates that quantum and AI may collide much sooner than expected.

According to Dr. Jerry Chow, Manager of Experimental Quantum Computing at IBM: “[The researchers’] approach to quantum machine learning provides us with a path to understanding in which way even noisy, intermediate-scale quantum computers can outperform classical machine learning algorithms”. The team of researchers were able to achieve nearly perfect classification of their input data using a two-qubit quantum computing system. These results suggest that quantum computing is likely to have a major impact on machine learning, improving its speed and efficacy on a far larger scale than ever before, sooner rather than later.

Still, reaching a fully conscious AI is a far-off goal, if it can ever be achieved, and fully functional quantum computers may not sit just around the corner either. Regardless, researchers hope that quantum computing will speed up the process and help us to create better AI, faster. At the same time, AI is well-suited to assist in the development of quantum systems. It can play a critical role in helping scientists make sense of the vast amounts of data quantum computers produce, by identifying patterns and creating systems to understand that data at a rate far faster and more efficient than human beings are capable of.

As indicated by the aforementioned research and by the quantum community at large, the quantum-AI horizon draws ever nearer. Within this context, it is essential to explore how this fusion of two of the world’s most powerful technologies will impact our lives, as it surely will. This discussion, not confined to the topic of quantum and AI but extending to all emerging technologies, is characterized by a tendency toward polar extremes. The resulting debate is often charged either with fear-based language or with exaggerated promise and hype. This investigation seeks to explore the space in between, in hopes of promoting a more measured and nuanced approach to discussing both the promises and perils of the quantum-AI nexus.

Quantum computers remain a bit of a unicorn concept to many, an elusive idea we hear much about but understand little of. While many have heard that quantum computing should positively impact a number of sectors through, for example, drug discovery, climate forecasting and financial modelling, most of us would not know how to describe what a quantum computer even looks like. The reality is that this will not change for a long time. The conditions needed for a quantum computer to function, including isolation from environmental noise and temperatures near absolute zero, require the highly controlled environment of a laboratory setting. As such, quantum computers will not replace personal computers any time soon. However, the impact they will have, and in fact are already having even before quantum supremacy has been reached, will affect us in many ways.

AI, on the other hand, is something most of us already interact with on a daily basis. Many understand that the capabilities of AI are only as good as the amount and quality of data an algorithm can crunch. Companies like Google have been feeding their algorithms mass amounts of data for the last couple of years in an effort to boost their capacity to form “real” human interactions. Did you think that the predictive text function on Gmail was only introduced to help you write emails more quickly? Think again. And Google is far from alone in this practice. Most companies and even many governments collect our data, which in most cases we consent to giving them. Society has grown comfortable with this proliferating phenomenon, believing that the information captured will be applied to harmless ends, such as improving the user experience. While this is partly true, it is important to understand that the trajectory of most technologies is not contained within their declared use-value.

For this reason, we seek to explore the scope of quantum-AI fusion from a social impact perspective. The path to this destination, as well as our eventual arrival, presents a host of ethical challenges to confront. At the same time, these technologies promise to revolutionize a diverse array of systems and practices across a broad range of disciplines that could make this world a better and fairer place for many. The following segments will explore some of the possible positive and potential negative impacts that the merging of quantum and AI could unleash. In doing so, we hope that the entire spectrum of these emerging technologies’ social impact will be considered. With the understanding that for the peaks of advancement and positive impact to be reached, the troughs too must be thoroughly investigated, we present an exploration of how we could channel and regulate these technologies in order to boost the positive and diminish the negative impacts they will produce.

Join us next week as we publish the first follow-up in this series on the promises of quantum-AI.

Artificial Intelligence

Moving Beyond Ethics in Data Science



Image Credit: Centre for Translational Data Science, University of Sydney.

Alexander Vipond

At the recent Ethics of Data Science conference, hosted by the Centre for Translational Data Science at the University of Sydney, an interdisciplinary panoply of software engineers, machine learning experts, clinicians and lawyers came together to discuss how artificial intelligence and big data are changing society.

What quickly became clear was that technological advancement in the field is moving so fast that participants were grappling not only with the recent and future impacts on their industries but also with the sheer pace of technological change itself.

Some presenters argued that the recent talk of ethical AI principles from big tech companies was merely a form of ethics washing, a strategic ploy to delay and weaken regulation on privacy, dangerous content and data rights. Other speakers opined that ethics were simply not enough: that in order for them to be of real value to society we need to move beyond ethical principles to enforceable laws, changes to organisational cultures and clear regulations.

Many of the legal experts in attendance outlined the knowledge gap between technologists and other parts of society, citing the need to properly educate judges, policymakers and politicians on AI so they can make informed decisions. These arguments highlighted the Australian Government’s recent push to strengthen penalties for companies that breach privacy regulations, accompanied by an increase in funding for the Office of the Information Commissioner to pursue data breaches. The recent acknowledgement by Attorney General Christian Porter, as well as by panellists at the conference, that Australian data laws are insufficient to protect citizens in the current environment led to many proposals for change.

These included Australian states joining the European Union’s General Data Protection Regulation and adopting international human rights law as a framework for wider regulation of emerging technologies. There was also a concerted focus on how to protect marginalised communities most at risk of exploitation. For example, many presenters noted algorithms that reinforced racism in US prison sentencing or sexism in recruitment practices. 

On this front, many of the technical presentations delivered a variety of methods to ensure greater fairness in the design process of machine learning algorithms, and outlined the important technical limitations and trade-offs that need to be considered when companies want to harness the power of artificial intelligence. The difference between ethical principles and the formal mathematical models used to embed them in tech, the types of questions machine learning can and can’t answer, and ways to reduce bias in data sets gave the interdisciplinary audience a display of the improvements that could be made with creative thinking and consideration of a broader worldview.
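One of the simplest of those formal models is a demographic-parity check, which compares favourable-outcome rates across groups. Here is a toy sketch with hypothetical data (the numbers are invented for illustration):

```python
import numpy as np

# Hypothetical model decisions (1 = favourable outcome) and a protected
# attribute (0/1 encoding two demographic groups).
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

rate_0 = decisions[group == 0].mean()
rate_1 = decisions[group == 1].mean()

# Demographic parity asks that these rates be (approximately) equal;
# the gap is one crude, auditable measure of disparate impact.
print(f"group 0: {rate_0:.2f}, group 1: {rate_1:.2f}, "
      f"gap: {abs(rate_0 - rate_1):.2f}")
```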

This gave rise to questions of how to address inclusiveness in the industry, and to the geopolitical spectre of business and state-based competition. While this competition has driven huge investment, it has also prompted a new technological race, the consequences of which must be balanced so that positive breakthroughs for society can be maximised and risks can be addressed. The foundations of clear laws and a national strategy on AI in Australia (with funding to support implementation) are yet to be laid. The conference gave participants a window into the organisational coordination and creative solutions that could be embraced with strong leadership from government and industry.

The author would like to thank Dr Roman Marchant, Professor Sally Cripps, Professor Nick Enfield and the Advisory board for organising the conference.

Artificial Intelligence

The Robots in Your Supermarket



Jayson Waters

AI and other algorithmic technologies have long played a major role in global society and governance. We have previously explored how ‘dumb’ AI supports and maintains fundamental infrastructure and services. In an interesting turn that will impact your shopping basket more than warfare, supermarket chains around the world have begun implementing AI-enabled facial recognition and tracking in their stores.

According to a recent article by Bryan Pearson published in Forbes, stores such as Walgreens, Guess, and Kroger are using AI systems to tailor and target ads to customers. Unlike traditional sales database systems that can only track individual products, AI systems can track purchasing patterns and trends en masse. In turn, this information can be used to better determine the susceptibility of customers to various ads and ultimately adapt the shopping experience – from lighting to pricing – to individual shoppers in-store.

Guess and Alibaba have teamed up to create a flagship smartstore in which everything from mirrors to clothing racks and fitting rooms will have embedded AI. According to Edward Park, senior vice president at Guess North America, “Every item is enabled with Bluetooth low-energy chips, RFID and motion sensors, which enable all of our inventory to be tracked and analyzed.” This data, if analysed properly, will also allow Guess to monitor traffic patterns and customer interest in items.

On the plus side, AI technologies could allow companies to better predict which products will appeal to customers and thus avoid waste by not producing certain items. Alternatively, the greater depth of knowledge regarding customer preferences may allow advertisers to influence customer choices to a greater degree than ever before.

See here for the full article.

Quantum Research

The Quantum Question of an Objective Reality


Rick and Morty explore the multiverse, a spin on the Many Worlds Interpretation.
Image via Adult Swim

Gabriella Skoff

Thought experiments in the domain of quantum physics have long captured the public imagination with their strange and “spooky” nature. Schrödinger’s at once dead-and-alive cat and its lesser-known extension, Eugene Wigner’s “Wigner’s Friend”, are two famous thought experiments which examine the concept of superposition and the role of the observer in quantum interactions. Until very recently, quantum technologies were simply not advanced enough to replicate Wigner’s Friend, and an experiment modelled on Schrödinger’s Cat would no doubt raise serious animal rights concerns. As such, since their inception these thought experiments have been relegated to the realm of theory and imagination.

That changed last week, when Massimiliano Proietti and his team at Heriot-Watt University in Edinburgh succeeded in performing an experiment modelled on the Wigner’s Friend scenario in a laboratory setting. Through this experiment, the researchers sought to explore what is known as the measurement problem—the question of how, and if, wave function collapse occurs—the central problem in quantum mechanical interpretations.

Using the groundwork previously laid by researchers from the University of Vienna in Austria, the Edinburgh team carried out an extension of the Wigner’s Friend scenario using a “state-of-the-art 6-photon experiment”. The researchers used six entangled photons to simulate a scenario in which the roles of both Wigner and his friend were occupied by measuring equipment instead of scientists. As in the thought experiment: “Wigner’s friend measures the polarization of a photon and stores the result. Wigner then performs an interference measurement to determine if the measurement and the photon are in a superposition.”

The experimental setup, as depicted by the researchers. Image via arxiv.org/abs/1902.05080
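In schematic terms (a standard textbook rendering of the scenario, not notation taken from the paper itself), the friend’s measurement of a photon in superposition leaves the photon and the friend’s record entangled from Wigner’s outside perspective:

```latex
\frac{1}{\sqrt{2}}\bigl(|h\rangle + |v\rangle\bigr)\otimes|\text{ready}\rangle_F
\;\longrightarrow\;
\frac{1}{\sqrt{2}}\bigl(|h\rangle\,|\text{saw } h\rangle_F
  + |v\rangle\,|\text{saw } v\rangle_F\bigr)
```

To the friend, the measurement has a definite outcome; to Wigner, the joint system is still in superposition, something his interference measurement can in principle detect. The experiment operationalizes this disagreement, with photons standing in for the observers.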

The as-yet-unpublished results prove Wigner’s theory correct. The researchers’ findings suggest that two observers of a quantum interaction can observe two different realities, both equally real and correct simultaneously, even if they contradict each other. The implication of this assertion is that in quantum physics there is no objective reality; that reality itself is observer-dependent. The authors of the study suggest that these results necessitate an evolution in our understanding of quantum theory, a shift toward theoretical frameworks that are observer-dependent and away from interpretations that are not.

The impact of this conclusion, which proposes an unconventional interpretation of the notion of reality, could extend far beyond the discipline of physics.

Strikingly, the assumption that multiple, contradictory realities can coexist calls the concept of objective fact—the very pursuit of science itself—into question. This point, posed in an article by the MIT Tech Review, jeopardizes the assumption of the existence of “universal facts”. How might an understanding of the world around us, in which there is no shared, objective reality, change not just science but also social theory?

Of course, it is hasty to argue that quantum theory applies seamlessly to the social world, suggesting there is a direct, logical mapping. Thus far, the topic of how the microscopic quantum world affects our macroscopic, visible world has not been fully explored through research. That does not mean, however, that there is no symmetry. The question of the universality of quantum theory continues to permeate thinking today, much as it captured the imagination of quantum theorists in the early 1900s.

Schrödinger’s Cat (1935), for example, explores the question of the relationship between quantum and classical reality. Among other revelations, this thought experiment suggests that projecting nanoscale quantum theory onto a macro-scale experiment produces logic-defying results, ultimately leading to the conclusion that a cat cannot be both alive and dead at the same time. Schrödinger wished to argue that the dominant Copenhagen Interpretation of quantum physics, which states that an object in a state of quantum superposition can exist in all possible configurations, does not hold at the macroscale.

Nevertheless, this problem posed by the Copenhagen Interpretation, considered by Schrödinger to be settled by his theoretical experiment, persists.

The findings of the Edinburgh team suggest that in fact Schrödinger’s cat can be both dead and alive at the same time, leading to a whole new set of questions and theories. One way to accommodate for the experiment’s result, the authors write: “…is by proclaiming that “facts of the world” can only be established by a privileged observer—e.g., one that would have access to the “global wavefunction” in the many worlds interpretation or Bohmian mechanics.”

As the authors suggest, this research potentially validates the Many-Worlds Interpretation (MWI). The MWI, as the name suggests, stipulates that each quantum interaction produces not just one result but all possible results, which exist simultaneously, branching off to form different versions of reality and producing many independent histories, futures and worlds. The researchers propose that in order for the concept of objective reality to function in the context of their findings, the holder of that knowledge must, in a godlike fashion, have access to all information from every possible reality.

Scientific theory lays claim to knowing and understanding the world around us as it really is. Quantum physics, however, has the potential to unravel this claim by posing the most fundamental question of all: What is reality?

Artificial Intelligence

Words of Consequence


Many hands make light work when AI can automate content generation. Image credit: Ghost in the Shell (2017)

Alexander Vipond

OpenAI, a San Francisco-based AI research organisation, has created a new text generator called GPT-2. The transformer-based language modelling system is capable of generating large amounts of high-quality synthetic text, and of performing reading comprehension, machine translation and summarisation in seven languages.

GPT-2 is designed to predict the next words that follow from a text sample. Trained on 8 million webpages from the internet, it utilises zero-shot learning, which means it can extrapolate short text samples into full articles based on entirely new information, without task-specific training.
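To make that mechanism concrete, here is a minimal Python sketch of the next-word loop, assuming the small publicly released GPT-2 weights and the open-source Hugging Face transformers library rather than the withheld full system:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the small public GPT-2 checkpoint (not the withheld full model).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "A train carriage containing controlled nuclear materials was stolen in Cincinnati today."
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation token by token; each step predicts the next
# word-piece given everything generated so far.
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Run repeatedly, the same prompt yields a different continuation each time, which is why several tries may be needed to land on a convincing passage.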

As a general-purpose system, GPT-2 has beaten several records held by domain-specific language modelling systems, and it significantly lowers the complexity and cost of deploying scalable, customisable text generation.

Here’s an example:

Sample prompt: A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

Model Completion (machine-written, first try)

The incident occurred on the downtown train line, which runs from Covington and Ashland stations.

In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.

“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”

The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.

The Nuclear Regulatory Commission did not immediately release any information.

According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.

“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.”

The system constructs a story, inventing fake institutions, places and quotations, and blending them to produce the illusion of authenticity. The model isn’t perfect, and it may take several tries to get a human-quality tranche of text, but the speed and scale of the system are such that OpenAI has withheld the full system from publication.

This is because GPT-2 could be used by malicious actors to wage large-scale information warfare and undermine one of the central principles of the cybersecurity triad: the integrity of information.

OpenAI has acknowledged this threat, citing GPT-2’s capacity to generate fake news en masse, impersonate others, and automate fake content, phishing and spam. By tweaking the system, one can produce endless positively or negatively angled articles. One could also customise it for specific issues to bolster the apparent veracity of the supporting information in the synthetic content it produces, making it all the more difficult to tell fact from fiction. OpenAI has stated that it expects tools like GPT-2 to be widely available within the next two years.

As dictatorships and authoritarian regimes actively seek to spread misinformation to disrupt elections, obfuscate wars, and insist assassins prefer to spend their time admiring English churches, GPT-2 is a highly attractive tool and a warning of what’s to come.

The malicious use of AI tools will challenge the integrity of the global digital commons, fuelled by states that view the open flow of information as a threat to their governance. The tools will then be passed down to organised crime and developing regimes. As the recent case of Project Raven shows, even as countries increasingly try to secure their intellectual property, their cyber tools and tactics are up for sale.

As William Gibson once said, “the future is already here, it’s just not evenly distributed”. So now that we know the threat is here, what can we do to counter the risks at the different levels of its distribution?

OpenAI, for its part, will continue its research.

Quantum International Relations

Breaking the Internet: A Question of Timing



Are we running out of time to save the internet? Image Credit: The Melting Watch, Salvador Dali, 1954.

Alexander Vipond

One of the most hyped topics in quantum computing is the potential for quantum computers to break the internet’s security protocols. It has been surmised that by running two theoretical algorithms on a quantum computer, Grover’s and Shor’s algorithms (designed by Lov Grover and Peter Shor in 1996 and 1994 respectively), one could break the encryption standards of the internet. Grover’s algorithm could be used to subvert HTTPS (Hypertext Transfer Protocol Secure) connections, which authenticate websites as you browse the internet. Shor’s algorithm could break the RSA public-key cryptosystem (named after Ron Rivest, Adi Shamir and Leonard Adleman), which secures everything from your online bank-card transactions to email and phone calls.
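To see why fast factoring is fatal to RSA, consider a toy Python sketch of our own, with absurdly small numbers where real keys use 2048-bit moduli; the factoring step is exactly what Shor’s algorithm makes tractable:

# Toy RSA. The security of the scheme rests entirely on the secrecy of p and q.
p, q = 61, 53            # secret primes: what Shor's algorithm recovers from n
n = p * q                # public modulus
phi = (p - 1) * (q - 1)  # computable only if you know the factors
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse; Python 3.8+)

message = 42
cipher = pow(message, e, n)    # anyone can encrypt with the public key (e, n)
recovered = pow(cipher, d, n)  # only the holder of the factors can decrypt
assert recovered == message

An attacker who can factor n quickly can recompute phi and d, and with them read any intercepted traffic.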

However, all of this requires a powerful quantum computer that has yet to be invented. How many qubits would be necessary to run these algorithms? What scientific breakthroughs are needed? How long will it take to build one? The US National Academies of Sciences, Engineering, and Medicine has released a report titled “Quantum Computing: Progress and Prospects”, which details not only the technical difficulties of racing to break the internet but also the human challenges of creating a secure post-quantum world.

The report presents two key findings.

One: Given the current state of quantum computing and recent rates of progress, it is highly unlikely that a quantum computer able to compromise RSA or any comparable discrete-logarithm-based public-key cryptosystem will be built within the next decade.

Two: Even if a quantum computer that can decrypt current cryptographic ciphers is more than a decade off, the hazard of such a machine is high enough – and the time frame for transitioning to a new security protocol sufficiently long and uncertain – that the prioritisation of the development, standardisation and deployment of post-quantum cryptography is critical for minimising the chance of a security and privacy disaster.

This demonstrates the severity of the risk a powerful quantum computer poses, whatever the timeline to its realisation.

The National Institute of Standards and Technology (NIST) in the US runs a post-quantum cryptography project, which called for new post-quantum cryptosystems last year; 69 proposals passed the first round. NIST has proposed a timeline of 2022–2024 for producing a new draft standard for the world. That leaves only a few years to whittle down and test these cryptosystems to find a new standard.

The key issues are time and human cooperation. As Adi Shamir noted at the most recent RSA Conference cryptography panel, transforming a new cryptosystem into a widely adopted standard takes about 15 years; that was the case for both RSA and elliptic-curve cryptography. This is partly a function of the small size of the cryptography community, which numbers only in the thousands globally. That makes it difficult to test multiple cryptosystems effectively, and NIST has only a few years to choose a successor standard for a post-quantum world. So it is highly likely that it will rely on older, tested standards and increase their bit size, while newer cryptosystems will take decades longer to be vetted.
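The bit-size stopgap works against Grover’s algorithm in particular, because its speedup is quadratic rather than exponential. A back-of-the-envelope sketch of our own, not the report’s:

# Grover's algorithm searches an unsorted space of 2**k keys in roughly
# 2**(k/2) steps, so doubling the key size restores the lost margin.
for k in (128, 192, 256):
    print(f"{k}-bit key: ~2^{k} classical guesses vs ~2^{k // 2} Grover iterations")

On this arithmetic a 256-bit symmetric key retains 128-bit security against a quantum adversary, whereas RSA, facing Shor’s exponentially faster attack, cannot be rescued by larger keys alone.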

Newer cryptosystems may well benefit from this time lag: researchers will gain an increasingly clear view of what quantum computers are actually capable of, and can refine quantum-resistant cryptosystems accordingly as the two technologies develop in tandem. If the current transition is managed carefully, global standards developed and adequate resources provided for the switchover, it could be possible to move safely into a post-quantum world.

This does, however, rely on two large unknown variables. The first is the rate of the scientific breakthroughs needed to complete a quantum computer capable of attacking strong encryption. The second is the intent of the actor who first procures the capability. If breakthroughs come faster than the global community can adopt new standards, countries will be left exposed; and because this type of research is often conducted in secret, the global community may have no easily identifiable progress markers to measure against. The second variable is more pernicious. If a company reaches the milestone first, it is likely to announce its victory and unlikely to undermine the internet infrastructure that secures its profits. If a country reaches the milestone first, however, it may wish to attack and steal data for geopolitical advantage or commercial gain, and the world may not know until the first attack occurs.

This puts the race to break the internet into perspective. It is a decades-long systemic risk that intertwines human and technical problems in a game that represents the apex of trust, security and privacy in the world’s most important communications system.

Quantum Applications

Can quantum technologies help save the world?


Image via United Nations University

Part 3 of 3: Modelling 

Gabriella Skoff

The final instalment in this series explores the modelling capacity that quantum computing promises to unlock. Modelling is a key tool in environmental security, enabling scientists and researchers to explore how the natural environment will react to changing conditions over time. Quantum computers are expected to enable advanced modelling by expanding the rate and scope of mathematical modelling capacity well beyond that of today’s computers. While the impact of this is most often cited with regard to chemical reactions and the pharmaceutical and health industries, environmental security, too, stands to be a great beneficiary of this quantum application.
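One illustrative reason, in rough numbers of our own rather than any cited study: classically storing the joint quantum state of n interacting two-level systems, the kind of object at the heart of chemistry-scale simulation, requires 2**n complex amplitudes, while a quantum computer represents it in n qubits.

# Classical memory needed to hold the state of n coupled two-level systems,
# at 16 bytes per complex amplitude (complex128).
for n in (10, 30, 50):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"n = {n}: {amplitudes:.3g} amplitudes, ~{gigabytes:.3g} GB")

At n = 50 the classical requirement is already around eighteen petabytes, which is why simulating quantum systems is so often named as a first killer application.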

Quantum computers will enable wider and more in-depth analysis of complex problems with more variables than ever before: a natural fit for observing and predicting environmental challenges shaped by a multitude of human and natural forces. Quantum computational modelling is well suited to exactly the kinds of complexity that classical computers struggle with, and the potential impact of this application reaches from weather forecasting to disaster preparedness. As one researcher writes of the promise quantum computing holds for numerical weather prediction (NWP):

The seamless systems based on the unified technology will process observational data, produce weather, climate, and environment forecasts on time scales from several minutes to decades; they will compute the probability of the occurrence of meteorological, hydrological, marine, and geophysical severe natural phenomena for many spatial scales.

The importance of that potential should not be underestimated. While the practical value of this technology is obvious, the hidden impact it holds for environmental policy is immense.

No other stress contributes as much to environmental insecurity as climate change. This macro-level problem has so far proven “too big” to tackle effectively at the level of global governance, with climate change deniers and sceptics in both lay and scientific communities. Arguably, the main reason a complete scientific consensus on climate change remains elusive, a gap that lends cover to climate change denial among lay audiences, is the limited power of current climate forecasting and models. With the immensity of variables and factors at play on a timescale of years or even decades, it is no easy task for our current computers to process all of this data and produce accurate climate models. Even on a daily basis this presents an incredible challenge, with weather conditions varying from hour to hour; there is always uncertainty in weather modelling due to the changeability of many meteorological factors. How many times have you heard on the morning news that heavy rain is forecast, packed your umbrella, and then carried it around uselessly as the sun shone all day?
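That everyday frustration has a structural cause: the atmosphere is chaotic, so tiny errors in the initial measurements grow quickly. In the toy Python sketch below, ours and using the logistic map as a stand-in for real atmospheric dynamics, five “ensemble members” started a millionth apart end up in completely different states within thirty steps:

import random

# Iterate the logistic map, a textbook chaotic system, from a perturbed start.
def run(x, steps=30, r=3.9):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

base = 0.5
ensemble = [run(base + random.uniform(-1e-6, 1e-6)) for _ in range(5)]
print([round(x, 3) for x in ensemble])  # wildly different endpoints

Operational forecasters cope by running many such perturbed simulations and pooling the results, which multiplies the computational load; this is one place where the extra modelling capacity described above would bite.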

Although accurate climate change modelling may flummox a classical computer, this is exactly the sort of task at which a quantum computer could excel. Provided with accurate and reliable modelling of climate change, perhaps the remaining roughly 3% of climate change sceptics in the scientific community could be convinced of the urgency of promoting sustainable environmental policy to combat climate change. Of course, even with 100% consensus among scientists, climate change deniers will still resent the government funding and lifestyle changes that inducing mass change will inevitably require. Achieving that consensus, however, may prove to be the impetus society needs to prioritise change.

Quantum technologies hold immense promise for confronting the multifaceted challenge of environmental security. As with most things quantum, we cannot predict with certainty; but time, along with an appropriate prioritisation of resources toward our greatest collective threat, will decide just how helpful these applications truly are.

Gabriella Skoff is a Researcher with Project Q and collaborates with Dr Serdar Turkeli of the United Nations University-MERIT, where she continues her research on the topic of emerging quantum technologies and environmental sustainability.