Artificial Intelligence, Quantum Computing

When Quantum Meets AI: Promises and Perils as Two of Our Future’s Most Powerful Technologies Begin to Collide



Image via GCN

Part I of III

Gabriella Skoff

The game-changing role that quantum computing is set to play in machine learning is a topic of conversation shrouded in much hype. Theoretically, quantum computing could increase algorithmic capacity to crunch large datasets, enhancing the performance of deep learning and accelerating the progress of artificial intelligence (AI) far beyond what is currently possible with conventional computers. While scientists have theorized that quantum computing could exponentially enhance the power of machine learning, it has generally been believed that current quantum technology has not yet reached the maturity needed to lend this essential boost. New research presented by a joint team from IBM Research, MIT and Oxford, however, offers some experimental proof to back this theory.

The recent study published in Nature, entitled “Supervised learning with quantum-enhanced feature spaces”, demonstrates that currently available quantum computers can enhance a type of machine learning known as feature mapping. While the researchers acknowledge that we still have a long way to go before we achieve quantum advantage for machine learning, they have high hopes that the feature-mapping method could ultimately enable computation on far more complex datasets than is currently possible. This news suggests that quantum and AI may collide much sooner than expected.

According to Dr. Jerry Chow, Manager of Experimental Quantum Computing at IBM: “[The researchers’] approach to quantum machine learning provides us with a path to understanding in which way even noisy, intermediate-scale quantum computers can outperform classical machine learning algorithms”. The team achieved nearly perfect classification of their input data using a two-qubit quantum computing system. These results suggest that quantum computing is likely to have a major impact on machine learning, improving its speed and efficacy at a far larger scale, sooner rather than later.
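The feature-map idea at the heart of the paper can be illustrated classically: data that cannot be separated by a straight line in its original space is lifted into a higher-dimensional feature space where a linear boundary suffices, and the quantum proposal replaces that lift with a quantum circuit whose feature space is too large to simulate classically. The sketch below is a minimal classical analogue only, using scikit-learn and a hand-picked lift rather than the paper’s quantum circuit:

```python
# Classical illustration of the feature-map (kernel) idea that the
# quantum approach generalises: lift 2-D points that are not linearly
# separable into a space where a linear classifier works.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy data: points inside a disc (class 0) vs. outside it (class 1).
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 0.5).astype(int)

def feature_map(x):
    """Map (x1, x2) -> (x1, x2, x1^2 + x2^2). This hand-picked lift makes
    the circular boundary linearly separable; a quantum feature map plays
    the same role, but in an exponentially large Hilbert space."""
    return np.array([x[0], x[1], x[0] ** 2 + x[1] ** 2])

Phi = np.apply_along_axis(feature_map, 1, X)

# A linear SVM in the lifted space separates the classes almost perfectly.
clf = SVC(kernel="linear").fit(Phi[:300], y[:300])
print("held-out accuracy:", clf.score(Phi[300:], y[300:]))
```

The quantum version evaluates the inner products in that enormous feature space directly on hardware, which is where the hoped-for advantage lies.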

Still, a fully conscious AI remains a far-off goal, if it can ever be achieved, and fully functional quantum computers may not be just around the corner either. Regardless, researchers hope that quantum computing will speed up the process and help us create better AI, faster. At the same time, AI is well suited to assist in the development of quantum systems: it can play a critical role in helping scientists make sense of the vast amounts of data those systems produce, identifying patterns and creating systems to understand it far faster and more efficiently than human beings can.

As indicated by the aforementioned research and by the quantum community at large, the quantum-AI horizon draws ever nearer. Within this context, it is essential to explore how this fusion of two of the world’s most powerful technologies will impact our lives, as it surely will. This discussion, which extends beyond quantum and AI to all emerging technologies, tends toward polar extremes: the resulting debate is often charged either with fear-based language or with exaggerated promise and hype. This investigation seeks to explore the space in between, in the hope of promoting a more measured and nuanced approach to discussing both the promises and perils of the quantum-AI nexus.

Quantum computers remain a bit of a unicorn concept to many: an elusive idea we hear much about but understand little of. While many have heard that quantum computing should positively impact a number of sectors through, for example, drug discovery, climate forecasting and financial modelling, most of us would not know how to describe what a quantum computer even looks like. The reality is that this will not change for a long time. The conditions a quantum computer needs to function, including isolation from environmental noise and temperatures near absolute zero, require the highly controlled environment of a laboratory. As such, quantum computers will not replace personal computers any time soon. However, the impact they will have, and in fact are already having even before quantum supremacy has been reached, will touch us in many ways.

AI, on the other hand, is something most of us already interact with on a daily basis. Many understand that the capabilities of AI are only as good as the amount and quality of data an algorithm can crunch. Companies like Google have been feeding their algorithms massive amounts of data for years in an effort to boost their capacity to form “real” human interactions. Did you think the predictive text function in Gmail was introduced only to help you write emails more quickly? Think again. And Google is far from alone in this practice. Most companies and even many governments collect our data, which in most cases we consent to giving them. Society has grown comfortable with this now-pervasive phenomenon, believing that the information captured will be applied to harmless ends, such as improving the user experience. While this is partly true, it is important to understand that the trajectory of most technologies is not contained within their declared use-value.

For this reason, we seek to explore the scope of quantum-AI fusion from a social impact perspective. The path to that destination, as well as the arrival itself, presents a host of ethical challenges to confront. At the same time, these technologies promise to revolutionize a diverse array of systems and practices across a broad range of disciplines, and could make this world a better and fairer place for many. The following segments will explore some of the possible positive and potential negative impacts that the merging of quantum and AI could unleash. In doing so, we hope that the entire spectrum of these emerging technologies’ social impact will be considered. Understanding that the peaks of advancement and positive impact can only be reached if the troughs too are thoroughly investigated, we present an exploration of where we could channel and regulate these technologies to boost the positive and diminish the negative impacts they will produce.

Join us next week as we publish the first follow-up in this series on the promises of quantum-AI.

Artificial Intelligence

Moving Beyond Ethics in Data Science



Image Credit: Centre for Translational Data Science, University of Sydney.

Alexander Vipond

At the recent Ethics of Data Science conference, hosted by the Centre for Translational Data Science at the University of Sydney, an interdisciplinary panoply of software engineers, machine learning experts, clinicians and lawyers came together to discuss how artificial intelligence and big data are changing society.

What quickly became clear was that technological advancement in the field is moving so fast that participants were grappling not only with the recent and future impacts on their industries but with the sheer pace of technological change itself.

Some presenters argued that the recent talk of ethical AI principles from big tech companies was merely a form of ethics washing, a strategic ploy to delay and weaken regulation on privacy, dangerous content and data rights. Other speakers opined that ethics alone were simply not enough: for ethical principles to be of real value to society, we need to move beyond them to enforceable laws, changes to organisational cultures and clear regulations.

Many of the legal experts in attendance outlined the knowledge gap between technologists and other parts of society, citing the need to properly educate judges, policymakers and politicians on AI so they can make informed decisions. These arguments highlighted the Australian Government’s recent push to strengthen penalties for companies that breach privacy regulations, accompanied by an increase in funding for the Office of the Australian Information Commissioner to pursue data breaches. The recent acknowledgement by Attorney-General Christian Porter, as well as by panellists at the conference, that Australian data laws are insufficient to protect citizens in the current environment led to many proposals for change.

These included Australian states aligning with the European Union’s General Data Protection Regulation and adopting international human rights law as a framework for wider regulation of emerging technologies. There was also a concerted focus on how to protect the marginalised communities most at risk of exploitation. For example, many presenters pointed to algorithms that reinforced racism in US prison sentencing or sexism in recruitment practices.

On this front, many of the technical presentations offered methods to ensure greater fairness in the design of machine learning algorithms and outlined the important technical limitations and trade-offs to be considered when companies want to harness the power of artificial intelligence. Talks on the difference between ethical principles and the formal mathematical models used to embed them in technology, on the types of questions machine learning can and cannot answer, and on how to reduce bias in datasets gave the interdisciplinary audience a sense of the improvements that creative thinking and a broader worldview could deliver.

This gave rise to questions of how to address inclusiveness in the industry, and to the geopolitical spectre of business and state-based competition. While this competition has led to huge investment, it has also prompted a new technological race whose consequences must be balanced so that positive breakthroughs for society can be maximised and risks addressed. The foundation of clear laws and a national strategy on AI in Australia (with funding to support implementation) is yet to be laid. The conference gave participants a window into the organisational coordination and creative solutions that could be embraced with strong leadership from government and industry.

The author would like to thank Dr Roman Marchant, Professor Sally Cripps, Professor Nick Enfield and the Advisory board for organising the conference.

Artificial Intelligence

The Robots in Your Supermarket



Jayson Waters

AI and other algorithmic technologies have long played a major role in global society and governance. We have previously explored how ‘dumb’ AI supports and maintains fundamental infrastructure and services. In an interesting turn that will affect your shopping basket more than warfare, supermarket chains around the world have begun implementing AI-enabled facial recognition and tracking in their stores.

According to a recent article by Bryan Pearson published in Forbes, stores such as Walgreens, Guess, and Kroger are using AI systems to tailor and target ads to customers. Unlike traditional sales database systems that can only track individual products, AI systems can track purchasing patterns and trends en masse. In turn this information can be used to better determine the susceptibility of customers to various ads and ultimately adapt the shopping experience – from lighting to pricing – to individual users in-store.

Guess and Alibaba have teamed up to create a flagship smart store in which everything from mirrors to clothing racks and fitting rooms will have embedded AI. According to Edward Park, senior vice president at Guess North America, “Every item is enabled with Bluetooth low-energy chips, RFID and motion sensors, which enable all of our inventory to be tracked and analyzed.” This data, if analysed properly, will also allow Guess to monitor traffic patterns and customer interest in items.

On the plus side, AI technologies could allow companies to better predict which products will appeal to customers and thus avoid waste by not producing certain items. On the other hand, this greater depth of knowledge about customer preferences may allow advertisers to influence customer choices to a greater degree than ever before.

See here for the full article.

Quantum Research

The Quantum Question of an Objective Reality


Rick and Morty explore the multiverse, a spin on the Many Worlds Interpretation.
Image via Adult Swim

Gabriella Skoff

Thought experiments in the domain of quantum physics have long captured the public imagination with their strange and “spooky” nature. Schrödinger’s at once dead-and-alive cat and its lesser-known extension, Eugene Wigner’s eponymous Wigner’s Friend, are two famous thought experiments that examine the concept of superposition and the role of the observer in quantum interactions. Until very recently, quantum technologies were simply not advanced enough to replicate Wigner’s Friend, and an experiment modelled on Schrödinger’s Cat would no doubt raise serious ethical concerns about animal rights. As such, since their inception these thought experiments have been relegated to the realm of theory and imagination.

That changed last week, when Massimiliano Proietti and his team at Heriot-Watt University in Edinburgh succeeded in performing an experiment modelled on the Wigner’s Friend scenario in a laboratory setting. Through this experiment, the researchers sought to explore what is known as the measurement problem—the question of whether, and how, wave function collapse occurs—the central problem in quantum mechanical interpretations.

Using the groundwork previously laid by researchers from the University of Vienna in Austria, the Edinburgh team carried out an extension of the Wigner’s Friend scenario using a “state-of-the-art 6 photon experiment”. The researchers used six entangled photons to simulate a scenario in which the role of both Wigner and his friend were occupied by measuring equipment instead of scientists. As in the thought experiment: “Wigner’s friend measures the polarization of a photon and stores the result. Wigner then performs an interference measurement to determine if the measurement and the photon are in a superposition.”

The experimental setup, as depicted by the researchers. Image via arxiv.org/abs/1902.05080

The as-yet-unpublished results appear to confirm Wigner’s theory. The researchers’ findings suggest that two observers of a quantum interaction can observe two different realities, both equally real and correct, even if they contradict each other. The implication is that in quantum physics there is no objective reality; reality itself is observer-dependent. The authors of the study suggest that these results necessitate an evolution in our understanding of quantum theory, a shift toward theoretical frameworks that are observer-dependent and away from interpretations that are not.

The impact of this conclusion, which proposes an unconventional interpretation of the notion of reality, could extend far beyond the discipline of physics.

Strikingly, the assumption that multiple, contradictory realities can coexist calls the concept of objective fact—the very pursuit of science itself—into question. This point, raised in an article by the MIT Technology Review, jeopardizes the assumption that “universal facts” exist. How might an understanding of the world in which there is no shared, objective reality change not just science but also social theory?

Of course, it would be hasty to argue that quantum theory maps seamlessly and directly onto the social world. Thus far, the question of how the microscopic quantum world affects our macroscopic, visible world has not been fully explored through research. That does not mean, however, that there is no symmetry. The question of the universality of quantum theory continues to permeate thinking today, much as it captured the imagination of quantum theorists in the early 1900s.

Schrödinger’s Cat (1935), for example, explores the relationship between quantum and classical reality. Among other revelations, this thought experiment suggests that projecting nanoscale quantum theory onto a macro-scale experiment produces logic-defying results, ultimately leading to the conclusion that a cat cannot be both alive and dead at the same time. Schrödinger wished to argue that the dominant Copenhagen Interpretation of quantum physics, which holds that an object in a state of quantum superposition exists in all possible configurations at once, does not hold at the macroscale.

Nevertheless, this problem posed by the Copenhagen Interpretation, considered by Schrödinger to be settled by his theoretical experiment, persists.

The findings of the Edinburgh team suggest that Schrödinger’s cat can in fact be both dead and alive at the same time, leading to a whole new set of questions and theories. One way to accommodate the experiment’s result, the authors write, “…is by proclaiming that “facts of the world” can only be established by a privileged observer—e.g., one that would have access to the “global wavefunction” in the many worlds interpretation or Bohmian mechanics.”

As the authors suggest, this research potentially validates the Many-Worlds Interpretation (MWI). The MWI, as the name suggests, stipulates that each quantum interaction produces not just one result but all possible results, which exist simultaneously, branching off to form different versions of reality and producing many independent histories, futures and worlds. The researchers propose that for the concept of objective reality to function in the context of their findings, the holder of that knowledge must, in a godlike fashion, have access to all information from every possible reality.

Scientific theory stakes a claim to knowing and understanding the world around us as it really is. Quantum physics, however, has the potential to unravel this claim by posing the most fundamental question of all: What is reality?

Artificial Intelligence

Words of Consequence


Many hands make light work when AI can automate content generation. Image credit: Ghost in the Shell (2017)

Alexander Vipond

OpenAI, a San Francisco-based AI research organisation, has created a new text generator called GPT-2. The transformer-based language modelling system is capable of generating large amounts of high-quality synthetic text, and of performing reading comprehension, machine translation and summarisation in seven languages.

GPT-2 is designed to predict the next word in a sample of text. Trained on 8 million webpages from the internet, it utilises zero-shot learning, meaning it can extrapolate short text samples into full articles based on entirely new information, without any task-specific training.

As a general-purpose system, GPT-2 has beaten several records held by domain-specific language modelling systems, and it significantly lowers the complexity and cost of deploying scalable, customisable text generation.

Here’s an example:

Sample prompt: A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

Model Completion (machine-written, first try)

The incident occurred on the downtown train line, which runs from Covington and Ashland stations.

In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.

“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”

The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.

The Nuclear Regulatory Commission did not immediately release any information.

According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.

“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.”

The system constructs a story, creating fake institutions, places and quotations, and blending them to produce the illusion of authenticity. The model isn’t perfect, and it may take several tries to get a human-quality tranche of text, but the speed and scale of the system are such that OpenAI has withheld the full system from publication.
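For readers who want to see the mechanics, the cut-down GPT-2 checkpoint that OpenAI did release can be sampled in a few lines. This is a hedged sketch using the Hugging Face transformers library; the model name “gpt2” and the sampling settings are illustrative choices, not OpenAI’s own generation setup:

```python
# Autoregressive sampling from the publicly released small GPT-2 model.
# At each step the model outputs a probability distribution over the next
# token; sampling from it and feeding the result back in grows the text.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("A train carriage containing controlled nuclear materials "
          "was stolen in Cincinnati today. Its whereabouts are unknown.")
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k sampling: as noted above, several tries may be needed before a
# passage reads convincingly.
output = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```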

This is because GPT-2 could be used by malicious actors to conduct large-scale information warfare and undermine one of the central principles of the cybersecurity triad: the integrity of information.

OpenAI has acknowledged this threat, citing GPT-2’s capacity to generate fake news en masse, impersonate others, and automate the production of fake content, phishing and spam. By tweaking the system, one can produce endless positively or negatively angled articles. It is also possible to customise it for specific issues, improving the plausibility of the supporting detail in the synthetic content it produces and making it all the more difficult to tell fact from fiction. OpenAI has stated that it expects tools like GPT-2 to be available within the next two years.

As dictatorships and authoritarian regimes actively seek to spread misinformation to disrupt elections, obfuscate wars, and insist assassins prefer to spend their time admiring English churches, GPT-2 is a highly attractive tool and a warning of what’s to come.

The malicious use of AI tools will challenge the integrity of the global digital commons, fuelled by states who view the open flow of information as a threat to their governance. The tools will then be passed down to organised crime and developing regimes. As the recent case of Project Raven shows, even as countries increasingly try to secure their intellectual property, their cyber tools and tactics are up for sale.

As William Gibson once said “the future is already here, it’s just unevenly distributed”. So now that we know the threat is here, what can we do to counter the risks at the different levels of its distribution?

OpenAI, meanwhile, will continue its research.

Quantum International Relations

Breaking the Internet: A Question of Timing



Are we running out of time to save the internet? Image Credit: The Melting Watch, Salvador Dali, 1954.

Alexander Vipond

One of the most hyped topics in quantum computing is the potential for quantum computers to break the internet’s security protocols. It has been surmised that by running two algorithms on a quantum computer, Grover’s and Shor’s (devised by Lov Grover and Peter Shor in 1996 and 1994 respectively), one could break the cryptographic standards of the internet. Grover’s algorithm could be used to subvert HTTPS (Hypertext Transfer Protocol Secure) connections, which authenticate websites as you browse the internet. Shor’s algorithm could break the RSA public-key cryptosystem (named after Ron Rivest, Adi Shamir and Leonard Adleman), which secures everything from your online bankcard transactions to email and phone calls.
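To see why Shor’s algorithm is so threatening to RSA, note that factoring the public modulus N becomes easy once the period (order) of a random base a modulo N is known, and efficient period finding is exactly what the quantum part of Shor’s algorithm provides. The toy sketch below shows only the classical wrapper, with the quantum subroutine replaced by brute force on a tiny modulus; it is illustrative, not an attack on real key sizes:

```python
# Shor's algorithm minus the quantum part: the classical reduction from
# factoring N to finding the period r of a^x mod N. On a quantum computer
# the find_period step is the exponentially faster piece.
from math import gcd
from random import randrange

def find_period(a, N):
    """Brute-force stand-in for Shor's quantum period-finding subroutine."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def shor_factor(N):
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:                      # lucky guess already shares a factor
            return d, N // d
        r = find_period(a, N)
        if r % 2 == 0:
            candidate = gcd(pow(a, r // 2, N) - 1, N)
            if 1 < candidate < N:
                return candidate, N // candidate

print(shor_factor(15))   # e.g. (3, 5)
print(shor_factor(21))   # e.g. (3, 7); real RSA moduli are thousands of bits long
```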

However, this all requires a powerful quantum computer that has yet to be built. How many qubits would be necessary to run these algorithms? What scientific breakthroughs are needed? How long will it take to build one? The US National Academies of Sciences, Engineering, and Medicine have released a report, “Quantum Computing: Progress and Prospects”, which details not only the technical difficulties of racing to break the internet but also the human challenges of creating a secure post-quantum world.

The report presents two key findings.

One: Given the current state of quantum computing and recent rates of progress, it is highly unlikely that a quantum computer able to compromise RSA, or any comparable public-key cryptosystem based on the discrete logarithm problem, will be built within the next decade.

Two: Even if a quantum computer that can decrypt current cryptographic ciphers is more than a decade off, the hazard of such a machine is high enough – and the time frame for transitioning to a new security protocol sufficiently long and uncertain – that prioritising the development, standardisation and deployment of post-quantum cryptography is critical for minimising the chance of a security and privacy disaster.

This demonstrates the severity of the risk that a powerful quantum computer poses, whatever the timeline towards its realisation.

The National Institute of Standards and Technology (NIST) in the US runs a post-quantum cryptography project, which called for submissions of new post-quantum cryptosystems last year; 69 proposals passed the first round. NIST has proposed a timeline of 2022-2024 for the creation of a new draft standard for the world. This leaves only a few years to whittle down and test these cryptosystems to find a new standard.

The key issues are time and human cooperation. As Adi Shamir noted at the most recent RSA cryptography panel, transforming a new cryptosystem into a widely adopted standard takes about 15 years; this was the case for both RSA and elliptic-curve cryptography. That is partly a function of the small size of the cryptography community, which numbers only in the thousands globally and therefore struggles to test multiple cryptosystems effectively, while NIST has only three years to choose a successor standard for a post-quantum world. So it is highly likely that NIST will rely on older, well-tested standards with increased bit sizes, while newer cryptosystems will take decades longer to be tested.
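The “increase their bit size” remedy is easiest to see for symmetric ciphers and hash functions, where Grover’s algorithm offers at most a quadratic speedup. A back-of-the-envelope sketch (our own arithmetic, not a figure from the report):

```python
# Grover's algorithm searches an unstructured space of size 2^n in roughly
# 2^(n/2) quantum operations, so an n-bit symmetric key offers only about
# n/2 bits of security against a quantum brute-force attack.
def post_grover_security_bits(key_bits: int) -> int:
    """Approximate effective security of a symmetric key under Grover."""
    return key_bits // 2

for key_bits in (128, 192, 256):
    print(f"{key_bits}-bit key: ~2^{post_grover_security_bits(key_bits)} "
          "Grover iterations to brute-force")

# Doubling the key length (e.g. 128 -> 256 bits) restores the pre-quantum
# security margin, which is why larger parameters are the first response
# for symmetric schemes. RSA and other public-key systems get no such
# reprieve: Shor's algorithm runs in polynomial time, so they must be
# replaced with new post-quantum designs rather than simply enlarged.
```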

Newer cryptosystems may well benefit from this time lag, as researchers gain an increasingly clear view of what quantum computers are actually capable of and refine quantum-resistant cryptosystems accordingly as the two technologies develop in tandem. If the current transition is managed carefully, global standards are developed and adequate resources are provided for the switchover, it should be possible to move safely into a post-quantum world.

This does, however, rely on two large unknown variables. The first is the rate of scientific breakthroughs towards a quantum computer capable of attacking strong encryption. The second is the intent of the actor who procures the capability. If breakthroughs are made faster than the global community can adopt new standards, countries will be left exposed. As this type of research is often conducted in secret, the global community may not have easily identifiable progress markers to measure against. The second variable is more pernicious. If a company reaches the milestone first, it is likely to announce its victory and unlikely to undermine the internet infrastructure that secures its profits. However, if a country reaches the milestone first, it may wish to attack and steal data for geopolitical advantage or commercial gain, and the world may not know until the first attack occurs.

This puts the race to break the internet into perspective. It is a decades-long systemic risk that intertwines human and technical problems in a game that represents the apex of trust, security and privacy in the world’s most important communications system.

Quantum Applications

Can quantum technologies help save the world?


Image via United Nations University

Part 3 of 3: Modelling 

Gabriella Skoff

The final instalment in this series explores the modelling capacity that quantum computing promises to unlock. Modelling is a key tool in environmental security, enabling scientists and researchers to explore how the natural environment will react to changing conditions over time. It is well-known that quantum computers will enable advanced modelling technology by exponentially expanding the rate and scope of mathematical modelling capacity well beyond that of today’s computers. While the impact of this is most often cited with regard to chemical reactions and the pharmaceutical and health industries, environmental security, too, will be a great beneficiary of this quantum application.

Quantum computers will enable wider and deeper analysis of complex problems with more variables than ever before: a perfect tool for observing and predicting environmental challenges shaped by a multitude of human and natural forces. Quantum computational modelling is exactly suited to sorting through the kinds of complexity that classical computers struggle with. The potential impact of this application reaches from weather forecasting to disaster preparedness. As one researcher writes of the promise quantum computing holds for numerical weather prediction (NWP):

The seamless systems based on the unified technology will process observational data, produce weather, climate, and environment forecasts on time scales from several minutes to decades; they will compute the probability of the occurrence of meteorological, hydrological, marine, and geophysical severe natural phenomena for many spatial scales.

The importance of that potential is not to be undervalued. While the practical value of this technology is obvious, the hidden impact it holds for environmental policy is immense.

No other stress contributes as much to environmental insecurity as climate change. This macro-level problem has so far proven “too big” to tackle effectively at the level of global governance, with climate change deniers and sceptics in both lay and scientific communities. One important reason for the lack of a complete scientific consensus on climate change, a gap which arguably lends weight to climate change denial among lay audiences, is the limited power of current climate forecasting and models. Of course, with the immensity of variables and factors at hand on a timescale of years or even decades, it is no easy task for our current computers to process all of this data and produce accurate climate change models. Even on a daily basis this presents an incredible challenge, with weather conditions varying from hour to hour. There is always uncertainty in weather modelling due to the changeability of a variety of meteorological factors. How many times have you heard on the morning news that heavy rain is forecast, packed your umbrella, and then carried it around uselessly as the sun shone all day long?

Although accurate climate change modelling may flummox a classical computer, this job may prove to be exactly the sort of task at which a quantum computer excels. Provided with accurate and reliable modelling of climate change, perhaps the remaining 3% of climate change sceptics in the scientific community could be convinced of the urgency of promoting sustainable environmental policy to combat climate change. Of course, even with 100% consensus amongst the scientific community, climate change deniers will still resent the government funding and lifestyle changes that will inevitably be needed to induce mass change. However, achieving that consensus may prove to be the impetus society needs to prioritise change.

Quantum technologies hold immense promise for confronting the multifaceted challenge of environmental security. As with most things quantum, we cannot predict with certainty; but time—along with an appropriate prioritization of resources to our greatest collective threat—will decide just how helpful these applications will truly be.

Gabriella Skoff is a Researcher with Project Q and collaborates with Dr Serdar Turkeli of the United Nations University-MERIT, where she continues her research on the topic of emerging quantum technologies and environmental sustainability. 

Quantum Applications

Can quantum technologies help save the world?


Image via United Nations University

Part 2 of 3: Energy

Gabriella Skoff

Part two of this series explores how, in the field of renewable energy, quantum technology has been quietly pushing ahead to improve the efficiency and cost of green energy. Quantum qualities hold vast promise for commercial applications in solar power and other cutting-edge, sustainable energy technology. These emerging technologies could hold the key to shifting renewable energy into the mainstream, finally making it cheaper and more efficient than traditional energy sources for the general population.

Quantum dots, used to convert sunlight to energy with increasing efficiency, are quickly becoming the new material of choice for solar panels. Due to their nanoscale size, quantum dot-sensitized solar cells (QDSSCs) have unique properties which allow them to convert more energy from the sun than traditional materials. These third-generation quantum solar panels have reduced weight, improved flexibility and, importantly, are cheaper to make than previous generations of solar technology. This application of quantum technology could be a huge breakthrough in the solar market, enabling it to be more competitive, both in terms of cost and efficacy, than traditional energy sources.

While this technology is still at the pre-commercial stage of development, quantum photovoltaic systems are expected to make a big impact on the renewable energy sector, promising to reduce global reliance on fossil fuels. Before this can occur, however, there is a significant amount of troubleshooting still to be done. As with most nano-products, the impacts of QDSSCs on the human and natural environments are still largely unknown, and the materials involved are potentially toxic. Another issue is the durability of QDSSCs across the weather spectrum. Unlike the question of negative human and environmental impacts, which receives very little research funding or government interest, research into the all-weather question is moving along swiftly in answer to commercial needs.

While quantum technology is being applied to augment the amount of energy that can be harvested from solar radiation, it is also being explored as a method to capture what is referred to as “wasted energy”: infrared energy from the sun that is not absorbed by solar panels or converted through photosynthesis into usable energy. This unused energy does not disappear but spreads out and is absorbed into the earth’s surface, making it incredibly difficult to collect and use.

By employing a method called quantum tunnelling, scientists have created a proof-of-concept antenna that can detect this wasted energy in the form of high-frequency electromagnetic waves and transform it into usable energy. Unlike solar panels, this quantum-enabled device could operate 24 hours a day, under any weather conditions. This application of quantum technology presents an entirely new method of energy transfer that would be completely green, and it again has the potential to revolutionize the renewable energy sector.

While the promise is great, this technology is in its infancy, with many technical problems still to surmount. Even so, quantum technology opens many doors in the renewable energy space that hold great potential for the coming years.

Don’t miss the final instalment in our Environmental Security series tomorrow.

Quantum Applications

Can quantum technologies help save the world?



Part 1 of 3: Monitoring

Gabriella Skoff

The first instalment in our Environmental Security series examines how quantum sensing can help to better monitor our natural environment, a function for which we already heavily rely upon satellites. Quantum technology promises to extend these capabilities, providing greater accuracy and security. As with all things quantum, the capabilities of applied quantum sensing technologies go far beyond what has traditionally been possible.

Quantum sensing allows us to monitor, detect and study the environment by gathering large amounts of data, enabling us to make more reliable decisions with the vast amounts of information in hand. These capabilities can have a vital impact on disaster preparedness especially, by enabling us to detect even the smallest geophysical disturbances that could ultimately lead to catastrophic natural disasters. Some budding applications in this space include the ability to accurately detect potential earthquakes and volcanic activity.

Along with other applications that promise to improve telecommunications and navigation, quantum sensing may be the first quantum technology to reach the commercial market. Yet alongside the immense promise this technology brings to the monitoring and sensing of environmental data, it will also bring legitimate threats and challenges. Its further development and application will undoubtedly enable higher levels of surveillance, sure to solidify its position as a supremely valued military tool. On the other side of the coin, the same technology will enable quicker and more effective search and rescue procedures in a post-disaster context, natural or otherwise.

The field of quantum sensing, or quantum metrology, is largely reliant on earth-orbiting satellites for the monitoring, collection and transmission of data. Satellites are absolutely critical to environmental security infrastructure. They are responsible not only for the gathering of key data about the environment such as air temperature, wind, sea surface temperature and soil moisture, but also the monitoring of arable land, deforestation and urbanization. The constant and reliable monitoring of these environmental factors affords populations an increased level of environmental security.

With the advent of the quantum age, satellites—a crucial component of the internet of things—will become vulnerable to hackers and other malicious actors. In that scenario, all data produced by satellites would be susceptible to corruption or complete obliteration. This would have a disastrous impact, not only on environmental security but on our entire infrastructure, including electricity, water and transportation. Luckily, another quantum application still in development promises to confront this threat: quantum cryptography allows for quantum-secure communication, a feat that has already been provisionally achieved by China via its Micius satellite.

Responsible innovation will be paramount in quantum sensing technologies. Satellites have long been considered a security apparatus, but their militarization is only just beginning. In order to ensure that quantum-enabled satellites deliver as much on their promise for environmental security as for military security, it is crucial that their development for this purpose be prioritized and that the full scope of their potential impact be intelligently understood.

Don’t miss the second instalment in our Environmental Security series tomorrow.

Quantum Applications

Can quantum technologies help save the world?



A three-part examination of quantum applications for environmental security

Image via United Nations University

Gabriella Skoff

From drinkable water sources to arable land, healthy seas to clean air, reliable rainfall and predictable seasonal changes, humans depend entirely on the environment to provide the resources and conditions necessary for life. When access to vital resources is impeded or weather conditions become erratic, the equilibrium of life becomes unstable. It is little wonder, then, that the threat posed to human populations by catastrophic environmental events, the degradation of the natural environment, the impact of climate change and the growing force of overpopulation has seen environmental security emerge as a serious priority for national and global governance.

In response to these threats, environmental technologies aim to create solutions to some of the major challenges of food and water security and sustainable energy. Although they appear in headlines far less frequently than flashier applications, quantum technological advances in sensing, communication and computing present promising solutions for issues of environmental security. In light of the great uncertainty that surrounds quantum technologies, an ambiguity which often invokes anxiety and fear, it is of great value to explore the positive impacts that quantum innovation may hold for the future.

The multifaceted threats to environmental security outlined above contribute to the disappearance of natural resources and to the advent of more frequent, extreme weather conditions. Indeed, climate change has been identified in the security space as a “threat multiplier” and a “catalyst for conflict”, with the power to destabilize social, economic and political conditions. Environmental insecurities may manifest in food and water scarcity, which can cause the inflation of prices for basic goods, provoke mass migrations, cause civil unrest and incite chaos. These conditions create the perfect breeding ground for conflict.

Likewise, global dependence on fossil fuels, apart from being urgently unsustainable, poses national security threats that have already resulted in war on numerous occasions. While many of the current effects of environmental insecurity are experienced in already volatile or susceptible nations, it is inevitable that these effects will spill over borders and into countries which boast more resources and reliable infrastructure to support climate change refugees and migrants fleeing conflict.

The role of technology in supporting initiatives across the entire spectrum of environmental security is more critical now than ever before. Quantum technologies promise to have an impact in several fundamental areas, including disaster preparedness, monitoring of deforestation and urbanization, green energy and in the creation of predictive climate change models. These applications extend right across the disciplines of quantum sensing, communications and computing.

The potential contributions of quantum technologies for increasing environmental security can be categorized into three main groupings: monitoring, energy and modelling. As with any technology, promise does not come without limitations and risks. While many of the potential quantum solutions for issues of environmental security are in their nascent stage of research and development, it is crucial that these limitations and risks too are understood.

In this three-part series, Project Q examines the bright hopes and the shadowy promises of the quantum applications that could help confront the threats posed by environmental insecurity. Join us over the next three days as we ask the question: can quantum technologies help save the world?