Artificial Intelligence

Artificial Intelligence, Quantum International Relations, Quantum Research

India Races Toward Quantum Amid Kashmir Crisis



Amid troubling news of serious human rights violations carried out in India-controlled Jammu and Kashmir—including a debilitating digital blockade lasting over two weeks—Indian Prime Minister Narendra Modi signed an agreement with France for a landmark technological collaboration in quantum and artificial intelligence (AI). The Indo-French collaboration between French company Atos and India’s Centre for Development of Advanced Computing (C-DAC) will establish a Quantum Computing Experience Centre at C-DAC’s headquarters in Pune, India, and deliver an Atos Quantum Learning Machine. The high-technology partnership, which “advocate[s] a vision of digital technologies that empowers citizens, reduces inequalities, and promotes sustainable development”, sits against the controversial backdrop of India’s current actions in the Kashmir crisis and offers a revealing view into the intersection of international politics and quantum technologies.

During his first term, Narendra Modi began to position India as a global technology hub, putting its innovation sector on the map by embracing international investment and collaboration. The advancements made over the last five years as a result of these activities have helped to fuel India’s socioeconomic development and cement its place on the global stage as a major emerging economy with a vibrant technology sector. Now in his second term, Modi seeks to apply a digital tax to global technology giants like Google and Facebook on their activities in India. Though this policy shift has been identified as a potential barrier to Big Tech’s incentive to contribute to India’s start-up space, Modi has nevertheless continued to cultivate a tech-forward name for his government. His “New India” government focuses on sustainable development and emerging technologies, especially agricultural technology, AI and quantum.

Within this context, India’s national quantum technology research and development capacity has blossomed at a rapid pace, especially with regard to theoretical physics research and software development. However, unlike the top competitors in quantum computing such as China and the U.S., India lacks a strong quantum computing hardware industry, a challenge that could be exacerbated by Modi’s Big Tech taxation policy. To supplement research activities in its burgeoning quantum and AI sectors, Modi has instead turned toward collaboration with international governments as a vehicle to boost domestic technological development. For example, India’s recently established fund-of-funds partnership with Japan will support over 100 start-ups in AI and IoT. Likewise, the new Indo-French partnership is a critical piece of the puzzle for India, promising to offset its national deficiency in applied quantum computing development and help India become a leader in the quantum space.

With international partnerships playing such a key role in Modi’s plan for the development and growth of India’s quantum computing and AI industries, one might expect that the country’s actions in India-controlled Jammu and Kashmir would damage its international reputation. The signing of the Indo-French bilateral agreement, however, demonstrably negates this expectation. The agreement, which stipulates French alignment with India as a partner in sustainable development and emerging technologies, outlines the countries’ shared commitment to “an open, reliable, secure, stable and peaceful cyberspace”. It was signed even as India, the world leader in internet shutdowns, enacted a digital lockdown on Kashmir for the 51st time in 2019 alone. This record sits in stark contrast to the stated objectives of the partnership and demonstrates the separation of business from peace-building priorities on an international scale.

The Kashmir conflict, a turbulent territorial dispute between India, Pakistan and China, dates back to the partition of 1947 and has already incited four wars between India and Pakistan. Kashmir, dubbed one of the world’s most militarized zones, is of strategic value to both countries and is India’s only Muslim-majority region. The recent conflict was spurred by a series of brutal attacks and rebellions since February 2019, which ultimately led the Modi government to revoke India-controlled Kashmir’s “special status” of autonomy granted under Article 370 of the Indian constitution. Given this complex history and characterization, India’s fresh assault on the region has led many (including Pakistan’s own Prime Minister) to fear an escalation of violence that could result in a worst-case-scenario nuclear face-off between India and Pakistan.

Whether or not it is representative of the true feelings of Modi’s “New India”, Indian national media has expressed nearly unequivocal support for the revocation of Article 370. French comments, however, lean toward neutrality—tactfully holding the situation at arm’s length while urging bilateral negotiation between India and Pakistan. Whether or not the two countries come to a peaceful resolution, it appears that international investment in Indian quantum and AI development will not waver in the face of the Kashmir conflict. Ironically, as India sprints to catch up in the quantum race with the support of France and other international allies, the results of the past technological nuclear arms “race” loom heavy over the continent.

Artificial Intelligence, Quantum Computing

When Quantum Meets AI: PERILS, as Two of our Future’s Most Powerful Technologies Collide



Part III of III

Gabriella Skoff

Quantum computing and artificial intelligence (AI) are disruptive technologies. They are set to transform and rupture both industry and societal networks as they challenge and displace current systems. The powerful evolution that will be ignited across disciplines if and when these technologies become fully functional and cohesive is sure to challenge modern hegemony in a number of ways. Some of these changes, as we have discussed in last week’s instalment, could endow us with the tools to make this world a better and fairer place for people around the globe. Of course, the duality of light and dark also characterizes emerging technologies, and some of these changes could impose powerfully deleterious effects on society. While we have discussed some of the potentially promising applications for these technologies in the future, we also caution that there is no way of knowing exactly how these technologies will evolve, both together and separately. Industry and government cannot ensure that emerging technologies have only positive societal impacts. However, these forces certainly do have some power over the process to leverage benefits in a way that consciously produces positive impacts and opportunities to confront underlying societal challenges.

This final instalment of our quantum-AI series will caution against the type of thinking that views technological advancements as end-all-be-all solutions. Certainly, these emerging technologies present the potential for revolutionary social, economic and environmental benefits. However, if we focus only on the positive impacts these emerging technologies may have, we risk blinding ourselves to the underlying issues that they may confront but cannot solve. In this way, quantum-AI applications could produce a Band-Aid effect or could even exacerbate certain challenges rather than “fix” them. Rather, with a human-centric approach and a forward-looking perspective, we hope these technologies can be channelled and regulated in ways that work to the benefit rather than the detriment of societies across the globe.

Towards a More Objective AI?

When we talk about a quantum boost to AI development it is imperative that we consider the current challenges facing machine learning today and how these might be impacted by the accelerated development of AI technologies. In last week’s instalment, we explored how quantum-powered AI might enable machine learning algorithms to create a more nuanced output from more complex data sets. While this reality may ultimately manifest, it does not guarantee that enhancing AI’s capacity to process larger and more complex data sets will fix its bias problem. That is because this problem is not wholly a technological one, but mostly a human one.

A recent MIT Technology Review article dives into this issue, citing a new report by the AI Now Institute, which explores AI’s ongoing “diversity crisis”. The report presents some staggering figures with regard to gender and racial diversity in the AI industry and argues that there will be no solution to AI’s bias problem without a monumental shift in these demographics. Across both industry and academia, the percentage of Black and Latino contributors to the field of AI is no higher than 5% and the number of non-male contributors sits below 20%. With such a homogenous group of people writing AI algorithms, it is no wonder a bias exists in the technology.

Some powerful examples of the negative social impacts that have been produced through the application of AI are given in the aforementioned article: “Along the way, the technology has automated the biases of its creators to alarming effect: devaluing women’s résumés, perpetuating employment and housing discrimination, and enshrining racist policing practices and prison convictions.” Clearly, these biases are not harmless and have already manifested negative social impacts across a number of sectors. This is a challenge that quantum-boosted AI will not be able to confront. If we believe that a higher-powered AI that is able to process vast amounts of data will alone solve AI’s bias problem, we are severely mistaken. Ignorance of these underlying, systemic issues will only serve to deepen divides and reaffirm inequalities.

This debate challenges the misconception that technology exists independent of society. Originating in the early nineties with the advent of the Internet, the idea that cyberspace would be an egalitarian realm free from notions of race, gender or identity is a belief still cemented in the human psyche. Though it is increasingly challenged today, the results of this belief are instituted throughout the culture and practices of the technology sector. Long-term systemic change is what will be required to displace this dynamic: from a shift in male-dominated workplace culture to hiring, retention and workplace-support practices for racial and gender minorities. The journey to diminishing AI’s bias problem is inherently tied to the achievement of far greater diversity and better representation in AI technology development.

Impacts in the Medical Sector

Quantum-powered AI technologies promise to revolutionize healthcare systems across the globe in a number of ways. But will the further automation of healthcare services be a benefit or a detriment to patients?

Recent years have seen health care systems across the globe move towards an approach that seeks to maximize efficiency. This results in patients receiving less of their doctor’s time, which is instead increasingly demanded by their computer’s reporting and diagnostic systems. Likewise, changes in insurance policies over the last decade have led to the standardization of less patient-doctor contact. A 2017 systematic review of 67 countries notes that already, “For half of the world’s population, primary care doctor visits last less than five minutes.” This, coupled with the boom of Big Pharma, has resulted in most doctors’ visits ending with a prescription in hand, at least in the United States.

Certainly, in the United States, the prescription of drugs is the most viable outcome from a GP visit where the patient is only allotted an average of 15 minutes of the doctor’s time. This reality is already a serious challenge that has no doubt created negative societal impacts. One such example is the vicious opioid crisis currently plaguing the country, due largely to the over-prescription of opiates. While this is only one example from one country, it illustrates that the situation that has arisen in primary healthcare across the globe is extremely complex and warrants specific consideration before current systems are advanced with new technologies.

Quantum-AI technologies promise to further increase the efficiency of these systems, but there is sufficient reason to pause and ask ourselves what role this may or may not play in actually improving health outcomes for patients. The increased automation that may result from the confluence of quantum computing and machine learning could serve to further entrench healthcare systems in this fast-food style of primary care. It is not difficult to imagine how this could influence a further reliance on the diagnosis of medical conditions linked to an automated prescription of drugs. This could only serve to move us further away from a holistic approach to healthcare, in which doctors spend more time trying to understand a patient’s specific circumstances in order to recommend not only drugs but also lifestyle changes in areas such as exercise, diet and sleep.

In order for the future of quantum-AI systems to function in harmony with human practitioners, a deep understanding of the intricate complexities of the current state of healthcare, and the directions in which it is moving, is entirely necessary. Likewise, other considerations of value when debating the role of these emerging technologies in healthcare systems should include: the cost viability of implementing new quantum-AI systems uniformly across healthcare providers, the difficulty a GP may have in deciphering the diagnostic reasoning of AI systems, and the potential for privacy and ethical issues with regard to the storage of patient data, especially in the context of quantum encryption and hacking.

Impacts in Business and Government

Traditionally, the implementation of high technology in business has been slow to take hold. However, current trends point to a shift in this practice. According to Forbes: “For the first time ever, emerging technology investment has superseded customer understanding as the No. 1 thing firms want to do more of to be innovative.” As the pace of emerging technology adoption in business accelerates across sectors, there is a fundamental need to apply a human-centred policy approach to the implementation of quantum and AI technology in order to keep humans “in the loop”.

Technological innovation in and of itself is of little value to a business. Emerging technologies like quantum and AI do not exist in a bubble apart from society, but rather function within human-built and run systems and networks. As such, if an organisation is not flexible and adaptive enough to accommodate a mass technological shift, such as the incorporation of quantum-powered AI into financial modelling systems, the implementation may cause more problems than benefits. Again, while this challenge is specific to the implementation of new technologies, it is a people problem, not a technology problem. If the organisation itself is not adaptive enough then these implementations could be not only detrimental to the culture and wellbeing of employees but also potentially fatal to the business itself.

As with business, government systems too will need flexibility and agility in order to adopt emerging quantum-AI capabilities. Government agencies are bureaucratic and hierarchical, which can make emerging technology implementation especially challenging and time-consuming. As we have previously covered, the United States DoD has been particularly slow on the uptake of quantum technologies. When businesses and governments alike begin to consider the implementation of quantum and AI technologies, they should do so with the foresight and human-centric approach necessary for such system changes to take hold and function effectively. While it is challenging to regulate emerging technologies before they are fully functional in applied settings, there is a need for strong and proactive government foresight into how quantum and AI will develop, both in tandem and in isolation from one another, as well as within existing structures and systems.

Whose Role is it?

In the case of the informed and proactive regulation of emerging technologies, responsibility and control fall largely upon the shoulders of two powers: industry and government. Due to the fundamental role that the private sector plays in the development of quantum and AI, the United States provides an interesting case study. As we have debated previously on the blog, the United States’ heavy reliance on Silicon Valley as its technology development machine, especially for defence purposes, comes with a host of specific challenges and advantages.

Unlike in China, where national technology investment strategies are extremely well-funded and seamlessly incorporate the power of an utterly nationalistic and committed private sector, the United States’ approach leaves far more room for dissonance. Notoriously anti-establishment and globalist, Silicon Valley produces companies like Google, Amazon and Microsoft, where grassroots individualism often manifests as a central business characteristic. In comparison to China’s strategy, this creates a contentious and sometimes tumultuous relationship of dependence between the United States Department of Defense (DoD) and these tech giants, which it is hoped will deliver on national defence contracts.

Those who fear this dynamic will harm United States security interests have dubbed this challenge Silicon Valley’s “techno-moralist” problem. But while this dynamic may appear as a challenge for the DoD in the race to achieve dominance over quantum and AI development, it presents an opportunity for industry to bring the crucial lens of ethics to the issue, which could serve a critical role in helping to regulate the development and fusion of these two technologies.

From a top-down perspective, the United States government has recently taken an interest in the development of what is commonly known as ethical AI. In February of this year, a resolution, “Supporting the development of guidelines for the ethical development of artificial intelligence”, was introduced in the House of Representatives. However, this high-level conversation around the ethics of AI has yet to extend to quantum. This is due to many factors, including the public’s lack of engagement with the topic of quantum technologies, which can be challenging to understand in the abstract. Nonetheless, the United States government’s policy formation around the prospective social impacts of AI applications lends hope to the development of a similar approach to quantum technologies.

The Defense Advanced Research Projects Agency (DARPA) is the agency of the United States DoD responsible for funding and coordinating the development of emerging technologies for military applications. It is also the biggest funder of basic science in the U.S. While DARPA pours billions of dollars into the development of emerging technologies, funding for research into the social impact of these technologies is severely lacking. As we have urged previously, “social impact research has the potential to make systems run smoother, more ethically and sustainably, giving an advantage to a science and technology environment that takes a long view rather than a short-term perspective.” Further, the DoD’s reliance on the “techno-moralists” to deliver the technological advancements needed to maintain U.S. military supremacy also presents an important argument for a shift in this imbalance. Given the context of Silicon Valley’s tech culture, this is not a debate that will be won by DoD money or muscle. Rather, it is one that must appeal to a sense of ethics. This can only be done by government demonstrating a strong commitment to understanding the social impacts that the future applications of quantum-AI technologies will undoubtedly produce.


Artificial Intelligence, Quantum Computing

When Quantum Meets AI: PROMISES, as Two of our Future’s Most Powerful Technologies Collide



Part II of III

Gabriella Skoff

Project Q seeks to perforate the silos built up around disciplines. We believe that the nature of science and technology is inherently social, and we strive to explore the ways in which these disciplines are cross-pollinated and how they intertwine. Likewise, the intersection of artificial intelligence (AI) and quantum physics, stemming from the disciplines of computer science and physics, respectively, demonstrates the value of building bridges between academically separated fields. This fusion presents benefits for both ends of the spectrum, promising to have an impact on the ways in which we understand and apply both quantum technology and artificial intelligence.

The emerging, interdisciplinary field of quantum machine learning is in a truly nascent stage of development. However, as demonstrated by the research cited in the introduction to this series, researchers are not waiting around for the full realization of a quantum advantage to begin testing these experimental waters. The results of early studies, using relatively low-powered quantum computers and machine learning systems, certainly lend support to the hype promising that together these technologies will make a big impact on our future. That being said, while these technologies are expected to affect a wide range of industries, there is no way to be sure of exactly how they will develop, both separately and in conjunction. The only certainty is that the fusion of quantum and AI will produce unexpected outcomes. Still, the big names in both quantum and AI development, including NASA, IBM and Microsoft, race ahead to develop these technologies for commercial and state-sponsored use.

Dual Promise

On a fundamental level, machine learning could contribute to the development of quantum computing itself. One emerging application for machine learning in quantum computing is the formulation of complex quantum algorithms. This is a formidable challenge that machine learning could help confront, allowing quantum algorithms to be designed and learned by quantum computers themselves rather than composed through the time-consuming effort of a small pool of quantum experts. By using a physics-oriented approach to intelligent machines, researchers working at the intersection of quantum and AI are already designing complex, quantum-entangled experiments. Early examples of quantum experiments like this one, enabled by the power of machine learning, demonstrate the potential for AI to play a vital role in the development of quantum computing.

The promise of this application for machine learning in quantum computing is best summed up by Dr. Irfan Siddiqi of Berkeley’s Quantum Nanoscience Laboratory: “Can we teach a machine quantum mechanics? Can a machine learn the rules of quantum mechanics? The answer is, absolutely.” Siddiqi and his colleagues use recurrent neural networks (RNNs), a type of AI algorithm that mimics the biology of the human brain, making sense of large data sets by identifying patterns in the data itself. While it should be noted that an RNN cannot actually understand quantum computing in the way that a human brain can, it can produce computational results that serve as a valuable tool for researchers. In this way, machine learning could be of important value to quantum computing by making sense of the vast amounts of data produced by quantum computers. Some predict that AI technology will eventually become a standard feature of quantum hardware for this reason.

Perhaps the most anticipated manifestation of the amalgamation of these technologies, however, comes in the form of a quantum boost to AI. Scientists are beginning to explore the question of how quantum computers could be used for intelligent data analysis. According to Canadian quantum computing company Xanadu, the potential areas of impact for quantum computing in AI include optimization, linear algebra, sampling and kernel evaluations. These four specializations of quantum computing could make the development of AI more effective and efficient. Examples of use for these quantum capabilities in machine learning include, respectively: helping AI to evaluate hard-to-compute functions, exponentially speeding up the processing power of AI, producing data samples on which to train AI, and augmenting certain functions of classical computers, such as the use of kernel methods, which can be difficult to compute classically. Today’s quantum computers, which at this stage of development are mostly special-purpose devices, are well-adapted to enhance these specific functions in AI, especially in combination with classical computers.
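To make “kernel evaluations” concrete, here is a minimal classical sketch in plain NumPy (a toy illustration of our own, not a quantum implementation): a kernel function scores the similarity of every pair of data points, and it is exactly this pairwise evaluation that quantum hardware is hoped to speed up for feature spaces too large to handle classically.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Classical RBF kernel: a similarity score for every pair of points.
    Quantum 'kernel evaluation' aims to compute analogous inner products
    in feature spaces too large to represent classically."""
    # Squared Euclidean distances between all rows of X and all rows of Y
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))          # 5 samples, 3 features
K = rbf_kernel(X, X)

print(K.shape)                        # (5, 5): one similarity per pair of samples
print(np.allclose(K, K.T))            # True: kernel matrices are symmetric
print(np.allclose(np.diag(K), 1.0))   # True: each point is maximally similar to itself
```

A kernel-based learner (such as a support vector machine) never looks at the raw data again once this matrix is built, which is why speeding up the kernel evaluation itself is such an attractive target.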

Toward a More Objective AI?

As previously discussed, new research into the feature mapping that quantum computers could perform suggests they may enhance the sophistication of the data mapping AI is capable of. This illustrates the potential for AI powered by quantum computing to surpass its current levels of efficacy by identifying patterns in data that are not visible to classical computers. According to IBM researchers: “AI systems thrive when the machine learning algorithms used to train them are given massive amounts of data to ingest, classify and analyze. The more precisely that data can be classified according to specific characteristics, or features, the better the AI will perform.” A critical role that quantum computers could play in AI as their quantum volume increases is to enable the computation of far larger and more complex data sets through feature mapping. In less technical and more theoretical terms, perhaps this quantum capability will be able to influence more objective and less biased AI output, depending on the breadth and quality of the data it is trained on.

The wisdom of the crowd theory, popularised in the Information Age by citizen-powered sites such as Wikipedia or Reddit, underpins these websites’ success in producing surprisingly accurate and up-to-date information, given that most contributors are not “experts”. As with machine learning, these websites are only as good as their data points, in quality and in numbers. The wisdom of the crowd theory dictates that the larger and more diverse the pool of contributors producing information for any given article, the higher the levels of accuracy tend to be.
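The effect is easy to demonstrate numerically. In this toy sketch (invented numbers, standard NumPy), each “contributor” supplies a noisy estimate of a true value; individually the estimates are unreliable, but the crowd average tightens as the pool grows:

```python
import numpy as np

rng = np.random.default_rng(42)
truth = 100.0  # the quantity the crowd is collectively estimating

for n in (1, 100, 10_000):
    # Each contributor's guess is off by an independent random error (std dev = 20)
    estimates = truth + rng.normal(0.0, 20.0, size=n)
    error = abs(estimates.mean() - truth)
    print(f"{n:>6} contributors -> crowd error {error:.2f}")
```

The crowd error shrinks roughly with the square root of the number of contributors, which is the statistical backbone of the theory.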

Quantum computing could offer machine learning a similar advantage by enabling a higher level of computational power to deal with a larger and more diverse sample of data points. It can lend this advantage because quantum systems can exist in a superposition of a vast number of possible states. The power of quantum computing lies in its ability to inhabit this quantum space, opening up the possibility of a far greater scale of computing than conventional computers can offer. With regard to machine learning, this means that the patterns and features that AI algorithms look for in a data set could be sorted far faster and more efficiently than is currently possible. While this theory has not been tested, perhaps quantum computing could help with AI’s ever-present bias problem by enabling machine learning to process and identify patterns in larger and more complex data sets. Further, there is hope that this quantum-endowed capability, if realized, could allow for a more nuanced reading of data and lead AI systems to new findings.
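The idea of a feature map, lifting data into a higher-dimensional space where hidden structure becomes visible, can be sketched classically. In this toy example (plain NumPy, data of our own invention, not a quantum circuit), two classes that no straight line can separate in two dimensions become trivially separable once a single product feature is added; a quantum feature map performs an analogous lift into an exponentially large state space:

```python
import numpy as np

# Toy data: class = sign of x1 * x2 (the XOR pattern), which is not
# linearly separable in the original two dimensions.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sign(X[:, 0] * X[:, 1])

def feature_map(X):
    """Lift each point into a higher-dimensional space by appending the
    product feature x1*x2. A quantum feature map would perform an
    analogous lift into an exponentially large Hilbert space."""
    return np.column_stack([X, X[:, 0] * X[:, 1]])

Z = feature_map(X)

# In the lifted space, the hyperplane 'third coordinate = 0' separates
# the classes perfectly: the sign of x1*x2 is the label itself.
predictions = np.sign(Z[:, 2])
print(np.mean(predictions == y))   # 1.0
```

The computational hope is that quantum hardware could evaluate far richer feature maps than this hand-picked one, exposing structure no classical lift can reach.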

Impacts in the Medical Sector

One domain where the ability of AI to pick up on subtler patterns from larger data sets, thanks to a quantum boost, could have an especially positive impact is in the medical field. There is significant hope amongst scientists that quantum computing and AI systems will revolutionize the medical sector through drug discovery, disease screening and by supporting clinical diagnosis and decision-making.

Though a controversial topic of discussion, it has now been shown that AI can make more accurate medical diagnoses and predictions than some doctors. While some may fear that this will eventually lead to AI replacing doctors’ jobs, the far more likely outcome is that AI will function in support of them. In fact, AI is well-suited to the role of diagnosis, a task that involves scouring information for patterns and drawing conclusions from them.

Already, computer systems play a distinct role in this practice. Diagnostic systems, known as clinical decision support systems (CDSSs), analyse medical data to assist in clinical decision-making. AI systems powered by quantum computing could greatly enhance the efficacy and speed of such systems, providing vital support to physicians in diagnostics and leading to better health outcomes for patients.

Quantum-powered AI could also play a significant role in drug discovery. Currently, it takes on average 12 years and $2.7 billion USD to get a new drug to market. This is an issue currently being tackled by start-ups like GTN, a company that is working to combine the power of quantum computing and AI in order to cut those numbers in half. Co-founder and CEO of GTN, Noor Shaker, has vowed to solve this problem by combining the forces of these two technologies in order to improve access to new drugs, especially in developing countries.

In the case of GTN’s technology, quantum computers, which are able to conquer the complexity of chemistry problems, produce simulations of chemical interactions and feed that information into a machine learning algorithm. The algorithm can then analyse the data and search for patterns that will lead to an understanding of the compounds. According to a feature by MIT Technology Review: “Thanks to the combination of both technologies, the GTN LTD software detects new molecules that have the potential to behave like current drugs. The company’s algorithms simulate, filter and select molecules that were virtually invisible to convert them into different medications”.

Examples like this one demonstrate how the novel use of these two technologies could revolutionize the pharmaceutical industry, making the drug discovery process more efficient, quicker and cheaper. The knock-on effects of producing applications like this for quantum-AI technologies could have a significant impact on improving access to drugs and getting better drugs on the market faster, making life better for those living both in developed and in developing countries.

Impacts for Weather Modelling

The quantum-AI revolution is also set to make waves in precision weather-prediction capabilities. Accurate weather forecasting impacts a huge number of sectors across the globe. From business to agriculture to government, the ability to accurately predict changing weather patterns is of critical importance to society. Currently, the quality of weather forecasting is highly variable, dependent on the ability to collect and make sense of the mass of data produced by satellites. With the support of future quantum computers, AI systems could better analyse meteorological data in order to make clearer and more reliable weather predictions. This could lead to higher levels of disaster preparedness, especially in the case of more precariously located developing countries, and the proliferation of more accurate information could help promote more sustainable farming practices.

A far greater challenge in weather forecasting lies in the long term. Even over short horizons the quality degrades quickly: about 90% of five-day forecasts are accurate, compared to roughly 50% of ten-day forecasts. Looking further into the future, accurate weather forecasting is more or less impossible. Quantum and AI, however, could present a solution to this problem. By improving our ability to produce and make sense of vast amounts of data, quantum and AI systems could have a huge impact on our ability to create weather-forecast models that reach far into the future. As we have previously covered, the improved ability to create accurate, long-term weather models could have a serious impact on our ability to verify the discourse on global warming, potentially prompting society to take action against climate change at a scale far more radical and rapid than we are currently motivated to pursue. For many, something as intangible as climate change is only real once seen—perhaps the collision of quantum and AI in this space could endow us with that critical foresight before it is too late.

Impacts in the Financial Sector

In the banking sector, fraud detection techniques are already algorithmic. Classical computers, however, are slow at this task and require substantial computing power to perform the complex mathematics needed to detect fraud. Machine learning, driven by far more powerful quantum computing, could pick up on patterns and spot fraudulent activity with far greater ease and efficacy than is currently possible using classical computing. Likewise, quantum algorithms based on AI systems are set to transform current trading practices through their joint optimisation capacities. Quantum-powered AI systems could potentially play a strong role in identifying trading opportunities algorithmically, without the need for human input.
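The kind of pattern-spotting involved can be illustrated in miniature with a purely classical sketch: flag any transaction that deviates sharply from an account's usual behaviour. This toy statistical filter is our own illustration, not a real fraud-detection system; production models learn far subtler patterns than a simple deviation threshold.

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Flag transactions whose amount deviates from the account's
    mean by more than `threshold` standard deviations -- a crude
    stand-in for the pattern-spotting a trained model performs."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:  # identical amounts: nothing stands out
        return []
    return [i for i, amount in enumerate(amounts)
            if abs(amount - mean) / stdev > threshold]

# Ten ordinary transactions and one wildly out-of-pattern transfer.
history = [42.0, 39.5, 41.2, 40.8, 43.1, 38.9, 40.0, 41.5, 39.9, 40.3, 5000.0]
print(flag_anomalies(history))  # flags index 10, the 5000.0 transfer
```

The promise of the quantum-AI pairing is not a cleverer threshold but the capacity to learn and evaluate such rules over vastly larger transaction histories than classical hardware can handle.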

Of course, any time the need for human input is lost, panic will rise. The fear that AI technologies will cause mass job losses across a number of sectors should not be ignored, but it should not incite panic either. Understanding the duality of the potential powers and threats posed by the fusion of quantum and AI could enable proactive policy, should governments invest the effort into understanding the social impact that these emerging technologies are sure to have.

Join us next week, as we take a deeper look into the nuances of these promises and explore the ways in which improper harnessing of these technologies could produce negative effects in our world.

Artificial Intelligence, Quantum Computing

When Quantum Meets AI: Promises and Perils as Two of Our Future’s Most Powerful Technologies Begin to Collide



Image via GCN

Part I of III

Gabriella Skoff

The game-changing role that quantum computing is set to play in machine learning is a topic of conversation shrouded in much hype. Theoretically, quantum computing could increase algorithmic capacity to crunch large datasets, enhancing the performance of deep learning and accelerating the progress of artificial intelligence (AI) far beyond what is currently possible using conventional computers. While scientists have theorized that the power of machine learning could be exponentially enhanced by quantum computing, it has generally been believed that current quantum technology has not yet reached the level of maturity needed to lend this essential boost. New research, however, presented by a joint team of researchers from IBM Research, MIT and Oxford offers some experimental proof to back this theory.

The recent study published in Nature, entitled “Supervised learning with quantum-enhanced feature spaces”, demonstrates that currently available quantum computers can enhance a certain type of machine learning known as feature-mapping. While the researchers acknowledge that we still have a long way to go before we achieve quantum advantage for machine learning, they have high hopes that the feature-mapping method could ultimately enable computation of far more complex datasets than is currently possible. This news indicates that quantum and AI may collide much sooner than expected.
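The classical idea behind feature-mapping can be sketched without any quantum hardware: project the data into a higher-dimensional space where a simple linear rule suddenly works. In the paper, a quantum circuit performs a projection believed to be hard to compute classically; the toy map below is merely an illustrative classical analogue, not the paper's method.

```python
def feature_map(x1, x2):
    """Map a 2-D point into a 3-D feature space. The extra x1*x2
    coordinate plays the role the quantum circuit plays in the
    paper: it exposes structure invisible to a linear classifier
    in the original space."""
    return (x1, x2, x1 * x2)

# XOR-style labels are not linearly separable in two dimensions...
points = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
labels = [1, -1, -1, 1]

# ...but after the map, thresholding the third coordinate at zero
# classifies every point correctly.
for (x1, x2), label in zip(points, labels):
    mapped = feature_map(x1, x2)
    prediction = 1 if mapped[2] > 0 else -1
    assert prediction == label

print("linearly separable after mapping")
```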

According to Dr. Jerry Chow, Manager of Experimental Quantum Computing at IBM: “[The researchers’] approach to quantum machine learning provides us with a path to understanding in which way even noisy, intermediate-scale quantum computers can outperform classical machine learning algorithms”. The team of researchers was able to achieve nearly perfect classification of their input data using a two-qubit quantum computing system. These results suggest that quantum computing is likely to have a major impact on machine learning, improving its speed and efficacy on a far larger scale than ever before, and sooner rather than later.

Still, reaching a fully conscious AI is a far-off goal, if it can ever be achieved, and fully functional quantum computers may not be just around the corner either. Regardless, researchers hope that quantum computing will speed up the process and help us to create better AI, faster. At the same time, AI is well suited to assist in the development of quantum systems. It can play a critical role in helping scientists make sense of the vast amounts of data those systems produce, identifying patterns and creating systems to understand it at a rate far faster and more efficient than human beings are capable of.

As indicated by the aforementioned research and by the quantum community at large, the quantum-AI horizon draws ever nearer. Within this context, it is essential to explore how this fusion of two of the world’s most powerful technologies will impact our lives, as it surely will. This discussion, not confined to the topic of quantum and AI but common to all emerging technologies, is characterized by a tendency toward polar extremes. The resulting debate is often charged either with fear-based language or with exaggerated promise and hype. This investigation seeks to explore the space in between, in hopes of promoting a more measured and nuanced approach to discussing both the promises and perils of the quantum-AI nexus.

Quantum computers remain a bit of a unicorn concept to many, an elusive idea we hear much about but understand little of. While many have heard that quantum computing should positively impact a number of sectors through, for example, drug discovery, climate forecasting and financial modelling, most of us would not know how to describe what a quantum computer even looks like. The reality is that this will not change for a long time. The conditions needed for a quantum computer to function, including isolation from environmental noise and temperatures approaching absolute zero, require the highly controlled environment of a laboratory setting. As such, quantum computers will not replace personal computers any time soon. However, the impact they will have, and in fact are already having even before quantum supremacy has been reached, will affect us in many ways.

AI, on the other hand, is something that most of us already interact with on a daily basis. Many understand that the capabilities of AI are only as good as the amount and quality of data an algorithm can crunch. Companies like Google have been feeding algorithms mass amounts of data for the last couple of years in an effort to boost their capacity to form “real” human interactions. Did you think that the predictive text function in Gmail was only introduced to help you write emails more quickly? Think again. And Google is far from alone in this practice. Most companies and even many governments collect our data, which in most cases we consent to giving them. Society has grown comfortable with this thoroughly proliferating phenomenon, believing that the information captured will be applied to harmless ends, such as improving the user experience. While this is partly true, it is important to understand that the trajectory of most technologies is not contained within their declared use-value.

For this reason, we seek to explore the scope of quantum-AI fusion from a social impact perspective. The path to this destination, as well as our eventual arrival, presents a host of ethical challenges to confront. At the same time, these technologies promise to revolutionize a diverse array of systems and practices across a broad range of disciplines, and could make this world a better and fairer place for many. The following segments will explore some of the possible positive and potential negative impacts that the merging of quantum and AI could unleash. In doing so, we hope that the entire spectrum of these emerging technologies’ social impact will be considered. With the understanding that in order for the peaks of advancement and positive impact to be reached, the troughs too must be thoroughly investigated, we present an exploration of where we could channel and regulate these technologies in order to boost the positive and diminish the negative impacts they will produce.

Join us next week as we publish the first follow-up in this series on the promises of quantum-AI.

Artificial Intelligence

Moving Beyond Ethics in Data Science



Image Credit: Centre for Translational Data Science, University of Sydney.

Alexander Vipond

At the recent Ethics of Data Science conference, hosted by the Centre for Translational Data Science at the University of Sydney, an interdisciplinary panoply of software engineers, machine learning experts, clinicians and lawyers came together to discuss how artificial intelligence and big data are changing society.

What quickly became clear was that technological advancement in the field is moving so fast that participants were grappling with not only the recent and future impacts on their industries but the sheer pace of technological change itself.

Some presenters argued that the recent talk of ethical AI principles from big tech companies was merely a form of ethics washing, a strategic ploy to delay and weaken regulation on privacy, dangerous content and data rights. Other speakers opined that ethics were simply not enough: that in order for them to be of real value to society we need to move beyond ethical principles to enforceable laws, changes to organisational cultures and clear regulations.

Many of the legal experts in attendance outlined the knowledge gap between technologists and other parts of society, citing the need to properly educate judges, policymakers and politicians on AI so they can make informed decisions. These arguments highlighted the Australian Government’s recent push to strengthen penalties for companies that breach privacy regulations, accompanied by an increase in funding for the Office of the Information Commissioner to pursue data breaches. The recent acknowledgement by Attorney General Christian Porter, as well as by panelists at the conference, that Australian data laws are insufficient to protect citizens in the current environment led to many proposals for change.

These included Australian states joining the European Union’s General Data Protection Regulation and adopting international human rights law as a framework for wider regulation of emerging technologies. There was also a concerted focus on how to protect marginalised communities most at risk of exploitation. For example, many presenters noted algorithms that reinforced racism in US prison sentencing or sexism in recruitment practices. 

On this front, many of the technical presentations delivered a variety of methods to ensure greater fairness in the design process of machine learning algorithms, and outlined the important technical limitations and trade-offs that need to be considered when companies want to harness the power of artificial intelligence. The difference between ethical principles and the formal mathematical models used to embed them in tech, the types of questions machine learning can and can’t answer, and how to reduce bias in datasets gave the interdisciplinary audience a display of the improvements that could be made with creative thinking and a consideration of a broader worldview.

This gave rise to questions of how to address inclusiveness in the industry and the geopolitical spectre of business and state-based competition. While this competition has led to huge investment, it has also prompted a new technological race, the consequences of which must be balanced so that positive breakthroughs for society can be maximised and risks can be addressed. The foundations of clear laws and a national strategy on AI in Australia (with funding to support implementation) are yet to be laid. The conference gave participants a window into the organisational coordination and creative solutions that could be embraced with strong leadership from government and industry.

The author would like to thank Dr Roman Marchant, Professor Sally Cripps, Professor Nick Enfield and the Advisory board for organising the conference.

Artificial Intelligence

The Robots in Your Supermarket



Jayson Waters

AI and other algorithmic technologies have long played a major role in global society and governance. We have previously explored how ‘dumb’ AI supports and maintains fundamental infrastructure and services. In an interesting turn that will impact your shopping basket more than warfare, supermarket chains around the world have begun implementing AI-enabled facial recognition and tracking in their stores.

According to a recent article by Bryan Pearson published in Forbes, stores such as Walgreens, Guess, and Kroger are using AI systems to tailor and target ads to customers. Unlike traditional sales database systems that can only track individual products, AI systems can track purchasing patterns and trends en masse. In turn this information can be used to better determine the susceptibility of customers to various ads and ultimately adapt the shopping experience – from lighting to pricing – to individual users in-store.

Guess and Alibaba have teamed up to create a flagship smart store in which everything from mirrors to clothing racks to fitting rooms will have embedded AI. According to Edward Park, senior vice president at Guess North America, “Every item is enabled with Bluetooth low-energy chips, RFID and motion sensors, which enable all of our inventory to be tracked and analyzed.” This data, if analysed properly, will also allow Guess to monitor traffic patterns and customer interest in items.

On the plus side, AI technologies could allow companies to better predict which products will appeal to customers and thus avoid waste by not producing certain items. On the other hand, the greater depth of knowledge regarding customer preferences may allow advertisers to influence customer choices to a greater degree than ever before.

See here for the full article.

Artificial Intelligence

Words of Consequence


Many hands make light work when AI can automate content generation. Image credit: Ghost in the Shell (2017)

Alexander Vipond

OpenAI, a San Francisco-based AI research organisation, has created a new text generator called GPT-2. The transformer-based language modelling system is capable of generating large amounts of high-quality synthetic text, and can perform reading comprehension, machine translation and summarisation in seven languages.

GPT-2 is designed to predict the next word given all of the previous words in a text sample. Trained on 8 million webpages from the internet, it is capable of zero-shot learning, meaning it can extrapolate short text samples into full articles based on entirely new information.
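That next-word mechanic can be caricatured with a toy bigram model, orders of magnitude simpler than GPT-2's transformer but built on the same principle: given the words so far, sample a plausible continuation. The corpus and seed word below are invented for illustration.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words follow each word -- a toy stand-in for
    the next-word statistics GPT-2 learns at vastly greater scale."""
    follows = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows, seed_word, length=8, rng=None):
    """Repeatedly sample a plausible next word, starting from seed_word."""
    rng = rng or random.Random(0)
    out = [seed_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# An invented miniature corpus.
corpus = "the train was stolen the train was found the thief was caught"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Where this toy looks back a single word, GPT-2 conditions on the entire preceding passage, which is what lets it sustain a coherent (if fabricated) news story rather than a word salad.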

As a general-purpose system, GPT-2 has been able to beat several records held by different domain specific language modelling systems and significantly lowers the complexity and cost of deploying scalable and customisable text generation.

Here’s an example:

Sample prompt: A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

Model Completion (machine-written, first try)

The incident occurred on the downtown train line, which runs from Covington and Ashland stations.

In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.

“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”

The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.

The Nuclear Regulatory Commission did not immediately release any information.

According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.

“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.”

The system constructs a story, creating fake institutions, places and quotations, blending them to produce the illusion of authenticity. The model isn’t perfect, and it may take several tries to get a human-quality passage of text, but the speed and scale of the system are such that OpenAI has withheld the full system from publication.

This is because GPT-2 could be used by malicious actors to commit large-scale information warfare and undermine one of the central principles of the cybersecurity triad: the integrity of information.

OpenAI has acknowledged this threat, citing GPT-2’s capacity to generate fake news en masse, impersonate others, and automate fake content, phishing and spam. By tweaking the system, one can produce endless positively or negatively angled articles. It is also possible to customise it for specific issues to improve the veracity of supporting information in the synthetic content it produces, making it all the more difficult to tell fact from fiction. OpenAI have stated they expect tools like GPT-2 to be available in the next two years.

As dictatorships and authoritarian regimes actively seek to spread misinformation to disrupt elections, obfuscate wars, and insist assassins prefer to spend their time admiring English churches, GPT-2 is a highly attractive tool and a warning of what’s to come.

The malicious use of AI tools will challenge the integrity of the global digital commons, fuelled by states who view the open flow of information as a threat to their governance. The tools will then be passed down to organised crime and developing regimes. As the recent case of Project Raven shows, even as countries increasingly try to secure their intellectual property, their cyber tools and tactics are up for sale.

As William Gibson once said, “the future is already here, it’s just not evenly distributed”. So now that we know the threat is here, what can we do to counter the risks at the different levels of its distribution?

OpenAI will continue their research.

Artificial Intelligence

The Making of Ethical Machines


Niander Wallace pondering the creation of life-altering new technologies in Blade Runner 2049. Image credit: Sony Pictures/Warner Brothers.

Alexander Vipond

Researchers at MIT have undertaken the world’s largest survey of people’s moral preferences across different variations of the trolley problem. The trolley problem’s basic premise is this: a vehicle is about to have an unavoidable accident, and the driver must make a choice as to who or what the vehicle hits. For example, should it swerve right and hit a young man, or swerve left and hit two elderly people?

Edmond Awad and his team collected over 39.6 million decisions from 233 countries and territories through a specially designed mobile game and website, which asked participants to weigh the ethical issues of different versions of the trolley problem according to nine life indicators (which can be seen in infographic b below). Previously, most studies have relied on single indicators, such as a preference for saving many lives over one, rather than attempting to look at the complex interrelationships of multiple indicators. From the responses, the researchers were able to discern large-scale patterns and trends from 130 of the countries and identify people’s key ethical preferences for the preservation of human life.

Moral Machine infographic

Hierarchical clusters of countries based on average marginal causal effect. One hundred and thirty countries with at least 100 respondents were selected. The three colours of the dendrogram branches represent three large clusters—Western, Eastern, and Southern. Country names are coloured according to the Inglehart–Welzel Cultural Map 2010–2014. Image Credit: Awad et al in Nature, ISSN 1476-4687.
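The dendrogram described above comes from hierarchically clustering each country's average preference profile. A minimal single-linkage version of that idea can be sketched as follows; the two-dimensional "preference vectors" and country names are invented stand-ins for the study's nine indicators and real data.

```python
import math

def agglomerate(profiles, k):
    """Minimal single-linkage hierarchical clustering: repeatedly
    merge the two closest clusters until only k remain -- a toy
    version of grouping countries by moral-preference profile."""
    clusters = [[name] for name in profiles]

    def linkage(a, b):
        # distance between clusters = closest pair of members
        return min(math.dist(profiles[x], profiles[y])
                   for x in a for y in b)

    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters))
             for j in range(i + 1, len(clusters))),
            key=lambda pair: linkage(clusters[pair[0]], clusters[pair[1]]),
        )
        clusters[i] += clusters.pop(j)
    return clusters

# Invented two-dimensional preference vectors, e.g.
# (spare-the-young, spare-the-many), standing in for nine indicators.
profiles = {
    "Country A": (0.90, 0.80), "Country B": (0.85, 0.82),
    "Country C": (0.30, 0.40), "Country D": (0.28, 0.45),
}
print(agglomerate(profiles, 2))
```

Run on 130 real country profiles instead of four invented ones, the same merging process yields the tree whose three main branches are the Western, Eastern and Southern clusters.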

They discovered three different ethical worldviews: the Eastern, the Southern and the Western (as displayed in infographic a). These groups agreed on some basic principles and diverged on others. They shared three major preferences: that young people should be spared over others, that many people should be spared over a few, and that humans should be spared over other species. These preferences traversed different cultural, economic, political and religious boundaries.

However, as you can see in the radar plots of infographic b, Eastern, Southern and Western views also express sharply different preferences across the spectrum of the nine life indicators. The Western view skews towards saving the young, the many and taking no action at all, giving the choice to chance. The Eastern view skews towards saving the lawful, humans and pedestrians whilst the Southern view prioritises women, the young and high-status individuals.

Machines are on the verge of being programmed to make life-altering choices, a turning point in history. The questions Awad’s team raise over whether universal machine ethics are possible and whether societies can reach consensus over the use of intelligent technologies are a crucial step in discussing what sort of world we want to live in as we undergo the Fourth Industrial Revolution.

While the world is focused on the threat of killer machines on the battlefield, machine decision-making will pose challenges in times of war and peace. This research tests the limits of universal standards as country specific preferences emerge from the complexity of weighing multiple factors. The scalability of new intelligent technologies may be limited by their adaptability to different cultural environments with varying ethical standards. Geo-strategic tensions and ethical dilemmas over who has the power to control these choices, the diversity of datasets used to make technology and the research used to justify life altering choices will affect company, consumer and government.

For example, moving to a different country in the future may mean moving to a set of new technological moral compasses which will have different criteria, levels of access and personalisation, dependent on the rules of the society.

The Moral Machine experiment is only a snapshot in time; a poll of preferences that remains fluid. Ethical standards will require sensible discussion and update periods to reflect changes in the community. Awad notes that the situations presented rely on 100 per cent certainty of the events occurring and 100 per cent certainty of recognising the targets. In the real world there is a much greater level of uncertainty in these processes.

Beyond this lies the extreme technological challenge for engineers and scientists of how to weigh the vast array of preferences with any semblance of granularity. Can your car accurately evaluate someone’s societal status in the 3.2 seconds before a crash? That technology has yet to arrive. However, in some countries the autonomous car might link with the mobile phones of surrounding pedestrians and choose the person with the lowest social credit score by proxy.

As intelligent and networked technologies continue to develop and impact our lives, they will increasingly become imbued with formalised versions of the rules that govern our societies. The collective may gain power over the individual. What we have previously left to chance and split-second decision-making, we will now expect to be pre-programmed with precision into machines. As Awad’s research shows, countries and communities need to start national and regional conversations about what should be delegable to an autonomous machine and how it is operated, before it is decided for them.


Artificial Intelligence

Facial Recognition in Democracy


Smart contact lens based facial recognition, one possible future. Glasses based technology is already in use. Image Credit: Mission Impossible 4, Paramount Pictures.

Alexander Vipond

Microsoft President and Chief Legal Officer Brad Smith has called for the United States government to appoint a bipartisan expert commission on how to regulate facial recognition technology. In a blog post on the company’s website, Smith responded to allegations that Microsoft had provided US Immigration and Customs Enforcement (ICE) with facial recognition technology in January this year. The recent role of ICE in the deportation and separation of immigrant families in the US has drawn large-scale protests and widespread backlash across the country.

The possibility that this technology could have been used for such purposes saw Microsoft in the firing line, frantically seeking to explain the nature of their contract. Smith clarified that the contract is limited to “supporting legacy email, calendar, messaging and document management workloads” and reiterated that Microsoft does not support the policy.

This became the catalyst for Smith to call for the regulation of facial recognition technology. He posed two scenarios for the future use of the technology: one positive, one negative.

Imagine finding a young missing child by recognizing her as she is being walked down the street. Imagine helping the police to identify a terrorist bent on destruction as he walks into the arena where you’re attending a sporting event. Imagine a smartphone camera and app that tells a person who is blind the name of the individual who has just walked into a room to join a meeting.

Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first.

The problem is those futures have already arrived.

In June, Jarrod Ramos murdered five journalists at the Capital Gazette in Anne Arundel County, Maryland over reporting of his criminal past. As he attempted to exit the scene by blending in with people fleeing, he refused to provide identification to local officers and fingerprinting failed. He was identified with facial recognition technology from a collection of state databases.

In non-democratic countries, governments are already rolling out persistent large scale surveillance of citizens using big data and facial recognition. And while the stores at the shopping mall aren’t sharing with each other yet, they have been deploying dual use facial recognition technology to prevent shoplifting and monitor customers’ in-store browsing habits to increase sales.

So, the future is already here. What do we do about it?

Behind the facial match is the data. Who has access to it? How and why is it used? In what situations can different levels of government employ the technology and what restrictions are there on private sector use? How long is biometric data kept for and can it be kept safe?

These are basic questions currently left unanswered by US law and that of many countries around the world. Smith himself raises a series of ethical issues about the legal standards of facial recognition algorithms. The spectrum ranges from persistent passive collection of everyone’s data, to data which is only used in a ticking time bomb situation. This variance reveals both the scale of the technology and the governance space democracies must define.

As Smith notes, this does not absolve technology companies of responsibility. Microsoft has appointed an AI ethics board to help navigate new technologies and vet potential customers. This intrinsic method of fostering good governance is just as important as regulation (although it cannot replace it) in helping create a culture of ethical technology development, because companies can otherwise treat regulatory frameworks as a series of loopholes to exploit and costs to be avoided. Both are necessary to create an environment in which the potential benefits of facial recognition technology can be realised without its abuse damaging democratic freedom.

However, the distinct problem of dual-use applications of AI poses questions that should be answered before the final sign-off. The Intelligence Advanced Research Projects Activity (IARPA) in the US has a set of questions it asks of proposed technologies: if the technology is leaked, stolen, or copied, would we regret having developed it? Can the technology be made less prone to theft, replication and mass production? How long would it take a major nation-state competitor to weaponize this technology after learning about it? In light of the geo-strategic implications of AI products, major technology companies should consider these questions and add to them.

This addition should be a set of civil questions focused on how a proposed AI technology operates within a democracy. Questions on how the risk of discrimination could be mitigated. How will this technology anonymise and delete data it collects? Can this technology scale to affect democracy, in what ways? Questions like these could help tech companies balance their responsibility to civil society and the international community. This requires wider societal discussions on what rights citizens need in the digital age. This discussion must occur now, for as we have seen the future is already here. It is from this base of regulation and corporate governance that democracies can refine ideas on how to positively contribute to the global debate on AI ethics.


Artificial Intelligence

Collective Intelligence: DeepMind’s AI plays Quake 3



Alexander Vipond

British AI firm DeepMind has a penchant for playing games. From Atari to Go, DeepMind researchers have been training AI agents to compete against themselves and beat human competitors.

The latest challenge was a specially modified version of a gaming classic: Quake III. A multiplayer first-person shooter video game from the turn of the millennium, Quake III sees teams of aliens and humans vie for supremacy. DeepMind has taken this premise to the next level: now it is AI agents versus humans.

Through reinforcement learning, AI agents were tasked with learning to capture the enemy flag from their opponents’ base over a series of procedurally generated environments in which no two games were the same. The aim was to see if AI agents could learn strategies and cooperate in complex three-dimensional environments with imperfect information, that is, without knowing where the other team’s players are. To do this, DeepMind created a new type of agent dubbed FTW (For the Win) that could rise above traditional methods of AI competition and exceed human standards.

By training a whole population of FTW agents on two timescales (fast and slow), thereby increasing agents’ consistency, and using a two-tiered reward scheme to incentivise action regardless of a win or loss, FTW agents were able to learn a higher standard of gameplay. Training a population in parallel not only proved to be more efficient but revealed a diversity of approaches, as agents optimised their own unique internal reward signals.
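The population-based loop can be caricatured in a few lines: evaluate every agent, let the weakest half clone the best agent's internal reward weights, then perturb and repeat. Everything below (the two-dimensional weights, the hidden "ideal" trade-off, the noise level) is an invented stand-in for the real training signal, which in DeepMind's work comes from hundreds of thousands of actual games.

```python
import random

def evaluate(weights, rng):
    """Hypothetical stand-in for a batch of games: fitness is how
    closely an agent's internal reward weights match a hidden
    ideal trade-off, plus a little match-to-match noise."""
    ideal = (0.7, 0.3)  # invented "right" balance of reward terms
    return -sum((w - t) ** 2 for w, t in zip(weights, ideal)) + rng.gauss(0, 0.01)

def train_population(size=8, steps=30, seed=0):
    """Toy population-based training: rank the agents, keep the top
    half, and let the bottom half clone the best agent's internal
    reward weights with a small perturbation."""
    rng = random.Random(seed)
    population = [[rng.random(), rng.random()] for _ in range(size)]
    for _ in range(steps):
        ranked = sorted(population, key=lambda w: evaluate(w, rng), reverse=True)
        best = ranked[0]
        population = ranked[: size // 2] + [
            [w + rng.gauss(0, 0.05) for w in best]  # exploit, then explore
            for _ in range(size // 2)
        ]
    return max(population, key=lambda w: evaluate(w, rng))

print(train_population())  # drifts toward the hidden (0.7, 0.3) trade-off
```

The point of the sketch is the mechanism, not the numbers: agents that happen to reward themselves for useful behaviour get copied, so the population as a whole discovers good internal reward signals without anyone specifying them in advance.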


Credit: DeepMind

After an average of 450,000 training games, the FTW agents were placed in a tournament with forty human players, whom they could play with as teammates or against as adversaries. In order to ensure a fair fight, researchers engineered a handicap into the game to offset the AI agents’ reaction-time advantage, since the human eye’s natural frame-rate processing limit sits below the game’s 60 frames per second.

On average, FTW agents achieved a higher win rate than human players with strong skill ratings. The agents learnt how to follow teammates, defend their base and “camp” at the enemy’s base, picking off players as they spawned. They achieved the greatest collaboration in teams of four, although they struggled to maintain this as the number of players expanded.


Credit: DeepMind

As usual, it’s not the game itself that represents progress here but the evolving capacity for AI agents to develop cooperative behaviour. DeepMind has demonstrated the capability of AI agents to work in small teams, alongside humans and other AI agents, towards a shared goal. The more AI agents can work together to manage uncertain environments and imperfect knowledge the better they will perform when faced with the chaos of the real world.