Part III of III

Gabriella Skoff

Quantum computing and artificial intelligence (AI) are disruptive technologies. They are set to transform and rupture both industry and societal networks as they challenge and displace current systems. The powerful evolution that will be ignited across disciplines if and when these technologies become fully functional and cohesive is sure to challenge modern hegemony in a number of ways. Some of these changes, as we have discussed in last week’s instalment, could endow us with the tools to make this world a better and fairer place for people around the globe. Of course, the duality of light and dark also characterizes emerging technologies, and some of these changes could impose powerfully deleterious effects on society. While we have discussed some of the potentially promising applications for these technologies in the future, we also caution that there is no way of knowing exactly how these technologies will evolve, both together and separately. Industry and government cannot ensure that emerging technologies have only positive societal impacts. However, these forces certainly do have some power over the process to leverage benefits in a way that consciously produces positive impacts and opportunities to confront underlying societal challenges.

This final instalment of our quantum-AI series will caution against the type of thinking that views technological advancements as be-all-end-all solutions. Certainly, these emerging technologies present the potential for revolutionary social, economic and environmental benefits. However, if we focus only on the positive impacts these emerging technologies may have, we risk blinding ourselves to the underlying issues that they may confront but cannot solve. In this way, quantum-AI applications could produce a Band-Aid effect or could even exacerbate certain challenges rather than “fix” them. Instead, with a human-centric approach and a forward-looking perspective, we hope these technologies can be channelled and regulated in ways that work for the benefit, rather than to the detriment, of societies across the globe.

Towards a More Objective AI?

When we talk about a quantum boost to AI development, it is imperative that we consider the challenges facing machine learning today and how these might be affected by the accelerated development of AI technologies. In last week’s instalment, we explored how quantum-powered AI might enable machine learning algorithms to create more nuanced output from more complex data sets. While this may ultimately come to pass, enhancing AI’s capacity to process larger and more complex data sets does not guarantee a fix for its bias problem. That is because this problem is not wholly a technological one, but mostly a human one.

A recent MIT Technology Review article dives into this issue, citing a new report by the AI Now Institute, which explores AI’s ongoing “diversity crisis”. The report presents some staggering figures with regard to gender and racial diversity in the AI industry and argues that there will be no solution to AI’s bias problem without a monumental shift in these demographics. Across both industry and academia, the percentage of Black and Latino contributors to the field of AI is no higher than 5%, and the share of non-male contributors sits below 20%. With such a homogeneous group of people writing AI algorithms, it is no wonder bias exists in the technology.

Some powerful examples of the negative social impacts produced by the application of AI are given in the aforementioned article: “Along the way, the technology has automated the biases of its creators to alarming effect: devaluing women’s résumés, perpetuating employment and housing discrimination, and enshrining racist policing practices and prison convictions.” Clearly, these biases are not harmless and have already manifested negative social impacts across a number of sectors. This is a challenge that quantum-boosted AI will not be able to confront. If we believe that a higher-powered AI able to process vast amounts of data will alone solve AI’s bias problem, we are severely mistaken. Ignorance of these underlying, systemic issues will only serve to deepen divides and reaffirm inequalities.

This debate challenges the misconception that technology exists independently of society. The belief, originating in the early nineties with the advent of the Internet, that cyberspace would be an egalitarian realm free from notions of race, gender or identity remains cemented in the human psyche. Though it is increasingly challenged today, the results of this belief are instituted throughout the culture and practices of the technology sector. Long-term systemic change will be required to displace this dynamic: from a shift in male-dominated workplace culture to hiring, retention and workplace support practices for racial and gender minorities. The journey to diminishing AI’s bias problem is inherently tied to achieving far greater diversity and better representation in AI technology development.

Impacts in the Medical Sector

Quantum-powered AI technologies promise to revolutionize healthcare systems across the globe in a number of ways. But will the further automation of healthcare services be a benefit or a detriment to patients?

Recent years have seen healthcare systems across the globe move towards an approach that seeks to maximize efficiency. As a result, patients receive less of their doctor’s time, more of which is instead demanded by the computer’s reporting and diagnostic systems. Likewise, changes in insurance policies over the last decade have normalized reduced patient-doctor contact. A 2017 systematic review covering 67 countries notes that already, “For half of the world’s population, primary care doctor visits last less than five minutes.” This, coupled with the boom of Big Pharma, has resulted in most doctor’s visits ending with a prescription in hand, at least in the United States.

Certainly, in the United States, prescribing drugs is the most expedient outcome of a GP visit in which the patient is allotted an average of only 15 minutes of the doctor’s time. This reality is already a serious challenge that has produced negative societal impacts. One such example is the vicious opioid crisis currently plaguing the country, due largely to the over-prescription of opiates. While this is only one example from one country, it illustrates that the situation in primary healthcare across the globe is extremely complex and warrants specific consideration before current systems are advanced with new technologies.

Quantum-AI technologies promise to further increase the efficiency of these systems, but there is good reason to pause and ask what role this may or may not play in actually improving health outcomes for patients. The increased automation that may result from the confluence of quantum computing and machine learning could further entrench healthcare systems in this fast-food style of primary care. It is not difficult to imagine how this could encourage further reliance on automated diagnosis of medical conditions linked to an automated prescription of drugs. That would only move us further away from a holistic approach to healthcare, in which doctors spend more time trying to understand a patient’s specific circumstances in order to recommend not only drugs but also lifestyle changes in areas such as exercise, diet and sleep.

If future quantum-AI systems are to function in harmony with human practitioners, a deep understanding of the intricate complexities of the current state of healthcare, and the directions in which it is moving, is essential. Other considerations worth weighing in this debate include: the cost viability of implementing new quantum-AI systems uniformly across healthcare providers, the difficulty a GP may have in deciphering the diagnostic reasoning of AI systems, and the potential privacy and ethical issues surrounding the storage of patient data, especially in the context of quantum encryption and hacking.

Impacts in Business and Government

Traditionally, the implementation of high technology in business has been slow to take hold. However, current trends point to a shift in this practice. According to Forbes: “For the first time ever, emerging technology investment has superseded customer understanding as the No. 1 thing firms want to do more of to be innovative.” As the pace of emerging technology adoption in business accelerates across sectors, there is a fundamental need to apply a human-centred policy approach to the implementation of quantum and AI technology in order to keep humans “in the loop”.

Technological innovation in and of itself is of little value to a business. Emerging technologies like quantum and AI do not exist in a bubble apart from society; they function within human-built and human-run systems and networks. As such, if an organisation is not flexible and adaptive enough to accommodate a mass technological shift, such as the incorporation of quantum-powered AI into financial modelling systems, the implementation may cause more problems than it solves. Again, while this challenge is specific to the implementation of new technologies, it is a people problem, not a technology problem. If the organisation itself is not adaptive enough, such implementations could be not only detrimental to the culture and wellbeing of employees but also potentially fatal to the business itself.

As with business, government systems too will need flexibility and agility in order to adopt emerging quantum-AI capabilities. Government agencies are bureaucratic and hierarchical, which can make emerging technology implementation especially challenging and time-consuming. As we have previously covered, the United States DoD has been particularly slow on the uptake of quantum technologies. When businesses and governments alike begin to consider the implementation of quantum and AI technologies, they should do so with the foresight and human-centric approach necessary for such system changes to take hold and function effectively. While it is challenging to regulate emerging technologies before they are fully functional in applied settings, there is a need for strong, proactive government foresight into how quantum and AI will develop, both in tandem and in isolation from one another, as well as within existing structures and systems.

Whose Role is it?

In the case of the informed and proactive regulation of emerging technologies, responsibility and control fall largely upon the shoulders of two powers: industry and government. Due to the fundamental role that the private sector plays in the development of quantum and AI, the United States provides an interesting case study. As we have debated previously on the blog, the United States’ heavy reliance on Silicon Valley as its technology development machine, especially for defence purposes, comes with a host of specific challenges and advantages.

Unlike in China, where national technology investment strategies are extremely well-funded and seamlessly incorporate the power of a deeply nationalistic and committed private sector, the United States’ approach leaves far more room for dissonance. Notoriously anti-establishment and globalist, Silicon Valley produces companies like Google, Amazon and Microsoft, where grassroots individualism often manifests as a central business characteristic. In comparison to China’s strategy, this creates a contentious and sometimes tumultuous relationship of dependence between the United States Department of Defense (DoD) and these tech giants, which it is hoped will deliver on national defense contracts.

Those who fear this dynamic will harm United States security interests have dubbed it Silicon Valley’s “techno-moralist” problem. But while this dynamic may appear to be a challenge for the DoD in the race to achieve dominance in quantum and AI development, it also presents an opportunity for industry to bring the crucial lens of ethics to the issue, which could play a critical role in helping to regulate the development and fusion of these two technologies.

From a top-down perspective, the United States government has recently taken interest in the development of what is commonly known as ethical AI. In February of this year, a resolution, “Supporting the development of guidelines for the ethical development of artificial intelligence,” was introduced in the House of Representatives. However, this high-level conversation around the ethics of AI has yet to extend to quantum. This is due to many factors, including the public’s lack of engagement with the topic of quantum technologies, which can be challenging to understand in the abstract. Nonetheless, the United States government’s policy formation around the prospective social impacts of AI applications lends hope to the development of a similar approach to quantum technologies.

The Defense Advanced Research Projects Agency (DARPA) is the agency of the United States DoD responsible for funding and coordinating the development of emerging technologies for military applications. It is also the biggest funder of basic science in the U.S. While DARPA pours billions of dollars into the development of emerging technologies, funding for research into the social impacts of these technologies is severely lacking. As we have urged previously, “social impact research has the potential to make systems run smoother, more ethically and sustainably, giving an advantage to a science and technology environment that takes a long view rather than a short-term perspective.” Further, the DoD’s reliance on the “techno-moralists” to deliver the technological advancements needed to maintain U.S. military supremacy presents an important argument for a shift in this imbalance. Given the context of Silicon Valley’s tech culture, this is not a debate that will be won by DoD money or muscle. Rather, it is one that must appeal to a sense of ethics. That can only be done by government demonstrating a strong commitment to understanding the social impacts that future applications of quantum-AI technologies will undoubtedly produce.