Susannah Lai
The next panel of the day focused on technical areas adjacent to quantum physics itself, investigating the impact of quantum developments on other areas of science. The panel was chaired by Professor Colin Wight and featured Professor Toby Walsh, Ms Tara Roberson and Dr Abhijnan Rej, representing a range of disciplines likely to be affected by advances in quantum physics.
Professor Toby Walsh, Scientia Professor of Artificial Intelligence at UNSW, leader of the Algorithmic Decision Theory group at Data61 (Australia's Centre of Excellence for ICT Research) and an elected fellow of the Australian Academy of Science, spoke first, opening the panel with a discussion of the current state of AI and what quantum computing might do for the field. For the most part, Professor Walsh explained, the only benefit he could see quantum computing providing is a boost in processing power.
However, he said, this is not enough to truly advance AI. Current AI lacks some fundamental aspects of being 'intelligent', and more processing power alone will not fix this, although any solution would likely require additional processing power. The field is running up against deeper questions of intelligence, such as how to emulate consciousness and self-awareness. According to Professor Walsh, if more processing power became available now, it would just create a 'faster thinking dog' rather than anything close to a machine that could pass the Turing test.
He acknowledged that human consciousness could possibly possess a 'quantum' element, but this could be simulated on conventional computers, so a quantum computer is not required. Fundamentally, human consciousness remains an elusive concept, especially with regard to emulating it, and this is what current AI lacks. While the general public perceives that human-level AI will be achieved sometime in the next 50 years, scientists in the field estimate very differently: some think it will never be achieved, while most others place it closer to the end of this century.
In terms of the political aspects of the field, Professor Walsh asserted that the AI race is more 'defined' than the 'quantum race': it has clearer goals and milestones, points of achievement that indicate a solid, comprehensible impact on technology and society. As an example, he brought up the recent achievement in which an AI, AlphaGo, was able to beat expert human Go players. Go is a classic game in East Asian culture, particularly in China, and is considered a game of intellectuals, even more so than chess. Go is often depicted in paintings as being played by sages and other great thinkers; a Western equivalent might depict Newton, Einstein, Galileo, Da Vinci or Socrates playing the game. When played by a human, Go requires the ability to think and plan strategically to capture territory safely, as well as to predict an opponent's moves and attempts to capture territory.
Walsh noted that this development shocked East Asian countries, particularly China; the idea that a computer could master a game such as Go was deeply unsettling. He referred to the advancement as a 'Sputnik moment': an occasion when one power realizes the extent of another, usually opposing, power's advancement in a particular technology. Since that moment, China, previously somewhat disinterested in the field, has invested heavily in AI research. The level of Chinese investment has now grown to the point that Alibaba, a single Chinese tech company, is spending more on AI research than some nations: an estimated 15 billion dollars. Along with this level of investment, China's access to large amounts of data has helped it dominate the field; it is now the true competitor to tech giants like Google and Facebook in AI development.
As for the consequences of the 'race' to develop quantum and AI technologies, Professor Walsh questioned whether the term is being thought about in the wrong way; after all, he said, the 'electric race' was to the benefit of everyone. Additionally, while there is risk in further developing AI technology, he believes that risk is largely negligible. The current 'dumb AI', he mused, is far more likely to change the world through its use than a future, theoretical 'smart AI'.
Following this, Ms Tara Roberson, a PhD candidate at Australia's National Centre for Public Awareness of Science at ANU and President of the SEQ branch of the Australian Science Communicators, spoke on communicating research and technological developments, and on 'hype' in particular. She concentrated on how hype around quantum is affecting the discourse and whether it is a boon or a problem, with regard to both technological development and policy making around quantum.
There are, she explained, many effects of hype on a developing technology, but in the context of the panel she chose to focus on one question: can quantum hype serve a purpose? She described hype in general as a 'convergence of assumptions', wherein various parties impose on a science their own ideas of what could be developed without necessarily considering the actual state of development. This, Ms Roberson said, is a problem across the wider scientific community.
Hype can work to the benefit of science, and of science communication to the public, when it opens discussion of the possibilities of research in an area by talking about how things might develop. Problems arise with quantum, however, because it has been conceptualized as a highly disruptive technology, creating the idea that nations will need to race to develop it in order to keep their place in the world. This is a major contributor to quantum development being viewed as a race, and at least by some policy makers, a competitive one.
On a more positive note, Ms Roberson acknowledged that hype could be put to good use, employed to shift the dialogue toward a more peaceful and co-operative one by looking at what quantum technology could do for the world as a whole, rather than at one nation or another achieving supremacy. She pointed, for example, to quantum key distribution, and to using quantum processes as true random number generators in cryptography, as ways to make the online environment more secure.
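To make the random-number example concrete, here is a minimal sketch of how a truly random key feeds key-based encryption. Python's standard secrets module (a classical generator) stands in for a quantum entropy source, and the message and key handling are purely illustrative: a truly random key, used once and as long as the message, is what makes one-time-pad style encryption unbreakable.

    import secrets

    def generate_key(length: int) -> bytes:
        # In a quantum RNG these bits would come from measuring a quantum
        # process; here they are drawn from the OS entropy pool instead.
        return secrets.token_bytes(length)

    def xor_encrypt(message: bytes, key: bytes) -> bytes:
        # One-time-pad style XOR: secure only if the key is truly random,
        # used once, and as long as the message.
        return bytes(m ^ k for m, k in zip(message, key))

    message = b"meet at dawn"
    key = generate_key(len(message))
    ciphertext = xor_encrypt(message, key)
    assert xor_encrypt(ciphertext, key) == message  # XOR is its own inverse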
The discussion then took a turn for the distinctly more sombre as Dr Abhijnan Rej, a New Delhi-based defence analyst and political-risk consultant with extensive qualifications in higher mathematics, including string theory, as well as an impressive resume of work in public policy, took the stage. He focused on what AI actually is, and its possible applications in a military or national security context. For the most part, he discussed this in the context of the tense relationship between India and Pakistan, even more relevant now in light of recent escalations.
On the subject of the conceptualization of AI and what it means, Dr Rej was of the opinion that definitions and ideas about AI change from generation to generation, but that they basically include three ideas: the 'Turing test', where an AI provides human-level interaction to the point that it is indistinguishable from an actual human; 'expert systems', which emulate the decision-making process of a human expert in a given analysis; and 'learning machines', where an AI is fed data and analyses it to improve its own decision-making algorithms. These three ideas capture the fundamental expectations most people hold of what AI represents.
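As a concrete illustration of the 'learning machine' idea, the sketch below trains a minimal perceptron: it is fed labelled examples and adjusts its own decision rule whenever it errs. The training data, learning rate and target function are invented for illustration, not drawn from the talk.

    def train_perceptron(samples, labels, epochs=20, lr=0.1):
        # Start with a blank decision rule (all weights zero).
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), y in zip(samples, labels):
                pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = y - pred  # self-correct whenever the decision is wrong
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    # Learns a simple decision boundary (here, the logical OR of two inputs).
    w, b = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 1, 1, 1])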
Of course, there is great potential for harm when the systems mentioned are put to military use. An autonomous system placed at a sufficiently high decision-making level could react to incidents indiscriminately, without reference to the nuances of international relations, particularly in armed conflict. While it is theoretically possible to program an AI with all the current laws of international armed conflict, these laws are heavily subject to interpretation and are usually weighed against the variables and context of a specific conflict or situation before decisions are made. The circumstances of conflict present such a breadth of possibilities and complex issues that it would be impossible to anticipate, and therefore to plan and program for, them all.
There are more issues to consider in this vein: AI systems retain the initial biases of the people who program them. No matter how complex the decision trees, expert systems, or even learning algorithms, the AI will operate on the initial rules that were programmed into it. This becomes a problem when the rules and conditions are subjective in nature, international law being a classic example; any AI programmed to act on them would be bound to the interpretation its programmers held. Considering that current AI is also notoriously unpredictable and unreliable even with strict, objective rulesets, the idea of an AI in charge of an automated military strike system is terrifying.
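A toy rule makes the point about baked-in interpretation. Every name and number below is hypothetical: the 0.8 threshold encodes one team's judgement of what counts as 'sufficient certainty', and a different reading of the same legal standard would produce a different hard-coded rule.

    def strike_permitted(confidence_hostile: float, civilians_nearby: bool) -> bool:
        # Both the threshold and the civilian veto encode the programmers'
        # interpretation of a subjective legal standard.
        return confidence_hostile > 0.8 and not civilians_nearby

    # Two assessments either side of the arbitrary cut-off:
    print(strike_permitted(0.79, False))  # False
    print(strike_permitted(0.81, False))  # True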
Dr Rej brought up the concept of a 'dead hand switch', an automatic second strike launched without human intervention in response to a detected offensive strike. As an example of one way this could go very wrong, he spoke about 'Dr Strangelove or: How I Learned to Stop Worrying and Love the Bomb', and Herman Kahn, one of the figures who inspired it. In Dr Strangelove, the Soviet Union constructs a second-strike system, a 'doomsday weapon' that cannot be stopped or disarmed once triggered. At the same time, a rogue American general orders a first strike in response to an imagined attack. The leaders of both nations try to halt the progression of events, but the inevitable happens, and the film ends with a montage of nuclear explosions.
Herman Kahn was an advocate of 'second-strike' systems, specifically ones armed with nuclear weapons, on the theory that they would create enough deterrence that the Soviet Union would not openly attack the US. Somewhat alarmingly, at least for the average citizen, Kahn was of the opinion that whether mass death came in the form of 'hundreds of millions' or 'merely a few major cities', this was not 'the end of the world', and that the US needed to accept the potential consequences of a nuclear exchange in order to credibly threaten to 'push the button', which was, in turn, necessary for second-strike deterrence to work. This philosophy makes the use of AI in second-strike systems alluring: if major parts of the command structure are wiped out in the initial attack, the AI would still be primed to respond. It is an uneasy thought. In the recent Hawaii incident, an automated missile alert was sent out due to human error; it is not a stretch to imagine a similar error triggering an automatic launch of a nuclear weapon.
After this, Dr Rej moved on to the concept of C2ISTAR (Command and Control, Intelligence, Surveillance, Target Acquisition, and Reconnaissance) in the context of AI usage. On the 'C2' side, according to Dr Rej, the US 'nuclear codes' currently amount to a series of menus, and he speculated that AI expert systems could be used in this area instead. Generally, AI in C2 would be used to operate systems and streamline operational structures. The question of AI in the ISTAR part of the equation is more complex.
There are, he said, claims that Russia has hypersonic missiles, and further discussion of using AI in these missiles for guidance and targeting. This is a fairly common suggestion for implementing AI in the 'TA' (target acquisition) part of ISTAR; indeed, basic AI in onboard missile guidance systems is well established. The use of AI in other parts of ISTAR has not been so thoroughly explored. Dr Rej commented on one particular area: the ability of AI to help process so-called 'big data', where large amounts of information, usually relating to a particular area of interest, are collected and analysed for patterns and trends, yielding conclusions and insights that are then extrapolated to predict future results. As for specific uses of this capability, Dr Rej described the current nature of some of the ongoing conflict between India and Pakistan.
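In miniature, that collect-analyse-extrapolate loop looks something like the sketch below. The observation counts are synthetic and the linear trend fit is the simplest possible choice; a real intelligence pipeline would be vastly richer, but the shape of the computation is the same.

    import numpy as np

    observations = np.array([3, 4, 4, 6, 7, 9])  # e.g. weekly event counts
    weeks = np.arange(len(observations))

    # Fit a linear trend to the collected data, then project it forward.
    slope, intercept = np.polyfit(weeks, observations, 1)
    next_week = slope * len(observations) + intercept
    print(f"predicted next count: {next_week:.1f}")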
Pakistan, he explained, has tactical nuclear weapons in its arsenal and a policy that they can be deployed during conventional warfare to achieve strategic goals; it also maintains a 'first strike' policy. By contrast, India has no tactical nuclear weapons, forbids the use of nuclear weapons in conventional warfare, and maintains a 'second strike' only policy. These policies, commendable as they are, put India at a considerable strategic disadvantage in matters of conflict with Pakistan. To make up for this disadvantage, India has developed a 'counter-force' strategy, centred in particular on analysing where Pakistani missiles are stored.
It is here, Dr Rej said, that the 'quantum spin' could come in. The border between Pakistan and India, specifically in Kashmir, is mountainous and difficult to scan and traverse. As a result, it is difficult for India to determine where Pakistan has stored armaments or missile silos, or indeed any military installations that could seriously threaten India. While India has collected extensive data on the region by various means, including satellite scanning and geo-scanning, it lacks the means to process this raw data into useful information about the geography that could point to installations. The promise of quantum computing, whether a boost in computational power or more advanced AI processing, would make analysing this data for useful information feasible.
After this, the floor was opened to questions, which were split between the heavier topics discussed and the topic of hype and scientific communication. One question raised the prospect of conflict with China or with non-state actors, as opposed to Pakistan; Dr Rej answered that China, unlike Pakistan, does not in fact have tactical nuclear weapons, noting that this puts India and China on a more even strategic footing in that specific regard. There were also questions on the potential of AI to identify C2 communications conducted over social media, where they can be disguised while remaining accessible to the relevant contacts; Dr Rej commented that India is interested in this, given its relevance to security. Moving to topics closer to science communication, a question was asked about the tendency not to talk about quantum's potential for peace. The response was that quantum is regarded as a disruptive technology, which biases conversations toward conflict and threats to global security, and therefore toward war rather than peace.