In Westworld, Dr Robert Ford ponders the artificial intelligence products he has created. Credit: Johnson/HBO
Alexander Vipond
In recent months, Google has faced an internal revolt from its staff. Resignations, internal petitions and protests have been part of a heated debate over a US$9 million contract with the Pentagon to interpret drone footage and improve targeting systems. The contract, part of the Pentagon’s artificial intelligence initiative Project Maven, was expected to grow into a $250 million venture.
However, within days of the New York Times breaking the story, publishing internal emails in which senior staff and executives anticipated the fallout and sought to keep any mention of AI out of the discussion, Google Cloud CEO Diane Greene announced that the company would not renew the contract in 2019.
As Fei-Fei Li, chief scientist at Google Cloud, wrote in one of those emails: “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google.”
The saga raises important questions about AI development. For Google, the company whose motto has evolved from “Don’t Be Evil” to “Do the Right Thing”, the contract represented a cultural crisis over the direction of the company. It also presented a business dilemma: risk losing extremely talented staff or forgo highly lucrative defence contracts. DeepMind, one of Alphabet Inc.’s most important AI subsidiaries, has refused to work on military and intelligence applications. Against this backdrop, Google is updating its AI ethics guidelines to provide a new framework for its product development. The outcome remains to be seen, but Google has said it will make the document publicly available.
Amazon, IBM and Microsoft have also competed for Pentagon contracts in this field, with far less scrutiny. Project Maven will move forward with or without Google. In response to questions at a Google-wide meeting, co-founder Sergey Brin said he felt it would be better for peace if global militaries worked with international organisations such as Google rather than with nationalistic defence contractors.
Herein lies the complexity. Powerful AI is a controversial topic, not only in the US but worldwide, with major industry figures divided over what could arise from it. Weaponised AI has been at the forefront of the debate. Given the current state of US politics, it is understandable that Google employees might feel apprehensive about how their algorithms are used.
If we zoom out to the global context, the world is clearly in the midst of a long-term arms race in AI. Military powers across the world are pursuing a multitude of intelligence and weapons applications for AI. Vladimir Putin has said that whoever masters AI will rule the world, whilst in China AI is a core component of the national strategy to become the world’s dominant science and technology power. In both market and military terms, the United States stays out of this competition at its own peril.
Whether you are an employee sitting at your desk in one of the world’s biggest tech companies, a president or a dictator, the rules of the game are fluid. Most scientists and programmers share their AI research and code across international boundaries on open knowledge platforms, both to develop technologies that will assist humanity and for commercial gain. But without commonly agreed ethical rules and guidelines, the exploitation of AI knowledge by individual countries acting in their own self-interest is likely to lead to a race to the bottom and a tragedy of the commons. Efforts like the Campaign to Stop Killer Robots must contend with the allure of the cheap asymmetric power that AI technologies offer, and the range of AI’s dual-use applications means a global ban is unlikely to be effective.
As the race spreads and gathers pace, so too must the tenor and tempo of the AI debate. Controversies within multinational companies can be usefully redirected into an open discussion on how best to bridge private and public, national and international interests. At the top of the agenda should be clear operating guidelines to mitigate accidents, prevent atrocities and safeguard basic human rights. International politics in an era of heteropolarity is difficult, but as the inner turmoil at Google reveals, creative leadership, public accountability and a modicum of global governance are vital if human and artificial intelligence are to coevolve ethically and peacefully.