Artificial Intelligence

The Making of Ethical Machines


Niander Wallace pondering the creation of life-altering new technologies in Blade Runner 2049. Image credit: Sony Pictures/Warner Brothers.

Alexander Vipond

Researchers at MIT have undertaken the world’s largest survey of people’s moral preferences across different variations of the trolley problem. The trolley problem’s basic premise is this: a vehicle is about to have an unavoidable accident and the driver must choose who or what the vehicle hits, e.g. swerve right and hit a young man, or swerve left and hit two elderly people?

Edmond Awad and his team collected over 39.6 million decisions from 233 countries through a specially designed mobile game and website, which asked participants to weigh different versions of the trolley problem according to nine life indicators (shown in infographic B below). Previous studies have mostly relied on single indicators, such as a preference for saving many lives over one, rather than examining the complex interrelationships between multiple indicators. From the responses, the researchers were able to discern large-scale patterns and trends in the 130 countries with enough respondents, identifying people’s key ethical preferences for the preservation of human life.
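
To make this concrete, below is a minimal sketch in Python of how a preference score for an attribute could be extracted from this kind of pairwise dilemma data. It is not the authors' code: the column names, the toy responses and the simple difference-in-means estimator are assumptions made for illustration, whereas the study itself estimates the average marginal causal effect of each attribute from its full conjoint design.

```python
# Minimal sketch (not the authors' code): estimate a preference score per
# attribute as the difference in the probability of being spared when the
# attribute is present versus absent. Column names and data are hypothetical.
import pandas as pd

# Hypothetical long-format data: one row per *side* of each dilemma, with
# binary attribute flags and whether the respondent chose to spare that side.
responses = pd.DataFrame({
    "scenario_id": [1, 1, 2, 2, 3, 3],
    "has_young":   [1, 0, 0, 1, 1, 0],   # side contains younger characters
    "is_human":    [1, 1, 1, 0, 1, 1],   # side contains humans rather than pets
    "spared":      [1, 0, 1, 0, 1, 0],   # respondent spared this side
})

def preference_score(df: pd.DataFrame, attribute: str) -> float:
    """P(spared | attribute present) minus P(spared | attribute absent)."""
    present = df.loc[df[attribute] == 1, "spared"].mean()
    absent = df.loc[df[attribute] == 0, "spared"].mean()
    return present - absent

for attr in ["has_young", "is_human"]:
    print(attr, round(preference_score(responses, attr), 2))
```

A positive score means that sides with the attribute were spared more often than sides without it, which is the sense in which the survey reveals a "preference".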

Moral Machine infographic

Hierarchical clusters of countries based on average marginal causal effect. One hundred and thirty countries with at least 100 respondents were selected. The three colours of the dendrogram branches represent three large clusters: Western, Eastern, and Southern. Country names are coloured according to the Inglehart–Welzel Cultural Map 2010–2014. Image credit: Awad et al. in Nature, ISSN 1476-4687.

They discovered three different ethical worldviews: the Eastern, the Southern and the Western (as displayed in infographic A). These groups agreed on some basic principles and diverged on others. They shared three major preferences: that young people should be spared over others, that many people should be spared over a few, and that humans should be spared over other species. These preferences traversed cultural, economic, political and religious boundaries.
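
The grouping itself can be illustrated with a short sketch. The snippet below is an assumption for illustration only (the country names and preference values are invented): it applies the same kind of hierarchical clustering shown in the dendrogram above to vectors of per-country preference scores and cuts the tree into three clusters.

```python
# Illustrative sketch only: the preference values below are invented.
# Each row is a country's scores on a few indicators (e.g. sparing the young,
# sparing the many, sparing humans); Ward-linkage clustering then groups the
# countries, and the tree is cut into three clusters as in the dendrogram.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

countries = ["Country A", "Country B", "Country C",
             "Country D", "Country E", "Country F"]
prefs = np.array([
    [0.48, 0.55, 0.60],
    [0.45, 0.57, 0.58],
    [0.20, 0.50, 0.62],
    [0.22, 0.48, 0.61],
    [0.55, 0.52, 0.59],
    [0.50, 0.56, 0.57],
])

tree = linkage(prefs, method="ward")                 # hierarchical clustering
labels = fcluster(tree, t=3, criterion="maxclust")   # cut into three groups
for country, label in zip(countries, labels):
    print(country, "-> cluster", label)
```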

However, as you can see in the radar plots of infographic B, the Eastern, Southern and Western views also express sharply different preferences across the spectrum of the nine life indicators. The Western view skews towards saving the young, saving the many and taking no action at all, leaving the choice to chance. The Eastern view skews towards saving the lawful, humans and pedestrians, whilst the Southern view prioritises women, the young and high-status individuals.

Machines are on the verge of being programmed to make life-altering choices, a turning point in history. The questions Awad’s team raise over whether universal machine ethics are possible and whether societies can reach consensus over the use of intelligent technologies are a crucial step in discussing what sort of world we want to live in as we undergo the Fourth Industrial Revolution.

While the world is focused on the threat of killer machines on the battlefield, machine decision-making will pose challenges in times of war and peace. This research tests the limits of universal standards, as country-specific preferences emerge from the complexity of weighing multiple factors. The scalability of new intelligent technologies may be limited by their adaptability to different cultural environments with varying ethical standards. Geo-strategic tensions and ethical dilemmas over who has the power to control these choices, over the diversity of the datasets used to build the technology, and over the research used to justify life-altering choices will affect companies, consumers and governments.

For example, moving to a different country in the future may mean adopting a new set of technological moral compasses, each with different criteria, levels of access and personalisation, depending on the rules of that society.

The Moral Machine experiment is only a snapshot in time, a poll of preferences that remains fluid. Ethical standards will require sensible discussion and update periods to reflect changes in the community. Awad notes that the scenarios presented assume 100 per cent certainty that the events will occur and 100 per cent certainty in recognising the targets; in the real world, these processes involve far greater uncertainty.

Beyond this lies the extreme technological challenge for engineers and scientists of how to weigh the vast array of preferences with any semblance of granularity. Can your car accurately evaluate someone’s societal status in the 3.2 seconds before a crash? That technology has yet to arrive. However, in some countries the autonomous car might link with the mobile phones of surrounding pedestrians and choose the person with the lowest social credit score by proxy.

As intelligent and networked technologies continue to develop and impact our lives, they will increasingly become imbued with formalised versions of the rules that govern our societies. The collective may gain power over the individual. What we have previously left to chance and split-second decision-making, we will now expect to be pre-programmed into machines with precision. As Awad’s research shows, countries and communities need to start national and regional conversations about what should be delegable to an autonomous machine and how it is operated, before it is decided for them.


Artificial Intelligence

Facial Recognition in Democracy


Smart contact lens-based facial recognition: one possible future. Glasses-based technology is already in use. Image credit: Mission Impossible 4, Paramount Pictures.

Alexander Vipond

Microsoft President and Chief Legal Officer Brad Smith has called for the United States government to appoint a bipartisan expert commission on how to regulate facial recognition technology. In a blog post on the company’s website, Smith responded to allegations that Microsoft had provided US Immigration and Customs Enforcement (ICE) with facial recognition technology in January this year. The recent role of ICE in the deportation and separation of immigrant families in the US has drawn large-scale protests and widespread backlash across the country.

The possibility that this technology could have been used for such purposes saw Microsoft in the firing line, frantically seeking to explain the nature of their contract. Smith clarified that the contract is limited to “supporting legacy email, calendar, messaging and document management workloads” and reiterated that Microsoft does not support the policy.

This became the catalyst for Smith to call for the regulation of facial recognition technology. He posed two scenarios for the future use of the technology: one positive, one negative.

Imagine finding a young missing child by recognizing her as she is being walked down the street. Imagine helping the police to identify a terrorist bent on destruction as he walks into the arena where you’re attending a sporting event. Imagine a smartphone camera and app that tells a person who is blind the name of the individual who has just walked into a room to join a meeting.

Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first.

The problem is those futures have already arrived.

In June, Jarrod Ramos murdered five journalists at the Capital Gazette in Anne Arundel County, Maryland over reporting of his criminal past. As he attempted to exit the scene by blending in with people fleeing, he refused to provide identification to local officers and fingerprinting failed. He was identified with facial recognition technology from a collection of state databases.

In non-democratic countries, governments are already rolling out persistent large scale surveillance of citizens using big data and facial recognition. And while the stores at the shopping mall aren’t sharing with each other yet, they have been deploying dual use facial recognition technology to prevent shoplifting and monitor customers’ in-store browsing habits to increase sales.

So, the future is already here. What do we do about it?

Behind the facial match is the data. Who has access to it? How and why is it used? In what situations can different levels of government employ the technology and what restrictions are there on private sector use? How long is biometric data kept for and can it be kept safe?

These are basic questions currently left unanswered by US law and by the laws of many other countries around the world. Smith himself raises a series of ethical issues about the legal standards governing facial recognition algorithms. The spectrum ranges from persistent passive collection of everyone’s data to data that is only used in a ticking-time-bomb situation. This variance reveals both the scale of the technology and the governance space democracies must define.

As Smith notes, this does not absolve technology companies of responsibility. Microsoft has appointed an AI ethics board to help navigate new technologies and vet potential customers. This intrinsic method of fostering good governance is as important as regulation (although it cannot replace it) in helping create a culture of ethical technology development, because companies can resist regulatory frameworks and view them as a series of loopholes to exploit and costs to be avoided. Both are necessary to create an environment in which the potential benefits of facial recognition technology can be realised without its abuse damaging democratic freedom.

However, the distinct problem of dual-use applications of AI poses questions that should be answered before the final sign-off. The Intelligence Advanced Research Projects Activity (IARPA) in the US has a set of questions it asks of proposed technologies: if the technology is leaked, stolen, or copied, would we regret having developed it? Can the technology be made less prone to theft, replication and mass production? How long would it take a major nation-state competitor to weaponise this technology after learning about it? In light of the geo-strategic implications of AI products, major technology companies should consider these questions and add to them.

This addition should be a set of civil questions focused on how a proposed AI technology operates within a democracy. How could the risk of discrimination be mitigated? How will this technology anonymise and delete the data it collects? Can this technology scale to affect democracy, and in what ways? Questions like these could help tech companies balance their responsibility to civil society and the international community. This requires wider societal discussions on what rights citizens need in the digital age, and the discussion must occur now, for, as we have seen, the future is already here. It is from this base of regulation and corporate governance that democracies can refine ideas on how to positively contribute to the global debate on AI ethics.


Artificial Intelligence

Collective Intelligence: DeepMind’s AI plays Quake 3



Alexander Vipond

British AI firm DeepMind has a penchant for playing games. From Atari to Go, DeepMind researchers have been training AI agents to compete against themselves and beat human competitors.

The latest challenge was a specially modified version of a gaming classic, Quake III. A multiplayer, first-person shooter video game from the turn of the millennium, Quake III sees teams of aliens and humans vie for supremacy. DeepMind has taken this premise to the next level: now it is AI agents versus humans.

Through reinforcement learning, AI agents were tasked with learning to capture the enemy flag from their opponents’ base over a series of procedurally generated environments in which no two games were the same. The aim was to see whether AI agents could learn strategies and cooperate in complex three-dimensional environments with imperfect information, that is, without knowing where the other team’s players are. To do this, DeepMind created a new type of agent dubbed FTW (For the Win) that could rise above traditional methods of AI competition and exceed human standards.

By training a whole population of FTW agents on two timescales (fast and slow), which increased the agents’ consistency, and by using a two-tiered reward scheme that incentivised action regardless of a win or loss, DeepMind enabled the FTW agents to learn a higher standard of gameplay. Training a population in parallel not only proved more efficient but also revealed a diversity of approaches, as agents optimised their own unique internal reward signals.
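
To make the idea concrete, here is a minimal sketch in Python of the two-timescale loop described above. It is an assumption-laden toy, not DeepMind's implementation: the Agent class, the event names and the update rules are invented for illustration; the "fast" loop stands in for reinforcement learning on each agent's internal reward signal, and the "slow" loop stands in for population-level evolution of those reward weightings.

```python
# Toy sketch of the two-timescale idea (all names and numbers are invented;
# this is not DeepMind's implementation). Fast loop: each agent updates from
# its own internal reward signal every match. Slow loop: the population is
# periodically evolved, copying and mutating the reward weights of a strong
# agent into a weak one.
import random

GAME_EVENTS = ["capture_flag", "tag_opponent", "pick_up_flag"]

class Agent:
    def __init__(self):
        # Internal reward weights for intermediate game events (evolved slowly).
        self.reward_weights = {e: random.uniform(0.0, 1.0) for e in GAME_EVENTS}
        self.skill = 0.0  # crude stand-in for policy parameters / rating

    def play_and_learn(self):
        """Fast timescale: play one match and update from internal rewards."""
        events = {e: random.randint(0, 3) for e in GAME_EVENTS}
        internal_reward = sum(self.reward_weights[e] * n for e, n in events.items())
        self.skill += 0.01 * internal_reward           # stand-in for an RL update
        return self.skill + random.gauss(0.0, 0.1)     # noisy match outcome

def evolve(population, scores):
    """Slow timescale: the weakest agent inherits a mutated copy of the strongest."""
    best = max(range(len(population)), key=lambda i: scores[i])
    worst = min(range(len(population)), key=lambda i: scores[i])
    population[worst].skill = population[best].skill
    population[worst].reward_weights = {
        e: w * random.uniform(0.9, 1.1)
        for e, w in population[best].reward_weights.items()
    }

population = [Agent() for _ in range(8)]
for generation in range(100):
    scores = [agent.play_and_learn() for agent in population]
    if generation % 10 == 0:
        evolve(population, scores)
print("best skill:", round(max(a.skill for a in population), 2))
```

In this toy version the fast loop never optimises the sparse win/loss signal directly; the slow loop shapes what the fast loop is rewarded for, mirroring the two-tiered scheme described above.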

 

Credit: DeepMind

After an average of 450,000 training games, the FTW agents were placed in a tournament with forty human players, with whom they could play as teammates or adversaries. To ensure a fair fight, researchers built balancing measures into the game to counter the AI agents’ reaction-time advantage over humans, whose natural visual processing rate falls below the game’s 60 frames per second.

On average, FTW agents achieved a higher win rate than human players, with strong skill ratings. The agents learnt how to follow teammates, defend their base and “camp” at the enemy base, picking off players as they spawned. They achieved the greatest collaboration in teams of four, although they struggled to maintain this as the number of players expanded.

 

Credit: DeepMind

As usual, it’s not the game itself that represents progress here but the evolving capacity for AI agents to develop cooperative behaviour. DeepMind has demonstrated the capability of AI agents to work in small teams, alongside humans and other AI agents, towards a shared goal. The more AI agents can work together to manage uncertain environments and imperfect knowledge the better they will perform when faced with the chaos of the real world.

 

Artificial Intelligence

Google and the Ethics of AI Development


In Westworld, Dr Robert Ford ponders the artificial intelligence products he has created. Image credit: John P. Johnson/HBO.

Alexander Vipond

In recent months, Google has faced an internal revolt from its staff. Resignations, internal petitions and protests have been part of a heated debate over a US$9 million contract with the Pentagon to interpret drone footage and improve targeting systems. The contract, part of the Pentagon’s artificial intelligence initiative Project Maven, was expected to grow into a $250 million venture.

However, within days of the New York Times breaking the story, which revealed that senior staff and executives had anticipated the fallout and tried to keep the words “AI” out of the discussion, Google Cloud CEO Diane Greene announced that the company would not be renewing the contract in 2019.

As Fei-Fei Li, Chief scientist at Google Cloud wrote in one of those emails, “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google”.

The saga raises important questions about AI development. For Google, the company whose motto has evolved from “Don’t be Evil” to “Do the Right Thing”, the contract represented a cultural crisis about the direction of the company. It also represented a business crisis: face losing extremely talented staff or highly lucrative defence contracts. DeepMind, one of Alphabet Inc.’s most important AI subsidiaries, has refused to work on military and intelligence applications. Against this backdrop, Google is updating its ethics guidelines to provide a new framework for its AI product development. The outcome remains to be seen, but Google has said it will make the document publicly available.

Amazon, IBM and Microsoft have also competed for Pentagon contracts in this field, with far less scrutiny. Project Maven will move forward with or without Google. In response to questions at a Google-wide meeting, co-founder Sergey Brin said he feels it will be better for peace if global militaries work with international organisations such as Google rather than with nationalistic defence contractors.

Herein lies the complexity. The concept of powerful AI is a controversial topic, not only in the US but worldwide, with major industry figures divided on the issue of what could arise from it. Weaponised AI has been at the forefront of the debate. Given the state of US politics right now, it is understandable why Google employees might feel apprehensive about how their algorithms are used.

If we zoom out to the global context, the world is clearly in the midst of a long-term arms race in AI. Military powers across the world are pursuing a multitude of intelligence and weapons applications for AI. Vladimir Putin has said that whoever masters AI will rule the world, whilst in China, AI is a core component of the national strategy to become the world’s dominant science and technology power. In both market and military terms, the United States stays out of the competition at its own peril.

Whether you are an employee sitting at your desk in one of the world’s biggest tech companies, a president or a dictator, the rules of the game are fluid. Most scientists and programmers share their AI research and code across international boundaries through shared knowledge platforms, both to develop technologies that will assist humanity and for commercial gain. But without commonly agreed ethical rules and guidelines, the exploitation of AI knowledge by individual countries acting in their own self-interest is likely to lead to a race to the bottom and a tragedy of the commons. Efforts like the Campaign to Stop Killer Robots face the allure of the cheap asymmetric power that AI technologies can offer; and the range of AI’s dual-use applications means a global ban is unlikely to be effective.

As the race spreads and gathers pace, so too must the tenor and tempo of the AI debate.  Controversies within multi-national companies can be usefully redirected into an open discussion on how best to bridge private and public, national and international interests.  At the top of the agenda should be clear operating guidelines to mitigate accidents, prevent atrocities and safeguard basic human rights. International politics in an era of heteropolarity is difficult but as the inner turmoil at Google reveals, creative leadership, public accountability and a modicum of global governance are vital if human and artificial intelligence are to coevolve ethically and peacefully.