Artificial Intelligence

Moving Beyond Ethics in Data Science



Image Credit: Centre for Translational Data Science, University of Sydney.

Alexander Vipond

At the recent Ethics of Data Science conference, hosted by the Centre for Translational Data Science at the University of Sydney, an interdisciplinary gathering of software engineers, machine learning experts, clinicians and lawyers came together to discuss how artificial intelligence and big data are changing society.

What quickly became clear was that the field is advancing so fast that participants were grappling not only with its recent and future impacts on their industries but also with the sheer pace of technological change itself.

Some presenters argued that the recent talk of ethical AI principles from big tech companies was merely a form of ethics washing: a strategic ploy to delay and weaken regulation on privacy, dangerous content and data rights. Other speakers argued that ethics alone are simply not enough: for ethical principles to be of real value to society, we need to move beyond them to enforceable laws, changes to organisational culture and clear regulations.

Many of the legal experts in attendance outlined the knowledge gap between technologists and other parts of society, citing the need to properly educate judges, policymakers and politicians on AI so they can make informed decisions. These arguments highlighted the Australian Government’s recent push to strengthen penalties for companies that breach privacy regulations, accompanied by an increase in funding for the Office of the Information Commissioner to pursue data breaches. The recent acknowledgement by Attorney-General Christian Porter, echoed by panellists at the conference, that Australian data laws are insufficient to protect citizens in the current environment led to many proposals for change.

These included Australian states joining the European Union’s General Data Protection Regulation and adopting international human rights law as a framework for wider regulation of emerging technologies. There was also a concerted focus on how to protect marginalised communities most at risk of exploitation. For example, many presenters noted algorithms that reinforced racism in US prison sentencing or sexism in recruitment practices. 

On this front, many of the technical presentations offered a variety of methods to ensure greater fairness in the design of machine learning algorithms and outlined the important technical limitations and trade-offs that need to be considered when companies want to harness the power of artificial intelligence. Discussions of the gap between ethical principles and the formal mathematical models used to embed them in technology, of the types of questions machine learning can and cannot answer, and of how to reduce bias in datasets showed the interdisciplinary audience what improvements creative thinking and a broader worldview could deliver.
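The specific formal models presented at the conference are not detailed in the post, but as a rough illustration of how an ethical principle can be turned into a measurable quantity, the short Python sketch below computes two common group-fairness metrics for a classifier's outputs. The data, group labels and function names are invented for the example; they are not from any presentation.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap in positive-prediction rates between two groups (labelled 0 and 1)."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Hypothetical outcomes and predictions for ten applicants in two demographic groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(demographic_parity_diff(y_pred, group))        # gap in selection rates
print(equal_opportunity_diff(y_true, y_pred, group))  # gap in true-positive rates
```

Metrics like these make trade-offs explicit: a model can rarely minimise every fairness gap and maximise accuracy at the same time, which is precisely the kind of limitation the presenters asked companies to confront.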

This gave rise to questions of how to address inclusiveness in the industry, and to the geopolitical spectre of competition between businesses and states. While that competition has driven huge investment, it has also prompted a new technological race, whose consequences must be balanced so that positive breakthroughs for society are maximised and risks are addressed. The foundations of clear laws and a national AI strategy in Australia (with funding to support implementation) are yet to be laid. The conference gave participants a window into the organisational coordination and creative solutions that could be embraced with strong leadership from government and industry.

The author would like to thank Dr Roman Marchant, Professor Sally Cripps, Professor Nick Enfield and the Advisory board for organising the conference.

Artificial Intelligence

The Robots in Your Supermarket



Jayson Waters

AI and other algorithmic technologies have long played a major role in global society and governance. We have previously explored how ‘dumb’ AI supports and maintains fundamental infrastructure and services. In an interesting turn that will affect your shopping basket more than warfare, supermarket chains around the world have begun implementing AI-enabled facial recognition and tracking in their stores.

According to a recent article by Bryan Pearson published in Forbes, stores such as Walgreens, Guess, and Kroger are using AI systems to tailor and target ads to customers. Unlike traditional sales database systems that can only track individual products, AI systems can track purchasing patterns and trends en masse. In turn, this information can be used to better determine customers’ susceptibility to various ads and ultimately adapt the shopping experience – from lighting to pricing – to individual shoppers in-store.

Guess and Alibaba have teamed up to create a flagship smart store in which everything from mirrors to clothing racks and fitting rooms will have embedded AI. According to Edward Park, senior vice president at Guess North America, “Every item is enabled with Bluetooth low-energy chips, RFID and motion sensors, which enable all of our inventory to be tracked and analyzed.” This data, if analysed properly, will also allow Guess to monitor traffic patterns and customer interest in items.

On the plus side, AI technologies could allow companies to better predict which products will appeal to customers and thus avoid waste by not producing unwanted items. On the other hand, the greater depth of knowledge about customer preferences may allow advertisers to influence customer choices to a greater degree than ever before.

See here for the full article.

Artificial Intelligence

Words of Consequence


Many hands make light work when AI can automate content generation. Image credit: Ghost in the Shell (2017)

Alexander Vipond

OpenAI, a San Francisco-based AI research organisation, has created a new text generator called GPT-2. The transformer-based language modelling system can generate large amounts of high-quality synthetic text and perform reading comprehension, machine translation and summarisation in seven languages.

GPT-2 is designed to predict the next words in a text sample. Trained on eight million webpages, it works in a zero-shot setting, meaning it can extrapolate short text samples into full articles about entirely new information without task-specific training.

As a general-purpose system, GPT-2 has beaten several records held by domain-specific language modelling systems, and it significantly lowers the complexity and cost of deploying scalable, customisable text generation.
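Only a smaller version of the model was released at the time, but sampling from it takes just a few lines of code. The sketch below is a minimal example using the Hugging Face transformers library (an assumption on my part; the post does not name any library), with an illustrative prompt and sampling settings.

```python
# Minimal sketch: sample a continuation from the publicly released small GPT-2 model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("A train carriage containing controlled nuclear materials "
          "was stolen in Cincinnati today.")
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k sampling: at each step, draw the next token from the 40 most likely candidates.
output = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because generation is just repeated next-token prediction, each run produces a different continuation, which is why it may take several tries to get a passage as polished as the example below.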

Here’s an example:

Sample prompt: A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

Model Completion (machine-written, first try)

The incident occurred on the downtown train line, which runs from Covington and Ashland stations.

In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.

“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”

The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.

The Nuclear Regulatory Commission did not immediately release any information.

According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.

“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.”

The system constructs a story, creating fake institutions, places and quotations and blending them to produce the illusion of authenticity. The model isn’t perfect, and it may take several tries to get a human-quality passage of text, but the speed and scale of the system are such that OpenAI has withheld the full model from publication.

This is because GPT-2 could be used by malicious actors to conduct large-scale information warfare and undermine one of the central principles of the cybersecurity triad: the integrity of information.

OpenAI has acknowledged this threat, citing GPT-2’s capacity to generate fake news en masse, impersonate others, and automate fake content, phishing and spam. By tweaking the system, one can produce an endless stream of positively or negatively angled articles. It could also be customised for specific issues to make the supporting information in the synthetic content more convincing, making it all the more difficult to tell fact from fiction. OpenAI has stated it expects tools like GPT-2 to be widely available within the next two years.

As dictatorships and authoritarian regimes actively seek to spread misinformation to disrupt elections, obfuscate wars, and insist assassins prefer to spend their time admiring English churches, GPT-2 is a highly attractive tool and a warning of what’s to come.

The malicious use of AI tools will challenge the integrity of the global digital commons, fuelled by states that view the open flow of information as a threat to their governance. The tools will then be passed down to organised crime and developing regimes. As the recent case of Project Raven shows, even as countries increasingly try to secure their intellectual property, their cyber tools and tactics are up for sale.

As William Gibson once said, “the future is already here, it’s just not evenly distributed”. So now that we know the threat is here, what can we do to counter the risks at the different levels of its distribution?

OpenAI will continue their research.

Artificial Intelligence

The Making of Ethical Machines


Niander Wallace ponders the creation of life-altering new technologies in Blade Runner 2049. Image credit: Sony Pictures/Warner Brothers.

Alexander Vipond

Researchers at MIT have undertaken the world’s largest survey of people’s moral preferences across different variations of the trolley problem. The trolley problem’s basic premise is this: a vehicle is about to have an unavoidable accident, and the driver must choose who or what the vehicle hits, e.g. swerve right and hit a young man, or swerve left and hit two elderly people?

Edmond Awad and his team collected over 39.6 million decisions from 233 countries and territories through a specially designed mobile game and website, which asked participants to weigh different versions of the trolley problem according to nine life indicators (shown in infographic B below). Previous studies have mostly relied on single indicators, such as a preference for saving many lives over one, rather than examining the complex interrelationships of multiple indicators. From the responses, the researchers were able to discern large-scale patterns and trends across 130 of those countries and identify people’s key ethical preferences for the preservation of human life.

Moral Machine infographic

Hierarchical clusters of countries based on average marginal causal effect. One hundred and thirty countries with at least 100 respondents were selected. The three colours of the dendrogram branches represent three large clusters—Western, Eastern, and Southern. Country names are coloured according to the Inglehart–Welzel Cultural Map 2010–2014. Image Credit: Awad et al in Nature, ISSN 1476-4687.

They discovered three different ethical worldviews: the Eastern, the Southern and the Western (as displayed in infographic A). These groups agreed on some basic principles and diverged on others. They shared three major preferences: that the young should be spared over the old, that many people should be spared over a few, and that humans should be spared over other species. These preferences traversed different cultural, economic, political and religious boundaries.
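The dendrogram in the infographic is produced by standard hierarchical clustering over each country's vector of preference scores. As a rough, hypothetical illustration of the method (the country names and numbers below are invented, not the study's data), such clusters can be recovered with SciPy:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical preference vectors: one row per country, one column per life
# indicator (e.g. sparing the young, sparing more lives, sparing humans over pets).
countries = ["A", "B", "C", "D", "E", "F"]
prefs = np.array([
    [0.8, 0.7, 0.9],
    [0.7, 0.8, 0.9],
    [0.3, 0.6, 0.8],
    [0.2, 0.5, 0.9],
    [0.5, 0.9, 0.6],
    [0.6, 0.9, 0.5],
])

# Agglomerative (bottom-up) clustering with Ward linkage, cut into three groups.
tree = linkage(prefs, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")
for country, label in zip(countries, labels):
    print(country, "cluster", label)
```

Countries whose preference vectors sit close together end up on the same branch of the tree, which is how the Eastern, Southern and Western groupings emerge from the raw responses.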

However, as the radar plots of infographic B show, the Eastern, Southern and Western views also express sharply different preferences across the spectrum of the nine life indicators. The Western view skews towards saving the young, saving the many and taking no action at all, leaving the choice to chance. The Eastern view skews towards saving the lawful, humans and pedestrians, whilst the Southern view prioritises women, the young and high-status individuals.

Machines are on the verge of being programmed to make life-altering choices, a turning point in history. The questions Awad’s team raise over whether universal machine ethics are possible and whether societies can reach consensus over the use of intelligent technologies are a crucial step in discussing what sort of world we want to live in as we undergo the Fourth Industrial Revolution.

While the world is focused on the threat of killer machines on the battlefield, machine decision-making will pose challenges in times of both war and peace. This research tests the limits of universal standards, as country-specific preferences emerge from the complexity of weighing multiple factors. The scalability of new intelligent technologies may be limited by their adaptability to different cultural environments with varying ethical standards. Geo-strategic tensions and ethical dilemmas over who has the power to control these choices, over the diversity of the datasets used to build the technology and over the research used to justify life-altering choices will affect companies, consumers and governments.

For example, moving to a different country may in future mean moving to a new set of technological moral compasses with different criteria, levels of access and personalisation, depending on the rules of that society.

The Moral Machine experiment is only a snapshot in time; a poll of preferences that remains fluid. Ethical standards will require sensible discussion and update periods to reflect changes in the community. Awad notes that the situations presented rely on 100 per cent certainty of the events occurring and 100 per cent certainty of recognising the targets. In the real world there is a much greater level of uncertainty in these processes.

Beyond this lies the extreme technological challenge for engineers and scientists of how to weigh the vast array of preferences with any semblance of granularity. Can your car accurately evaluate someone’s societal status in the 3.2 seconds before a crash? That technology has yet to arrive. However, in some countries the autonomous car might link with the mobile phones of surrounding pedestrians and choose the person with the lowest social credit score by proxy.

As intelligent and networked technologies continue to develop and shape our lives, they will increasingly be imbued with formalised versions of the rules that govern our societies. The collective may gain power over the individual. What we have previously left to chance and split-second decision-making, we will now expect to be pre-programmed with precision into machines. As Awad’s research shows, countries and communities need to start national and regional conversations about what should be delegable to an autonomous machine and how it is operated, before it is decided for them.


Artificial Intelligence

Facial Recognition in Democracy


Smart contact lens-based facial recognition: one possible future. Glasses-based technology is already in use. Image Credit: Mission Impossible 4, Paramount Pictures.

Alexander Vipond

Microsoft President and Chief Legal Officer Brad Smith has called for the United States government to appoint a bipartisan expert commission on how to regulate facial recognition technology. In a blog post on the company’s website, Smith responded to allegations that Microsoft had provided US Immigration and Customs Enforcement (ICE) with facial recognition technology in January this year. The recent role of ICE in the deportation and separation of immigrant families in the US has drawn large-scale protests and widespread backlash across the country.

The possibility that this technology could have been used for such purposes saw Microsoft in the firing line, frantically seeking to explain the nature of their contract. Smith clarified that the contract is limited to “supporting legacy email, calendar, messaging and document management workloads” and reiterated that Microsoft does not support the policy.

This became the catalyst for Smith to call for the regulation of facial recognition technology. He posed two scenarios for the future use of the technology: one positive, one negative.

Imagine finding a young missing child by recognizing her as she is being walked down the street. Imagine helping the police to identify a terrorist bent on destruction as he walks into the arena where you’re attending a sporting event. Imagine a smartphone camera and app that tells a person who is blind the name of the individual who has just walked into a room to join a meeting.

Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first.

The problem is those futures have already arrived.

In June, Jarrod Ramos murdered five staff members of the Capital Gazette in Anne Arundel County, Maryland, over the paper’s reporting of his criminal past. As he attempted to leave the scene by blending in with fleeing bystanders, he refused to provide identification to local officers and fingerprint identification failed. He was eventually identified with facial recognition technology run against a collection of state databases.

In non-democratic countries, governments are already rolling out persistent large-scale surveillance of citizens using big data and facial recognition. And while the stores at the shopping mall aren’t sharing data with each other yet, they have been deploying dual-use facial recognition technology to prevent shoplifting and to monitor customers’ in-store browsing habits to increase sales.

So, the future is already here. What do we do about it?

Behind the facial match is the data. Who has access to it? How and why is it used? In what situations can different levels of government employ the technology and what restrictions are there on private sector use? How long is biometric data kept for and can it be kept safe?

These are basic questions currently left unanswered by US law and by the laws of many other countries around the world. Smith himself raises a series of ethical issues about the legal standards governing facial recognition algorithms. The spectrum ranges from persistent, passive collection of everyone’s data to data used only in a ticking-time-bomb situation. This variance reveals both the scale of the technology and the governance space democracies must define.

As Smith notes, this does not absolve technology companies of responsibility. Microsoft has appointed an AI ethics board to help navigate new technologies and vet potential customers. This intrinsic method of fostering good governance is as important as regulation (although it cannot replace it) in helping create a culture of ethical technology development, because companies can resist regulatory frameworks and treat them as a series of loopholes to exploit and costs to avoid. Both are necessary to create an environment in which the potential benefits of facial recognition technology can be realised without its abuse damaging democratic freedom.

However, the distinct problem of dual-use applications of AI poses questions that should be answered before final sign-off. The Intelligence Advanced Research Projects Activity (IARPA) in the US has a set of questions it asks of proposed technologies: if the technology is leaked, stolen, or copied, would we regret having developed it? Can the technology be made less prone to theft, replication and mass production? How long would it take a major nation-state competitor to weaponise the technology after learning about it? In light of the geo-strategic implications of AI products, major technology companies should consider these questions and add to them.

These additions should be a set of civil questions focused on how a proposed AI technology operates within a democracy: How could the risk of discrimination be mitigated? How will the technology anonymise and delete the data it collects? Could it scale in ways that affect democracy, and how? Questions like these could help tech companies balance their responsibilities to civil society and the international community. They also require wider societal discussion of what rights citizens need in the digital age. That discussion must occur now, for as we have seen, the future is already here. It is from this base of regulation and corporate governance that democracies can refine ideas on how to positively contribute to the global debate on AI ethics.


Artificial Intelligence

Collective Intelligence: DeepMind’s AI plays Quake 3



Alexander Vipond

British AI firm DeepMind has a penchant for playing games. From Atari to Go, DeepMind researchers have been training AI agents to compete against themselves and beat human competitors.

The latest challenge was a specially modified version of a gaming classic, Quake III. A multiplayer first-person shooter from the turn of the millennium, Quake III sees teams of aliens and humans vie for supremacy. DeepMind has taken this premise to the next level: now it is AI agents versus humans.

Through reinforcement learning, AI agents were tasked with learning to capture the enemy flag from their opponents’ base across a series of procedurally generated environments in which no two games were the same. The aim was to see whether AI agents could learn strategies and cooperate in complex three-dimensional environments with imperfect information, that is, without knowing where the other team’s players are. To do this, DeepMind created a new type of agent dubbed FTW (For the Win) that could rise above traditional methods of AI competition and exceed human standards.

By training a whole population of FTW agents on two timescales (fast and slow), thereby increasing agents’ consistency, and by using a two-tiered reward scheme to incentivise action regardless of a win or loss, FTW agents learned a higher standard of gameplay. Training a population in parallel not only proved more efficient but also revealed a diversity of approaches, as agents optimised their own unique internal reward signals.
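DeepMind's exact training setup is not reproduced here, but the population idea can be sketched in miniature: each agent carries its own internal reward weights, and weaker agents periodically copy and perturb the weights of stronger ones (population-based training). In the hypothetical Python sketch below, the evaluation function is a random stand-in for actual Quake III matches, so everything beyond the general pattern is an assumption.

```python
import random
import copy

def evaluate(agent):
    """Stand-in for playing matches: a noisy score that favours reward
    weights near an arbitrary 'good' setting chosen for this toy example."""
    target = {"flag_capture": 1.0, "tag_opponent": 0.3}
    error = sum((agent[k] - target[k]) ** 2 for k in target)
    return -error + random.gauss(0, 0.05)

# A small population of agents, each with its own internal reward weights.
population = [
    {"flag_capture": random.uniform(0, 2), "tag_opponent": random.uniform(0, 2)}
    for _ in range(8)
]

for generation in range(50):
    scored = sorted(population, key=evaluate, reverse=True)
    survivors = scored[: len(scored) // 2]
    losers = scored[len(scored) // 2 :]
    # Exploit: each loser is replaced by a copy of a survivor's weights.
    # Explore: the copied weights are perturbed so the population stays diverse.
    population = list(survivors)
    for _ in losers:
        child = copy.deepcopy(random.choice(survivors))
        for k in child:
            child[k] *= random.uniform(0.8, 1.2)
        population.append(child)

print(max(population, key=evaluate))
```

The point of the toy loop is the same as in the real system: selection pressure acts on the reward weights themselves, so the population gradually discovers incentive schemes that produce stronger play without anyone hand-tuning them.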


Credit: DeepMind

After an average of 450,000 training games, the FTW agents were placed in a tournament with forty human players, who could join them as teammates or face them as adversaries. To ensure a fair fight, researchers engineered balance into the game to counter the AI agents’ reaction-time advantage over humans, whose visual processing cannot keep pace with the game’s 60 frames per second.

On average, FTW agents achieved a higher win rate than human players with strong skill scores. The agents learnt how to follow teammates, defend their base and “camp” at the enemy’s base, picking off players as they spawned. They achieved the greatest collaboration in teams of four, although they struggled to maintain this as the number of players grew.


Credit: DeepMind

As usual, it’s not the game itself that represents progress here but the evolving capacity for AI agents to develop cooperative behaviour. DeepMind has demonstrated the capability of AI agents to work in small teams, alongside humans and other AI agents, towards a shared goal. The more AI agents can work together to manage uncertain environments and imperfect knowledge the better they will perform when faced with the chaos of the real world.


Artificial Intelligence

Google and the Ethics of AI Development


In Westworld, Dr Robert Ford ponders the artificial intelligence products he has created. Image credit: John P. Johnson/HBO.

Alexander Vipond

In recent months, Google has faced an internal revolt from its staff. Resignations, internal petitions and protests have been part of a heated debate over a US$9 million contract with the Pentagon to interpret drone footage and improve targeting systems. The contract, part of the Pentagon’s artificial intelligence initiative Project Maven, was expected to grow into a $250 million venture.

However, within days of the New York Times breaking the story that senior staff and executives had anticipated the fallout and tried to keep the words “AI” out of the discussion, Google Cloud CEO Diane Greene announced that the company would not renew the contract in 2019.

As Fei-Fei Li, Chief scientist at Google Cloud wrote in one of those emails, “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google”.

The saga raises important questions about AI development. For Google, the company whose motto has evolved from “Don’t be Evil” to “Do the Right Thing”, the contract represented a cultural crisis about the direction of the company. It also represented a business crisis: a choice between losing extremely talented staff and losing highly lucrative defense contracts. One of Alphabet Inc’s most important AI subsidiaries, DeepMind, has refused to work on military and intelligence weapons. Against this backdrop, Google is updating its ethics guidelines to provide a new framework for its AI product development. The outcome remains to be seen, but Google has said it will make the document publicly available.

Amazon, IBM and Microsoft have also competed for Pentagon contracts in this field with far less scrutiny. Project Maven will move forward with or without Google. In response to questions at a Google wide meeting, co-founder Sergey Brin stated he feels it will be better for peace if global militaries work with international organisations such as Google, rather than with nationalistic defence contractors.

Herein lies the complexity. The concept of powerful AI is a controversial topic, not only in the US but worldwide, with major industry figures divided on the issue of what could arise from it. Weaponised AI has been at the forefront of the debate. Given the state of US politics right now, it is understandable why Google employees might feel apprehensive about how their algorithms are used.

If we zoom out to the global context, the world is clearly in the midst of a long-term arms race in AI. Military powers across the world are pursuing a multitude of intelligence and weapons applications for AI. Vladimir Putin has said that whoever masters AI will be ruler of the world, whilst in China AI is a core component of the national strategy to become the world’s dominant science and technology power. In both market and military terms, the United States stays out of the competition at its own peril.

Whether you are an employee sitting at your desk in one of the world’s biggest tech companies, a president or a dictator, the rules of the game are fluid. Most scientists and programmers share their AI research and code across international boundaries through shared knowledge platforms, both to develop technologies that will assist humanity and for commercial gain. But without commonly agreed ethical rules and guidelines, the exploitation of AI knowledge by individual countries acting in their own self-interest is likely to lead to a race to the bottom and a tragedy of the commons. Efforts like the Campaign to Stop Killer Robots face the allure of the cheap asymmetric power that AI technologies can offer, and the range of AI’s dual-use applications means a global ban is unlikely to be effective.

As the race spreads and gathers pace, so too must the tenor and tempo of the AI debate. Controversies within multinational companies can be usefully redirected into an open discussion on how best to bridge private and public, national and international interests. At the top of the agenda should be clear operating guidelines to mitigate accidents, prevent atrocities and safeguard basic human rights. International politics in an era of heteropolarity is difficult, but as the inner turmoil at Google reveals, creative leadership, public accountability and a modicum of global governance are vital if human and artificial intelligence are to coevolve ethically and peacefully.