Artificial intelligence and machine learning algorithms generate prejudicial outcomes that can entrench racism, sexism, and discrimination in society. If our technology is to help rather than hinder the project of human emancipation, we need to incorporate an equitable piloting code.
The idea that AI is racist, or the claim that there is bias in the coding of machine learning algorithms, is ubiquitous and quickly rising to the status of accepted fact. A survey of the academic literature reveals four starting positions from which to consider these claims: (1) algorithms are neutral; (2) algorithms are discriminatory; (3) data are neutral; and (4) data are contaminated. In other words, there are those who argue that technology is impartial and refers only to data ‘as they are,’ suggesting the problem is external to the system; those who argue that the present coding is inherently and substantively problematic, leading to prejudicial outcomes; those who argue that the data and algorithms are both neutral, so the real failure is our inability to accept things ‘as they are’; and finally, those who argue that the data any technology interprets are themselves discriminatory.
What is not in dispute, however, is that AI generates outcomes that are demonstrably racist, sexist, bigoted, and discriminatory – and, furthermore, that there is no humane or moral basis upon which to defend this reality. A few examples make the case. When Google translates from a language without gendered pronouns (Hungarian, for example) and is therefore forced to make ‘gendered assumptions,’ the algorithm returns: She is beautiful. He is clever. He reads. She washes the dishes. He builds. She sews. He teaches. She’s a cleaner. He is a politician. She’s an assistant… When members of minority groups in America engage with financial institutions, automated loan-application and credit-scoring software defaults them into the ‘higher risk’ category. When African Americans create photo albums with pictures of friends, Google Photos has labelled the album ‘Gorillas’. And when women use employment search engines, they are shown high-paying technical jobs far less frequently than men are.
So, what is going on? To understand this, we need to understand how machine learning algorithms and AI work. At their most basic, machine learning algorithms are pattern-matching and fit-finding devices. The more an algorithm sees X data adjacent to Y data, the more it will associate X with Y; consequently, when asked about Y in the future, it will recall information associated with X as well. Algorithms also work on the principle of ‘closest fit.’ The technology will do everything it can to ‘help’ you by returning information that approximates what you asked for. If you ask for X, Y, Z, A, but the algorithm can only find X, Y, Z, B, that is what it will show you, since, more often than not, close enough is good enough. This kind of operation is extremely useful when you type in ‘where is the nurest hspitoial’ and Google helpfully returns: “Showing results for where is the nearest hospital.”
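To make those two behaviours concrete, here is a minimal sketch in Python using only the standard library. The corpus, vocabulary, and query strings are invented for illustration; real systems do the same thing at a vastly larger scale.

```python
# A minimal sketch of the two behaviours described above.
from collections import Counter
from difflib import get_close_matches
from itertools import combinations

# 1. Pattern-matching: count how often words appear together. The more often
#    X co-occurs with Y, the stronger the learned association between them.
corpus = [
    "she washes the dishes",
    "she sews",
    "he builds",
    "he teaches",
    "she washes the dishes again",
]
co_occurrence = Counter()
for sentence in corpus:
    for a, b in combinations(sorted(set(sentence.split())), 2):
        co_occurrence[(a, b)] += 1

# Asking what goes with 'she' simply returns whatever the data paired with it.
she_pairs = {pair: n for pair, n in co_occurrence.items() if "she" in pair}
print(she_pairs)  # e.g. {('dishes', 'she'): 2, ('she', 'washes'): 2, ...}

# 2. Closest fit: return the nearest approximation to a garbled query.
vocabulary = ["nearest", "hospital", "hotel", "nursery"]
print(get_close_matches("hspitoial", vocabulary, n=1))  # ['hospital']
```

The point is not the toy data but the mechanism: both the association and the approximation are purely statistical, with no sense of whether the pairing is appropriate.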
Ultimately, the principles of pattern-matching and fit-finding are embedded in machine learning to save time and correct for human error. There is, however, another word for pattern-matching and fit-finding: bias. That is to say, all machine learning has a bias towards patterns and matches. Indeed, without patterns and matches an algorithm can’t really do anything at all. For example, if you give an algorithm a set of ten random, made-up objects like “C$%£¢FFF*L” and ask it to sort them into three invented categories (“;”, “—”, and “D”), the algorithm has no reference for either the input or the command and cannot perform the task. A human being, on the other hand, will complete the job and then explain the basis on which they made those decisions. Why is this? What is the difference between the human decision-making process and the artificial one? Before we move on to the distinct faculty human beings possess, however, we need to say more about the similarities between human and AI decision-making.
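A toy sketch of that thought experiment, continuing in the same hypothetical vein (the helper function below is invented, not a real API): with no prior associations at all, a pure pattern-matcher has nothing to sort with.

```python
# A toy illustration: assign each token to whichever category it has
# co-occurred with most often in some prior 'experience' table.
def sort_by_association(tokens, categories, co_occurrence):
    """Assign each token to its most-associated category, or None if no signal."""
    assignments = {}
    for token in tokens:
        scores = {c: co_occurrence.get((token, c), 0) for c in categories}
        best = max(scores, key=scores.get)
        assignments[token] = best if scores[best] > 0 else None
    return assignments

# No prior experience at all: the association table is empty.
print(sort_by_association(["C$%£¢FFF*L"], [";", "—", "D"], co_occurrence={}))
# -> {'C$%£¢FFF*L': None}: no pattern, no match, no decision.
```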
Ironically, the logic that underpins AI’s discriminatory outcomes – i.e., bias – is precisely the logic that underwrites our own, human, all too human, prejudices. Indeed, the human mind has been hard-wired for bias for the same reason we programmed the machine this way: to save time and correct for error. The evolutionary mechanism that helped early humans learn the difference between a lion and a duck, and recognise which tribe to return to after a hunt, is the condition of possibility for prejudice, racism, sexism, and discrimination generally. The cognitive code David Hume called ‘constant conjunction’ makes us equally susceptible to malign propaganda and enlightened education: the more we see or hear something associated with something else, the more we come to associate the one with the other. The key difference in humans, however, and the only reason the arc of history bends towards justice, is that we (unlike our digital counterparts) have the capacity to consciously interpret patterns and matches. The machine does not ask why a pattern matches or why it has found a fit – and even if it did, it has no means of synthesising and contextualising the answers in any meaningful way. But it could, and it should.
Matt Kusner, associate professor in the Department of Computer Science at University College London, and Joshua Loftus, assistant professor in the Department of Technology at New York University, have recommended precisely this kind of thing: a piloting code that helps algorithms ‘interpret’ their matching and fitting. Kusner and Loftus refer to their solution as a three-test causal model. The model triangulates algorithmic returns against counterfactuals, sensitivity, and impacts. They advocate the creation of “models that capture or account for the causal pathways” that give rise to discrimination.
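To give a flavour of the counterfactual part of that test, here is a deliberately simplified sketch, assuming a hypothetical loan-scoring model with made-up fields and weights. It is not Kusner and Loftus’s model: their causal approach also propagates the counterfactual change through variables that depend on the sensitive attribute, whereas this toy only flips the recorded attribute itself.

```python
# Hypothetical sketch: does the score change if only the sensitive attribute changes?
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Applicant:          # hypothetical loan-application record
    income: float
    postcode_risk: float  # a proxy that may correlate with ethnicity
    ethnicity: str

def score(app: Applicant) -> float:
    """Stand-in for a trained model; the coefficients are invented."""
    return 0.7 * (app.income / 50_000) - 0.3 * app.postcode_risk

def counterfactual_gap(app: Applicant, alt_ethnicity: str) -> float:
    """How much the score changes when only the sensitive attribute is altered."""
    return abs(score(app) - score(replace(app, ethnicity=alt_ethnicity)))

applicant = Applicant(income=42_000, postcode_risk=0.4, ethnicity="A")
print(counterfactual_gap(applicant, "B"))  # 0.0 -- yet the proxy is untouched
```

The zero gap here is the cautionary point: a naive flip can certify a model as ‘fair’ while a proxy such as postcode carries the discrimination through the back door, which is exactly why the causal pathways have to be modelled.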
When we initially designed algorithms, we endowed them only with the efficiency-and-accuracy logic of our bias, not with the contextual triangulation of our interpretation. This was a mistake: an understandable oversight, but one that must be corrected. James Zou and Londa Schiebinger, two Stanford University researchers, pursue a similar ambition. Zou and Schiebinger want to see programmers “incorporating constraints” in the development of AI and machine learning to ensure that algorithms reduce “dependence on sensitive attributes, such as ethnicity, gender, income — and any information that is correlated with those characteristics.”
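For illustration only, here is one very simple way such a constraint might look in code. The data, feature names, and 0.5 threshold are all invented; this is a sketch of the general idea, not Zou and Schiebinger’s own procedure.

```python
# Screen candidate features for correlation with a sensitive attribute and
# exclude the strongly correlated ones before training.
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical training columns: 1 = group A, 0 = group B.
sensitive = [1, 1, 0, 0, 1, 0, 0, 1]
features = {
    "years_experience": [3, 6, 4, 5, 7, 2, 6, 4],
    "postcode_risk":    [0.9, 0.8, 0.2, 0.1, 0.9, 0.3, 0.2, 0.8],  # strong proxy
}

THRESHOLD = 0.5  # arbitrary cut-off for the sketch
allowed = {name: col for name, col in features.items()
           if abs(pearson(sensitive, col)) < THRESHOLD}
print(list(allowed))  # ['years_experience'] -- the proxy is screened out
```

In practice the same idea is more often expressed as a penalty or constraint inside the training objective rather than a pre-filter, but the intent is the one Zou and Schiebinger describe: reduce the model’s dependence on sensitive attributes and their correlates.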
In the final analysis, algorithms are neutral, but they are incomplete, and it is their incompleteness that ensures they necessarily generate prejudicial outcomes. Data are data, but they require interpretation; absent interpretation, data are never neutral. The fundamental problem is that we have deployed ‘insufficient’ algorithms into an environment that is historically structured by power imbalances and discrimination. As a result, an algorithm encounters far too many racist, sexist, and discriminatory fits and matches ‘in the field.’ While the algorithm looks for patterns and matches in data, we must bear in mind that data are, by definition, a record of the past. You cannot record future data, and the minute data are collected, they are already old. Therefore, if the past was racist (which it most assuredly was), the data will necessarily be racially skewed; and if the past was sexist (which it most assuredly was), the data will necessarily be gendered. Critically, as with our own discourse, narrative, history, and society writ large, if today is an exact match of yesterday, we will never progress.
Our algorithms, therefore, need to be as progressive as we hope to be. This means instilling them with some form of equitable piloting code that interprets their fits and matches to account for the historically determined reality in which they operate. If anything good can be said to have come from the litany of algorithmic misdeeds, it is that our technology has held up a mirror to reflect just how racist, sexist, and discriminatory our society is. Therefore, let us not allow our newest inventions to take us backwards. Instead, let’s ensure machine learning and AI participate in the great project of progressive emancipation to which all human endeavour should aspire. We already have the helpful bias of the algorithm. It’s past time we incorporated the interpretive faculty of an equitable piloting code.