Smart contact lens-based facial recognition: one possible future. Glasses-based technology is already in use. Image Credit: Mission Impossible 4, Paramount Pictures.
Alexander Vipond
Microsoft President and Chief Legal Officer Brad Smith has called for the United States government to appoint a bipartisan expert commission on how to regulate facial recognition technology. In a blog post on the company’s website, Smith responded to allegations that Microsoft had provided US Immigration and Customs Enforcement (ICE) with facial recognition technology in January this year. ICE’s recent role in the deportation and separation of immigrant families in the US has drawn large-scale protests and widespread backlash across the country.
The possibility that this technology could have been used for such purposes put Microsoft in the firing line and left the company scrambling to explain the nature of its contract. Smith clarified that the contract is limited to “supporting legacy email, calendar, messaging and document management workloads” and reiterated that Microsoft does not support the policy.
This became the catalyst for Smith to call for the regulation of facial recognition technology. He posed two scenarios for the future use of the technology: one positive, one negative.
Imagine finding a young missing child by recognizing her as she is being walked down the street. Imagine helping the police to identify a terrorist bent on destruction as he walks into the arena where you’re attending a sporting event. Imagine a smartphone camera and app that tells a person who is blind the name of the individual who has just walked into a room to join a meeting.
Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first.
The problem is that those futures have already arrived.
In June, Jarrod Ramos murdered five employees of the Capital Gazette in Anne Arundel County, Maryland, over the paper’s reporting of his criminal past. As he attempted to leave the scene by blending in with fleeing bystanders, he refused to provide identification to local officers, and fingerprinting failed to establish his identity. He was identified using facial recognition technology run against a collection of state databases.
In non-democratic countries, governments are already rolling out persistent, large-scale surveillance of citizens using big data and facial recognition. And while the stores at the shopping mall aren’t sharing information with each other yet, they have been deploying dual-use facial recognition technology to prevent shoplifting and to monitor customers’ in-store browsing habits to increase sales.
So, the future is already here. What do we do about it?
Behind the facial match is the data. Who has access to it? How and why is it used? In what situations can different levels of government employ the technology, and what restrictions are there on private-sector use? How long is biometric data retained, and can it be kept secure?
These are basic questions currently left unanswered by US law and by the laws of many other countries. Smith himself raises a series of ethical issues about the legal standards governing facial recognition algorithms. The spectrum runs from persistent, passive collection of everyone’s data to data used only in a ticking-time-bomb situation. This variance reveals both the scale of the technology and the governance space democracies must define.
As Smith notes, this does not absolve technology companies of responsibility. Microsoft has appointed an AI ethics board to help navigate new technologies and vet potential customers. This internal approach to fostering good governance is as important as regulation (although it cannot replace it) in helping create a culture of ethical technology development, because companies can otherwise resist regulatory frameworks and view them as a series of loopholes to exploit and costs to be avoided. Both are necessary to create an environment in which the potential benefits of facial recognition technology can be realised without its abuse damaging democratic freedom.
However, the distinct problem of dual-use applications of AI poses questions that should be answered before the final sign-off. The Intelligence Advanced Research Projects Activity (IARPA) in the US has a set of questions it asks of proposed technologies: If the technology were leaked, stolen, or copied, would we regret having developed it? Can the technology be made less prone to theft, replication and mass production? How long would it take a major nation-state competitor to weaponize the technology after learning about it? In light of the geostrategic implications of AI products, major technology companies should consider these questions and add to them.
This addition should be a set of civil questions focused on how a proposed AI technology operates within a democracy. How can the risk of discrimination be mitigated? How will the technology anonymise and delete the data it collects? Can it scale to affect democracy, and in what ways? Questions like these could help tech companies balance their responsibilities to civil society and the international community. That requires wider societal discussion of what rights citizens need in the digital age. This discussion must occur now, for, as we have seen, the future is already here. It is from this base of regulation and corporate governance that democracies can refine ideas on how to contribute positively to the global debate on AI ethics.