Police and security forces around the world are testing automated facial recognition systems as a way to identify criminals, fugitives and terrorists. But how accurate is the technology, and how easily could it – and the artificial intelligence that powers it – be used as a means of oppressing citizens?
Imagine the following situation: a terrorist suspect sets out on a suicide mission in the densely populated center of a city. If he detonates a bomb, hundreds of people could die or be seriously injured.
Security cameras that scan faces in the crowd identify the man and compare his features with photos from a database of known terrorists or “people of interest” to the security services.
The system raises an alarm and rapid-response anti-terror forces are sent to the scene, where they kill the suspect before he can detonate the explosives. Hundreds of lives are spared. Technology saves the day.
But what if the facial recognition technology was wrong? What if the man was not a terrorist, but just someone unlucky enough to look like one? An innocent person would have been summarily killed because the authorities relied too heavily on a fallible system.
What if this innocent person were you?
This is just one of the ethical dilemmas posed by this technology and the artificial intelligence that underpins it.
Training machines to “see” – to recognize and differentiate objects and faces – is notoriously difficult. Computer vision, as the field is called, was until recently struggling to tell a cookie from a chihuahua – a benchmark test for the technology.
Technical limitations and skewed databases
Timnit Gebru, a computer scientist and technical co-lead of Google’s Ethical Artificial Intelligence team, says facial recognition has a harder time differentiating between men and women the darker their skin tone. A dark-skinned woman is far more likely to be mistaken for a man than a lighter-skinned woman is.
But there are also problems with the data used to train these algorithms. “The original datasets are mostly white and male, heavily skewed against darker skin types – there are huge error rates by skin tone and gender.”
According to her, “about 130 million US adults are already in face recognition databases.” The country has 327 million inhabitants.
Because of these problems, the city of San Francisco, California, recently banned the use of the technology by transportation agencies and police forces. The measure amounts to an admission of its imperfections and of the threats it poses to civil liberties. But other American cities, and countries around the world, are increasingly testing the tool.
One example is the police force in South Wales. London, Manchester and Leicester have also tested the technology, drawing criticism from civil liberties organizations such as Liberty and Big Brother Watch, both concerned about the number of false positives the systems generate – that is, innocent people mistakenly identified as potential offenders.
“Algorithmic bias is something that should concern us all,” Gebru said. “Predictive policing is a high-stakes scenario.”
Reinforcement of racial prejudice
Since Black people account for 13 percent of the US population but 37.5 percent of the nation’s prison population, poorly designed algorithms fed with biased data may predict that Black people are more likely to commit crimes.
It does not take a genius to figure out what implications this can have for policing and social policies.
This week, academics at the University of Essex in England concluded that matches made in trials by London’s Metropolitan Police were wrong 80 percent of the time, and could lead to serious miscarriages of justice and violations of citizens’ right to privacy.
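Figures like this reflect the base-rate problem: when genuine targets are rare in a crowd, even a system with a low per-face error rate produces alerts that are mostly wrong. A back-of-the-envelope calculation (a sketch using purely hypothetical numbers, not the Essex study’s data) illustrates the effect:

```python
# Illustration of the base-rate problem in face-recognition alerts.
# All numbers below are hypothetical, chosen only to show the arithmetic.

crowd_size = 100_000         # faces scanned at an event
watchlist_present = 10       # genuine "people of interest" in the crowd
sensitivity = 0.90           # chance a real target triggers an alert
false_positive_rate = 0.001  # chance any innocent face triggers an alert

true_alerts = watchlist_present * sensitivity
false_alerts = (crowd_size - watchlist_present) * false_positive_rate
share_wrong = false_alerts / (true_alerts + false_alerts)

print(f"True alerts:  {true_alerts:.0f}")    # ~9
print(f"False alerts: {false_alerts:.0f}")   # ~100
print(f"Share of alerts that are wrong: {share_wrong:.0%}")  # ~92%
```

Even with a one-in-a-thousand false positive rate, roughly nine out of ten alerts in this scenario point at innocent people – which is why critics focus on how such systems perform against large crowds rather than on headline accuracy figures alone.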
A British man, Ed Bridges, began a legal battle over the use of facial recognition by the South Wales Police after his picture was taken while he was shopping, and UK Information Commissioner Elizabeth Denham has expressed concern about the lack of a legal framework governing the technology’s use.
But such concerns have not stopped Amazon from selling its Rekognition facial recognition tool to US law enforcement agencies, despite a revolt by disgruntled shareholders.
Amazon says it bears no responsibility for how customers use the tool. But one need only compare that attitude with that of Salesforce, a technology company that developed its own image recognition tool called Einstein Vision, to see that greater corporate engagement with the ethics of the technology is possible.
“Facial recognition technology may be warranted in a prison to keep track of prisoners or prevent gang violence,” Kathy Baxter, who leads Salesforce’s ethical Artificial Intelligence practice, told the BBC. “But when police wanted to use it when arresting people, we considered that inappropriate.”
“We need to ask ourselves if we should use Artificial Intelligence in some scenarios.”
All the more so because facial recognition is also being used by the military, with technology vendors claiming that their software can not only identify potential enemies but also discern suspicious behavior.
Yves Daccord, the director-general of the International Committee of the Red Cross, is concerned about these facts.
“War is high-tech today – we have autonomous drones and autonomous weapons making decisions about who is a combatant and who is not. Are their decisions correct? They could have a mass-destruction impact,” he warned.
However, there seems to be a growing global consensus that recognition technology is far from perfect and needs to be regulated.
“It is not a good idea to leave AI solely in the hands of the private sector, because it can have such great influence,” concludes Chaesub Lee, director of the telecommunication standardization bureau of the International Telecommunication Union.
“The use of good quality data is essential, but who guarantees that these are good data? Who guarantees that the algorithms are not biased? We need a multidisciplinary approach from the various stakeholders.”
Until then, facial recognition remains under suspicion.