Man Proves Artificial Intelligence Is ‘Racist’ By Attempting To Get Passport Picture Approved
A TikTok user has drawn on his experience of applying for a new passport to demonstrate how racial bias can exist in artificial intelligence (AI).
Joris Lechêne, a model and artist who also works to educate people on racial bias, explained how his passport photo was rejected by automated identification software that struggles to distinguish Black faces.
Sharing a screenshot of the rejected photo, Lechêne wrote, ‘In the process of applying for a British passport, I had to upload a photo, so I followed every guideline to the T and submitted this melanated hotness.’
‘Lo and behold, that photo was rejected because the artificial intelligence software wasn’t designed with people of my phenotype in mind,’ he said, explaining, ‘It tends to get very confused by hairlines that don’t fall along the face and somehow mistakes it for the background, and it has a habit of thinking that people like me keep our mouths open.’
According to information provided by the software, which is used by the UK government as part of its online passport application process, Lechêne’s photo ‘doesn’t meet all the rules and is unlikely to be suitable for a new passport’. Although Lechêne is clearly distinguishable from the grey background and his mouth is closed, the software stated that the image had been rejected because his ‘mouth may be open’ and it was ‘difficult’ to tell him apart from the backdrop.
Lechêne went on to say that his experience is just one example of how racial bias is built into AI and automated software, and goes to show that ‘robots are just as racist as society is’.
‘This is just a reminder that, if you believe that automation and artificial intelligence can help us build a society without biases, you are terribly mistaken,’ he added.
While there is a common perception that automated software is inherently objective, researchers in ethical AI have warned that software in fact incorporates human biases due to a lack of diversity in data and programming. ‘Society is heavily skewed towards whiteness and that creates an unequal system,’ Lechêne explains. ‘And that unequal system is carried through the algorithm.’
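The mechanism researchers describe can be illustrated with a toy simulation. The sketch below is entirely hypothetical: it uses made-up numbers, not data from any real passport system, and stands in for a face detector with a single-threshold classifier. A detector calibrated on training data dominated by one group ends up tuned to that group and misses faces from the under-represented group far more often.

```python
import random

random.seed(0)

def sample(mean, n):
    """Draw n synthetic 'signal strength' values around a group mean."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Hypothetical skewed training set: 95% group A faces, 5% group B faces,
# plus an equal amount of non-face background signal.
group_a = sample(5.0, 950)
group_b = sample(2.0, 50)
background = sample(0.0, 1000)

train_faces = group_a + group_b

def accuracy(t):
    """Overall training accuracy of the rule 'face if signal > t'."""
    hits = sum(x > t for x in train_faces) + sum(x <= t for x in background)
    return hits / (len(train_faces) + len(background))

# Choose the threshold that maximises accuracy on the skewed mix --
# it is pulled almost entirely towards the over-represented group A.
threshold = max((t / 10 for t in range(-20, 60)), key=accuracy)

# Detection rates on fresh samples from each group.
rate_a = sum(x > threshold for x in sample(5.0, 1000)) / 1000
rate_b = sum(x > threshold for x in sample(2.0, 1000)) / 1000
print(f"threshold={threshold:.1f}  group A detected: {rate_a:.0%}  "
      f"group B detected: {rate_b:.0%}")
```

Because group B makes up only 5% of the training faces, sacrificing its detection rate barely dents overall accuracy, so the chosen threshold detects group A almost perfectly while missing most of group B. This is the shape of the problem with skewed datasets: the system can look highly accurate on average while failing badly for a minority group.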
Lechêne’s experience is just one of a host of examples of facial recognition technology in particular struggling to distinguish Black faces. The New York Times has previously reported incidents of Google Photos identifying Black people as ‘gorillas’, while a 2018 ACLU investigation found that a facial recognition program developed by Amazon was more likely to wrongly identify Black people as criminals.
If you have a story you want to tell, send it to UNILAD via [email protected]