What would it look like to live in a world without labels, a future without labels? Where even the technology we create does not label you by gender, race, creed, sexuality, or accessibility challenges. The challenge with technology is that, as much as we try to avoid it, bias can be programmed into the design, both intentionally and unintentionally. Diversity in the humans designing the tech, as well as diversity in the humans using it, can be the key to overcoming these limitations.
Facial recognition software has run into problems in America, where law enforcement has embraced the technology, using it to track and apprehend suspects. NIST, the National Institute of Standards and Technology, conducted a study of facial recognition software created by companies such as Microsoft, Intel, Panasonic, SenseTime, and Vigilant Solutions, among 99 participating organisations, and researchers found that facial-recognition software produced higher rates of false positives for Black and Asian people than for white people. The software had a higher rate of false positives for those groups by a factor of 10 to 100 times, depending on which algorithms were used. This has created pushback; ACLU senior policy analyst Jay Stanley said the NIST study is evidence that facial recognition is a “dystopian technology” and called on government agencies to stop using it. The ACLU has consistently opposed facial recognition and is suing the federal government to release information about its use of facial-recognition software made by Amazon and Microsoft. “Even government scientists are now confirming that this surveillance technology is flawed and biased,” he said. “One false match can lead to missed flights, lengthy interrogations, watchlist placements, tense police encounters, false arrests, or worse.”
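To make the metric behind those findings concrete, here is a minimal sketch of how a per-group false-positive rate can be computed. The match outcomes below are entirely made-up illustrative data, not NIST results, and the function name is our own.

```python
def false_positive_rate(results):
    """Compute the false-positive rate from (predicted_match, actual_match) pairs.

    A false positive is an impostor comparison (actual_match is False)
    that the system nonetheless declared a match.
    """
    impostor_predictions = [pred for pred, actual in results if not actual]
    if not impostor_predictions:
        return 0.0
    return sum(impostor_predictions) / len(impostor_predictions)

# Made-up comparison outcomes for two hypothetical demographic groups:
group_a = [(True, False), (False, False), (False, False), (False, False)]
group_b = [(True, False), (True, False), (False, False), (False, False)]

print(false_positive_rate(group_a))  # 0.25
print(false_positive_rate(group_b))  # 0.5
```

When this rate differs substantially between groups, as NIST found, the same confidence threshold produces many more wrongful matches for some people than for others.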
The NIST study confirms existing research that has shown racial and gender bias in facial-recognition technology. “While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied,” NIST researcher Patrick Grother, the report’s primary author, said in a statement. “This data will be valuable to policymakers, developers and end-users in thinking about the limitations and appropriate use of these algorithms.” The NIST study also found that facial-recognition software made by Asian companies was less likely to misidentify Asian faces. “These results are an encouraging sign that more diverse training data may produce more equitable outcomes, should it be possible for developers to use such data,” Grother said in a statement.
In a move in the right direction, and given that a person’s gender cannot be inferred from appearance, Google has removed the labels “man” and “woman” from its Cloud Vision API, replacing them with the gender-neutral label “person”. Many facial analysis and facial recognition systems on the market today predict gender but struggle to identify people who do not conform to gender norms, people who are transgender, and women of colour. Labelling is used to categorise images and train machine learning models, but Google is removing gender labels because gendered labelling violates Google’s second AI principle. That principle, stated in full, reads: “AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognise that distinguishing fair from unfair biases is not always simple and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.” This visionary step really sets Google ahead of the pack in the tech game.
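For developers, the practical effect of a change like this can be thought of as a post-processing step that maps gendered person labels to a neutral one. The sketch below is a hypothetical illustration in that spirit, not Google's actual implementation; the label list and function name are our own inventions.

```python
# Hypothetical illustration: map gendered person labels returned by an
# image-labelling model to the neutral "Person" label. This is NOT how
# Google implements the change internally; it only shows the visible effect.
GENDERED_LABELS = {"man", "woman"}

def neutralise_labels(labels):
    """Replace any gendered person label with the neutral 'Person'."""
    return ["Person" if label.lower() in GENDERED_LABELS else label
            for label in labels]

print(neutralise_labels(["Woman", "Dog", "Man", "Bicycle"]))
# ['Person', 'Dog', 'Person', 'Bicycle']
```

The point of the change is exactly what this sketch shows: downstream consumers of the API no longer receive a gender guess at all, only the fact that a person was detected.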
Litigation, for the most part, has been the main motivation for making learning easily available to accessibility-challenged people in higher education institutions. People who are blind, deaf, or have learning disabilities face very different challenges in accessing and enjoying new technology, especially in a university setting. As we noted in our earlier blog on human-centred design, technology cannot solve humanity’s problems by itself; it will always need the guidance of humans to be effective. Learning institutions are starting to work together with tech companies to address the everyday problems of accessibility-challenged students who struggle with technology that relies on having all of our senses.
Meeting humans where they are and taking them where they want to go is only one of the goals we’re striving for in a future without labels. Inclusivity and leaving no human behind are also key to making our future look brighter, a future without labels.
Connect with Andrew Butow on LinkedIn and stay tethered to the latest from Earth2Mars.