Pictured: Joy Buolamwini, graduate researcher at MIT Media Lab’s Civic Media Group
Image: Bryce Vickmark
Joy Buolamwini is a graduate researcher at the MIT Media Lab and the founder of the Algorithmic Justice League, an organisation that challenges bias in decision-making software. As a computer science undergraduate, Buolamwini worked on a robot that used computer vision to interact socially with humans. She realized that the robot, which could identify her light-skinned peers, was unable to identify her. At the time, Buolamwini assumed that someone would soon fix the issue.
But the issue was never fixed. Her anecdote is one of many: Google’s photo software tagged two black friends as gorillas, HP’s webcams easily tracked a white face but could not track a black one, and Nikon’s cameras repeatedly told at least one Asian user that their eyes were closed. Each incident may seem minor on its own, but when so many facial recognition systems struggle with non-white faces, the collective message to non-white people is that facial recognition was not built for them.
According to a study released by researchers at Stanford University and MIT, three commercially released facial recognition systems from three major tech companies all exhibit gender and skin-type bias. In those experiments, the error rate for determining the gender of light-skinned men was never larger than 0.8%. For dark-skinned women, however, the error rate jumped to 20% for one system and to more than 34% for the other two. These findings raise questions about the kind of data these neural networks are trained on; after all, a neural network is only as accurate as the variety of the data provided to it. Although the reported accuracy of these systems reached 97%, the data they were tested on was more than 77% male and more than 83% white.
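To see how a skewed benchmark can produce an impressive headline number, consider the following sketch. The subgroup shares and most of the per-group accuracies below are hypothetical, chosen only to be roughly consistent with the figures reported above (a ~97%-accurate system, an evaluation set that is mostly male and mostly white, and an error rate above 34% for dark-skinned women):

```python
# A minimal sketch (hypothetical numbers except where noted) of how a skewed
# benchmark can hide subgroup failures behind a high headline accuracy.

subgroups = {
    # name: (share of the test set, accuracy within that subgroup)
    "lighter-skinned men":   (0.60, 0.995),
    "lighter-skinned women": (0.25, 0.98),
    "darker-skinned men":    (0.10, 0.94),
    "darker-skinned women":  (0.05, 0.66),   # ~34% error, as in the study
}

# Overall accuracy is a weighted average, so the largest groups dominate it.
overall_accuracy = sum(share * acc for share, acc in subgroups.values())
print(f"Headline accuracy: {overall_accuracy:.1%}")   # ~97%

for name, (share, acc) in subgroups.items():
    print(f"{name:>22}: error rate {1 - acc:.1%} (share {share:.0%})")
```

Because the worst-served group makes up only a small slice of the test set, its one-in-three error rate barely dents the overall figure.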
When companies think about error, they typically look at the aggregate statistics: if a program is right 97% of the time, it is probably considered accurate enough. But most companies fail to check whether the system makes its errors randomly, spread across the whole population, or only with a certain group of people. Because a small group contributes little to an overall average, the headline number can hide a complete failure for that group: a system that is right 95% of the time could still be wrong for virtually every Asian person in the United States, and a system that is correct 99% of the time could make nearly all of its mistakes on transgender people.
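A minimal sketch of the kind of disaggregated check described here: instead of reporting a single accuracy figure, compute the error rate within each group. The group labels and counts are hypothetical; the point is that a 99%-accurate system can still fail for everyone in a small group.

```python
from collections import defaultdict

def error_report(records):
    """Compute the overall error rate and the error rate within each group.

    `records` is a list of (group, correct) pairs, where `correct` is True
    when the system's prediction matched the ground truth for that person.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1

    overall = sum(errors.values()) / sum(totals.values())
    per_group = {g: errors[g] / totals[g] for g in totals}
    return overall, per_group

# Hypothetical example: 99% overall accuracy, but every error falls on one group.
records = [("group A", True)] * 990 + [("group B", False)] * 10
overall, per_group = error_report(records)
print(f"overall error: {overall:.1%}")      # 1.0%
for group, rate in per_group.items():
    print(f"{group}: {rate:.1%}")           # group A: 0.0%, group B: 100.0%
```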
But how does one solve this problem? The first step is awareness. Companies must be made aware that their algorithms and programs need more diverse data sets in order to curb the bias so many of them exhibit. Joy Buolamwini and her co-author, Timnit Gebru, hope that their research paper will spur more work on gender and racial disparities in other fields of computer science.