Photo by Carlos Muza on Unsplash
By Mehak Garg
AI has disrupted numerous industries such as communication, healthcare, and transportation by increasing access to information, powering surgical robots, and enabling self-driving cars. AI applications such as biometrics, deep learning platforms, and AI-optimized hardware are relatively new innovations that have the potential to make even bigger impacts across hundreds of industries. To ensure this impact remains positive, it is imperative that we consider the safety and ethics of AI before implementing it further in society.
In the past, we’ve seen AI misused. In mortgage lending, for example, some AI systems discriminate against minority groups even though they were programmed with the intention of being fair. According to researchers at UC Berkeley, “Both online and face-to-face mortgage lenders charge higher interest rates to black and Latino borrowers, costing those homebuyers up to half a billion dollars more in interest every year.” In 2018, Amazon discovered that its AI-based hiring system was biased against women: their resumes were ranked lower and reviewed less often, making them less likely to be hired. After the incident, Amazon scrapped the system and rebuilt it to mitigate the issue. AI can even be discriminatory in our search bars. For example, women searching for a job on Google are less likely than men to be shown executive jobs or leadership roles.
AI ethics are a set of standards that can guide developers in using AI technologies to ensure that the final product is moral and ethical. Technology can be deemed moral if it treats everyone in society equally and doesn’t stereotype different groups based on their income, gender, or race. To successfully bring a set of ethics to the field of AI, researchers must understand how these biases present themselves and the different ways AI technology can discriminate and be unsafe. There are three different levels of bias. The first level is historical bias that already exists in the data set. The second level encompasses representation and measurement bias, which result from how the data is collected and measured. The third level includes evaluation and aggregation biases, which result from choices made when actually building and testing the algorithm.
To make AI safe and ethical to use, researchers have to optimize algorithms to limit the effect of these biases. Just as in real life there are multiple ways to solve a problem, there are multiple ways to program an AI, and we can use different learning models for different types of problems. In fact, depending on the problem, unsupervised learning might be more discriminatory than supervised learning. Supervised learning is when we train the machine on labeled examples, as if it were in the presence of a teacher. Unsupervised learning is when we let the machine find patterns in the data without any guidance.
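The distinction between the two learning styles can be sketched in a few lines of code. This is a minimal, illustrative example on toy one-dimensional data; all the data values and function names here are assumptions for the sake of illustration, not anything from a real lending or hiring system.

```python
# Toy data: supervised learning gets labeled examples (the "teacher"),
# unsupervised learning gets only raw, unlabeled points.
labeled = [(1.0, "low"), (1.2, "low"), (5.0, "high"), (5.3, "high")]
unlabeled = [1.1, 5.1, 0.9, 5.2]

def nearest_label(x, examples):
    """Supervised: predict by copying the label of the closest training example."""
    return min(examples, key=lambda pair: abs(pair[0] - x))[1]

def two_means(points, iters=10):
    """Unsupervised: split the points into two clusters with no labels (2-means)."""
    c1, c2 = min(points), max(points)  # start centers at the extremes
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

print(nearest_label(1.05, labeled))  # supervised prediction: "low"
print(two_means(unlabeled))          # unsupervised grouping of the raw points
```

The supervised function can only be as fair as its labels, while the unsupervised one invents groupings on its own, which is one reason the two approaches can pick up bias in different ways.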
To combat historical bias, researchers and others involved with an AI project should repeatedly check the data set to make sure it represents a diverse sample. Asking sample test questions while combing through the data set, such as which demographic gets the most loans or whether women receive loans at the same rate as men in a loan-centered dataset, can help you find potential areas of bias in the algorithm. As a final check, companies should monitor the real-world results of their algorithms, such as the demographics that are actually receiving loans. Monitoring results regularly allows companies to act proactively and resolve these biases much more efficiently.
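A sample test question like the one above can be turned into a quick audit script. This is a minimal sketch on a hypothetical toy dataset; the field names (`group`, `approved`) and the records themselves are assumptions for illustration, not real lending data.

```python
# Hypothetical loan records: each row has a demographic group and an outcome.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(rows):
    """Compute the approval rate for each demographic group."""
    totals, approved = {}, {}
    for row in rows:
        g = row["group"]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + int(row["approved"])
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(records)
print(rates)  # a large gap between groups is a signal worth investigating
```

Running a check like this both while combing through the training data and again on real-world outcomes is what lets a team catch a bias early instead of discovering it after deployment.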
Fortunately, more and more companies have been prioritizing ethics and have outlined how an ethical algorithm should function. Different organizations, private companies, and researchers have established five goals for every AI system. According to Anna Jobin from the Health Ethics and Policy Lab, an AI system is deemed ethical or safe if it employs transparency, justice and fairness, non-maleficence, responsibility, and privacy. AI can make immeasurable impacts in fields spanning food production to the defense industry. Just as applying ethical scrutiny to Amazon’s hiring system helped reduce bias, coupling AI’s potential with ethical guidelines will result in magnified impacts that can further benefit society.