Elsevier, Computer Law and Security Review, Volume 38, September 2020
This article examines a number of ways in which the use of artificial intelligence technologies to predict individuals' performance and to decide their entitlement to favourable outcomes affects individuals and society, analysing these effects through a social justice lens. Particular attention is paid to the experiences of individuals who have historically faced disadvantage and discrimination. As a case study, the article uses a university admissions process in which the university relies on a fully automated decision-making process to evaluate a candidate's capability or suitability. The article posits that the artificial intelligence decision-making process should be viewed as an institution that reconfigures relationships between individuals, and between individuals and institutions. Such processes have institutional elements embedded within them whose operation disadvantages groups who have historically experienced discrimination. Depending on how an artificial intelligence decision-making process is designed, it can produce solidarity or segregation between groups in society. Its operation may also fail to reflect individuals' lived experiences and thereby undermine the protection of human diversity. Some of these effects are linked to the creation of an ableist culture and to the resurrection of eugenics-type discourses. The article concludes that decisions which involve representing and evaluating an individual's capabilities are among those that human beings should make. The legislature should respond accordingly by identifying contexts in which it is mandatory to employ human decision-makers and by enacting the relevant legislation.