This is not a minor issue. The term "artificial intelligence" is no longer the preserve of science fiction novels and computer books. News about exciting advances – such as computers capable of assisting medical personnel with diagnostic tasks or of driving unmanned vehicles – appears more and more frequently and is increasingly connected with our lives. However, not all the news is so encouraging. What Buolamwini experienced is not an isolated case: in recent years, we have seen facial recognition systems that perform worse on Black women than on white men, and even English-to-Spanish translators that perpetuate gender stereotypes. These examples illustrate a phenomenon known as "algorithmic bias": systems whose predictions systematically favor one group of individuals over another, resulting in unfair or unequal outcomes. But what are the reasons that lead these systems to generate biased predictions?
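One common way to make this kind of bias concrete is to compare a system's accuracy across demographic groups. The sketch below is a minimal illustration with entirely hypothetical data (the function name, group labels, and numbers are invented for this example, not taken from any of the studies mentioned above):

```python
# A minimal sketch (hypothetical data) of measuring algorithmic bias:
# compare a classifier's accuracy across demographic groups.

def accuracy_by_group(predictions, labels, groups):
    """Return the fraction of correct predictions for each group."""
    stats = {}
    for pred, label, group in zip(predictions, labels, groups):
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (pred == label), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

# Hypothetical outputs of a recognition system on six examples,
# split between two groups "A" and "B".
predictions = [1, 1, 0, 1, 0, 0]
labels      = [1, 1, 1, 1, 0, 1]
groups      = ["A", "A", "B", "A", "B", "B"]

rates = accuracy_by_group(predictions, labels, groups)
print(rates)  # group A is classified far more accurately than group B
```

Even when a system's overall accuracy looks acceptable, a per-group breakdown like this can reveal that the errors are concentrated in one group – which is precisely what the facial recognition studies found.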
To understand this, let us start by defining two concepts that will be useful throughout this essay: "artificial intelligence" and "machine learning".

When intelligence becomes artificial and learns automatically

There are many definitions of "artificial intelligence". Here we will use a general one offered in one of the field's foundational books, which describes artificial intelligence as the discipline concerned with understanding and building intelligent (but artificial) entities.