Personal voice assistants and facial recognition devices were tested as a way to demonstrate that embedded biases exist in technologies. The experiment showed a considerable delay in the recognition of female and black faces in comparison to male and white faces. This video features only the most extreme examples. The interview with Siri was not included (only an audio snippet appears at the beginning); however, Siri’s data was also found to be incomplete.
Thinking that AI systems are biased comes as unnaturally to us as thinking that maths may be sexist or racist. We assume that technologies are neutral because, since the Enlightenment, quantitatively measured data has been understood as an abstract extraction from the world. This abstractness was never disputed, which allowed AI systems to be presented to the world as ownerless entities that capture and reproduce an ‘objective human knowledge’ that is ‘universal’[i]. The tech companies creating AI systems maintain that when AI is biased, it is because the data provided is unrepresentative. Thus, it is the data, not the systems, that contains the bias. I have argued in past entries that the data is unrepresentative because AI creators are predominantly male, middle-class and white[ii]. But this time I would like to take it a step further and venture deeper into the philosophical trenches.
What if the language and the very knowledge inscribed in AI systems are biased? What if an ‘objective human knowledge’ that is ‘universal’ simply doesn’t exist? The premise I propose is that the genealogy of knowledge silently informing AI discourses reduces to a male legacy, social exclusivism and biological essentialism. In short, the very knowledge of AI systems is biased, in and of itself.
Bear with me while I make my case.

Epistemology is the branch of philosophy that studies the nature and theory of knowledge. I will briefly explain two epistemologies: traditional and feminist. As you may expect, feminist epistemology branched out of feminist philosophy and challenges the inequalities embedded in traditional epistemology.
In traditional epistemology the nature of the knowing subject goes unquestioned because it is assumed not to affect the character of the knowledge. Who holds the knowledge is unquestioned, and what this knowledge is, is taken for granted. In other words, the business of knowing is cast as ‘S knows that p’, in which ‘S’ is ‘perspectiveless’, universal and unquestioned, and ‘p’ is a piece of propositional knowledge. In feminist epistemology both the knowing subject (the who) and the knowledge (the what) are scrutinised, as is how one comes to know in the first place. Feminist epistemology challenges both ‘S’ and ‘p’ because the knowing subject and the knowledge a subject has are not as neatly separable as traditional epistemology maintains.
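For readers who want the schema spelled out, here is the standard textbook rendering of ‘S knows that p’ in its tripartite, justified-true-belief form. To be clear, this is the traditional formalization itself, not anything the feminist critique adds, and the notation (K for ‘knows’, B for ‘believes’, J for ‘is justified’) is the usual epistemic-logic shorthand rather than something from the sources cited here:

```latex
% 'S knows that p' unpacked as justified true belief (JTB):
\[
  K_S\,p \;\iff\; p \;\wedge\; B_S\,p \;\wedge\; J_S\,p
\]
% where p is true, B_S p reads 'S believes that p', and J_S p reads
% 'S is justified in believing that p'. Note that S carries no index
% for body, culture or social location: the 'perspectiveless' knower.
```

Written this way, the feminist objection is easy to state: nothing in the formula allows ‘S’ to be situated.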
In practice, traditional epistemology[iii] would acknowledge that culture is important but secondary in the production of knowledge, whereas feminist epistemology sees culture as crucial in shaping individual – and collective – knowledges. Traditional epistemology also places individual perceptions hierarchically above collective forms of knowing, so that ‘knowing that’ knowledge is valued more than ‘knowing how’ knowledge.
To exemplify the critique of traditional epistemology we can turn to Richard Foley’s book “The Theory of Epistemic Rationality”, in which rational beliefs and actions are explained as the “account of judgments made from some nonweird perspective about how effectively the beliefs or actions of an individual promote some nonweird goal”. Foley never explains what he means by ‘nonweird’, which turns the term into a regulator of rationality. A tacit normative rule is assumed, along with a hierarchy of knowers in which the nonweird sit at the top and the weird, or weirder, lower down. Foley is defining an ‘our’ culture, yet who is included in ‘our’ is nowhere explicitly stated.

This means that ‘our’ is either indisputable for the knower or epistemically above others. In the epistemic formula it means that ‘S’ is an undefined knowing individual who impartially observes the world and whose senses cannot be called into question. ‘We’ all stand in for ‘S’, in the sense that ‘we’ all share the same ‘nonweird’ perspective. This ‘we-saying’ in traditional epistemology embodies cultural imperialism in its highest and clearest form: it hegemonically rejects a plurality of views and condemns those who think otherwise to weirdness.
Although it would be wrong to suggest that AI presents a unified epistemological front, numerous studies have demonstrated that the school of thought programming AI to recreate human cognitive minds is based on traditional epistemological tenets. In a participant observation study of AI laboratories, anthropologist Diana Forsythe[iv] found that the so-called ‘knowledge engineers’ often shared the same conception of knowledge. Firstly, they assumed knowledge is ‘universal’ – meaning, equal for all – because it is essentially a cognitive phenomenon. Secondly, they ‘deleted the social’ and ‘the cultural’ from their concepts of knowledge when building expert systems. So instead of understanding knowledge as something that is culturally contingent, reconstructed over time and different for every individual – as in the social science conception of knowledge – they understood knowledge as a constant, stable entity that can be programmed. For philosopher Hubert Dreyfus[v], AI is doomed to fail precisely because, in valorising ‘knowing that’ knowledge over ‘knowing how’ knowledge, it leaves considerable ways of knowing unaccounted for.
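To make Forsythe’s observation concrete, here is a deliberately toy sketch – my own illustration, not code from any of the cited studies, with rule content invented for the example – of how a classic rule-based expert system stores knowledge: as fixed ‘knowing that’ propositions, chained together by a forward-chaining loop.

```python
# A toy rule-based 'expert system' in the classic style: knowledge is
# stored as fixed if-then rules, i.e. pure 'knowing that' propositions.
# Illustrative sketch only; the rule content is invented.

RULES = [
    # (facts required, fact to conclude)
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_specialist"),
]

def forward_chain(facts):
    """Apply rules until no new facts can be derived (forward chaining)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # A rule fires when all its required facts are already derived.
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

if __name__ == "__main__":
    print(sorted(forward_chain({"fever", "cough", "short_of_breath"})))
    # ['cough', 'fever', 'flu_suspected', 'refer_to_specialist', 'short_of_breath']
```

Notice what the data structure has no slot for: who supplied each rule, the context in which it holds, or the tacit ‘knowing how’ an expert exercises when judging whether a rule applies at all. The social is, quite literally, deleted from the schema.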
Moreover, ‘expert’ human knowledge is a central focus in AI. We consider ‘expert knowledge’ a valuable commodity acquired through years of education and training, and our society often places it hierarchically above the knowledge of other individuals and groups. Yet, unfortunately, ‘experts’ – from high court judges to medical consultants and university professors – are still predominantly white, middle-class and male. For researcher Alison Adam, the process is entirely circular, for “the more society values such knowledge, the more the possessors of that knowledge are able to develop the means, including the kind of advanced technology to maintain hegemony”[vi].
All of this means that the ability of an AI to think like a human is based on an interpretation of thinking and knowing located predominantly in the Western rationalist tradition. This tradition articulates hegemonic dualisms such as female/male, black/white, other/self, in which the first term of every pair is placed epistemically and hierarchically beneath the second. AI is not neutral, and it is not value-free. The very fabric of the methods and practices on which AI is built, the social and cultural constructions of technology, and, finally, the epistemology of the Western sciences of knowing all contain sexist, racist and Eurocentric biases. It is irresponsible, at the very least, not to question the supposed objectivity and universalism of the knowledge we are inscribing in these technologies.
I rest my case.
[i] Adam, A., 1995. Artificial Intelligence and Women’s Knowledge – What Can Feminist Epistemologies Tell Us. Women’s Studies International Forum, 18(4), pp.407-415. / Adam, A., 2000. Deleting the Subject: A Feminist Reading of Epistemology in Artificial Intelligence. Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Science, 10(2), pp.231-253.
[ii] Find the other entries at: https://iewomen.blogs.ie.edu/women-tech/
[iii] As set out in Audi, R., 1998. Epistemology: A Contemporary Introduction to the Theory of Knowledge. London: Routledge.
[iv] Forsythe, D.E., 1993. Engineering Knowledge: The Construction of Knowledge in Artificial Intelligence. Social Studies of Science, 23(3), pp.445-477.
[v] Dreyfus, H.L., 1979. What Computers Can’t Do: The Limits of Artificial Intelligence. Rev. ed. New York: Harper & Row. / Dreyfus, H.L., 1992. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press.
[vi] Adam, A., 1993. Gendered Knowledge – Epistemology and Artificial Intelligence. AI & Society: The Journal of Human-Centred Systems, 7(4), pp.311-322.