Hadeel Naeem

c:o/re Junior Fellow 10/23 – 09/24

Photo by Jana Hambitzer

Hadeel Naeem works in epistemology, the philosophy of cognitive science, and the philosophy of technology. She holds a PhD from the University of Edinburgh, where she researched the extended mind and extended knowledge debates. Her PhD dissertation contends that some level of personal participation is required for knowledge: we cannot attribute knowledge solely on the basis of our cognitive processes operating properly. Her current research focuses on how to responsibly extend the mind with AI systems and thereby acquire extended knowledge.

Belief attribution in human-AI interaction

I am interested in understanding how we recruit AI systems to form beliefs, when such recruitment results in extended beliefs, and how we ought to attribute these beliefs. According to the extended mind thesis, we sometimes employ external resources so that our mental states, such as beliefs, are partially realised in them. In these cases, our beliefs are extended into the environment, and we need an account of why they ought to be attributed to us. Where these external resources are non-autonomous systems, like notebooks or phones, we can ascribe beliefs to agents because the agents integrate these resources into their cognitive systems and take responsibility for them. AI systems, however, are autonomous: they can initiate and monitor their own integration into our cognitive systems and can therefore take responsibility for their own employment. Thus, it's not clear how we ought to attribute the extended beliefs formed through AI extension.

With an account that explains how we recruit AI systems to form beliefs and how these beliefs ought to be attributed, I will provide new perspectives on current issues in human-AI interaction. One major motivation for this project is to understand how medical practitioners and AI systems form diagnostic beliefs or beliefs about appropriate medical procedures. When such beliefs are partially realised in the AI system and partially in the surgeon's brain, it's not clear who ought to be blamed if the surgery or the diagnosis goes wrong.

Overall, my proposed study aims to deepen our understanding of AI, enrich AI explainability, and cultivate trust in AI.

Publications (selection)

Naeem, Hadeel. 2023. "Is a subpersonal virtue epistemology possible?" Philosophical Explorations 26(3): 350–367. https://doi.org/10.1080/13869795.2023.2183240