Abstract
AI systems designed for human
interaction often include anthropomorphic elements, intended to enhance
usability and acceptability by tapping into, or exploiting, human social
cognition. However, this anthropomorphic design can also lead users to
mistakenly view and treat AI as if it has emotions, intentions, or moral
agency. Common solutions, such as explicitly notifying users they are
interacting with an AI, may not effectively prevent these unintended
perceptions. This talk critically examines such strategies, highlighting
their limitations and emphasizing the need to develop approaches that
directly engage with the psychological mechanisms behind our tendency to
anthropomorphize AI systems.
Dorna Behdadi
is a TAIGA postdoctoral researcher in philosophy. Their current
research project investigates the conceptual and normative implications
of perceiving and treating AI systems as social and moral agents. Their
broader research interests include moral agency and responsibility,
moral psychology, and applied ethics, particularly the ethical challenges
emerging from interactions between humans and artificial intelligence.