Bram Vaassen is a postdoctoral fellow in philosophy at the Department of Historical, Philosophical and Religious Studies at Umeå University.
Abstract: Advancements in machine learning have fuelled the popularity of using AI decision algorithms to streamline procedures such as bail hearings (Feller et al., 2016), medical diagnoses (Rajkomar et al., 2018; Esteva et al., 2019) and recruitment (Heilweil, 2019; Van Esch et al., 2019). Academic articles (Floridi et al., 2018), policy texts (HLEG, 2019), and popularizing books (O'Neil, 2016) alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation (Lombrozo, 2011; Hitchcock, 2012), I raise a moral concern for opaque algorithms that often goes unnoticed: opaque algorithms can undermine users' autonomy by hiding salient pathways for affecting their outcomes. I argue that this concern is distinct from those typically discussed in the literature and that it deserves further attention. I also argue that it can guide us in deciding what degree of transparency should be demanded. Plausibly, the required degree of transparency is attainable without 'opening the black box' of machine learning algorithms.