Abstract: A large part of the explainable AI literature focuses on what explanations are in general, what algorithmic explainability is more specifically, and how to code these principles of explainability into AI systems. Much less attention has been devoted to the question of why algorithmic decisions and systems should be explainable, whether there ought to be a right to explanation, and why. We therefore explore the normative landscape of the need for AI to be explainable and of individuals having a right to such explanation. This exploration is particularly relevant to the medical domain, where the (im)possibility of explainable AI is high on both the research and practitioners' agendas. The dominant intuition overall is that explainability does and should play a key role in the health context. Notwithstanding the strong normative intuition for having a right to explanation, intuitions can be wrong. So we need more than an appeal to intuition when it comes to explaining the normative significance of having a right to explanation when being subject to AI-based decision-making. The aim of the paper is therefore to provide an account of what might underlie this normative intuition. We defend the 'symmetry thesis', according to which there is no special normative reason to have a right to explanation when 'machines', in the broad sense, make decisions, recommend treatment, discover tumors, and so on. Instead, we argue that we have a right to explanation in cases that involve automated processing which significantly affects our core deliberative agency and which we do not understand, because we have a general moral right to explanation whenever choices are made that significantly affect us but which we do not understand.