Living with Uncertainty: Full Transparency of AI isn’t Needed for Epistemic Trust in AI-based Science
Description of Resource
Can AI developers be held epistemically responsible for the processing of their AI systems when these systems are epistemically opaque? And can explainable AI (XAI) provide public justificatory reasons for opaque AI systems' outputs? Koskinen (2024) gives negative answers to both questions. Here, I respond to her and argue for affirmative answers. More generally, I suggest that when considering people's uncertainty about the factors causally determining an opaque AI's output, it is worth keeping in mind that a degree of uncertainty about conclusions is inevitable even in entirely human-based empirical science, because inductive inference always carries a risk of error. Keeping this in mind may help us appreciate that requiring full transparency from AI systems before epistemically trusting their outputs might be unusually (and potentially overly) demanding.