Explanation Types Revisited: From Counterfactuals to Ensure-enabled Explanations in the Confidence–Uncertainty Space
Conference
Regional Statistics Conference 2026
Format: IPS Abstract - Malta 2026
Session: IPS 1289 - Explainable data science
Friday 5 June 8:30 a.m. - 10:10 a.m. (Europe/Malta)
Abstract
In high-stakes decision-support settings, explanations must not only justify predictions but also support reasoning about actionable uncertainty. Most uncertainty-aware XAI methods quantify uncertainty without a structured mechanism for selecting explanations that reduce it. We introduce Ensure-enabled Explanations, a framework characterising explanation candidates by their joint effect on model confidence and uncertainty. The framework defines four explanation types (ensured, counterpotential, semipotential and superpotential) that complement counterfactual, semifactual and superfactual explanations by separating interpretability from actionable uncertainty reduction. Ensure-enabled Explanations characterises and selects among candidates produced by an existing explainer, rather than generating explanations itself. To illustrate the framework in practice, we extend Calibrated Explanations by operationalising actionable uncertainty as calibrated prediction-interval width, introducing filtering methods and a parameterised ensured ranking criterion that trades confidence against uncertainty reduction, and analysing its behaviour across 49 datasets covering binary classification, multiclass and regression tasks. We further map representative methods (CLUE, δCLUE, DIVERSE CLUE, interactive example-based explanations and QUCE) into the framework, clarifying when they merely report uncertainty and when they enable controlled reduction. The contributions provide a unified vocabulary and selection principle for uncertainty-aware XAI.
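A parameterised ranking that trades confidence against uncertainty reduction can be sketched as follows. This is a minimal illustration only: the function names, the linear scoring form, and the weight `w` are assumptions for exposition, not the criterion defined in the paper.

```python
# Hypothetical sketch: each candidate explanation carries a calibrated
# confidence and a calibrated prediction-interval width (its uncertainty).
# A weight w in [0, 1] trades confidence against uncertainty reduction.

def ensured_score(confidence: float, interval_width: float, w: float = 0.5) -> float:
    """Higher is better: reward confidence, penalise interval width."""
    return w * confidence - (1.0 - w) * interval_width

def rank_candidates(candidates, w: float = 0.5):
    """Sort (confidence, interval_width) candidate pairs, best first."""
    return sorted(candidates, key=lambda c: ensured_score(c[0], c[1], w), reverse=True)

# Example: a highly confident but wide-interval candidate can rank below
# a slightly less confident candidate with a much narrower interval.
candidates = [(0.90, 0.40), (0.80, 0.10), (0.95, 0.60)]
ranked = rank_candidates(candidates, w=0.5)
print(ranked)  # (0.80, 0.10) ranks first at w = 0.5
```

Varying `w` toward 1 recovers a purely confidence-driven ranking, while `w` near 0 prioritises uncertainty reduction alone.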