Regional Statistics Conference 2026

Trustworthy AI in Automotive Ecosystems: A Distributed Responsibility Perspective Grounded in Statistical Reliability

Conference: Regional Statistics Conference 2026

Format: IPS Abstract - Malta 2026

Keywords: reliability, responsible AI, transparency, trustworthiness

Session: IPS 1289 - Explainable data science

Time: Friday 5 June, 8:30 a.m. - 10:10 a.m. (Europe/Malta)

Abstract

The rapid integration of artificial intelligence and data science into the super sport vehicle industry has enabled a growing portfolio of intelligent applications across the full lifecycle of high-end automotive products, spanning vehicle connectivity, customer-journey modeling, customer scoring, and decision-support systems. While these capabilities advance personalization and operational efficiency, they simultaneously introduce pressing challenges around trust, reliability, and accountability in high-stakes decision-making contexts.
Existing frameworks for responsible and trustworthy AI tend to focus on model-level properties such as explainability, fairness, and transparency. These approaches, however, commonly assume a centralized notion of responsibility in which individual models or components are evaluated against isolated criteria. Real-world automotive ecosystems defy this assumption: they are inherently distributed and complex, involving multiple stakeholders, interconnected data pipelines, heterogeneous models, and layered decision processes. In such environments, responsibility cannot be attributed to any single component; it emerges from the interactions of the system as a whole.
This work advances the argument that achieving responsibility in AI systems is fundamentally contingent on the correctness, reliability, and maturity of their underlying statistical methodologies. We foreground the central role of uncertainty quantification, calibration, and robustness under distributional shift as prerequisites for trustworthy decision-making. This establishes an intrinsic coupling between statistical validity and responsible AI, positioning statistics not merely as a supporting tool but as a core enabler of trustworthy intelligent systems.
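To make the calibration requirement concrete, the following minimal sketch (not part of the studies discussed here; all data are synthetic) computes a standard binned expected calibration error (ECE) for a binary classifier, comparing a well-calibrated predictor against an overconfident one:

```python
import random

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned ECE: the weighted average gap between mean reported
    confidence and empirical accuracy within each probability bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    n = len(probs)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(p for p, _ in bucket) / len(bucket)
        avg_acc = sum(y for _, y in bucket) / len(bucket)
        ece += (len(bucket) / n) * abs(avg_conf - avg_acc)
    return ece

random.seed(0)
# A synthetic, perfectly calibrated predictor: outcomes drawn with the stated probability.
probs = [random.random() for _ in range(10_000)]
labels = [int(random.random() < p) for p in probs]
print(f"calibrated ECE:    {expected_calibration_error(probs, labels):.3f}")  # near 0

# An overconfident predictor: always reports 0.9 while only half the outcomes are positive.
print(f"overconfident ECE: {expected_calibration_error([0.9] * 1000, [1] * 500 + [0] * 500):.3f}")  # 0.400
```

A near-zero ECE is the kind of evidence of statistical validity the argument above treats as a prerequisite for trustworthy downstream decisions.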
Drawing on a series of studies spanning multiple automotive applications, including connectivity-driven services, sequential customer-journey prediction, and customer value assessment, we develop a distributed-responsibility perspective for AI and data science systems. This perspective treats responsibility as a system-level property that emerges from the interplay between data generation processes, modeling assumptions, deployment contexts, and human engagement. Through this lens, we show that weaknesses in statistical reliability at any stage can propagate across the system, with direct consequences for trustworthiness outcomes.
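The propagation claim can be illustrated with a deliberately simple toy simulation (entirely hypothetical, not drawn from the studies above): an upstream scoring stage reports inflated success probabilities, and a downstream decision stage that trusts those scores silently misses its intended quality bar.

```python
import random

def downstream_decision(reported_prob, threshold=0.8):
    """Stage 2: act only when the upstream stage reports high confidence."""
    return reported_prob >= threshold

def realized_precision(overconfidence=0.0, n=20_000):
    """Simulate stage 1 reporting true success probabilities inflated by
    `overconfidence`; return the realized success rate of stage 2's actions."""
    hits = acted = 0
    for _ in range(n):
        true_p = random.random()                      # unknown ground-truth probability
        reported = min(1.0, true_p + overconfidence)  # stage 1's (possibly inflated) score
        if downstream_decision(reported):
            acted += 1
            hits += random.random() < true_p          # did the action actually succeed?
    return hits / acted

random.seed(1)
print(f"calibrated upstream:    {realized_precision(0.0):.2f}")  # about 0.90, meets the 0.8 bar
print(f"overconfident upstream: {realized_precision(0.3):.2f}")  # drops toward 0.75, below the bar
```

Neither stage is "wrong" in isolation; the trustworthiness failure emerges from their interaction, which is precisely the system-level view of responsibility argued for above.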
We further examine the implications of this perspective within the context of emerging regulatory frameworks, most notably recent European initiatives on AI governance, which increasingly demand demonstrable reliability, transparency, and compliance across the full AI system lifecycle. We contend that aligning these regulatory requirements with statistically grounded methodologies is essential for achieving both technical robustness and sustainable compliance.
Taken together, this work provides a foundation for moving beyond model-centric conceptions of responsibility toward ecosystem-level accountability. It underscores the need for AI systems that are statistically reliable, uncertainty-aware, and transparently designed, and contributes to the broader discourse on trustworthy AI by affirming the indispensable role of rigorous statistical practice in enabling responsible innovation within complex, real-world industrial environments.