Statistical modelling and machine learning for healthcare and personalized medicine
Conference
Proposal Description
Personalized medicine improves healthcare by tailoring prevention, diagnosis, and treatment to an individual’s pathophysiological characteristics, leading to more effective and safer care. However, its practical implementation is challenged by the complexity, integration, and scale of clinical data, challenges that data science, statistics, and machine learning must address to extract actionable insights from large, heterogeneous datasets. Statistical emulators, such as Gaussian processes and Bayesian surrogate models, enable efficient approximation of complex mechanistic or simulation-based models, allowing rapid exploration of patient-specific scenarios. Uncertainty quantification and correction for model discrepancy provide a principled framework for assessing prediction reliability, propagating measurement and approximation errors, and supporting risk-aware personalized clinical decisions. Combining these approaches with modern deep learning promises to enhance accuracy and robustness, paving the way to adaptive, data-driven personalized decision-making in the clinic. Our session will feature four talks representative of the current state of the art.
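To make the emulation idea concrete, here is a minimal sketch of a Gaussian-process emulator in the spirit described above. The `simulator` function is a toy stand-in for an expensive mechanistic model; a real application would train on a designed set of simulator runs over the patient-specific parameter space.

```python
# Sketch: a Gaussian-process emulator replacing an expensive simulator.
# The "simulator" below is a toy function standing in for a mechanistic model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(theta):
    # Placeholder for an expensive patient-specific mechanistic model
    return np.sin(3 * theta) + 0.5 * theta

# A small design of simulator runs serves as training data for the emulator
theta_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = simulator(theta_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                              normalize_y=True)
gp.fit(theta_train, y_train)

# Fast predictions, with uncertainty, at new parameter settings
theta_new = np.array([[0.7], [1.3]])
mean, std = gp.predict(theta_new, return_std=True)
```

Once trained, the emulator answers "what if" queries in milliseconds rather than at full simulation cost, and the predictive standard deviation flags parameter regions where more simulator runs are needed.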
Talk 1 will discuss engineering methodologies for personalizing treatment of cardiac arrhythmias, based on a combination of signal and image processing, machine learning, and computational modelling, applied to clinical imaging data and electrical recordings. The ultimate aim is to translate these tools into the clinic to predict optimal patient-specific treatment strategies, working closely with clinical, basic-science, and industrial collaborators.
Talk 2 focuses on pulmonary hypertension, a serious condition that can lead to heart failure if undetected. Current diagnosis relies on right-heart catheterization, an invasive procedure with risks such as internal bleeding. Advances in fluid dynamics and medical imaging now allow blood pressure to be estimated non-invasively from blood-flow data. However, these models require patient-specific parameters, like vessel stiffness, which cannot be measured directly. We will discuss emulation strategies for robust parameter inference and model calibration, and will highlight the need to account for model discrepancy to ensure reliable uncertainty quantification and risk assessment.
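The role of model discrepancy in calibration can be illustrated with a deliberately simplified sketch. Everything here is a toy stand-in: the linear "forward model", the sinusoidal discrepancy, and the noise level are illustrative only, and the inflated error scale is a crude proxy for a full Gaussian-process discrepancy term.

```python
# Sketch: why ignoring model discrepancy gives overconfident parameter
# estimates. All quantities are illustrative toys, not haemodynamic models.
import numpy as np

rng = np.random.default_rng(0)

def forward_model(stiffness, x):
    # Hypothetical simplified relation between a stiffness parameter
    # and a measured quantity (e.g. pressure)
    return stiffness * x

# Synthetic data: true process = model + systematic discrepancy + noise
x_obs = np.linspace(0.1, 1.0, 20)
true_stiffness = 2.0
discrepancy = 0.3 * np.sin(4 * x_obs)              # structural model error
y_obs = (forward_model(true_stiffness, x_obs)
         + discrepancy + rng.normal(0.0, 0.05, x_obs.size))

grid = np.linspace(1.0, 3.0, 201)                  # candidate stiffness values

def posterior(sigma):
    """Grid posterior over stiffness under a Gaussian error model."""
    ll = np.array([-0.5 * np.sum((y_obs - forward_model(s, x_obs))**2)
                   / sigma**2 for s in grid])
    p = np.exp(ll - ll.max())                      # stabilize before exp
    return p / p.sum()

# Naive: sigma covers measurement noise only.
# Robust: sigma inflated to also absorb the unmodelled discrepancy.
post_naive, post_robust = posterior(0.05), posterior(0.25)

def post_std(p):
    m = np.sum(grid * p)
    return np.sqrt(np.sum(p * (grid - m)**2))
```

The robust posterior is wider than the naive one, honestly reflecting model inadequacy; the naive posterior is sharply peaked and can concentrate away from the true value.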
Talk 3 covers two cardiac diseases: myocardial infarction and cardiomyopathy. Patient survival could be improved if clinicians could determine patient-specific tissue properties, such as fibre stiffness and contractility, but these cannot be measured in vivo. Advances in soft-tissue mechanics now make it possible, in principle, to infer such parameters from cardiac MRI. However, current methods are computationally intensive, often requiring weeks on high-performance computing systems, which limits their clinical use. We will discuss how modern emulation techniques based on Gaussian processes and physics-informed graph neural networks can greatly reduce computational cost with minimal loss of accuracy, enabling an important step towards personalized healthcare and clinical decision support.
Talk 4 will present a data-processing workflow that uses deep learning and Gaussian processes to automatically extract, from magnetic resonance images, the patient-specific quantities needed for cardiac-model inference. This work focuses on image segmentation and automated estimation of key clinical biomarkers, including left ventricular volume and ejection fraction. The methods are evaluated on two patient cohorts, recovering from myocardial infarction or COVID-19, respectively. We will also discuss challenges in transfer learning and how features learned from current patient cohorts can improve inference in future ones.
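The biomarkers mentioned above follow directly from the segmentation step; the sketch below shows the arithmetic, with synthetic masks standing in for the output of a deep-learning segmenter, and voxel dimensions chosen purely for illustration.

```python
# Sketch: LV volume and ejection fraction from binary segmentation masks.
# The masks are synthetic; in the workflow described, they would be produced
# by a deep-learning segmenter applied to cardiac MRI.
import numpy as np

def lv_volume_ml(mask, voxel_volume_ml):
    """Blood-pool volume from a binary 3-D left-ventricle segmentation."""
    return float(mask.sum()) * voxel_volume_ml

def ejection_fraction(edv_ml, esv_ml):
    """EF (%) = (end-diastolic volume - end-systolic volume) / EDV."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Toy masks with an assumed voxel volume of 0.004 ml (e.g. 1 x 1 x 4 mm)
mask_ed = np.zeros((100, 100, 12), dtype=bool)   # end-diastole frame
mask_ed[10:60, 10:58, :] = True                  # 50*48*12 = 28800 voxels
mask_es = np.zeros_like(mask_ed)                 # end-systole frame
mask_es[20:50, 20:52, :] = True                  # 30*32*12 = 11520 voxels

edv = lv_volume_ml(mask_ed, 0.004)               # ~115.2 ml
esv = lv_volume_ml(mask_es, 0.004)               # ~46.1 ml
ef = ejection_fraction(edv, esv)                 # ~60 %
```

In practice the interesting uncertainty lies in the masks themselves, which is where the Gaussian-process layer of the workflow enters: segmentation uncertainty can be propagated through exactly this arithmetic into the biomarker estimates.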