Advances in Bayesian Modeling
Conference
Proposal Description
This invited session presents recent methodological and applied advances in Bayesian statistics. The session will consist of talks on: novel methods for eliciting priors from expert knowledge; adversarial issues in machine learning (ML) and Bayesian strategies to enhance the security of ML models; Bayesian prediction based on predictive quantile maps and its use in ML; and recent developments in Bayesian modeling of multivariate time series of counts and sequential Monte Carlo methods.
Here are the summaries of the four talks in the session:
(1) Bayesian Adversarial Machine Learning: Adversarial machine learning has emerged to enhance the security of machine learning algorithms. However, this discipline has mainly focused on classification problems and on classical approaches based on maximum likelihood estimation (possibly with a regularizer). This talk covers the ingredients needed to robustify Bayesian predictive models against adversarial attacks: procedures to forecast likely attacks, protect models during both training and operation, detect attacks and changes in attack patterns, and, when needed, trigger retraining, as well as how these components integrate into a pipeline.
(2) Precise and Imprecise Knowledge in Bayesian Robustness: Accurate elicitation of an expert’s subjective beliefs is known to be challenging. Undertaking a prior robustness analysis using an appropriate class of prior distributions is therefore recommended. Existing classes of prior distributions are mostly designed to capture misspecification of a prior distribution. Yet it may be that an expert is unable to precisely define their beliefs. The talk proposes a new class of prior distributions that allows experts to specify their prior beliefs even when those beliefs are only imprecisely defined.
(3) Conformal vs Bayesian Prediction: Conformal prediction (CP) methods have been developed to provide distribution-free predictive uncertainty quantification for modern machine learning. Prediction is a fundamental task, and there are a number of approaches, including conformal prediction, fiducial prediction, marginal likelihood and full Bayes inference. This talk shows that, by directly modeling the predictive quantile map as a deep learner, there are a number of theoretical and practical advantages to working with quantiles rather than densities.
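To make the idea of a learned predictive quantile map concrete, a minimal sketch follows. It assumes a small feed-forward network that takes covariates together with a quantile level and is trained with the pinball (check) loss; the class name, architecture and training setup are illustrative only and are not the speaker's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a small network mapping (covariates x, quantile level tau)
# to a predictive quantile, trained with the pinball (check) loss.
class QuantileMap(nn.Module):
    def __init__(self, x_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, tau):
        # tau is a column of quantile levels in (0, 1), concatenated to x
        return self.net(torch.cat([x, tau], dim=1))

def pinball_loss(y, q, tau):
    # Check-function loss: tau * (y - q)_+ + (1 - tau) * (q - y)_+
    diff = y - q
    return torch.mean(torch.maximum(tau * diff, (tau - 1.0) * diff))

# Usage sketch on synthetic data (for illustration only)
x = torch.randn(256, 3)
y = x.sum(dim=1, keepdim=True) + 0.5 * torch.randn(256, 1)
model = QuantileMap(x_dim=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    tau = torch.rand(256, 1)  # random quantile levels each step
    loss = pinball_loss(y, model(x, tau), tau)
    opt.zero_grad(); loss.backward(); opt.step()
```

Sampling tau at random during training is what makes the network approximate the whole quantile map rather than a single fixed quantile.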
(4) Latent Factor Multivariate INAR Models for Counts: The talk will introduce a new class of multivariate integer-valued autoregressive (INAR) models based on the notion of a common random environment. Dependence among the components of the multivariate time series is induced via a common random environment that follows a Markovian evolution. The proposed framework provides a dynamic multivariate generalization of univariate INAR processes. Markov chain Monte Carlo methods and a particle learning algorithm are developed for Bayesian inference.
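As a rough illustration of how a common random environment can couple several INAR-type count series, the sketch below simulates two INAR(1) processes (binomial thinning plus Poisson innovations) whose innovation rates are scaled by a shared two-state Markov environment. The function name, the two-state environment and all parameter values are assumptions for illustration, not the model proposed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_multivariate_inar(T=200, alphas=(0.4, 0.6), base_rates=(2.0, 3.0),
                               env_scales=(0.5, 1.5), stay_prob=0.9):
    """Illustrative sketch: two INAR(1) count series whose Poisson innovation
    rates are scaled by a shared two-state Markov 'random environment'."""
    k = len(alphas)
    x = np.zeros((T, k), dtype=int)
    z = 0  # latent environment state
    for t in range(1, T):
        # Markovian evolution of the common environment
        if rng.random() > stay_prob:
            z = 1 - z
        for j in range(k):
            survivors = rng.binomial(x[t - 1, j], alphas[j])      # binomial thinning
            innovations = rng.poisson(base_rates[j] * env_scales[z])
            x[t, j] = survivors + innovations
    return x

counts = simulate_multivariate_inar()
print(counts[:5])
```

Because every component's innovations are modulated by the same latent state, the simulated series move together when the environment switches, which is the kind of cross-sectional dependence the proposed framework is designed to capture.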