65th ISI World Statistics Congress

Zero-order parallel sampling

Format: IPS Abstract - WSC 2025

Abstract

Finding effective ways to exploit parallel computing to speed up MCMC convergence is an important problem in Bayesian computation and related disciplines. Here we consider the zero-order (also known as derivative-free) version of the problem, where we assume that (a) the gradient of the target distribution is unavailable (for theoretical, practical, or computational reasons) and (b) we can evaluate the (expensive) target distribution in parallel at K different locations and use these evaluations to speed up MCMC convergence. We make two main contributions. First, we show that any method falling within a fairly general "multiple proposal framework" can speed up convergence by at most log(K) factors in high dimensions. The fundamental limitation of such a framework, which includes multiple-try MCMC as well as many other previously proposed methods, is that it restricts possible moves to the set of K evaluation points. We state our results in terms of upper bounds on the spectral gap of the resulting schemes. Second, we discuss how stochastic gradient estimators can be used to make better use of parallel computing and achieve polynomial speedups in K. These methods bear similarities to, but also notable differences from, classical zero-order optimization methods.
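
To make the "multiple proposal framework" concrete, the following is a minimal Python sketch of one multiple-try Metropolis step with a symmetric Gaussian random-walk proposal (the names mtm_step, sigma, etc. are illustrative, not taken from the talk). The K target evaluations are mutually independent and could be dispatched to K parallel workers, yet the chain can only ever move to one of the K proposed points, which is the restriction behind the log(K) bound.

    import numpy as np
    from scipy.special import logsumexp

    def mtm_step(x, log_target, K=8, sigma=0.5, rng=None):
        """One multiple-try Metropolis update from state x (symmetric proposal)."""
        rng = np.random.default_rng() if rng is None else rng
        d = x.shape[0]
        # Draw K candidates; the K log-density evaluations below are
        # independent and could be carried out in parallel.
        ys = x + sigma * rng.standard_normal((K, d))
        log_w = np.array([log_target(y) for y in ys])
        # Pick one candidate with probability proportional to its weight.
        probs = np.exp(log_w - logsumexp(log_w))
        j = rng.choice(K, p=probs)
        y = ys[j]
        # Reference set: K - 1 fresh draws around y, plus the current state.
        refs = y + sigma * rng.standard_normal((K - 1, d))
        log_w_ref = np.append([log_target(r) for r in refs], log_target(x))
        # Accept y with the generalised Metropolis-Hastings probability.
        log_alpha = logsumexp(log_w) - logsumexp(log_w_ref)
        return y if np.log(rng.uniform()) < log_alpha else x

    # Usage: run the chain on a standard Gaussian target.
    log_pi = lambda z: -0.5 * np.dot(z, z)
    x = np.zeros(2)
    for _ in range(1000):
        x = mtm_step(x, log_pi, K=8)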
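The second contribution can be illustrated with a generic stochastic zero-order gradient estimator. The sketch below uses random-direction central differences in the spirit of classical zero-order optimization (E[u u^T] = I for Gaussian u, so the estimator is approximately unbiased for small mu); it is one plausible instance of the idea, not necessarily the estimator analysed in the talk, and the Langevin-type step is shown without a Metropolis correction for simplicity. All names (zo_gradient, mu, h) are illustrative.

    import numpy as np

    def zo_gradient(x, log_target, K=8, mu=1e-3, rng=None):
        """Stochastic zero-order estimate of grad log pi(x) from 2K evaluations."""
        rng = np.random.default_rng() if rng is None else rng
        us = rng.standard_normal((K, x.shape[0]))   # random search directions
        # Central differences along each direction; the 2K target
        # evaluations are independent and can run on K parallel workers.
        fd = np.array([(log_target(x + mu * u) - log_target(x - mu * u)) / (2 * mu)
                       for u in us])
        return fd @ us / K   # average of (directional derivative) * direction

    def zo_langevin_step(x, log_target, h=0.01, K=8, rng=None):
        """Illustrative unadjusted Langevin move driven by the zero-order gradient."""
        rng = np.random.default_rng() if rng is None else rng
        g = zo_gradient(x, log_target, K=K, rng=rng)
        return x + 0.5 * h * g + np.sqrt(h) * rng.standard_normal(x.shape[0])

Unlike the multiple-proposal scheme above, the move here is not restricted to the K evaluation points: the evaluations are combined into a gradient estimate that can drive the chain anywhere in the state space, which is what makes polynomial speedups in K possible.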