Bernoulli Society New Researcher Award
Conference
Category: Bernoulli Society for Mathematical Statistics and Probability (BS)
Abstract
Rianne de Heide
An e-value is a nonnegative random variable whose expected value is at most one under the null hypothesis. E-values are a fundamental concept in hypothesis testing, yet they were not studied under a unified umbrella until about five years ago. Today, this is a fast-growing area of research. E-values are the fundamental building blocks of anytime-valid inference, but they also yield some remarkable results beyond sequential testing. We present a necessary and sufficient principle for multiple testing procedures that control an expected loss, such as the false discovery rate (FDR). This principle asserts that every such multiple testing method is a special case of a general closed testing procedure based on e-values.
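To make the definition concrete, here is a minimal simulation (an editorial illustration, not from the talk): the likelihood ratio of a simple Gaussian alternative against a simple Gaussian null is a textbook e-value, and its empirical mean under the null is close to one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Under H0: X ~ N(0, 1); the alternative (assumed here) is N(mu, 1).
# The likelihood ratio p1(X)/p0(X) is a classic e-value:
# its expectation under H0 equals exactly one.
mu = 1.0
n = 100_000
x = rng.standard_normal(n)            # draws from the null
e = np.exp(mu * x - mu**2 / 2)        # N(mu,1) vs N(0,1) likelihood ratio

print(e.mean())  # close to 1, as the e-value definition requires
```

By Markov's inequality, rejecting when such an e-value exceeds 1/α gives a level-α test; this is the basic mechanism that closed testing with e-values builds on.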
*Based on joint work with Neil Xu, Aldo Solari, Lasse Fisher, Aaditya Ramdas, Jelle Goeman, Peter Grünwald and Wouter Koolen.
Snigdha Panigrahi
In this talk, I will introduce a new cross-validation method based on an equicorrelated Gaussian randomization scheme. The method is well-suited for problems where sample splitting is infeasible, such as when the data violate the assumption of being independent and identically distributed. Our method constructs train-test data pairs using externally generated Gaussian randomization variables. The key innovation in our proposal is to employ a carefully designed correlation structure among the randomization variables, which we refer to as antithetic Gaussian randomization. We show that this correlation is crucial in ensuring that the variance of our cross-validated estimator remains bounded while allowing the bias to vanish with just a few train-test repetitions. This desirable bias-variance property of our cross-validated estimator extends to a wide range of loss functions, including those commonly used for fitting generalized linear models.
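The following toy sketch illustrates one way to generate equicorrelated, negatively correlated ("antithetic") Gaussian randomization variables and use them to form train-test copies of the same data. The centering trick, the correlation value −1/(K−1), and the noise scale are illustrative assumptions for exposition; they are not claimed to be the exact construction from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch: K randomization variables per observation with
# pairwise correlation -1/(K-1). Centering K i.i.d. Gaussians induces
# exactly this equicorrelated structure, and the variables sum to zero.
n, K = 50, 5
y = rng.standard_normal(n)                     # observed data (toy stand-in)

z = rng.standard_normal((n, K))
omega = z - z.mean(axis=1, keepdims=True)      # corr(omega_j, omega_k) = -1/(K-1)

scale = 1.0                                    # noise scale (assumed)
train = y[:, None] + scale * omega             # K randomized train copies
test = y[:, None] - scale * omega              # matching test copies

# Antithetic structure: the K randomizations cancel exactly.
print(np.abs(omega.sum(axis=1)).max())         # ~ 0 (floating-point zero)
```

The negative correlation is the point: errors induced by the added noise cancel across the K repetitions, which is the informal intuition behind the bounded-variance, vanishing-bias property described in the abstract.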
Lihua Lei
The synthetic control method is widely used for estimating the average treatment effect with one treated and a few control units. Standard asymptotic inference is often unreliable due to small N and practical simplifications. Permutation inference, such as the placebo test, offers finite-sample Type-I error guarantees under uniform treatment assignment without simplifying the method. However, it suffers from low resolution, as the null distribution is based on only N estimates, limiting inference at levels like α = 0.05. We introduce a leave-two-out procedure that maintains the same Type-I error guarantee while allowing valid inference even when α < 1/N. It often achieves lower unconditional Type-I error and higher power when effect sizes are moderate. The method generalizes to non-uniform assignment and supports sensitivity analysis. It represents a novel form of randomization inference, distinct from traditional permutation or rank-based approaches, particularly effective for small samples. This approach improves the interpretability and credibility of inference in applications such as policy evaluation, economics, and public health, where treated units are rare and data are limited. Extensive simulation studies and empirical examples demonstrate the practical advantages of our method. It offers a promising direction for robust causal inference when classical methods break down due to limited data or complex assignment mechanisms. The framework is computationally feasible, easy to implement, and adaptable to a variety of experimental and observational designs.
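For context, here is a minimal sketch of the classical placebo test that the abstract contrasts against (not the new leave-two-out procedure): each control unit is treated as a placebo, the effect is re-estimated, and the treated estimate is ranked among all N estimates. The equal-weight "synthetic control" below is a toy stand-in for a real synthetic control fit.

```python
import numpy as np

rng = np.random.default_rng(2)

def effect_estimate(unit, outcomes, post):
    """Toy placeholder for a synthetic-control effect estimate:
    compare a unit's post-period outcomes to the equal-weight
    average of all other units."""
    donors = np.delete(outcomes, unit, axis=0)
    synthetic = donors.mean(axis=0)
    return (outcomes[unit, post:] - synthetic[post:]).mean()

N, T, post = 10, 20, 15                  # N units, T periods, treatment at t=15
outcomes = rng.standard_normal((N, T))   # simulated panel under the null
treated = 0

est = np.array([effect_estimate(u, outcomes, post) for u in range(N)])
# Placebo p-value: rank of the treated estimate among all N estimates.
p = np.mean(np.abs(est) >= np.abs(est[treated]))
print(p)  # granularity is 1/N, so p can never fall below 1/N = 0.1
```

This makes the resolution problem visible: with N = 10 donors the placebo test can never reject at α = 0.05, which is exactly the limitation the leave-two-out procedure is designed to overcome.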