Regional Statistics Conference 2026

Advances in Deep Learning for Statistical Inference and Generative Modeling

Organiser

Xingqiu Zhao

Participants

  • Prof. Xingqiu Zhao (Chair)

  • Yuanyuan Lin (Presenter/Speaker): Constrained Neural Networks and Their Applications to Nonparametric Regression and Adversarial Learning

  • Gen Li (Presenter/Speaker): Convergence Theory for Diffusion-Based Generative Models

  • Wen Su (Presenter/Speaker): Deep Nonparametric Inference for Conditional Hazard Functions

  • Xiangbin Hu (Presenter/Speaker): Deep Conditional Generative Learning for Optimal Individualized Treatment Rules

Proposal Description

    This invited session will feature four cutting-edge talks that explore innovative applications of deep learning in statistical inference and generative modeling. Each presentation highlights significant advancements, offering new methodologies and theoretical insights to enhance our understanding of complex data structures and improve model performance.

    Talk 1: Constrained Neural Networks and Their Applications to Nonparametric Regression and Adversarial Learning

    The first talk investigates the approximation capacity of ReLU-activated neural networks constrained in both size and Lipschitz constant. The speaker will discuss Lipschitz-size-constrained neural networks designed to approximate Hölder smooth functions on Euclidean spaces and on low-dimensional Riemannian manifolds. A key application is multivariate nonparametric regression, where these networks can circumvent the curse of dimensionality, a critical advance for high-dimensional data analysis. The talk will showcase a novel network structure that yields optimal convergence rates for the resulting estimators, and will explore these networks in adversarial learning, where they attain faster convergence rates than existing methods.
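    To fix ideas: since ReLU is 1-Lipschitz, capping each layer's spectral norm bounds the whole network's Lipschitz constant by the product of the per-layer bounds. The minimal NumPy sketch below (not the speaker's construction; class and function names are hypothetical) illustrates this principle:

```python
import numpy as np

rng = np.random.default_rng(0)

def cap_spectral_norm(W, bound=1.0):
    # Rescale W so its largest singular value is at most `bound`.
    s = np.linalg.norm(W, 2)
    return W if s <= bound else W * (bound / s)

class LipschitzReLUNet:
    # ReLU is 1-Lipschitz, so a feed-forward net whose layers each have
    # spectral norm <= 1 is itself 1-Lipschitz in the Euclidean norm.
    def __init__(self, dims):
        self.weights = [cap_spectral_norm(rng.standard_normal((m, n)))
                        for n, m in zip(dims[:-1], dims[1:])]

    def __call__(self, x):
        for W in self.weights[:-1]:
            x = np.maximum(W @ x, 0.0)   # ReLU activation
        return self.weights[-1] @ x

net = LipschitzReLUNet([4, 16, 16, 1])
a, b = rng.standard_normal(4), rng.standard_normal(4)
gap = abs(net(a)[0] - net(b)[0])
assert gap <= np.linalg.norm(a - b) + 1e-9   # 1-Lipschitz check
```

    In practice such constraints are typically enforced during training, for example by spectral normalization of each weight matrix.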

    Talk 2: Convergence Theory for Diffusion-Based Generative Models

    The second presentation focuses on diffusion models, which have become essential tools in generative modeling. Despite their remarkable empirical success, their theoretical foundations, particularly regarding convergence, remain underexplored. This talk will introduce a sharp convergence theory for both continuous and discrete diffusion models, offering a first rigorous account of both their efficiency and their limitations. The speaker will also discuss how harnessing intrinsic data structure can strengthen convergence guarantees, providing deeper insight into diffusion-based generation and its potential applications.
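    For intuition about the objects such a convergence theory studies, the sketch below runs a DDPM-style reverse (ancestral) sampler on a toy Gaussian target whose score is available in closed form; the discretized, score-driven reverse chain approximately recovers the data law. The noise schedule and sample sizes are illustrative choices, not those analyzed in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
betas = np.linspace(1e-4, 0.02, T)   # forward noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

mu0, var0 = 2.0, 0.25  # toy data distribution: N(2, 0.25)

def score(x, t):
    # Exact score of the forward marginal at step t: Gaussian data stays
    # Gaussian under the noising process, so the score is closed-form.
    m = np.sqrt(alpha_bars[t]) * mu0
    v = alpha_bars[t] * var0 + (1.0 - alpha_bars[t])
    return (m - x) / v

# DDPM-style ancestral sampling: start from pure noise and run the
# score-driven reverse updates back to step 0.
x = rng.standard_normal(20_000)
for t in range(T - 1, -1, -1):
    z = rng.standard_normal(x.shape) if t > 0 else 0.0
    x = (x + (1.0 - alphas[t]) * score(x, t)) / np.sqrt(alphas[t]) \
        + np.sqrt(betas[t]) * z

# The samples should approximately recover the data law: mean near 2,
# variance near 0.25, up to discretization and Monte Carlo error.
```

    Convergence theory of the kind described in the talk quantifies exactly how this discretization and score error propagate into the sampled distribution.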

    Talk 3: Deep Nonparametric Inference for Conditional Hazard Functions

    In the third talk, the speaker will present a novel approach for nonparametric statistical inference related to conditional hazard functions in survival analysis. This method uses deep neural networks to offer a flexible framework for estimating conditional hazard functions with right-censored data. The discussion will cover nonasymptotic error bounds and functional asymptotic normality for the proposed estimators. New goodness-of-fit and treatment comparison tests will also be introduced, including a tailored test for nonparametric Cox models, with empirical results demonstrating superior performance compared to traditional techniques.
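    As background (a deliberately simplified sketch, not the talk's deep-network estimator), the right-censored likelihood that such methods build on can be illustrated in the constant-hazard case, where the maximizer has a closed form:

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam_true = 50_000, 0.5

# Right-censored data: we observe min(T, C) and delta = 1{T <= C}.
event_times = rng.exponential(1.0 / lam_true, n)
censor_times = rng.exponential(1.0, n)
obs = np.minimum(event_times, censor_times)
delta = (event_times <= censor_times).astype(float)

# For a constant hazard lam, the censored-data log-likelihood is
# sum_i [delta_i * log(lam) - lam * obs_i]; its maximizer is closed-form:
lam_hat = delta.sum() / obs.sum()
# lam_hat recovers the true hazard (0.5) despite censoring
```

    The talk's deep neural network estimator replaces the constant hazard with a flexible covariate-dependent function optimized under the same kind of censored-data likelihood.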

    Talk 4: Deep Conditional Generative Learning for Optimal Individualized Treatment Rules

    The final talk will introduce CG-Learning, a conditional generative learning framework designed to optimize individualized treatment rules in multi-arm settings. Utilizing a Wasserstein generative adversarial network, this approach estimates decision rules that minimize specified risk measures without relying on restrictive structural assumptions. The speaker will detail theoretical properties, including nonasymptotic bounds on regret and mis-assignment probabilities, and demonstrate how these bounds can be refined in low Minkowski dimension covariate spaces. Simulations and real-world applications, such as data from the AIDS Clinical Trials Group, will highlight CG-Learning's superiority over existing benchmarks.
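    To make the regret notion concrete (a toy plug-in rule with known conditional means, not CG-Learning itself), an individualized rule assigns each subject the arm with the larger conditional mean outcome, and regret is the value gap to the optimal rule:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, 5_000)   # a scalar covariate

# Hypothetical two-arm setting: arm 1 is better when x > 0, arm 0 otherwise.
def mean_outcome(arm, x):
    return np.where(arm == 1, x, -x)

# Plug-in individualized rule: assign each subject the arm with the larger
# conditional mean outcome (known here for illustration; CG-Learning would
# instead learn the conditional outcome distribution generatively).
rule = (mean_outcome(1, x) > mean_outcome(0, x)).astype(int)

optimal_value = np.maximum(mean_outcome(0, x), mean_outcome(1, x)).mean()
rule_value = mean_outcome(rule, x).mean()
regret = optimal_value - rule_value   # zero here, since the means are known
```

    The nonasymptotic bounds described in the talk control this regret when the conditional outcome distributions must be estimated from data rather than known.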

    Conclusion: This session aims to illuminate the frontiers of deep learning in statistical inference and generative modeling. Attendees will gain insights into how these advancements address complex statistical challenges and pave the way for further research across diverse fields.