Penalized maximum likelihood estimation with nonparametric Gaussian scale mixture errors
Conference
65th ISI World Statistics Congress
Format: SIPS Abstract - WSC 2025
Keywords: mixtures, penalized maximum likelihood, variable selection
Session: IPS 1098 - Statistical inference and estimation in high-dimensional data
Tuesday 7 October 9:20 a.m. - 10:30 a.m. (Europe/Amsterdam)
Abstract
Penalized least squares and maximum likelihood methods have been widely used for simultaneous parameter estimation and variable selection. However, outlying observations can significantly degrade both estimation accuracy and selection performance. Although several robust variable selection methods have been proposed, they often suffer from substantial efficiency loss, mainly because they rely on additional tuning parameters or modify the original objective function to gain robustness. To address these issues, we propose modeling the regression error with a nonparametric Gaussian scale mixture distribution. This flexible framework allows the error distribution to adapt to the underlying data structure, providing inherent robustness without sacrificing efficiency. The resulting estimator enjoys desirable theoretical properties, including sparsity and the oracle property. For estimation, we develop a hybrid algorithm that combines an expectation-maximization (EM) approach for the parametric components with a gradient-based method for the nonparametric part. Extensive numerical experiments, including simulation studies and real data applications, demonstrate the robustness and effectiveness of the proposed method.
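
The abstract describes the estimator only at a high level. Below is a minimal, self-contained sketch of the kind of hybrid EM/gradient scheme it refers to, under assumptions that are not taken from the paper: the mixing distribution is approximated by weights w on a fixed grid of scales sigmas, the sparsity penalty is a plain lasso handled by a proximal-gradient (soft-threshold) step rather than the authors' penalty, the nonparametric update is an exponentiated-gradient step on the probability simplex, and the simulated data, grid, and tuning values (lam, eta) are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: sparse linear model with heavy-tailed, outlier-prone errors (illustrative).
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 1.0] + [0.0] * (p - 5))
y = X @ beta_true + rng.standard_t(df=2, size=n)

# Nonparametric Gaussian scale mixture error: the mixing distribution is
# approximated by weights on a fixed grid of candidate scales (an assumption of this sketch).
sigmas = np.geomspace(0.5, 10.0, 20)
w = np.full(sigmas.size, 1.0 / sigmas.size)

lam = 25.0   # total L1 penalty strength (illustrative; stands in for the paper's penalty)
eta = 0.05   # step size for the mixing-weight update (illustrative)
beta = np.zeros(p)

def normal_pdf(r, s):
    return np.exp(-0.5 * (r / s) ** 2) / (np.sqrt(2.0 * np.pi) * s)

for _ in range(300):
    resid = y - X @ beta
    dens = normal_pdf(resid[:, None], sigmas[None, :])         # n x K component densities

    # E-step: posterior probability of each candidate scale for each observation.
    post = w * dens + 1e-300                                   # tiny floor guards underflow
    post /= post.sum(axis=1, keepdims=True)

    # Penalized M-step for beta: one proximal-gradient (soft-threshold) step on the
    # precision-weighted least-squares loss; a lasso stand-in for the actual penalty.
    prec = post @ (1.0 / sigmas ** 2)                          # per-observation precision
    XtWX = X.T @ (X * prec[:, None])
    step = 1.0 / np.linalg.eigvalsh(XtWX).max()
    beta = beta + step * (X.T @ (prec * resid))
    beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)

    # Nonparametric part: exponentiated-gradient (mirror ascent) step on the mixing
    # weights, one simple instance of a gradient-based update on the simplex.
    grad = (dens / (dens @ w + 1e-300)[:, None]).mean(axis=0)  # d(avg log-lik)/dw
    w *= np.exp(eta * grad)
    w /= w.sum()

print("estimated beta:", np.round(beta, 2))
```

One design point the sketch makes concrete: the E-step produces per-observation precision weights that shrink toward zero when a residual is best explained by a large error scale, so outliers are automatically downweighted in the beta update; this is the built-in robustness the abstract refers to.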
