101A Crowley Hall
Bayesian Regularization and Computation for Graphical Models
To address overfitting, a central issue in statistics and machine learning, many successful techniques are formulated under the mathematical framework known as regularization, which penalizes or constrains the underlying model parameters. In this talk, we introduce a general framework for effective regularization from a Bayesian perspective. In addition to its many known advantages, the Bayesian approach is especially appealing for regularization since it provides a natural and principled way of forming penalties through prior distributions and of addressing model uncertainty through posterior distributions.
We illustrate the application of our general framework by learning high-dimensional graphical models. The MAP (maximum a posteriori) estimator from our method gives rise to a new non-convex penalty approximating the L0 penalty. Although there has been a surge of research interest in non-convex penalties, non-convex optimization brings both computational and theoretical challenges. We provide a set of theoretical results quantifying the statistical accuracy and efficiency of our MAP estimator, including optimal error rates, estimation consistency in terms of various matrix norms, and selection consistency for the sparse structure under mild conditions. For fast and efficient computation, an EM algorithm is proposed to compute the MAP estimator of the precision matrix and (approximate) posterior probabilities on the edges of the underlying sparse structure. Through extensive simulation studies and a real application to call center data, we demonstrate the strong performance of our method compared with existing alternatives.
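To make the idea of a smooth non-convex penalty approximating L0 concrete, the sketch below implements one well-known example of this class, the seamless-L0 (SELO) penalty of Dicker, Huang, and Lin (2013). This is an illustrative stand-in, not the specific penalty derived in the talk; the tuning parameters `lam` and `tau` are assumptions for the example.

```python
import math

def selo_penalty(theta, lam=1.0, tau=0.01):
    """Seamless-L0 (SELO) penalty: a continuous non-convex penalty that
    behaves like lam * 1(theta != 0) as tau -> 0.

    Note: an illustrative example of an L0-approximating penalty, not
    necessarily the penalty induced by the talk's Bayesian prior.
    """
    a = abs(theta)
    return (lam / math.log(2.0)) * math.log(a / (a + tau) + 1.0)

# The penalty is exactly 0 at theta = 0 and saturates near lam once
# |theta| is much larger than tau, mimicking an L0 penalty while
# remaining continuous (hence amenable to optimization).
values = [round(selo_penalty(t), 4) for t in (0.0, 0.01, 0.1, 10.0)]
```

Because the penalty flattens out for large coefficients, it avoids the bias that the L1 penalty imposes on strong signals, which is a key motivation for non-convex penalties in sparse precision-matrix estimation.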
(The talk is based on joint work with Lingrui Gan and Naveen N. Narisetty from the Department of Statistics, University of Illinois at Urbana-Champaign.)