High Dimensional Linear Regression: Mean Squared Error and Phase Transitions

Wednesday, April 5, 2017 - 4:30pm

Event Calendar Category

LIDS & Stats Tea

Speaker Name

Ilias Zadik

Affiliation

ORC

Building and Room Number

LIDS Lounge

In this talk we will focus on the sparse high-dimensional regression model Y = X\beta^* + W, where X is an n \times p matrix with i.i.d. standard normal entries, W is an n \times 1 vector with i.i.d. N(0, \sigma^2) entries, and \beta^* is a p \times 1 binary vector with k entries equal to one (k-sparse), under the assumptions \log k = o(\log p) and \sigma^2 = o(k). The goal is to recover the vector \beta^* from the observed X and Y with high probability (w.h.p.).
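
As a concrete illustration of this observation model, here is a minimal Python sketch that generates (X, Y) as described above. The sizes n, p, k and the noise level sigma are arbitrary choices made for illustration and are not taken from the talk.

```python
# Minimal sketch of the model Y = X beta* + W with a k-sparse binary beta*.
import numpy as np

rng = np.random.default_rng(0)

n, p, k = 100, 500, 5    # illustrative sizes, not from the talk
sigma = 0.5              # noise standard deviation

X = rng.standard_normal((n, p))      # i.i.d. standard normal design
beta_star = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
beta_star[support] = 1.0             # k entries equal to one

W = sigma * rng.standard_normal(n)   # i.i.d. N(0, sigma^2) noise
Y = X @ beta_star + W                # the observed response
```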

In the literature, remarkable papers by Candes, Tao, Romberg, Wainwright and others show that LASSO (l_1-constrained quadratic programming), Orthogonal Matching Pursuit and Basis Pursuit Denoising can recover \beta^* w.h.p. as long as n > 2k \log p, while no mechanism can recover \beta^* w.h.p. with n < n^* = 2k \log p / \log(1 + 2k/\sigma^2) samples.
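
To make the size of this gap tangible, the following sketch evaluates both sample-size thresholds for one illustrative choice of p, k and sigma (the numbers are ours, not from the cited papers):

```python
# Comparing the algorithmic threshold 2 k log p with the
# information-theoretic threshold n* = 2 k log p / log(1 + 2k / sigma^2).
import math

p, k, sigma = 10_000, 10, 1.0   # illustrative parameters

n_alg = 2 * k * math.log(p)
n_star = n_alg / math.log(1 + 2 * k / sigma**2)

print(f"2 k log p (LASSO/OMP/BPDN guarantee): {n_alg:.1f}")   # ~184.2
print(f"n* (information-theoretic limit):     {n_star:.1f}")  # ~60.5
```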

In this talk, we will present new results for the maximum likelihood estimator of this problem, which close the above gap from an information-theoretic perspective. We show that the maximum likelihood estimator exhibits a sharp phase transition around n^*: if n > (1+\epsilon)n^* for some \epsilon > 0, then the maximum likelihood estimator w.h.p. recovers almost all of the support of \beta^*, but if n < (1-\epsilon)n^*, it w.h.p. fails to recover any positive fraction of the true support.
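
Under Gaussian noise, the maximum likelihood estimator over k-sparse binary vectors is simply the k-subset of columns of X minimizing the residual squared error. The brute-force sketch below makes this concrete; exhaustive search is feasible only for tiny p and k, and the sizes are again illustrative choices of ours.

```python
# Brute-force maximum likelihood: search all k-subsets S and pick the one
# minimizing ||Y - X 1_S||^2 (only feasible for tiny p and k).
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, p, k, sigma = 40, 12, 3, 0.5   # illustrative sizes

X = rng.standard_normal((n, p))
true_support = rng.choice(p, size=k, replace=False)
beta_star = np.zeros(p)
beta_star[true_support] = 1.0
Y = X @ beta_star + sigma * rng.standard_normal(n)

def loss(S):
    # Squared error of the binary vector supported on S: X 1_S sums
    # the columns of X indexed by S.
    return np.sum((Y - X[:, list(S)].sum(axis=1)) ** 2)

mle_support = min(itertools.combinations(range(p), k), key=loss)
overlap = len(set(mle_support) & set(true_support))
print(f"MLE recovered {overlap} of the {k} true support coordinates")
```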

The above analysis also reveals a certain Overlap Gap Property (OGP) on the space of \beta's, which we conjecture to be the source of the algorithmic hardness of the problem when n = o(k \log p). Time permitting, we will discuss this property and its algorithmic interpretation.
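
As a rough, purely numerical way to probe such a property at toy sizes (our own illustration, not a method from the talk), one can enumerate all k-subsets, keep the near-optimal ones under an arbitrary cutoff, and histogram their overlaps with the true support; under an overlap gap, intermediate overlap values should be absent.

```python
# Toy probe of the overlap structure of near-optimal supports.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, p, k, sigma = 15, 12, 3, 0.5   # deliberately few samples, illustrative

X = rng.standard_normal((n, p))
true_support = set(rng.choice(p, size=k, replace=False))
beta_star = np.zeros(p)
beta_star[list(true_support)] = 1.0
Y = X @ beta_star + sigma * rng.standard_normal(n)

losses = {S: np.sum((Y - X[:, list(S)].sum(axis=1)) ** 2)
          for S in itertools.combinations(range(p), k)}
cutoff = 2 * min(losses.values())   # arbitrary near-optimality cutoff
overlaps = [len(set(S) & true_support)
            for S, l in losses.items() if l <= cutoff]
print("counts of overlaps 0..k:", np.bincount(overlaps, minlength=k + 1))
```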


Joint work with David Gamarnik.