Wednesday, March 14, 2018 - 4:30pm to Thursday, March 15, 2018 - 4:55pm
Event Calendar Category
LIDS & Stats Tea
In this talk, we focus on the high-dimensional linear regression problem, where the goal is to efficiently recover an unknown vector β* from n noisy linear observations Y = Xβ* + W ∈ ℝ^n, for known X ∈ ℝ^{n×p} and unknown W ∈ ℝ^n. Unlike most of the literature on this model, we make no sparsity assumption on β*. Instead, we adopt a regularization based on assuming that the underlying vector β* has rational entries with the same denominator Q ∈ ℤ_{>0}. We call this the Q-rationality assumption.
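To make the setup concrete, here is a minimal sketch in Python of generating data from the model Y = Xβ* + W under the Q-rationality assumption; the values of n, p, Q, and the noise level are illustrative choices, not taken from the talk.

import numpy as np

# Illustrative model: Y = X beta* + W with Q-rational beta* (no sparsity assumed).
# n, p, Q, and sigma are arbitrary choices for this sketch.
rng = np.random.default_rng(0)
n, p, Q = 5, 10, 7                  # observations, ambient dimension, common denominator
sigma = 1e-6                        # small noise level

z = rng.integers(-10, 11, size=p)   # integer numerators
beta_star = z / Q                   # entries are rationals z_i / Q
X = rng.standard_normal((n, p))     # iid design entries
W = sigma * rng.standard_normal(n)  # noise
Y = X @ beta_star + W               # the n noisy linear observations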
We propose a new polynomial-time algorithm for this task, based on the seminal Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithm. We establish that under the Q-rationality assumption, our algorithm exactly recovers the vector β* for a large class of distributions of the iid entries of X, even with non-zero noise W. We prove that it succeeds under small noise, even when the learner has access to only one observation (n = 1).
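As a rough illustration of the lattice-reduction idea (not the authors' exact construction), the sketch below builds an integer-relation-style lattice from a single noiseless observation and reduces it with LLL via the fpylll package; the basis layout, truncation level N, and all parameter values are assumptions made for this example.

import numpy as np
from fpylll import IntegerMatrix, LLL   # assumes fpylll is installed

# Sketch: recover a Q-rational beta* from one noiseless observation (n = 1)
# by lattice basis reduction.  The basis layout, truncation level N, and all
# parameter values are assumptions for this example only.
rng = np.random.default_rng(1)
p, Q, N = 6, 3, 45                      # dimension, denominator, truncation bits

z = rng.integers(-3, 4, size=p)         # numerators of beta* = z / Q
beta_star = z / Q
x = rng.standard_normal(p)              # the single row of X
y = float(x @ beta_star)                # noiseless observation for this sketch

scale = 2 ** N
x_int = [int(round(float(xi) * scale)) for xi in x]
y_int = int(round(y * scale))

# Basis rows (a rank p+1 lattice in p+2 coordinates):
#   row i (i < p): [ x_int[i], e_i, 0 ]
#   last row     : [ -y_int,   0,   1 ]
# The integer combination with coefficients (z_1, ..., z_p, Q) is unusually
# short, since scale * (z . x - Q * y) is ~ 0, and LLL is expected to expose it.
B = []
for i in range(p):
    row = [x_int[i]] + [0] * (p + 1)
    row[1 + i] = 1
    B.append(row)
B.append([-y_int] + [0] * p + [1])

A = IntegerMatrix.from_matrix(B)
LLL.reduction(A)

# Read off a candidate: a reduced row with nonzero last coordinate encodes
# (k*z_1, ..., k*z_p, k*Q); dividing by the last coordinate recovers beta*.
for i in range(A.nrows):
    q = A[i, p + 1]
    if q != 0:
        candidate = np.array([A[i, 1 + j] for j in range(p)]) / q
        print("recovered:", candidate)
        print("true     :", beta_star)
        break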
Joint work with David Gamarnik.