Implicit Bias of SGD for Diagonal Linear Networks: a Provable Benefit of Stochasticity

Wednesday, March 30, 2022 - 2:00pm to 3:00pm

Event Calendar Category

Other LIDS Events

Speaker Name

Nicolas Flammarion

Affiliation

EPFL

Zoom meeting id

914 2958 7998

Join Zoom meeting

https://mit.zoom.us/j/91429587998

Abstract

Understanding the implicit bias of training algorithms is of crucial importance for explaining the success of overparametrized neural networks. In this talk, we study the dynamics of stochastic gradient descent over diagonal linear networks through its continuous-time version, namely stochastic gradient flow. We explicitly characterize the solution chosen by the stochastic flow and prove that it always enjoys better generalization properties than the solution chosen by gradient flow. Quite surprisingly, we show that the convergence speed of the training loss controls the magnitude of the biasing effect: the slower the convergence, the better the bias. Our findings highlight the fact that structured noise can induce better generalization, and they help explain the superior performance of stochastic gradient descent over gradient descent observed in practice.
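
For readers unfamiliar with the setting, the following is a minimal illustrative sketch (not part of the talk materials), assuming the standard diagonal-network parameterization w = u ⊙ v with predictions x ↦ ⟨u ⊙ v, x⟩; the data generation, initialization scale, and hyperparameters below are purely hypothetical choices for demonstration.

```python
# Illustrative sketch: single-sample SGD on a diagonal linear network,
# assuming the parameterization w = u * v, so the prediction is x @ (u * v).
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 100                          # overparametrized regime: d > n
X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[:5] = 1.0                        # sparse ground-truth regressor
y = X @ w_star

u = np.full(d, 0.1)                     # small initialization scale
v = np.full(d, 0.1)
lr = 1e-3

for step in range(200_000):
    i = rng.integers(n)                 # sample one example: stochastic gradient
    r = X[i] @ (u * v) - y[i]           # residual on the sampled example
    gu = r * X[i] * v                   # gradient of 0.5 * r**2 w.r.t. u
    gv = r * X[i] * u                   # gradient of 0.5 * r**2 w.r.t. v
    u -= lr * gu
    v -= lr * gv

w = u * v
print("train loss:", 0.5 * np.mean((X @ w - y) ** 2))
print("distance to sparse solution:", np.linalg.norm(w - w_star))
```

The sketch only fixes the model and the single-sample updates; the talk's results concern the continuous-time limit of such dynamics (stochastic gradient flow) and how its noise biases the selected interpolating solution relative to plain gradient flow.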

Biography

Nicolas Flammarion is a tenure-track assistant professor in computer science at EPFL. Prior to that, he was a postdoctoral fellow at UC Berkeley, hosted by Michael I. Jordan. He received his Ph.D. in 2017 from Ecole Normale Superieure in Paris, where he was advised by Alexandre d’Aspremont and Francis Bach. His research focuses primarily on learning problems at the interface of machine learning, statistics, and optimization.