Seminar: Open Problems and Recent Advances in Neural Network Theory: Representation, Generalization, and Optimization

Monday, November 20, 2017 - 3:00pm to Tuesday, November 21, 2017 - 2:55pm

Event Calendar Category

Uncategorized

Speaker Name

Matus Telgarsky

Affiliation

University of Illinois at Urbana–Champaign

Building and Room number

34-401A (Grier Room A)

Abstract

This talk will survey open problems and recent advances in the following topics within neural network theory. On the representation side, Prof. Matus Telgarsky will explain how neural networks benefit from depth, and moreover can use it to efficiently approximate polynomials and rational functions. He will then discuss how even these many-layered networks can generalize; in particular, the gap between training and testing errors scales with real-valued quantities such as weight matrix spectral norms, which in turn can be empirically verified to scale with the complexity of the learning task. Lastly, he will discuss optimization -- unfortunately, this final bit will be a litany of open problems.
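As a concrete illustration of the benefits of depth mentioned above, the sketch below (an assumption of this announcement, not material from the talk) implements the classic triangle-function construction: the tent map can be written exactly with two ReLU units, and composing it k times yields a depth-O(k) network whose output oscillates 2^(k-1) times on [0, 1], something a shallow network would need exponentially many units to match.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    # Tent map: 2x on [0, 1/2], 2(1 - x) on [1/2, 1],
    # expressed exactly as a two-unit ReLU layer.
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_tent(x, k):
    # k-fold composition: a ReLU network of depth O(k).
    for _ in range(k):
        x = tent(x)
    return x

k = 5
# Grid spacing 1/99999 avoids landing exactly on the dyadic crossing points.
xs = np.linspace(0.0, 1.0, 100000)
ys = deep_tent(xs, k)

# Each of the 2^(k-1) peaks crosses the level 1/2 twice: 2^k crossings total.
crossings = np.count_nonzero(np.diff(np.sign(ys - 0.5)))
print(crossings)
```

Representing the same 2^(k-1)-tooth sawtooth with a single hidden layer requires width exponential in k, which is the flavor of depth separation the talk surveys.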

This talk will feature joint work with Peter Bartlett and Dylan Foster. Prof. Telgarsky eagerly welcomes any and all audience participation.

Biography

Matus Telgarsky is an assistant professor at the University of Illinois at Urbana–Champaign, working in machine learning theory, with a recent focus on neural network theory. He received his PhD from the University of California, San Diego in 2013, under the supervision of Sanjoy Dasgupta. During Spring 2017 he was a research fellow at the Simons Institute in Berkeley.