Complexity of Distributed and Federated Stochastic Optimization

Wednesday, February 10, 2021 - 11:00am to Thursday, February 11, 2021 - 11:55am

Event Calendar Category

Other LIDS Events

Speaker Name

Nati Srebro

Affiliation

Toyota Technological Institute at Chicago

Abstract

Stochastic convex optimization in the standard, sequential setting has been well understood for a while now: we know how to formalize the problem, we know the optimal (min-max) complexity, and we have methods, namely accelerated stochastic gradient descent and its variants, that attain this complexity and are therefore “optimal”. However, understanding the complexity of stochastic optimization, or equivalently learning, in a distributed setting remains open. In this talk, Prof. Srebro will discuss different distributed and federated optimization settings, where we stand in understanding their complexity, and what methods might be optimal. In particular, he will present new results tightly characterizing the min-max complexity, and identifying an optimal method, for the most basic “intermittent communication” model.
Based on joint work with Blake Woodworth and many others.
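To make the “intermittent communication” model concrete, here is a minimal illustrative sketch: M workers each take K local stochastic gradient steps between R rounds of communication, with simple parameter averaging at each round (Local SGD / FedAvg-style). The quadratic objective, the noisy gradient oracle, and all constants below are illustrative assumptions, not the talk's actual construction or analysis.

```python
import numpy as np

# Sketch of the "intermittent communication" model: M workers in
# parallel, R communication rounds, K local stochastic gradient steps
# per round, averaging the iterates at each round (Local SGD /
# FedAvg-style). All problem details here are illustrative assumptions.

rng = np.random.default_rng(0)

d = 10                                      # dimension (assumed)
A = rng.standard_normal((d, d)) / np.sqrt(d)
x_star = rng.standard_normal(d)

def stochastic_grad(x, noise=0.1):
    """Gradient of f(x) = 0.5 * ||A(x - x_star)||^2 plus Gaussian noise,
    standing in for a stochastic first-order oracle."""
    return A.T @ (A @ (x - x_star)) + noise * rng.standard_normal(d)

def local_sgd(M=4, R=50, K=10, lr=0.1):
    """Run M workers for R rounds of K local SGD steps each,
    averaging the iterates at every communication round."""
    x = np.zeros(d)                         # shared starting point
    for _ in range(R):
        local_iterates = []
        for _ in range(M):                  # each worker runs independently...
            xm = x.copy()
            for _ in range(K):              # ...for K local steps
                xm -= lr * stochastic_grad(xm)
            local_iterates.append(xm)
        x = np.mean(local_iterates, axis=0)  # one round of communication
    return x

x_hat = local_sgd()
print("suboptimality:", 0.5 * np.linalg.norm(A @ (x_hat - x_star)) ** 2)
```

Averaging only every K steps is what distinguishes this regime from fully synchronized minibatch SGD (the K = 1 case with larger batches); the question the abstract raises is which strategy within this model is min-max optimal given M, K, and R.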

Biography

Nati (Nathan) Srebro is a professor at the Toyota Technological Institute at Chicago, with cross-appointments at the University of Chicago Dept. of Computer Science and Committee on Computational and Applied Mathematics. He obtained his Ph.D. at the Massachusetts Institute of Technology (MIT) in 2004 and was previously a post-doctoral fellow at the University of Toronto, a Visiting Scientist at IBM, and an Associate Professor at the Technion. Prof. Srebro’s research encompasses methodological, statistical, and computational aspects of Machine Learning, as well as related problems in Optimization. Some of Prof. Srebro’s significant contributions include work on learning “wider” Markov networks; the introduction of the nuclear norm for machine learning and matrix reconstruction; and work on fast optimization techniques for machine learning, the optimality of stochastic methods, and the relationship between learning and optimization more broadly. His current interests include understanding deep learning through a detailed study of optimization; distributed and federated learning; algorithmic fairness; and practical adaptive data analysis. Prof. Srebro is currently on sabbatical, visiting EPFL in Lausanne.