Wednesday, November 4, 2020 - 4:00pm to 4:30pm
Event Calendar Category
LIDS & Stats Tea
Zoom meeting id
965 7634 1426
Join Zoom meeting
In Federated Learning, we aim to train models across multiple computing units (users), where users can only communicate with a common central server and do not exchange their data samples. This mechanism exploits the computational power of all users and lets each user obtain a richer model, as models are trained over a larger set of data points. However, this scheme produces only a single common output for all users and therefore does not adapt the model to each user. This is an important missing feature, especially given the heterogeneity of the underlying data distributions across users. In this paper, we study a personalized variant of federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent on their own data. This approach keeps all the benefits of the federated learning architecture and, by design, leads to a more personalized model for each user. We show this problem can be studied within the Model-Agnostic Meta-Learning (MAML) framework. Inspired by this connection, we study a personalized variant of the well-known Federated Averaging algorithm and evaluate its performance in terms of gradient norm for non-convex loss functions.
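The idea in the abstract can be illustrated with a toy sketch: the server seeks a shared initialization such that one gradient step on each user's own loss yields a good personalized model, which is the MAML objective averaged across users. The sketch below is not the paper's algorithm; the quadratic per-user losses, step sizes, and round count are all illustrative assumptions chosen so the meta-gradient has a simple closed form.

```python
import numpy as np

# Hypothetical toy setup: user i has quadratic loss
# f_i(w) = 0.5 * ||w - b_i||^2, so grad f_i(w) = w - b_i and the
# Hessian is the identity. The targets b_i stand in for
# heterogeneous user data.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]

def grad(w, b):
    return w - b

def meta_gradient(w, b, alpha):
    """MAML-style gradient: (I - alpha * Hess f_i(w)) applied to
    grad f_i evaluated at the adapted point w - alpha * grad f_i(w).
    For this quadratic loss the Hessian is the identity."""
    w_adapted = w - alpha * grad(w, b)
    return (1.0 - alpha) * grad(w_adapted, b)

def personalized_fedavg(targets, alpha=0.1, beta=0.5, rounds=200):
    """Sketch of one personalized-FedAvg loop: each user takes a
    meta-gradient step from the shared model, and the server
    averages the resulting local models."""
    w = np.zeros(2)
    for _ in range(rounds):
        local = [w - beta * meta_gradient(w, b, alpha) for b in targets]
        w = np.mean(local, axis=0)  # server averaging step
    return w

w_shared = personalized_fedavg(targets)

# Each user then personalizes with one gradient step on its own loss.
personalized = [w_shared - 0.1 * grad(w_shared, b) for b in targets]
```

In this toy problem the shared model converges to the average of the user optima, and each one-step personalized model is strictly closer to its own user's optimum than the shared model is, which is exactly the adaptation benefit the abstract describes.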
Based on joint work with Aryan Mokhtari (UT Austin) and Asu Ozdaglar (MIT), appearing in NeurIPS 2020.
Alireza Fallah is a Ph.D. candidate in the EECS Department at MIT, advised by Professor Asu Ozdaglar. His research interests are in optimization, machine learning theory, game theory, and statistical inference. Before coming to MIT, he earned a dual B.Sc. degree in Electrical Engineering and Mathematics from Sharif University of Technology.