Tilted Losses in Machine Learning: Theory and Applications

Wednesday, November 10, 2021 - 2:00pm to 3:00pm

Speaker Name

Virginia Smith

Affiliation

Carnegie Mellon University

Join Zoom meeting

https://mit.zoom.us/s/95607814981

Abstract

Exponential tilting is a technique commonly used to create parametric distribution shifts. Despite its prevalence in related fields, tilting has not seen widespread use in machine learning. In this talk, I discuss a simple extension to empirical risk minimization (ERM)---tilted empirical risk minimization (TERM)---which uses tilting to flexibly tune the impact of individual losses. I make connections between TERM and related approaches, such as Value-at-Risk, Conditional Value-at-Risk, and distributionally robust optimization (DRO), and present batch and stochastic first-order optimization methods for solving TERM at scale. Finally, I show that this baseline can be used for a multitude of applications in machine learning, such as enforcing fairness between subgroups, mitigating the effect of outliers, and handling class imbalance---delivering state-of-the-art performance relative to more complex, bespoke solutions for these problems.
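To make the idea concrete, below is a minimal NumPy sketch of a tilted objective of the form (1/t) log( mean( exp(t · ℓ_i) ) ), which illustrates how the tilt parameter t interpolates between the average loss (t → 0), the max loss (t → +∞), and the min loss (t → −∞). The function name and interface are illustrative, not the speaker's implementation.

```python
import numpy as np

def tilted_loss(losses, t):
    """Tilted aggregate of per-example losses: (1/t) * log(mean(exp(t * losses))).

    t > 0 emphasizes the largest losses (promoting fairness/uniformity),
    t < 0 de-emphasizes them (suppressing outliers),
    t = 0 recovers the standard ERM average in the limit.
    """
    losses = np.asarray(losses, dtype=float)
    if t == 0:
        return losses.mean()
    # Log-sum-exp trick for numerical stability with large |t|.
    m = (t * losses).max()
    return (m + np.log(np.mean(np.exp(t * losses - m)))) / t
```

For example, on losses [1, 2, 3], the tilted value moves from near 1 (large negative t) through 2 (t = 0) to near 3 (large positive t), showing how a single scalar smoothly reweights individual losses.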

Biography

Virginia Smith is an assistant professor in the Machine Learning Department at Carnegie Mellon University. Her research spans machine learning, optimization, and distributed systems. Her current work addresses challenges related to optimization, privacy, fairness, and robustness in distributed settings in order to make federated learning safe, efficient, and reliable. Her work has been recognized by several awards, including an MIT TR35 Innovator Award, a Facebook Faculty Award, and Google Research Awards. Prior to CMU, Virginia was a postdoc at Stanford University and received a Ph.D. in Computer Science from UC Berkeley.