Friday, October 15, 2021 - 11:00am to 12:00pm
University of Pennsylvania
Reinforcement learning (RL), frequently modeled as sequential learning and decision making in the face of uncertainty, has attracted growing interest in recent years due to its remarkable success in practice. In contemporary RL applications, it is increasingly common to encounter environments with prohibitively large state and action spaces, which impose stringent requirements on the sample efficiency of the RL algorithms in use. Despite this empirical success, however, the theoretical underpinnings of many popular RL algorithms remain inadequate even in the tabular setting.
In this talk, we present two vignettes on the sample efficiency of RL algorithms. The first vignette demonstrates that a perturbed model-based RL approach is minimax optimal under a generative model, without suffering from a sample-size barrier present in all prior work. The second vignette pins down the sample complexity of Q-learning on Markovian samples, substantially improving upon prior results by a factor at least as large as the dimension of the state-action space. These results cover two distinct RL paradigms and may shed light on the efficacy of these algorithms in more complicated scenarios.
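For readers unfamiliar with the algorithm in the second vignette: asynchronous Q-learning updates a table of action values along a single Markovian trajectory via Q(s,a) ← Q(s,a) + η[r(s,a) + γ·max_a' Q(s',a') − Q(s,a)]. Below is a minimal sketch on a toy two-state, two-action MDP; the MDP, step size, and exploration rate are illustrative assumptions, not taken from the talk:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (for illustration only).
# P[s, a] is the next-state distribution; R[s, a] is the reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])
gamma = 0.9          # discount factor
eta = 0.1            # constant step size
rng = np.random.default_rng(0)

Q = np.zeros((2, 2))
s = 0
for _ in range(20_000):
    # epsilon-greedy behavior policy along a single Markovian trajectory
    a = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(Q[s]))
    s_next = int(rng.choice(2, p=P[s, a]))
    # asynchronous Q-learning update at the visited (s, a) pair
    Q[s, a] += eta * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(np.round(Q, 2))
```

Because updates arrive only at state-action pairs visited by the trajectory, how fast Q approaches the optimal Q-star depends on how well the trajectory covers the state-action space, which is exactly where the sample-complexity question becomes delicate.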
Yuting Wei is currently an assistant professor in the Statistics and Data Science Department at the Wharton School, University of Pennsylvania. Prior to that, she spent two years at Carnegie Mellon University as an assistant professor of statistics and one year at Stanford University as a Stein Fellow. She received her Ph.D. in statistics from the University of California, Berkeley, where she worked with Martin Wainwright and Aditya Guntuboyina, and was awarded the 2018 Erich L. Lehmann Citation from the Berkeley statistics department for her Ph.D. dissertation in theoretical statistics. Her research interests include high-dimensional and non-parametric statistics, statistical machine learning, and reinforcement learning.