Control Meets Learning: Scalable and Reliable AI for Autonomy

Thursday, February 11, 2021 - 1:00pm to Friday, February 12, 2021 - 1:55pm

Event Calendar Category

EECS

Speaker Name

Guannan Qu

Affiliation

California Institute of Technology

Abstract

Artificial Intelligence (AI), particularly Reinforcement Learning (RL), has achieved great success in domains such as game playing. However, RL suffers from scalability and reliability issues that make it challenging for RL to have an impact on safety-critical and large-scale systems such as power grids, transportation, and smart cities. In this talk, we show that integrating RL with model structure and model-based control can address the scalability and reliability issues of RL. In the first part of the talk, we consider a networked multi-agent setting and propose a Scalable Actor Critic framework that provably addresses the scalability issue of multi-agent RL. The key is to exploit a form of local interaction structure widely present in networked systems. In the second part, we consider a nonlinear control setting where the dynamics admit an approximate linear model, which is true for many systems such as the power grid. We show that exploiting the approximate linear model and model-based control can greatly improve the reliability of an important class of RL algorithms.
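For readers unfamiliar with the setup, below is a minimal sketch (in Python) of the kind of local interaction structure the abstract refers to; the line-graph network, the toy dynamics, and all names here are illustrative assumptions, not the talk's actual model. The point is that each agent's next state depends only on its neighbors' states and its own action, which is the property a method like Scalable Actor Critic can exploit to avoid reasoning about the full joint state.

    import random

    # Illustrative networked system: agents on a line graph (an assumption,
    # not the talk's model). neighbors[i] is agent i's local neighborhood.
    n_agents = 5
    neighbors = {i: {j for j in (i - 1, i, i + 1) if 0 <= j < n_agents}
                 for i in range(n_agents)}

    def step_agent(i, states, actions):
        # Local interaction: agent i's next state is a function of its
        # neighborhood only, never of the full global state.
        return (sum(states[j] for j in neighbors[i]) + actions[i]) % 3

    states = [0] * n_agents
    actions = [random.choice([0, 1]) for _ in range(n_agents)]
    states = [step_agent(i, states, actions) for i in range(n_agents)]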


Biography

Guannan Qu received his B.S. degree in Electrical Engineering from Tsinghua University in Beijing, China, in 2014, and his Ph.D. in Applied Mathematics from Harvard University in Cambridge, MA, in 2019. Since 2019 he has been a CMI and Resnick postdoctoral scholar in the Department of Computing and Mathematical Sciences at the California Institute of Technology. He is the recipient of the Caltech Simoudis Discovery Award, the PIMCO Fellowship, the Amazon AI4Science Fellowship, and the IEEE SmartGridComm Best Student Paper Award. His research interests lie in control, optimization, and machine/reinforcement learning, with applications to power systems, multi-agent systems, smart cities, etc.