Building Blocks of Generalizable Autonomy

Monday, March 15, 2021 - 11:30am to 12:30pm

Event Calendar Category

CSAIL

Speaker Name

Animesh Garg

Affiliation

University of Toronto

Zoom meeting id

816897

Join Zoom meeting

https://mit.zoom.us/j/97693474025

Abstract

Animesh Garg's approach to Generalizable Autonomy posits that interactive learning across families of tasks is essential for discovering efficient representation and inference mechanisms. Arguably, a cognitive concept or a dexterous skill should be reusable across task instances to avoid constant relearning. It is insufficient to learn to “open a door” and then have to relearn the skill for every new door, let alone for windows and cupboards. Thus, he focuses on three key questions: (1) representational biases for embodied reasoning, (2) causal inference in abstract sequential domains, and (3) interactive policy learning under uncertainty. In this talk, Animesh will first, through examples, lay bare the need for structured biases in modern RL algorithms in the context of robotics, spanning states, actions, learning mechanisms, and network architectures. Second, he will discuss the discovery of latent causal structure in dynamics for planning. Finally, he will demonstrate how large-scale data generation, combined with insights from structure learning, can enable sample-efficient algorithms for practical systems. The talk will focus mainly on manipulation, but his work has also been applied to surgical robotics and legged locomotion.


Biography

Animesh Garg is a CIFAR Chair Assistant Professor of Computer Science at the University of Toronto and a Faculty Member at the Vector Institute, where he leads the Toronto People, AI, and Robotics (PAIR) research group. Animesh is affiliated with Mechanical and Industrial Engineering (courtesy) and the UofT Robotics Institute. He also spends time as a Senior Researcher at NVIDIA Research working on ML for Robotics. Prior to this, he earned a Ph.D. from UC Berkeley and was a postdoc at the Stanford AI Lab. His research focuses on machine learning algorithms for perception and control in robotics. His work aims to build Generalizable Autonomy in robotics, which involves a confluence of representations and algorithms for reinforcement learning, control, and perception. His work has received multiple Best Paper Awards (ICRA, IROS, Hamlyn Symposium, NeurIPS workshop, ICML workshop) and has been covered in the press (New York Times, Nature, BBC).