Near-Optimal Learning of Extensive-Form Games with Imperfect Information

Wednesday, February 9, 2022 - 4:00pm to 4:30pm

Event Calendar Category

LIDS & Stats Tea

Speaker Name

Tiancheng Yu



Building and Room Number

LIDS Lounge


This work resolves a longstanding open question: designing near-optimal algorithms for learning imperfect-information extensive-form games from bandit feedback. We present the first line of algorithms that require only $\widetilde{\mathcal{O}}((XA+YB)/\epsilon^2)$ episodes of play to find an $\epsilon$-approximate Nash equilibrium in two-player zero-sum games, where $X,Y$ are the numbers of information sets and $A,B$ are the numbers of actions for the two players. This improves upon the best known sample complexity of $\widetilde{\mathcal{O}}((X^2A+Y^2B)/\epsilon^2)$ by a factor of $\widetilde{\mathcal{O}}(\max\{X, Y\})$, and matches the information-theoretic lower bound up to logarithmic factors. We achieve this sample complexity via two new algorithms: Balanced Online Mirror Descent and Balanced Counterfactual Regret Minimization. Both rely on novel approaches for integrating balanced exploration policies into their classical counterparts. We also extend our results to learning Coarse Correlated Equilibria in multi-player general-sum games.
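To get a feel for the improvement, the snippet below plugs hypothetical game sizes (the values of $X, Y, A, B, \epsilon$ are illustrative assumptions, not taken from the talk, and logarithmic factors are omitted) into the two bounds and compares them:

```python
# Illustrative comparison of the two sample-complexity bounds from the
# abstract. The game sizes below are hypothetical, chosen only to show
# the scale of the max(X, Y) improvement factor.
X, Y = 1000, 800      # numbers of information sets for the two players (assumed)
A, B = 10, 10         # numbers of actions for the two players (assumed)
eps = 0.1             # target accuracy of the Nash equilibrium

old_bound = (X**2 * A + Y**2 * B) / eps**2   # prior best: O((X^2 A + Y^2 B)/eps^2)
new_bound = (X * A + Y * B) / eps**2         # this work:  O((X A + Y B)/eps^2)

# The ratio of the bounds is on the order of max(X, Y) = 1000 here.
print(f"old ~ {old_bound:.2e} episodes, new ~ {new_bound:.2e} episodes, "
      f"ratio ~ {old_bound / new_bound:.0f}")
```

For these numbers the new bound needs roughly three orders of magnitude fewer episodes, consistent with the stated $\widetilde{\mathcal{O}}(\max\{X, Y\})$ improvement.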


Tiancheng Yu is a fourth-year Ph.D. student in EECS and LIDS, advised by Prof. Suvrit Sra. He is interested in sequential decision-making problems broadly, with a focus on the theory of reinforcement learning and learning in games.