Fall 2022

October 26, 2022

A non-asymptotic analysis of oversmoothing in Graph Neural Networks

Speaker: Xinyi Wu (IDSS & LIDS)

A central challenge in building more powerful Graph Neural Networks (GNNs) is the oversmoothing phenomenon, in which increasing the network depth leads to homogeneous node representations and thus worse classification performance. While...
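As a minimal numpy sketch (not the speaker's analysis), one can watch oversmoothing arise in a linear GCN: repeatedly applying the symmetric-normalized adjacency operator shrinks the spread of node features, so deep propagation makes all rows nearly identical. The ring graph and feature dimension below are illustrative.

```python
import numpy as np

# Toy undirected graph: a 5-node ring, with self-loops added as in GCN.
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
A += np.eye(5)

# Symmetric normalization D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
P = A / np.sqrt(np.outer(d, d))

X = np.random.default_rng(0).normal(size=(5, 3))  # random initial node features
for depth in (1, 2, 4, 8, 16, 32):
    H = np.linalg.matrix_power(P, depth) @ X
    # The spread of node representations shrinks with depth: oversmoothing.
    spread = np.linalg.norm(H - H.mean(axis=0), axis=1).max()
    print(f"depth {depth:2d}: max distance to mean feature = {spread:.4f}")
```

All non-leading eigenvalues of the normalized operator have magnitude strictly below one here, so its powers collapse onto the constant direction, which is exactly the homogenization the abstract describes.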

November 2, 2022

On counterfactual inference with unobserved confounding via exponential family

Speaker: Abhin Swapnil Shah (LIDS)

We are interested in unit-level counterfactual inference with unobserved confounders, owing to the increasing importance of personalized decision-making in many domains: consider a recommender system interacting with a user...

November 9, 2022

A Simple and Optimal Policy Design with Safety against Heavy-tailed Risk for Stochastic Bandits

Speaker: Feng Zhu (IDSS & LIDS)

We design new policies that ensure both worst-case optimality for expected regret and light-tailed risk for the regret distribution in the stochastic multi-armed bandit problem. It was recently shown that information-theoretically optimized...
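To see why the regret distribution matters beyond its expectation, here is a toy simulation (not the speakers' policy): it runs a standard UCB1 learner on a two-armed Bernoulli instance across many independent trials and reports both the mean regret and an upper quantile of its distribution. The arm means, horizon, and trial count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ucb1_regret(means, horizon):
    """Run UCB1 on Bernoulli arms; return the total pseudo-regret."""
    k = len(means)
    counts, sums, pulls = np.zeros(k), np.zeros(k), []
    for t in range(horizon):
        if t < k:
            a = t  # pull each arm once to initialize
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t + 1) / counts)
            a = int(np.argmax(ucb))
        counts[a] += 1
        sums[a] += rng.random() < means[a]  # Bernoulli reward
        pulls.append(a)
    gaps = max(means) - np.asarray(means)
    return float(gaps[pulls].sum())

# Examine the regret *distribution* across runs, not just its mean.
regrets = np.array([ucb1_regret([0.5, 0.45], 2000) for _ in range(200)])
print("mean regret:           ", regrets.mean())
print("95th-percentile regret:", np.quantile(regrets, 0.95))
```

A gap between the mean and the upper quantile is the kind of tail risk the talk's policies are designed to suppress.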

November 16, 2022

The Husky Programming Language for Efficient and Strongly Interpretable AI

Speaker: Xiyu Zhai (LIDS)

We introduce a new programming language called Husky (https://github.com/xiyuzhai-husky-lang/husky) for efficient and strongly interpretable AI that can be fundamentally different from deep learning and traditional models. It is a long-term...

November 30, 2022

Contextual Bandits and Optimistically Universal Learning

Speaker: Moïse Blanchard (ORC & LIDS)

We study the question of learnability for contextual bandits when the reward function class is unrestricted and provide consistent algorithms for large families of data-generating processes. Our analysis shows that achieving consistency...

December 7, 2022

Can Direct Latent Model Learning Solve Linear Quadratic Gaussian Control?

Speaker: Yi Tian (LIDS)

We study the task of learning state representations from potentially high-dimensional observations, with the goal of controlling an unknown partially observable system. We pursue a direct latent model learning approach, where a dynamic model...
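As a hedged illustration of the generic latent-model-learning recipe (not necessarily the talk's method, which targets control of a partially observable system), the numpy sketch below encodes high-dimensional observations of a hidden linear system into a low-dimensional latent state and fits linear latent dynamics by least squares; the system matrices and dimensions are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden 2-d linear system observed through a random 20-d linear map.
A = np.array([[0.9, 0.1], [-0.1, 0.9]])
C = rng.normal(size=(20, 2))
z, obs = rng.normal(size=2), []
for _ in range(500):
    obs.append(C @ z + 0.01 * rng.normal(size=20))
    z = A @ z + 0.01 * rng.normal(size=2)
obs = np.array(obs)

# Encoder: project centered observations onto their top-2 principal directions.
centered = obs - obs.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
latent = centered @ Vt[:2].T

# Fit linear latent dynamics z_{t+1} ~ A_hat z_t by least squares.
A_hat, *_ = np.linalg.lstsq(latent[:-1], latent[1:], rcond=None)
err = np.linalg.norm(latent[1:] - latent[:-1] @ A_hat) / np.linalg.norm(latent[1:])
print("relative one-step latent prediction error:", err)
```

The question the talk poses is whether such directly learned latent models suffice for LQG control, rather than only for prediction as in this sketch.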

December 14, 2022

Optimal Learning Rates for Regularized Least-Squares with a Fourier Capacity Condition

Speaker: Prem Murali Talwai (LIDS & ORC)

We derive minimax adaptive rates for a new, broad class of Tikhonov-regularized learning problems in Hilbert scales under general source conditions. Our analysis does not require the regression function to be contained in the hypothesis...
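A concrete instance of Tikhonov-regularized least squares is kernel ridge regression, where the penalty is the squared RKHS norm; the sketch below is illustrative (the kernel, bandwidth, regularization level, and data are not the talk's setting) and solves the regularized normal equations (K + nλI)α = y given by the representer theorem.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy samples of a smooth target function.
x = rng.uniform(-1, 1, size=100)
y = np.sin(3 * x) + 0.1 * rng.normal(size=100)

def gram(a, b, bandwidth=0.3):
    """Gaussian-kernel Gram matrix between point sets a and b."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * bandwidth**2))

# Tikhonov-regularized least squares over the RKHS:
#   f_lam = argmin_f (1/n) sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2,
# whose kernel coefficients solve (K + n * lam * I) alpha = y.
n, lam = len(x), 1e-3
alpha = np.linalg.solve(gram(x, x) + n * lam * np.eye(n), y)

x_test = np.linspace(-1, 1, 5)
print(gram(x_test, x) @ alpha)  # regularized estimate at test points
print(np.sin(3 * x_test))       # target values for comparison
```

The learning rates in the talk quantify how the error of such estimators decays with n under source and capacity conditions, uniformly over the regularization schedule.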