October 26, 2022
Speaker: Xinyi Wu (IDSS & LIDS)
A central challenge in building more powerful Graph Neural Networks (GNNs) is the oversmoothing phenomenon, where increasing the network depth leads to homogeneous node representations and thus worse classification performance. While...
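For readers new to the phenomenon, here is a toy NumPy sketch, not taken from the talk: repeatedly applying the symmetrically normalized adjacency operator, i.e. a simplified GCN layer with the weights and nonlinearities stripped out, drives node features toward a near-identical (degree-weighted) consensus, which is oversmoothing in miniature. The graph, features, and depths below are arbitrary.

import numpy as np

# Toy graph: two triangles joined by a single edge (6 nodes).
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

# Symmetrically normalized adjacency with self-loops, as in GCNs:
# A_hat = D^{-1/2} (A + I) D^{-1/2}.
A_tilde = A + np.eye(6)
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))

# Random node features; one simplified "layer" = one multiplication by A_hat.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
for depth in [1, 2, 4, 8, 16, 32]:
    H = np.linalg.matrix_power(A_hat, depth) @ X
    spread = np.linalg.norm(H - H.mean(axis=0), axis=1).mean()
    print(f"depth {depth:2d}: mean distance from consensus = {spread:.4f}")

The printed spread shrinks rapidly with depth: node representations become nearly indistinguishable, which is why deep stacks of such layers hurt node classification.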
November 2, 2022
Speaker: Abhin Swapnil Shah (LIDS)
Motivated by the increasing importance of personalized decision-making in many domains, we are interested in the problem of unit-level counterfactual inference with unobserved confounders: consider a recommender system interacting with a user...
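As generic background rather than the talk's method: unit-level counterfactuals are commonly computed by Pearl's abduction-action-prediction recipe in a structural causal model. The toy sketch below assumes the model is fully known and the confounding trait u is observed, which is precisely what unobserved confounding breaks; the function name and coefficients are illustrative.

# Abduction-action-prediction in a toy linear SCM: Y = a*T + b*U + eps,
# where U is a latent user trait. The SCM is assumed known here, which is
# exactly what unobserved confounding makes hard in practice.
a, b = 2.0, 1.5

def counterfactual_outcome(t_obs, y_obs, u, t_new):
    # Abduction: recover this unit's exogenous noise from its observation.
    eps = y_obs - (a * t_obs + b * u)
    # Action: set treatment to t_new; Prediction: recompute the outcome.
    return a * t_new + b * u + eps

# A unit that received treatment t=1, showed outcome y=5.0, with trait u=0.8:
y_cf = counterfactual_outcome(t_obs=1.0, y_obs=5.0, u=0.8, t_new=0.0)
print(f"Counterfactual outcome under t=0: {y_cf:.2f}")  # prints 3.00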
November 9, 2022
Speaker: Feng Zhu (IDSS & LIDS)
We design new policies that ensure both worst-case optimality for expected regret and light-tailed risk for the regret distribution in the stochastic multi-armed bandit problem. It has recently been shown that information-theoretically optimized...
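For context, and independent of the talk's new policies: expected regret is the mean of the random variable R_T, while tail risk concerns probabilities like P(R_T > x). The sketch below runs the classical UCB1 policy, whose realized regret can fluctuate well above its mean, and reports empirical quantiles; the arm means, horizon, and run count are arbitrary.

import numpy as np

def ucb1_regret(means, T, rng):
    # One run of UCB1 on Bernoulli arms; returns the realized pseudo-regret.
    k = len(means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    for t in range(T):
        if t < k:
            arm = t  # play each arm once to initialize
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t + 1) / counts)
            arm = int(np.argmax(ucb))
        sums[arm] += rng.random() < means[arm]
        counts[arm] += 1
    return float(counts @ (max(means) - np.array(means)))

rng = np.random.default_rng(0)
regrets = [ucb1_regret([0.5, 0.45], T=2000, rng=rng) for _ in range(200)]
print("mean regret:", np.mean(regrets))
print("95th / 99th percentile:", np.percentile(regrets, [95, 99]))

Comparing the upper quantiles to the mean makes concrete what "light-tailed risk for the regret distribution" asks for beyond worst-case expected regret.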
November 16, 2022
Speaker: Xiyu Zhai (LIDS)
We introduce a new programming language called Husky (https://github.com/xiyuzhai-husky-lang/husky) for efficient and strongly interpretable AI that can be fundamentally different from deep learning and traditional models. It's a long-term...
November 30, 2022
Speaker: Moïse Blanchard (ORC & LIDS)
We study the question of learnability for contextual bandits when the reward function class is unrestricted and provide consistent algorithms for large families of data-generating processes. Our analysis shows that achieving consistency...
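As a standard formalization, which may differ in detail from the talk's setting: a contextual bandit learner observing contexts x_t, choosing actions a_t, and facing mean reward function r is judged against the best measurable policy,

R_T = \sum_{t=1}^{T} \Bigl( r\bigl(x_t, \pi^\star(x_t)\bigr) - r(x_t, a_t) \Bigr),
\qquad \pi^\star(x) \in \operatorname{arg\,max}_{a} r(x, a),

and consistency requires \mathbb{E}[R_T]/T \to 0 for every data-generating process in the family under consideration; the talk asks when this is achievable with no restriction on the reward function class.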
December 7, 2022
Speaker: Yi Tian (LIDS)
We study the task of learning state representations from potentially high-dimensional observations, with the goal of controlling an unknown partially observable system. We pursue a direct latent model learning approach, where a dynamic model...
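To fix ideas (an illustrative sketch under assumed module names, not the paper's algorithm): in direct latent model learning, an encoder and a latent transition model are trained by predicting planning-relevant quantities such as rewards, rather than by reconstructing the high-dimensional observations.

import torch
import torch.nn as nn

class LatentModel(nn.Module):
    # phi: observation -> latent state; f: latent transition;
    # r_hat: reward predictor. All names and sizes here are illustrative.
    def __init__(self, obs_dim, act_dim, latent_dim=16):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, latent_dim))
        self.f = nn.Linear(latent_dim + act_dim, latent_dim)
        self.r_hat = nn.Linear(latent_dim + act_dim, 1)

    def loss(self, obs, act, rew, next_obs):
        z, z_next = self.phi(obs), self.phi(next_obs)
        za = torch.cat([z, act], dim=-1)
        # Supervise the latent with control-relevant signals only:
        # predicted reward and latent consistency, no pixel reconstruction.
        return (nn.functional.mse_loss(self.r_hat(za).squeeze(-1), rew)
                + nn.functional.mse_loss(self.f(za), z_next.detach()))

The design choice this illustrates is that the latent state only needs to support planning, so the training signal can bypass observation reconstruction entirely.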
December 14, 2022
Speaker: Prem Murali Talwai (LIDS & ORC)
We derive minimax adaptive rates for a new, broad class of Tikhonov-regularized learning problems in Hilbert scales under general source conditions. Our analysis does not require the regression function to be contained in the hypothesis...
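As a familiar special case, offered as background rather than the talk's new problem class: kernel ridge regression is the Tikhonov-regularized problem min_f (1/n) \sum_i (f(x_i) - y_i)^2 + \lambda \|f\|_H^2 over an RKHS H, solved in closed form below; the Hilbert-scale setting of the talk generalizes the choice of norm in the penalty. Kernel, bandwidth, and data here are arbitrary.

import numpy as np

def kernel_ridge(X, y, lam, gamma=1.0):
    # Tikhonov-regularized least squares in an RKHS (Gaussian kernel):
    # alpha = (K + n*lam*I)^{-1} y,  f(x) = sum_i alpha_i k(x, x_i).
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = len(y)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return K, alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=50)
K, alpha = kernel_ridge(X, y, lam=1e-2)
print("training MSE:", float(np.mean((K @ alpha - y) ** 2)))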