LIDS & Stats Tea

Tea talks are 20-minute informal chalk talks for sharing ideas and raising awareness of topics that may be of interest to the LIDS and Stats audience. If you are interested in presenting in an upcoming seminar, please email lids_stats_tea[at]mit[dot]edu

September 9, 2020

Model-Based Reinforcement Learning for Countably Infinite State Space MDP

Bai Liu (LIDS)

With the rapid advance of information technology, network systems have become increasingly complex and hence the underlying system dynamics are typically unknown or difficult to characterize. Finding a good network control policy is of significant...

September 16, 2020

Solving the Phantom Inventory Problem: Near-optimal Entry-wise Anomaly Detection

Tianyi Peng (AeroAstro)

Tianyi will discuss work on how to achieve the optimal rate for detecting anomalies in a low-rank matrix. The concrete application under study is a crucial inventory management problem ('phantom inventory') that by some measures...
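
As background for the setting (not the talk's estimator), here is a minimal sketch of entry-wise anomaly detection against a low-rank fit: compute a truncated SVD and flag entries with large residuals. The rank, threshold, and synthetic data below are all illustrative assumptions.

```python
import numpy as np

def flag_anomalies(M, rank=5, threshold=3.0):
    """Flag entries of M that deviate from its best rank-`rank` approximation.

    Illustrative only: truncated SVD plus a z-score threshold on residuals,
    not the detection procedure from the talk.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    M_low = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]    # best rank-r approximation
    residual = M - M_low
    z = (residual - residual.mean()) / residual.std()  # standardize residuals
    return np.abs(z) > threshold                       # boolean anomaly mask

# Example: a rank-2 matrix with a planted corrupted entry
rng = np.random.default_rng(0)
M = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 40))
M[3, 7] += 10.0
print(flag_anomalies(M).sum(), "entries flagged")
```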

September 23, 2020

Towards Data Auctions with Externalities

Maryann Rui (LIDS)

The design of data markets has gained in importance as firms increasingly use predictions from machine learning models to make their operations more effective, yet need to externally acquire the necessary training data to fit such models. This is...

September 30, 2020

Active Learning for Nonlinear System Identification with Guarantees

Horia Mania (LIDS)

While the identification of nonlinear dynamical systems is a fundamental building block of model-based reinforcement learning and feedback control, its sample complexity is only understood for systems that either have discrete states and actions or...
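
To make the identification setup concrete (this illustrates the problem, not the talk's active learning method), one can fit nonlinear dynamics by least squares over a feature map; the feature map and toy system here are assumptions for illustration.

```python
import numpy as np

def fit_dynamics(X, U, X_next, features):
    """Least-squares fit of x_{t+1} ~ Theta @ phi(x_t, u_t).

    Passive identification only; the talk concerns actively choosing
    inputs to improve this kind of estimate, which is not shown here.
    """
    Phi = np.stack([features(x, u) for x, u in zip(X, U)])  # (T, d_phi)
    Theta, *_ = np.linalg.lstsq(Phi, X_next, rcond=None)    # solve in closed form
    return Theta.T                                          # (d_x, d_phi)

# Toy scalar system: x_{t+1} = 0.8*x + 0.1*x**2 + u + noise
rng = np.random.default_rng(1)
x = rng.normal(size=200); u = rng.normal(size=200)
x_next = 0.8 * x + 0.1 * x**2 + u + 0.01 * rng.normal(size=200)
phi = lambda x, u: np.array([x, x**2, u])
print(fit_dynamics(x, u, x_next[:, None], phi))  # close to [0.8, 0.1, 1.0]
```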

October 7, 2020

Fast Learning Guarantees for Weakly Supervised Learning

Joshua Robinson (CSAIL)

We study generalization properties of weakly supervised learning, that is, learning where only a few true labels are present for a task of interest but many more “weak” labels are available. In particular, we show that embeddings trained using weak...
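
A minimal sketch of the two-stage pipeline the abstract alludes to: fit a model on abundant weak labels, then train a simple classifier on its representation using the few true labels. The models, the noise model, and the use of logistic scores as a stand-in "embedding" are all assumptions, not the paper's construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: many weakly labeled points, only 100 with true labels.
X = rng.normal(size=(2000, 20))
true_y = (X[:, :2].sum(axis=1) > 0).astype(int)
weak_y = np.where(rng.random(2000) < 0.8, true_y, 1 - true_y)  # 20% label noise

# Stage 1: fit on the abundant weak labels; use the scores as an
# "embedding" (a stand-in for a learned representation).
weak_model = LogisticRegression().fit(X[100:], weak_y[100:])
embed = lambda Z: weak_model.decision_function(Z).reshape(-1, 1)

# Stage 2: fit a simple classifier on the embedding with 100 true labels.
clf = LogisticRegression().fit(embed(X[:100]), true_y[:100])
print(f"accuracy with 100 true labels: {clf.score(embed(X[100:]), true_y[100:]):.2f}")
```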

October 14, 2020

Provably Faster Convergence of Adaptive Gradient Methods

Jingzhao Zhang (LIDS)

While stochastic gradient descent (SGD) is still the de facto algorithm in deep learning, adaptive methods like Adam have been observed to outperform SGD on important tasks, such as training NLP models. The settings under which SGD performs poorly in...
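
For reference, the standard Adam update (Kingma & Ba, 2015) that the talk contrasts with SGD; the hyperparameters below are the usual defaults, and the toy objective is illustrative.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and its
    square, bias correction, then a coordinate-wise scaled step."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad**2    # second-moment estimate
    m_hat = m / (1 - beta1**t)               # bias correction
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = ||theta||^2 / 2, whose gradient is theta itself
theta = np.array([5.0, -3.0]); m = np.zeros(2); v = np.zeros(2)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, theta, m, v, t, lr=0.05)
print(theta)  # close to the minimizer [0, 0]
```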

October 21, 2020

Kernel Approximation Over Algebraic Varieties

Jason Altschuler (LIDS)

Low-rank approximation of the Gaussian kernel is a core component of many data-science algorithms. Often the approximation is desired over an algebraic variety (e.g., in problems involving sparse data or low-rank data). Can better approximations be...
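
For context, a minimal sketch of the object being approximated: the Gaussian kernel matrix and its best rank-r approximation via eigendecomposition. The data below lies on a coordinate subspace as a simple stand-in for variety structure; how the talk exploits that structure is not modeled here.

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    """K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-np.maximum(d2, 0) / (2 * sigma**2))

def best_rank_r(K, r):
    """Best rank-r approximation of a symmetric PSD matrix (top eigenpairs)."""
    w, V = np.linalg.eigh(K)     # eigenvalues in ascending order
    w, V = w[-r:], V[:, -r:]     # keep the top r
    return (V * w) @ V.T

# Sparse data: points on a 5-dimensional coordinate subspace
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
X[:, 5:] = 0.0
K = gaussian_kernel(X)
for r in (2, 5, 10):
    err = np.linalg.norm(K - best_rank_r(K, r)) / np.linalg.norm(K)
    print(f"rank {r:2d}: relative error {err:.2e}")
```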

October 28, 2020

Regulating Algorithmic Filtering on Social Media

Sarah Cen (LIDS)

Social media platforms moderate content using a process known as algorithmic filtering (AF). While AF has the potential to greatly improve the user experience, it has also drawn intense scrutiny for its roles in, for example, spreading fake news,...

November 4, 2020

Personalized Federated Learning: A Model-Agnostic Meta-Learning Approach

Alireza Fallah (LIDS)

In Federated Learning, we aim to train models across multiple computing units (users), where users communicate only with a common central server and never exchange their data samples. This mechanism exploits the computational power of all users...
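
A minimal sketch of the baseline mechanism the abstract describes (FedAvg-style local updates with server averaging); the personalization and meta-learning layer of the talk is not shown, and the loss, learning rates, and toy data are assumptions.

```python
import numpy as np

def local_sgd(theta, X, y, lr=0.1, steps=5):
    """A user's local update: a few gradient steps on its own least-squares loss."""
    for _ in range(steps):
        grad = X.T @ (X @ theta - y) / len(y)
        theta = theta - lr * grad
    return theta

def fedavg_round(theta, users):
    """One communication round: each user trains locally on private data,
    the server averages the returned models (raw data never leaves a user)."""
    locals_ = [local_sgd(theta.copy(), X, y) for X, y in users]
    return np.mean(locals_, axis=0)

# Toy setup: 10 users, each with private data from a shared linear model
rng = np.random.default_rng(0)
theta_true = rng.normal(size=5)
users = []
for _ in range(10):
    X = rng.normal(size=(50, 5))
    users.append((X, X @ theta_true + 0.1 * rng.normal(size=50)))

theta = np.zeros(5)
for _ in range(50):
    theta = fedavg_round(theta, users)
print(np.linalg.norm(theta - theta_true))  # small after 50 rounds
```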

November 18, 2020

Sensor-based Control for Fast and Agile Aerial Robotics

Ezra Tal (LIDS)

In recent years, autonomous unmanned aerial vehicles (UAVs) that can execute aggressive (i.e., fast and agile) maneuvers have attracted significant attention. We focus on the design of control algorithms for accurate tracking of such maneuvers. This...

December 2, 2020

Train Simultaneously, Generalize Better: Stability of Gradient-Based Minimax Learners

Farzan Farnia (EECS)

The success of minimax learning problems, such as those underlying generative adversarial networks (GANs) and adversarial training, has been observed to depend on the minimax optimization algorithm used for training. This dependence is commonly attributed to the...
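
A minimal illustration of the class of algorithms at play: simultaneous gradient descent-ascent on a toy minimax problem. The objective and step size are assumptions chosen so the dynamics converge; the talk's stability analysis is not reproduced here.

```python
# Toy minimax problem: min_x max_y f(x, y) = x*y + 0.1*(x**2 - y**2),
# a slightly regularized bilinear game with equilibrium at the origin.
def grads(x, y):
    gx = y + 0.2 * x     # df/dx
    gy = x - 0.2 * y     # df/dy
    return gx, gy

x, y, lr = 1.0, 1.0, 0.05
for t in range(500):
    gx, gy = grads(x, y)
    x, y = x - lr * gx, y + lr * gy   # simultaneous descent on x, ascent on y
print(x, y)  # both close to 0: the simultaneous updates converge here
```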

December 9, 2020

LIDS & Stats Tea Talk - Xiang Cheng (LIDS)

Xiang Cheng (LIDS)