LIDS & Stats Tea

Tea talks are 20-minute informal chalk talks for sharing ideas and raising awareness of topics that may be of interest to the LIDS and Stats audience. If you are interested in presenting at an upcoming seminar, please email lids_stats_tea[at]mit[dot]edu.

February 10, 2021

Localization, Uniform Convexity, and Star Aggregation

Suhas Vijaykumar (MIT Sloan)

Offset Rademacher complexities have been shown to imply sharp, data-dependent upper bounds for the square loss in a broad class of problems including improper statistical learning and online learning. We show that in the statistical setting, the...


February 17, 2021

Generative Adversarial Training for Gaussian Mixture Models

Farzan Farnia (LIDS)

Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game between two machine players, a generator and a discriminator. While GANs achieve great success in learning the complex distribution of image...


February 24, 2021

Exterior-Point Operator Splitting for Nonconvex Learning

Shuvomoy Das Gupta (ORC)

We present the nonconvex exterior-point operator splitting algorithm (NExOS), a novel linearly convergent first-order algorithm tailored for constrained nonconvex learning problems. We consider the problem of minimizing a convex cost function over a...


March 3, 2021

The DeCAMFounder: Non-Linear Causal Discovery in the Presence of Hidden Variables

Raj Agrawal (CSAIL)

Many real-world decision-making tasks require learning causal relationships between a set of variables. Typical causal discovery methods, however, require that all variables are observed, which might not be realistic in practice. Unfortunately, in...


March 10, 2021

Misinformation: Strategic Sharing, Homophily, and Endogenous Echo Chambers

James Siderius (EECS)

We present a model of online content sharing where agents sequentially observe an article and must decide whether to share it with others. The article may contain misinformation, but at a cost, agents can “fact-check” to determine whether its...


March 17, 2021

Mixed-Projection Conic Optimization: A New Paradigm for Modeling Rank Constraints

Ryan Cory-Wright (ORC)

We propose a framework for modeling and solving low-rank optimization problems to certifiable optimality. We introduce symmetric projection matrices that satisfy Y^2=Y, the matrix analog of binary variables that satisfy z^2=z, to model rank...


March 24, 2021

A Fast Certifier for Large-Scale Degenerate SDPs in Outlier-Robust Machine Perception

Heng Yang (LIDS)

The Moment/Sums-of-Squares hierarchy of semidefinite relaxations at order two has been shown as a powerful tool for solving non-convex and combinatorial outlier-robust machine perception problems to certifiable global optimality. Unfortunately, the...


March 31, 2021

Prior-Free Double Auctions in Incentive Design for Federated Learning

Andreas Alexander Haupt (LIDS & IDSS)

Federated learning (FL) is a paradigm that allows distributed clients to learn a shared machine learning model without sharing their sensitive training data. While FL is successfully applied in environments where a central orchestrator has the...


April 7, 2021

Fast Algorithms for Min-Mean-Cycle via Connections to Optimal Transport

Jason Altschuler (EECS / LIDS)

Min-Mean-Cycle is the problem of finding a cycle in a weighted directed graph with minimum mean weight. Classical applications in operations research, max-plus algebra, and computer science have motivated a long line of work on solving Min-Mean-...


April 14, 2021

Fluid Democracy: An Alternative Approach to Decision-Making in Democracy

Manon Revel (IDSS / LIDS)

Fluid democracy is a decision-making process that was proposed in the 1960s as an alternative to direct democracy and representative democracy. It relies on letting agents choose between voting themselves and transitively delegating their votes to...


April 21, 2021

Age Debt: A General Framework For Minimizing Age of Information

Vishrant Tripathi (EECS / LIDS)

We consider the problem of minimizing age of information in general single-hop and multihop wireless networks. First, we formulate a way to convert AoI optimization problems into equivalent network stability problems. Then, we propose a heuristic...


April 28, 2021

Provably More Efficient Q-Learning in the One-Sided-Feedback/Full-Feedback Settings

Xiao-Yue Gong (ORC)

Motivated by the classical inventory control problem, we propose a new Q-learning-based algorithm called Elimination-Based Half-Q-Learning (HQL) that enjoys improved efficiency over existing algorithms for a wide variety of problems in the one-sided...


May 5, 2021

Matching Markets with Bandit Learners: Regret, Stability and Fairness

Sarah Cen (LIDS)

We consider two-sided matching markets (e.g., Uber, Thumbtack, or Tinder) in which agents must learn their preferences. In this setting, there are agents on opposite sides of the market that seek to be matched. However, contrary to the core...


May 12, 2021

Algorithmic Obstructions in the Random Number Partitioning Problem

Eren Can Kizildag (LIDS)

We consider the algorithmic problem of finding a near-optimal solution for the number partitioning problem (NPP). This problem appears in many practical applications, including the design of randomized controlled trials, multiprocessor scheduling,...


May 19, 2021

LIDS & Stats Tea Talk

Ruihao Zhu (IDSS / LIDS)