Spring 2021

February 10, 2021

Localization, Uniform Convexity, and Star Aggregation

Speaker: Suhas Vijaykumar (MIT Sloan)

Offset Rademacher complexities have been shown to imply sharp, data-dependent upper bounds for the square loss in a broad class of problems including improper statistical learning and online learning. We show that in the statistical setting...
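
As background (standard in this literature, not specific to the talk's results), the offset Rademacher complexity of a function class F is typically defined as

\[
\mathfrak{R}^{\mathrm{off}}_n(\mathcal{F}; c) \;=\; \mathbb{E}_{\epsilon}\,\sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n}\Big[\epsilon_i f(x_i) - c\, f(x_i)^2\Big],
\]

where the \epsilon_i are i.i.d. Rademacher signs and c > 0 is the offset parameter; the negative quadratic term is what produces the localization behind the sharp, data-dependent bounds mentioned above.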

February 17, 2021

Generative Adversarial Training for Gaussian Mixture Models

Speaker: Farzan Farnia (LIDS)

Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game between two machine players, a generator and a discriminator. While GANs achieve great success in learning the complex distribution of...
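
For background, the zero-sum game referred to here is usually formalized via the standard GAN objective

\[
\min_{G}\max_{D}\;\mathbb{E}_{x\sim P_{\mathrm{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z\sim P_z}\big[\log\big(1 - D(G(z))\big)\big],
\]

where the generator G maps noise z to samples and the discriminator D tries to separate real from generated data; variants such as Wasserstein GANs change the payoff but keep this min-max structure.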

February 24, 2021

Exterior-Point Operator Splitting for Nonconvex Learning

Speaker: Shuvomoy Das Gupta (ORC)

We present the nonconvex exterior-point operator splitting algorithm (NExOS), a novel linearly convergent first-order algorithm tailored for constrained nonconvex learning problems. We consider the problem of minimizing a convex cost...
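
A typical problem template for this setting (given here as illustrative background; the talk's exact formulation may differ) is

\[
\min_{x}\; f(x) \quad \text{subject to} \quad x \in \mathcal{X},
\]

with f convex and \mathcal{X} a nonconvex constraint set such as \{x : \|x\|_0 \le k\} (sparsity) or \{X : \mathrm{rank}(X) \le r\} (low rank), which is what makes the overall problem nonconvex despite the convex cost.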

March 3, 2021

The DeCAMFounder: Non-Linear Causal Discovery in the Presence of Hidden Variables

Speaker: Raj Agrawal (CSAIL)

Many real-world decision-making tasks require learning causal relationships between a set of variables. Typical causal discovery methods, however, require that all variables are observed, which might not be realistic in practice...

March 10, 2021

Misinformation: Strategic Sharing, Homophily, and Endogenous Echo Chambers

Speaker: James Siderius (EECS)

We present a model of online content sharing where agents sequentially observe an article and must decide whether to share it with others. The article may contain misinformation, but at a cost, agents can “fact-check” to determine whether...

March 17, 2021

Mixed-Projection Conic Optimization: A New Paradigm for Modeling Rank Constraints

Speaker: Ryan Cory-Wright (ORC)

We propose a framework for modeling and solving low-rank optimization problems to certifiable optimality. We introduce symmetric projection matrices that satisfy Y^2=Y, the matrix analog of binary variables that satisfy z^2=z, to model rank...
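
Concretely, one way such projection matrices encode a rank constraint (notation here is illustrative, in the spirit of the abstract) is

\[
\mathrm{rank}(X) \le k \iff \exists\, Y = Y^{\top}:\; Y^2 = Y,\; \mathrm{tr}(Y) \le k,\; X = Y X,
\]

since X = YX forces the columns of X into the range of Y, whose dimension equals tr(Y).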

March 24, 2021

A Fast Certifier for Large-Scale Degenerate SDPs in Outlier-Robust Machine Perception

Speaker: Heng Yang (LIDS)

The Moment/Sums-of-Squares hierarchy of semidefinite relaxations at order two has been shown to be a powerful tool for solving non-convex and combinatorial outlier-robust machine perception problems to certifiable global optimality....
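
As a reminder of the machinery (standard background, not the talk's contribution): for a polynomial optimization problem \min_x \{\, p(x) : g_i(x) \ge 0 \,\}, the order-\kappa SOS relaxation solves

\[
\max\Big\{\gamma \;:\; p(x) - \gamma = \sigma_0(x) + \sum_i \sigma_i(x)\, g_i(x),\ \ \sigma_j \text{ SOS},\ \deg \sigma_0,\ \deg(\sigma_i g_i) \le 2\kappa \Big\},
\]

a semidefinite program whose size grows rapidly with \kappa; "order two" refers to \kappa = 2, which is why the scale and degeneracy of the resulting SDPs are the central computational concern.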

March 31, 2021

Prior-Free Double Auctions in Incentive Design for Federated Learning

Speaker: Andreas Alexander Haupt (LIDS & IDSS)

Federated learning (FL) is a paradigm that allows distributed clients to learn a shared machine learning model without sharing their sensitive training data. While FL is successfully applied in environments where a central orchestrator has...

April 7, 2021

Fast Algorithms for Min-Mean-Cycle via Connections to Optimal Transport

Speaker: Jason Altschuler (EECS / LIDS)

Min-Mean-Cycle is the problem of finding a cycle in a weighted directed graph with minimum mean weight. Classical applications in operations research, max-plus algebra, and computer science have motivated a long line of work on solving Min-...
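
For reference, Karp's classical algorithm solves Min-Mean-Cycle exactly in O(nm) time; a minimal sketch is below. It assumes every vertex is reachable from vertex 0 and is standard background, not the method presented in the talk.

import math

def min_mean_cycle(n, edges):
    # Karp's O(n*m) algorithm for the minimum cycle mean of a directed graph.
    # n: number of vertices (labeled 0..n-1); edges: list of (u, v, w) tuples.
    # Assumes every vertex is reachable from vertex 0; returns None if the
    # graph has no directed cycle.
    INF = math.inf
    # d[k][v] = minimum weight of a walk with exactly k edges from vertex 0 to v
    d = [[INF] * n for _ in range(n + 1)]
    d[0][0] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = None
    for v in range(n):
        if d[n][v] == INF:
            continue
        # Karp's formula: mu* = min_v max_k (d_n(v) - d_k(v)) / (n - k)
        val = max((d[n][v] - d[k][v]) / (n - k)
                  for k in range(n) if d[k][v] < INF)
        best = val if best is None else min(best, val)
    return best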

April 14, 2021

Fluid Democracy: An Alternative Approach to Decision-Making in Democracy

Speaker: Manon Revel (IDSS / LIDS)

Fluid democracy is a decision-making process that was proposed in the 1960s as an alternative to direct democracy and representative democracy. It relies on letting agents choose between voting themselves and transitively delegating their...

April 21, 2021

Age Debt: A General Framework For Minimizing Age of Information

Speaker: Vishrant Tripathi (EECS / LIDS)

We consider the problem of minimizing the age of information (AoI) in general single-hop and multihop wireless networks. First, we formulate a way to convert AoI optimization problems into equivalent network stability problems. Then, we propose a...
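
For context (standard background, not specific to this talk), the age of information of a source i at time t is

\[
A_i(t) \;=\; t - u_i(t),
\]

where u_i(t) is the generation time of the freshest update from source i delivered by time t; AoI scheduling problems then minimize a (possibly weighted) time-average of the A_i(t) subject to the network's interference constraints.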

April 28, 2021

Provably More Efficient Q-Learning in the One-Sided-Feedback/Full-Feedback Settings

Speaker: Xiao-Yue Gong (ORC)

Motivated by the classical inventory control problem, we propose a new Q-learning-based algorithm called Elimination-Based Half-Q-Learning (HQL) that enjoys improved efficiency over existing algorithms for a wide variety of problems in the...
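
For orientation, the tabular Q-learning update that such methods build on is

\[
Q(s_t, a_t) \;\leftarrow\; (1 - \alpha_t)\, Q(s_t, a_t) + \alpha_t \Big[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') \Big],
\]

with step size \alpha_t and discount \gamma. Roughly, the "one-sided" and "full" feedback in the title refer to settings where the observed outcome (e.g., the realized demand in inventory control) also reveals what other actions would have earned, on one side of the chosen action or everywhere, and exploiting this extra feedback is the source of the efficiency gains.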

May 5, 2021

Matching Markets with Bandit Learners: Regret, Stability and Fairness

Speaker: Sarah Cen (LIDS)

We consider two-sided matching markets (e.g., Uber, Thumbtack, or Tinder) in which agents must learn their preferences. In this setting, there are agents on opposite sides of the market that seek to be matched. However, contrary to the core...

May 12, 2021

Algorithmic Obstructions in the Random Number Partitioning Problem

Speaker: Eren Can Kizildag (LIDS)

We consider the algorithmic problem of finding a near-optimal solution for the number partitioning problem (NPP). This problem appears in many practical applications, including the design of randomized controlled trials, multiprocessor...
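
As background, the NPP asks for a partition of the given numbers into two subsets whose sums are as close as possible. The sketch below is the classical Karmarkar-Karp (largest-differencing) heuristic, a standard polynomial-time baseline included only for orientation; it is not the subject of the talk.

import heapq

def karmarkar_karp(nums):
    # Largest-differencing-method heuristic for number partitioning:
    # repeatedly replace the two largest numbers by their difference.
    # Returns the final discrepancy (not the partition itself).
    if not nums:
        return 0
    heap = [-x for x in nums]      # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)   # largest remaining number
        b = -heapq.heappop(heap)   # second largest
        heapq.heappush(heap, -(a - b))
    return -heap[0]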

May 19, 2021

Sequential Decision-Making in Non-Stationary Environments

Speaker: Ruihao Zhu (IDSS / LIDS)

Motivated by non-stationary environments in online advertising, dynamic pricing, and inventory control, we introduce data-driven decision-making algorithms that achieve state-of-the-art regret bounds for bandit optimization and reinforcement...
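
One standard device in this non-stationary line of work (given as illustrative background rather than the talk's specific algorithm) is a sliding-window upper confidence bound,

\[
\mathrm{SW\text{-}UCB}_t(a) \;=\; \hat{\mu}_t(a; w) + C\sqrt{\frac{\log t}{N_t(a; w)}},
\]

where \hat{\mu}_t(a; w) and N_t(a; w) are the empirical mean reward and pull count of action a over only the last w rounds, so that stale observations from earlier environments are discarded.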