LIDS Seminar Series

September 19, 2017

Networking for Big Data: Theory and Optimization for NDN

Edmund Yeh (Northeastern University)

The advent of Big Data is stimulating the development of new networking architectures which facilitate the acquisition, transmission, storage, and computation of data. In particular, Named Data Networking (NDN) is an emerging content-centric...

September 25, 2017

Expectation-Maximization, Power Iteration, and Non-convex Optimization in Learning and Statistics

Constantinos Daskalakis (MIT)

The Expectation-Maximization (EM) algorithm is a widely used method for maximum likelihood estimation in models with latent variables. For estimating mixtures of Gaussians, its iteration can be viewed as a soft version of the k-means clustering...
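The "soft k-means" view in the abstract can be made concrete. Below is a minimal sketch for a two-component, one-dimensional Gaussian mixture with known equal variances and equal weights, so that only the means are estimated; the data, sizes, and starting point are illustrative choices, not from the talk. The E-step computes a logistic (softmax) responsibility for each point, and the M-step replaces each mean with a responsibility-weighted average.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: a balanced mixture of N(-2, 1) and N(+2, 1).
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(2.0, 1.0, 500)])

def em_means(x, mu, sigma=1.0, iters=50):
    """EM for a two-component Gaussian mixture with known equal variances
    and equal weights; only the two means are estimated."""
    for _ in range(iters):
        # E-step: responsibility of component 0 -- a logistic function of
        # the difference of squared distances (the "soft k-means" view).
        d0 = -(x - mu[0]) ** 2 / (2.0 * sigma ** 2)
        d1 = -(x - mu[1]) ** 2 / (2.0 * sigma ** 2)
        r0 = 1.0 / (1.0 + np.exp(d1 - d0))
        # M-step: each mean becomes a responsibility-weighted average.
        mu = np.array([np.sum(r0 * x) / np.sum(r0),
                       np.sum((1.0 - r0) * x) / np.sum(1.0 - r0)])
    return mu

mu_hat = em_means(x, mu=np.array([-0.5, 0.5]))
```

As sigma shrinks, the responsibilities harden to 0/1 and the M-step reduces to the k-means reassignment-and-average update.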

October 12, 2017

Modeling and Learning Deep Representations, in Theory and in Practice

Stefano Soatto (University of California, Los Angeles and Amazon AI)

A few things about Deep Learning I find puzzling: 1) How can deep neural networks — optimized by stochastic gradient descent (SGD) agnostic of concepts of invariance, minimality, disentanglement — somehow manage to learn representations that exhibit...

October 17, 2017

The Maps Inside Your Head

Vijay Balasubramanian (University of Pennsylvania)

How do our brains make sense of a complex and unpredictable world? In this talk, I will discuss an information theory approach to the neural topography of information processing in the brain. First I will review the brain's architecture, and how...

October 24, 2017

Optimal and Adaptive Variable Selection

Alexandre Tsybakov (Center for Research in Economics and Statistics (CREST) - ENSAE)

We consider the problem of variable selection based on $n$ observations from a high-dimensional linear regression model. The unknown parameter of the model is assumed to belong to the class $S$ of all $s$-sparse vectors in $\mathbb{R}^p$ whose non-zero...
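The talk's specific selection procedure is not included in this excerpt. As a generic illustration of the problem setup, here is a sketch of support recovery for an $s$-sparse linear model using the Lasso (solved by plain ISTA); all dimensions, the signal strength, and the penalty level are hypothetical choices, not the method analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, s = 200, 50, 5                      # illustrative: n samples, p variables, s-sparse
X = rng.normal(size=(n, p)) / np.sqrt(n)  # columns roughly unit-norm
beta = np.zeros(p)
support = rng.choice(p, size=s, replace=False)
beta[support] = 3.0                       # strong, well-separated non-zero entries
y = X @ beta + 0.1 * rng.normal(size=n)

def ista_lasso(X, y, lam, iters=500):
    """Plain ISTA for the Lasso: gradient steps followed by soft-thresholding."""
    L = np.linalg.norm(X, 2) ** 2         # Lipschitz constant of the smooth part
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        z = b - X.T @ (X @ b - y) / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b

b_hat = ista_lasso(X, y, lam=0.5)
selected = np.flatnonzero(np.abs(b_hat) > 1e-8)  # estimated support
```

With this signal-to-noise level the estimated support matches the true one exactly; the interesting regime studied in the talk is when the non-zero entries approach the detection threshold.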

October 31, 2017

Joint LIDS and TOC Seminar: Structure, Randomness and Universality

Noga Alon (Tel Aviv University and CMSA, Harvard University)

What is the minimum possible number of vertices of a graph that contains every k-vertex graph as an induced subgraph? What is the minimum possible number of edges in a graph that contains every k-vertex graph with maximum degree 3 as a subgraph?...
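The first question can be answered by brute force for very small k. A sketch for k = 3, exploiting the fact that a 3-vertex graph's isomorphism type is determined by its edge count; this exhaustive search says nothing about the asymptotics the talk addresses.

```python
from itertools import combinations, product

def smallest_induced_universal_k3():
    """Smallest n such that some n-vertex graph contains every 3-vertex
    graph as an induced subgraph.  For k = 3 the isomorphism type of a
    3-vertex graph is determined by its edge count (0, 1, 2 or 3), so a
    graph works iff its induced triples realize all four counts."""
    n = 3
    while True:
        pairs = list(combinations(range(n), 2))
        # Enumerate all graphs on n vertices via their edge indicator bits.
        for bits in product((0, 1), repeat=len(pairs)):
            edges = {p for p, b in zip(pairs, bits) if b}
            counts = {sum((u, v) in edges for u, v in combinations(t, 2))
                      for t in combinations(range(n), 3)}
            if counts == {0, 1, 2, 3}:
                return n
        n += 1
```

Four vertices never suffice (a triangle and an independent set of size three cannot coexist on four vertices), while a triangle plus a pendant edge plus an isolated vertex works on five, so the search returns 5.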

November 14, 2017

Quantum Limits on the Information Carried by Electromagnetic Radiation

Massimo Franceschetti (University of California, San Diego)

In many practical applications information is conveyed by means of electromagnetic radiation and a natural question concerns the fundamental limits of this process. Identifying information with entropy, one can ask about the maximum amount of...

November 21, 2017

The Sharing Economy for the Smart Grid

Kameshwar Poolla (University of California, Berkeley)

The sharing economy. It is all the rage. Going on vacation? Rent out your home for extra income! Have space in your car? Pick up passengers for extra income! Companies such as AirBnB, VRBO, Lyft, and Uber have disrupted housing and transportation...

November 28, 2017

Comparison Lemmas, Non-Smooth Convex Optimization and Structured Signal Recovery

Babak Hassibi (California Institute of Technology)

In the past couple of decades, non-smooth convex optimization has emerged as a powerful tool for the recovery of structured signals (sparse, low rank, finite constellation, etc.) from possibly noisy measurements in a variety of applications in...

December 5, 2017

Regularized Nonlinear Acceleration

Alexandre d’Aspremont (École Normale Supérieure)

We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed...
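A minimal sketch of the scheme as described: choose weights summing to one that minimize the norm of the combined residuals of the iterates, with Tikhonov regularization for numerical stability, and return the corresponding weighted average. It is applied here to gradient descent on a synthetic strongly convex quadratic; the problem sizes and regularization level are illustrative choices, not the paper's.

```python
import numpy as np

def rna(X, lam=1e-8):
    """Extrapolate an optimum from iterates X[0], ..., X[k]: pick weights
    summing to one that minimize the norm of the combined residuals
    r_i = X[i+1] - X[i] (Tikhonov-regularized by lam), then return the
    corresponding weighted average of the iterates."""
    R = np.diff(X, axis=0)                # residuals between successive iterates
    M = R @ R.T
    M = M / np.linalg.norm(M) + lam * np.eye(M.shape[0])
    c = np.linalg.solve(M, np.ones(M.shape[0]))
    c /= c.sum()                          # enforce weights summing to one
    return c @ X[:-1]

# Gradient descent on a synthetic quadratic f(x) = 0.5 x'Ax - b'x,
# whose minimizer is A^{-1} b.
rng = np.random.default_rng(0)
Q = rng.normal(size=(20, 20))
A = Q @ Q.T / 20 + 0.1 * np.eye(20)       # positive definite, moderately conditioned
b = rng.normal(size=20)
x_star = np.linalg.solve(A, b)

step = 1.0 / np.linalg.norm(A, 2)
x = np.zeros(20)
iters = [x.copy()]
for _ in range(10):
    x = x - step * (A @ x - b)
    iters.append(x.copy())

x_acc = rna(np.array(iters))
err_gd = np.linalg.norm(iters[-1] - x_star)   # plain gradient descent
err_acc = np.linalg.norm(x_acc - x_star)      # extrapolated estimate
```

For a quadratic, minimizing the combined residual norm is a Krylov-subspace extrapolation, which is why the averaged estimate lands much closer to the optimum than the last raw iterate.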