LIDS Seminar Series

September 19, 2017

Networking for Big Data: Theory and Optimization for NDN

Edmund Yeh (Northeastern University)

The advent of Big Data is stimulating the development of new networking architectures which facilitate the acquisition, transmission, storage, and computation of data. In particular, Named Data Networking (NDN) is an emerging content-centric...

September 25, 2017

Expectation-Maximization, Power Iteration, and Non-convex Optimization in Learning and Statistics

Constantinos Daskalakis (MIT)

The Expectation-Maximization (EM) algorithm is a widely used method for maximum likelihood estimation in models with latent variables. For estimating mixtures of Gaussians, its iteration can be viewed as a soft version of the k-means clustering...
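The soft-k-means view mentioned in the abstract can be illustrated with a minimal sketch (my own illustration, not from the talk: a two-component 1D mixture with equal weights and fixed unit variances, so only the means are estimated; the function name and setup are assumptions):

```python
import math

def em_gmm_1d(data, init_means, iters=50, sigma=1.0):
    """EM for a two-component 1D Gaussian mixture with equal weights and
    fixed variance sigma**2, estimating only the two means. The E-step
    computes soft (fractional) assignments of each point to each component;
    replacing them with hard 0/1 assignments turns the same loop into
    exactly k-means with k = 2."""
    mu1, mu2 = init_means
    for _ in range(iters):
        # E-step: responsibility of component 1 for each data point.
        resp = []
        for x in data:
            p1 = math.exp(-((x - mu1) ** 2) / (2.0 * sigma ** 2))
            p2 = math.exp(-((x - mu2) ** 2) / (2.0 * sigma ** 2))
            resp.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted averages update the means.
        w1 = sum(resp)
        w2 = len(data) - w1
        mu1 = sum(r * x for r, x in zip(resp, data)) / w1
        mu2 = sum((1.0 - r) * x for r, x in zip(resp, data)) / w2
    return mu1, mu2
```

On well-separated clusters the responsibilities are nearly 0/1 and the updates coincide with k-means; when clusters overlap, the fractional assignments are what distinguish EM from its hard-assignment counterpart.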

October 12, 2017

Modeling and Learning Deep Representations, in Theory and in Practice

Stefano Soatto (University of California, Los Angeles)

A few things about Deep Learning I find puzzling: 1) How can deep neural networks — optimized by stochastic gradient descent (SGD) agnostic of concepts of invariance, minimality, disentanglement — somehow manage to learn representations that exhibit...

October 17, 2017

The Maps Inside Your Head

Vijay Balasubramanian (University of Pennsylvania)

How do our brains make sense of a complex and unpredictable world? In this talk, I will discuss an information theory approach to the neural topography of information processing in the brain. First I will review the brain's architecture, and how...

November 14, 2017

Quantum Limits on the Information Carried by Electromagnetic Radiation

Massimo Franceschetti (University of California, San Diego)

In many practical applications information is conveyed by means of electromagnetic radiation and a natural question concerns the fundamental limits of this process. Identifying information with entropy, one can ask about the maximum amount of...

December 5, 2017

Regularized Nonlinear Acceleration

Alexandre d’Aspremont (École Normale Supérieure)

We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed...
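The averaging scheme the abstract describes can be sketched roughly as follows (an illustrative implementation in the spirit of regularized nonlinear acceleration by Scieur, d'Aspremont, and Bach; the function name, parameter choices, and indexing conventions are my assumptions, not the speaker's code):

```python
import numpy as np

def rna(iterates, lam=1e-8):
    """Extrapolate the limit of a sequence of iterates x_0, ..., x_k via
    a weighted average sum_i c_i x_i. The weights c minimize the norm of
    the combined residuals ||R c|| subject to sum(c) = 1, with Tikhonov
    regularization lam added for numerical stability."""
    X = np.asarray(iterates, dtype=float)   # shape (k+1, d)
    R = np.diff(X, axis=0)                  # residuals r_i = x_{i+1} - x_i
    RR = R @ R.T
    RR = RR / np.linalg.norm(RR)            # normalize so lam is scale-free
    k = RR.shape[0]
    # KKT system of the constrained least-squares problem:
    # c is proportional to (R R^T + lam I)^{-1} 1, then renormalized.
    z = np.linalg.solve(RR + lam * np.eye(k), np.ones(k))
    c = z / z.sum()
    return c @ X[:-1]                       # extrapolated estimate
```

On a linearly convergent fixed-point iteration such as x_{t+1} = 0.5 x_t + 1 started at 0, a handful of iterates already recovers the fixed point 2 far more accurately than the last iterate alone.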