Fall 2017

September 19, 2017 to September 20, 2017

Networking for Big Data: Theory and Optimization for NDN

Speaker: Edmund Yeh (Northeastern University)

The advent of Big Data is stimulating the development of new networking architectures which facilitate the acquisition, transmission, storage, and computation of data. In particular, Named Data Networking (NDN) is an emerging content-centric...

September 25, 2017 to September 26, 2017

Expectation-Maximization, Power Iteration, and Non-convex Optimization in Learning and Statistics

Speaker: Constantinos Daskalakis (MIT)

The Expectation-Maximization (EM) algorithm is a widely-used method for maximum likelihood estimation in models with latent variables. For estimating mixtures of Gaussians, its iteration can be viewed as a soft version of the k-means...
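To make the "soft k-means" view concrete, here is a minimal EM sketch for a two-component 1D Gaussian mixture; the toy data, initialization, and component count are illustrative assumptions, not material from the talk.

```python
# Minimal EM sketch for a two-component 1D Gaussian mixture (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

# Initial guesses for mixture weights, means, and variances.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(100):
    # E-step: "soft" assignments (responsibilities), the soft analogue of
    # k-means' hard nearest-center assignment.
    dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)

    # M-step: re-estimate parameters from the soft assignments.
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(pi, mu, var)
```

With hard 0/1 responsibilities and fixed equal variances, the same loop reduces to the familiar k-means update.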

October 12, 2017

Modeling and Learning Deep Representations, in Theory and in Practice

Speaker: Stefano Soatto (University of California, Los Angeles and Amazon AI)

A few things about Deep Learning I find puzzling: 1) How can deep neural networks — optimized by stochastic gradient descent (SGD) agnostic of concepts of invariance, minimality, disentanglement — somehow manage to learn representations that...

October 17, 2017 to October 18, 2017

The Maps Inside Your Head

Speaker: Vijay Balasubramanian (University of Pennsylvania)

How do our brains make sense of a complex and unpredictable world? In this talk, I will discuss an information theory approach to the neural topography of information processing in the brain. First I will review the brain's architecture, and...

October 24, 2017 to October 25, 2017

Optimal and Adaptive Variable Selection

Speaker: Alexandre Tsybakov (Center for Research in Economics and Statistics (CREST) - ENSAE)

We consider the problem of variable selection based on $n$ observations from a high-dimensional linear regression model. The unknown parameter of the model is assumed to belong to the class $S$ of all $s$-sparse vectors in $\mathbb{R}^p$ whose non-...
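For reference, one standard way to write the setup sketched above (the precise conditions on the noise and on the non-zero components in the talk may differ):

```latex
% Standard high-dimensional linear regression model for variable selection
% (assumptions on the noise and the magnitude of non-zero entries are illustrative).
\[
  Y = X\theta + \sigma\xi, \qquad X \in \mathbb{R}^{n \times p}, \quad
  \theta \in S \subset \{\theta \in \mathbb{R}^p : \|\theta\|_0 \le s\},
\]
where the goal is to recover the support $\{j : \theta_j \neq 0\}$ from the $n$ observations.
```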

October 31, 2017 to November 1, 2017

Structure, Randomness and Universality (Joint LIDS and TOC Seminar)

Speaker: Noga Alon (Tel Aviv University and CMSA, Harvard University)

What is the minimum possible number of vertices of a graph that contains every k-vertex graph as an induced subgraph? What is the minimum possible number of edges in a graph that contains every k-vertex graph with maximum degree 3 as a...

November 14, 2017 to November 15, 2017

Quantum Limits on the Information Carried by Electromagnetic Radiation

Speaker: Massimo Franceschetti (University of California, San Diego)

In many practical applications, information is conveyed by means of electromagnetic radiation, and a natural question concerns the fundamental limits of this process. Identifying information with entropy, one can ask about the maximum amount...

November 21, 2017 to November 22, 2017

The Sharing Economy for the Smart Grid

Speaker: Kameshwar Poolla (University of California, Berkeley)

The sharing economy. It is all the rage. Going on vacation? Rent out your home for extra income! Have space in your car? Pick up passengers for extra income! Companies such as AirBnB, VRBO, Lyft, and Uber have disrupted housing and...

November 28, 2017 to November 29, 2017

Comparison Lemmas, Non-Smooth Convex Optimization and Structured Signal Recovery

Speaker: Babak Hassibi (California Institute of Technology)

In the past couple of decades, non-smooth convex optimization has emerged as a powerful tool for the recovery of structured signals (sparse, low rank, finite constellation, etc.) from possibly noisy measurements in a variety of applications in...
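As one canonical instance of the non-smooth convex programs referred to above, here is a minimal sparse-recovery (LASSO) sketch solved by proximal gradient descent; the problem sizes, regularization weight, and step size are illustrative assumptions, not taken from the talk.

```python
# Sparse signal recovery via a non-smooth convex program (LASSO),
# solved with proximal gradient descent (ISTA). All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 50, 200, 5
A = rng.normal(size=(n, p)) / np.sqrt(n)          # measurement matrix
x_true = np.zeros(p)
x_true[rng.choice(p, s, replace=False)] = rng.normal(size=s)
y = A @ x_true + 0.01 * rng.normal(size=n)        # noisy measurements

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the smooth part
x = np.zeros(p)
for _ in range(500):
    grad = A.T @ (A @ x - y)                      # gradient of the least-squares term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding prox

print("estimated support:", np.nonzero(np.abs(x) > 1e-3)[0])
```

The non-smooth $\ell_1$ penalty is what induces the sparse structure in the solution; analogous penalties (nuclear norm, atomic norms) play the same role for low-rank and other structured signals.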

December 5, 2017 to December 6, 2017

Regularized Nonlinear Acceleration

Speaker: Alexandre d’Aspremont (École Normale Supérieure)

We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are...
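A rough sketch of the nonlinear-averaging idea, using one common regularized formulation in which the weights solve a small regularized least-squares problem on successive differences of the iterates; the details (regularization level, the gradient-descent example) are assumptions and may differ from the scheme presented in the talk.

```python
# Sketch of regularized nonlinear acceleration: extrapolate an estimate of the
# optimum from the last few iterates of any solver. Parameters are illustrative.
import numpy as np

def rna_extrapolate(iterates, lam=1e-8):
    """iterates: list of parameter vectors x_0, ..., x_k from any method."""
    X = np.stack(iterates, axis=1)            # columns are iterates
    R = X[:, 1:] - X[:, :-1]                  # successive differences as residual proxies
    K = R.T @ R
    K = K / np.linalg.norm(K)                 # scale before regularizing
    k = K.shape[0]
    z = np.linalg.solve(K + lam * np.eye(k), np.ones(k))
    c = z / z.sum()                           # weights constrained to sum to one
    return X[:, :-1] @ c                      # data-dependent (nonlinear) average

# Toy usage: accelerate plain gradient descent on a strongly convex quadratic.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 20)); A = A.T @ A + np.eye(20)
b = rng.normal(size=20)
x = np.zeros(20)
step = 1.0 / np.linalg.norm(A, 2)
iterates = [x.copy()]
for _ in range(10):
    x = x - step * (A @ x - b)
    iterates.append(x.copy())

x_star = np.linalg.solve(A, b)
print("last iterate error:", np.linalg.norm(iterates[-1] - x_star))
print("extrapolated error:", np.linalg.norm(rna_extrapolate(iterates) - x_star))
```

Because the weights depend on the iterates themselves, the combination is a nonlinear function of the trajectory, and the extrapolation can be applied as a post-processing step without modifying the underlying optimization method.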