Submodular Optimization: From Discrete to Continuous and Back

Tuesday, February 20, 2018 - 3:00pm to 3:55pm

Event Calendar Category: LIDS Seminar Series

Speaker Name: Amin Karbasi

Affiliation: Yale University

Building and Room number: 34-101

Many procedures in statistics and artificial intelligence require solving non-convex problems. Historically, the focus has been on convexifying the non-convex objectives. In recent years, however, there has been significant progress in optimizing non-convex functions directly. This direct approach has led to provably good guarantees for specific problem instances such as latent variable models, non-negative matrix factorization, robust PCA, and matrix completion. Unfortunately, there is no free lunch: it is well known that, in general, finding the global optimum of a non-convex optimization problem is NP-hard. This computational barrier has largely shifted the goal of non-convex optimization in two directions: (a) finding an approximate local minimum by avoiding saddle points, or (b) characterizing general conditions under which the underlying non-convex problem is tractable.

In this talk, I will consider a broad class of non-convex optimization problems that possess special combinatorial structure. More specifically, I will focus on the maximization of stochastic continuous submodular functions, which exhibit diminishing returns. Despite the apparent lack of convexity in such functions, we will see that first-order methods can indeed provide strong approximation guarantees. In particular, for monotone continuous submodular functions, we will show that projected stochastic gradient methods achieve a 1/2 approximation ratio. We will then see how to reach the tight (1-1/e) approximation guarantee by developing a new class of stochastic projection-free gradient methods. A simple variant of these algorithms also achieves a (1/e) approximation ratio in the non-monotone case. Finally, using stochastic continuous optimization as an interface, we will also provide tight approximation guarantees for maximizing a (monotone or non-monotone) stochastic submodular set function subject to a general matroid constraint.
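For a concrete picture before the talk: a differentiable function F on [0,1]^n exhibits diminishing returns (is DR-submodular) if its gradient is entrywise antitone, i.e., each coordinate of the gradient at x is at least the corresponding coordinate at y whenever x <= y coordinate-wise. Below is a minimal sketch, not taken from the talk, of the projection-free (Frank-Wolfe-style) idea the abstract alludes to: a stochastic continuous-greedy loop applied to the multilinear extension of a weighted coverage function under a cardinality constraint. The coverage instance, the budget k, the gradient-noise model, and the averaging rate rho are all illustrative assumptions.

    # Sketch of a stochastic continuous-greedy (projection-free) loop for a
    # monotone DR-submodular objective. All problem data below are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Ground set of n "sets", each covering some of m weighted "items".
    n, m, k = 10, 30, 3                    # n sets, m items, budget k (assumed)
    covers = rng.random((n, m)) < 0.2      # covers[i, u]: set i covers item u
    w = rng.random(m)                      # item weights

    def grad_F(x, noise=0.1):
        """Stochastic gradient of the multilinear extension
        F(x) = sum_u w[u] * (1 - prod over {i: covers[i,u]} of (1 - x[i])).
        Gaussian noise stands in for the stochasticity in the talk's setting."""
        one_minus = np.where(covers, 1.0 - x[:, None], 1.0)          # (n, m)
        full_prod = one_minus.prod(axis=0)                           # (m,)
        # prod over j != i, obtained by dividing out the i-th factor
        partial = np.where(covers, full_prod / np.maximum(one_minus, 1e-12), 0.0)
        return partial @ w + noise * rng.standard_normal(n)

    # Continuous greedy: start at 0 and take T steps of size 1/T, each time
    # moving toward the best vertex of P = {x in [0,1]^n : sum(x) <= k}.
    # Linear maximization over P just picks the k largest gradient entries,
    # so no projection is ever needed.
    T = 200
    x = np.zeros(n)
    d = np.zeros(n)                        # averaged gradient (variance reduction)
    for t in range(1, T + 1):
        rho = 1.0 / t ** (2 / 3)           # diminishing averaging rate (assumed)
        d = (1 - rho) * d + rho * grad_F(x)
        v = np.zeros(n)
        v[np.argsort(d)[-k:]] = 1.0        # greedy vertex of P
        x += v / T                         # stays in P: steps sum to at most k

    print("fractional solution:", np.round(x, 2))

The design point this sketch illustrates is that each iteration solves only a linear maximization over the constraint polytope (here, picking the k largest coordinates of the averaged gradient) rather than a projection, which is what the abstract's projection-free methods exploit to attain the tight (1-1/e) guarantee for monotone objectives.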

This talk will not assume any particular background in submodularity or optimization; I will try to motivate and define all the necessary concepts.

Amin Karbasi is an assistant professor in the School of Engineering and Applied Science (SEAS) at Yale University, where he leads the Inference, Information, and Decision (I.I.D.) Systems Group. Prior to that, he was a postdoctoral scholar at ETH Zurich, Switzerland (2013-2014). He obtained his Ph.D. (2012) and M.Sc. (2007) in computer and communication sciences, and his B.Sc. (2004) in electrical engineering, all from EPFL, Switzerland.