Regularized Nonlinear Acceleration

Tuesday, December 5, 2017 - 4:00pm

Event Calendar Category

LIDS Seminar Series

Speaker Name

Alexandre d’Aspremont


École Normale Supérieure

We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed via a simple linear system, whose solution can be updated online. The acceleration scheme runs in parallel to the base algorithm, providing improved estimates of the solution on the fly while the original optimization method is running. We illustrate the approach with numerical experiments on classical classification problems.
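The scheme described above can be sketched in a few lines. This is a minimal illustration, not the speaker's reference implementation: it forms residuals from consecutive iterates, solves a small regularized linear system for the averaging weights (normalized to sum to one), and returns the weighted average of the iterates. The function name `rna_extrapolate`, the regularization level `lam`, and the toy quadratic problem are all assumptions chosen for the example.

```python
import numpy as np

def rna_extrapolate(xs, lam=1e-8):
    """Sketch of regularized nonlinear acceleration.

    Combines iterates xs[0], ..., xs[k] into a weighted average whose
    weights solve a small regularized linear system built from the
    residuals r_i = xs[i+1] - xs[i].
    """
    X = np.array(xs)                    # iterates, shape (k+1, d)
    R = X[1:] - X[:-1]                  # residuals, shape (k, d)
    M = R @ R.T                         # k-by-k Gram matrix of residuals
    M = M / np.linalg.norm(M)           # rescale so lam is scale-free
    k = M.shape[0]
    c = np.linalg.solve(M + lam * np.eye(k), np.ones(k))
    c = c / c.sum()                     # weights sum to one
    return c @ X[:-1]                   # nonlinear average of the iterates

# Usage: accelerate plain gradient descent on a toy quadratic
# f(x) = 0.5 x^T A x - b^T x, whose minimizer is A^{-1} b.
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_star = np.linalg.solve(A, b)

x = np.zeros(3)
step = 1.0 / 100.0                      # 1/L for this quadratic
history = [x.copy()]
for _ in range(10):
    x = x - step * (A @ x - b)          # base method: gradient descent
    history.append(x.copy())

# The extrapolated point is far closer to x_star than the last iterate,
# even though it only reuses iterates the base method already produced.
x_acc = rna_extrapolate(history)
```

The linear system is only k-by-k (k = number of residuals), so the extrapolation is cheap relative to the base method and can be recomputed as new iterates arrive, which is what allows it to run in parallel with the original algorithm.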


After dual PhDs from École Polytechnique and Stanford University in optimization and finance, followed by a postdoc at U.C. Berkeley, Alexandre d'Aspremont joined the faculty at Princeton University as an assistant and then associate professor, with joint appointments in the ORFE department and the Bendheim Center for Finance. He returned to Europe in 2011 thanks to a grant from the European Research Council and is now a research director at CNRS, attached to École Normale Supérieure in Paris. His research focuses on convex optimization and applications to machine learning, statistics, and finance.