Algorithmic Obstructions in the Random Number Partitioning Problem

Wednesday, May 12, 2021 - 4:00pm to 4:30pm

Event Calendar Category

LIDS & Stats Tea

Speaker Name

Eren Can Kizildag



Zoom meeting id

970 5689 8464


We consider the algorithmic problem of finding a near-optimal solution to the number partitioning problem (NPP). This problem appears in many practical applications, including the design of randomized controlled trials, multiprocessor scheduling, and cryptography, and is also of theoretical significance. The NPP exhibits a so-called statistical-to-computational gap: when its input has distribution N(0, I_n), the optimal value of the NPP is \Theta(2^{-n}) w.h.p., whereas the best known polynomial-time algorithm achieves an objective value of only 2^{-\Theta(\log^2 n)} w.h.p.
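To make the objective concrete, here is a minimal brute-force sketch of the standard NPP objective, min over sign vectors \sigma \in \{-1,+1\}^n of |\langle \sigma, x \rangle|, for i.i.d. N(0,1) input. It runs in exponential time and is meant only to illustrate the quantity being optimized on small instances; function and variable names are illustrative, not from the talk.

```python
import itertools

import numpy as np


def npp_value(x):
    """Brute-force the NPP objective: min over sign vectors sigma of |<sigma, x>|.

    Exponential in n -- for illustration on small instances only.
    """
    best = float("inf")
    for signs in itertools.product([-1.0, 1.0], repeat=len(x)):
        best = min(best, abs(float(np.dot(signs, x))))
    return best


# A small instance from the N(0, I_n) model discussed in the abstract.
rng = np.random.default_rng(0)
x = rng.standard_normal(12)
print(npp_value(x))  # typically very small relative to the entries of x
```

Even at n = 12 the minimum discrepancy is usually far smaller than any individual entry, hinting at the exponentially small optimum 2^{-\Theta(n)} described above.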

In this presentation, we initiate the study of the nature of this gap. Inspired by insights from statistical physics, we study the landscape of the NPP and establish the presence of the Overlap Gap Property (OGP), an intricate geometric property known to be rigorous evidence of algorithmic hardness for large classes of algorithms. By leveraging the OGP, we establish that (a) any sufficiently stable algorithm, appropriately defined, fails to find a near-optimal solution with energy below 2^{-\omega(n \log^{-1/5} n)}; and (b) a very natural Markov Chain Monte Carlo dynamics fails to find near-optimal solutions.
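The abstract does not specify which MCMC dynamics is analyzed. As a hedged illustration only, the following is a generic single-flip Metropolis chain on the energy H(\sigma) = |\langle \sigma, x \rangle|; the inverse temperature \beta and the update rule are assumptions of this sketch, not necessarily the dynamics studied in the talk.

```python
import numpy as np


def metropolis_npp(x, beta=5.0, steps=20000, seed=0):
    """Generic single-flip Metropolis dynamics on H(sigma) = |<sigma, x>|.

    A sketch under assumed dynamics: propose flipping one coordinate of the
    sign vector, accept with the Metropolis rule, and track the best energy
    seen. Returns that best energy.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    sigma = rng.choice([-1.0, 1.0], size=n)
    s = float(sigma @ x)  # running inner product <sigma, x>
    best = abs(s)
    for _ in range(steps):
        i = rng.integers(n)
        s_new = s - 2.0 * sigma[i] * x[i]  # effect of flipping coordinate i
        delta = abs(s_new) - abs(s)
        if delta <= 0 or rng.random() < np.exp(-beta * delta):
            sigma[i] = -sigma[i]
            s = s_new
            best = min(best, abs(s))
    return best
```

The hardness result (b) above says that chains of this flavor get stuck far from the optimum; running the sketch on moderate n shows it plateauing at energies well above 2^{-\Theta(n)}.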

The OGP concerns the overlap structure of m-tuples of solutions achieving a certain objective value. When m is constant, we prove the presence of the OGP for objective values of order 2^{-\Theta(n)}, and its absence in the regime 2^{-o(n)}. Interestingly, though, by considering overlaps of m-tuples with growing m, we prove the presence of the OGP up to the level 2^{-\omega(\sqrt{n \log n})}. Our proof of the failure of stable algorithms at values 2^{-\omega(n \log^{-1/5} n)} employs methods from Ramsey theory in extremal combinatorics and is of independent interest.
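For concreteness, the pairwise overlaps of an m-tuple of sign vectors, which the OGP constrains, can be computed as below. This assumes the standard normalized overlap \langle \sigma^i, \sigma^j \rangle / n; the talk's normalization may differ.

```python
import numpy as np


def overlap_matrix(sigmas):
    """Pairwise normalized overlaps <sigma_i, sigma_j> / n for an m-tuple
    of sign vectors, stacked as rows of an (m, n) array."""
    S = np.asarray(sigmas, dtype=float)
    n = S.shape[1]
    return S @ S.T / n


# Two orthogonal sign vectors have overlap 0; each has overlap 1 with itself.
print(overlap_matrix([[1, 1, 1, 1], [1, 1, -1, -1]]))
```

Informally, the OGP says that for m-tuples of near-optimal solutions, the off-diagonal entries of this matrix avoid an intermediate range: solutions are either close to each other or far apart.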


Eren is a PhD student in the EECS department, advised by Prof. David Gamarnik. His current research interests revolve around the theory of machine learning and high-dimensional statistics, where he is interested in devising computationally efficient algorithms and understanding their fundamental computational limits by leveraging insights from statistical physics.