A Simple and Optimal Policy Design with Safety against Heavy-tailed Risk for Stochastic Bandits

Wednesday, November 9, 2022 - 4:00pm to 4:30pm

Event Calendar Category

LIDS & Stats Tea

Speaker Name

Feng Zhu

Affiliation

IDSS & LIDS

Building and Room Number

LIDS Lounge

Abstract

We design new policies that ensure both worst-case optimality for expected regret and light-tailed risk for the regret distribution in the stochastic multi-armed bandit problem. It has recently been shown that information-theoretically optimized bandit algorithms, as well as standard UCB policies, suffer from serious heavy-tailed risk. Inspired by these results, we further show that heavy-tailed risk in fact exists for all "instance-dependent consistent" policies. In particular, any policy that incurs an instance-dependent O(ln T) expected regret must incur a linear regret with probability Ω(poly(1/T)). To ensure safety against such heavy-tailed risk, starting from the two-armed bandit setting, we provide a simple policy design that (i) achieves worst-case optimal expected regret of order O(√(T ln T)) and (ii) ensures that the worst-case probability of incurring a linear regret decays at the optimal exponential rate exp(−Ω(√T)). Next, we extend the policy design and analysis to the general K-armed bandit setting; specifically, the worst-case probability of incurring a regret larger than x is upper bounded by exp(−Ω(x/√(KT))). We also enhance the policy design to accommodate the "any-time" setting, where T is not known a priori. A brief account of numerical experiments is provided to illustrate the theoretical findings. We conclude by extending the proposed policy design to the general stochastic linear bandit setting and obtaining a light-tailed regret bound. Our results reveal the incompatibility between consistency and light-tailed risk, while indicating that worst-case optimality of the expected regret and light-tailed risk of the regret distribution are compatible.
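To make the notion of regret-distribution risk concrete, the sketch below (not the speaker's proposed policy) simulates a two-armed Bernoulli bandit under the standard UCB1 index and records the empirical regret across many independent runs, so one can inspect the right tail of the regret distribution. The arm means, horizon, and number of runs are arbitrary illustrative choices.

```python
# Illustrative sketch only: standard UCB1 on a two-armed Bernoulli bandit,
# used to look at the empirical distribution (and tail) of regret.
import numpy as np

rng = np.random.default_rng(0)


def ucb1_regret(means, T):
    """Run UCB1 for T rounds on Bernoulli arms; return the pseudo-regret."""
    K = len(means)
    counts = np.zeros(K)
    sums = np.zeros(K)
    best = max(means)
    regret = 0.0
    for t in range(1, T + 1):
        if t <= K:  # pull each arm once to initialize
            arm = t - 1
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        reward = rng.binomial(1, means[arm])
        counts[arm] += 1
        sums[arm] += reward
        regret += best - means[arm]
    return regret


# Repeat many times to estimate the right tail of the regret distribution.
T, runs = 2000, 500
regrets = np.array([ucb1_regret([0.5, 0.45], T) for _ in range(runs)])
print("mean regret:", regrets.mean())
print("95th / 99th percentile:", np.percentile(regrets, [95, 99]))
```

Comparing the upper percentiles with the mean gives a rough empirical sense of how heavy the regret tail is for a consistent policy such as UCB1, which is the phenomenon the talk's policy design is meant to guard against.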

Biography

Feng Zhu is a third-year PhD student at IDSS and LIDS, advised by Prof. David Simchi-Levi. His research interests lie in online decision-making, including online learning and online matching. Prior to MIT, he received a BS degree in Statistics from Peking University.