LIDS Seminar: Tara Javidi

Wednesday, April 22, 2026 - 4:00pm

Event Calendar Category

LIDS Seminar Series

Speaker Name

Tara Javidi

Affiliation

UCSD

Building and Room number

32-155

"Consequences of Kernel Regularity for Bandit Optimization"

In this talk, we investigate the relationship between kernel regularity and algorithmic performance in the bandit optimization of RKHS functions. Consider a black-box function to be optimized under the assumption of bounded norm in the RKHS associated with a given kernel K. This problem is known to have an agnostic Gaussian Process (GP) bandit interpretation, in which an appropriately constructed GP surrogate model with kernel K is used to obtain an upper confidence bound (UCB) algorithm that relies on global kernel regressors. In contrast, under smoothness conditions it is also common to exploit local approximations. Our earlier work has shown the advantages of combining these two approaches in both the Bayesian and kernelized settings; in particular, we proposed LP-GP-UCB, a multi-scale variant of the UCB algorithm that exploits smoothness locally to augment the GP-based global regressor and obtain strong theoretical guarantees. In this talk, we show that these perspectives are deeply connected through the spectral properties of isotropic kernels.
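To make the GP-UCB loop described above concrete, here is a minimal self-contained sketch (not the speaker's LP-GP-UCB code): a GP posterior with a squared-exponential kernel serves as the global surrogate, and the next query maximizes mean plus a scaled standard deviation over a finite candidate grid. The toy objective, lengthscale, and exploration parameter `beta` are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xstar, noise=1e-3, lengthscale=0.2):
    """GP posterior mean and standard deviation at candidate points Xstar."""
    K = rbf_kernel(X, X, lengthscale) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xstar, X, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - (v**2).sum(0), 1e-12, None)  # prior variance is 1
    return mu, np.sqrt(var)

def gp_ucb(f, candidates, T=30, beta=2.0, seed=0):
    """Maximize f over a finite candidate set with the GP-UCB rule."""
    rng = np.random.default_rng(seed)
    X = [candidates[rng.integers(len(candidates))]]
    y = [f(X[0])]
    for _ in range(T - 1):
        mu, sd = gp_posterior(np.array(X), np.array(y), candidates)
        x_next = candidates[np.argmax(mu + beta * sd)]  # UCB acquisition
        X.append(x_next)
        y.append(f(x_next))
    best = int(np.argmax(y))
    return X[best], y[best]

# Toy 1-D objective (an assumption for illustration): bump peaked at x = 0.6.
f = lambda x: float(np.exp(-40.0 * (x[0] - 0.6) ** 2))
grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
x_best, y_best = gp_ucb(f, grid, T=30)
```

After 30 queries the best observed point should sit near the true maximizer; the `beta * sd` term forces early exploration of unsampled regions before the posterior mean takes over.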

In particular, we characterize the Fourier spectra of the Matérn, squared-exponential, rational-quadratic, γ-exponential, piecewise-polynomial, and Dirichlet kernels, and show that the spectral decay rate determines asymptotic regret from both viewpoints. For kernelized bandit algorithms, spectral decay yields upper bounds on the maximum information gain, which governs worst-case regret, while for smoothness-based methods, the same decay rates establish Hölder space embeddings and Besov space norm equivalences, enabling local continuity analysis. These connections show that kernel-based and locally adaptive algorithms can be analyzed within a unified framework, allowing us to derive explicit regret bounds for each kernel family, obtaining novel results in several cases and improved analyses in others. Furthermore, we revisit LP-GP-UCB, which is shown to achieve order-optimality across multiple kernel families. I will finish the talk with some open problems and conjectures.
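For readers unfamiliar with the spectrum-to-regret pipeline, the standard results from the GP-bandit literature (e.g., Srinivas et al. and the tight information-gain bounds of Vakili et al.; these are background results, not claims of the talk) illustrate how spectral decay controls regret:

```latex
% Spectral density of the Mat\'ern-$\nu$ kernel in dimension $d$ (up to constants):
S(\omega) \;\propto\; \bigl(1 + \|\omega\|^2\bigr)^{-(\nu + d/2)}

% Polynomial spectral decay gives a polynomial maximum information gain,
% while the squared-exponential kernel's faster decay gives a polylog one:
\gamma_T = O\!\Bigl(T^{\frac{d}{2\nu + d}} (\log T)^{\frac{2\nu}{2\nu + d}}\Bigr)
  \quad \text{(Mat\'ern-}\nu\text{)}, \qquad
\gamma_T = O\!\bigl((\log T)^{d+1}\bigr)
  \quad \text{(squared-exponential)}

% These feed worst-case regret for UCB-style kernelized bandit algorithms,
% e.g. $R_T = \tilde{O}(\gamma_T \sqrt{T})$ in the frequentist analysis.
```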

This is joint work with Shekhar Shubhanshu at the University of Michigan and Madison Lee at UCSD.

Tara Javidi, IEEE Fellow, is the Jerzy Lewak Chair and Professor at UCSD in the Jacobs School of Engineering and the Halicioglu School of Computing, Information, and Data Science. Her research areas are active machine learning, feedback and network information theory, and stochastic control and optimization, with applications in the design of wireless and multi-agent networks. She has served as the Editor-in-Chief of IEEE JSAIT and as an elected member of the ITSOC Board of Governors.