Wednesday, May 1, 2019 - 3:00pm to 3:30pm
LIDS & Stats Tea
Statistical learning theory has largely focused on learning and generalization given independent and identically distributed (i.i.d.) samples. Motivated by applications involving time-series data, a growing literature studies learning and generalization in settings where data is sampled from an ergodic process. This literature has also developed complexity measures that appropriately extend Rademacher complexity to bound the generalization error and learning rates of hypothesis classes in that setting. Our work is instead motivated by settings where data is sampled on a network or a spatial domain rather than a time series, and thus does not fit well within the framework of prior work. We provide learning and generalization bounds for data that exhibit complex dependencies, provided their joint distribution satisfies the standard Dobrushin condition. Indeed, we show that the standard complexity measures of (Gaussian) Rademacher complexity and VC dimension suffice to bound the generalization error and learning rates of hypothesis classes in our setting. Moreover, our generalization bounds degrade only by constant factors compared to their i.i.d. analogs, and our learnability bounds degrade by log factors in the size of the training set.
Joint work with Constantinos Daskalakis, Nishanth Dikkala, and Siddhartha Jayanti.
Yuval Dagan is a PhD student in the MIT Department of Electrical Engineering and Computer Science, advised by Constantinos Daskalakis. He received his Bachelor's and Master's degrees from the Technion - Israel Institute of Technology under the supervision of Prof. Yuval Filmus. His research interests are in theoretical computer science, specifically learning theory and complexity. His recent work has focused on information complexity, Huffman codes, multi-armed bandits, and streaming algorithms.