Wednesday, November 9, 2016 - 4:30pm
LIDS & Stats Tea
A primary reason for the widespread use of graphical models is the existence of efficient algorithms for prediction from partial observations. It is much less clear, however, how much data is required to learn a model well enough that such predictions are accurate. Until now, virtually all work on graphical model learning has focused on recovering the true underlying graph. In this talk, we introduce a new loss function that captures the accuracy of subsequent predictions. We study the new loss in the special case of tree-structured Ising models and prove finite-sample bounds. One implication is that the number of samples needed for accurate prediction can be dramatically smaller than the number required to learn the exact underlying tree.
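The talk's own learning procedure is not described in this abstract. As an illustrative sketch only, the classical Chow–Liu algorithm is the standard way to fit a tree-structured model: compute empirical mutual information for every pair of variables, then take a maximum-weight spanning tree. The demo data (a 5-spin Ising chain with copy probability 0.9, 2000 samples, seed 0) and all function names below are assumptions for illustration, not the speaker's method.

```python
import math
import random
from itertools import combinations

def empirical_mi(xs, ys):
    """Empirical mutual information (nats) between two lists of +/-1 values."""
    n = len(xs)
    counts = {}
    for x, y in zip(xs, ys):
        counts[(x, y)] = counts.get((x, y), 0) + 1
    px = {a: sum(c for (x, _), c in counts.items() if x == a) / n for a in (-1, 1)}
    py = {b: sum(c for (_, y), c in counts.items() if y == b) / n for b in (-1, 1)}
    mi = 0.0
    for (a, b), c in counts.items():
        pxy = c / n  # only observed (a, b) pairs, so pxy > 0 here
        mi += pxy * math.log(pxy / (px[a] * py[b]))
    return mi

def chow_liu_tree(samples):
    """Maximum-weight spanning tree over pairwise MI (Kruskal + union-find)."""
    p = len(samples[0])
    cols = [[row[i] for row in samples] for i in range(p)]
    edges = sorted(((empirical_mi(cols[i], cols[j]), i, j)
                    for i, j in combinations(range(p), 2)), reverse=True)
    parent = list(range(p))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:  # greedily add the highest-MI edge that joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Demo (assumed parameters): sample a 5-node Ising chain in which each spin
# copies its left neighbor with probability q = 0.9, then fit the tree.
random.seed(0)
n, p, q = 2000, 5, 0.9
samples = []
for _ in range(n):
    row = [random.choice([-1, 1])]
    for _ in range(p - 1):
        row.append(row[-1] if random.random() < q else -row[-1])
    samples.append(row)
tree = chow_liu_tree(samples)
```

With this much data the chain structure 0-1-2-3-4 is recovered exactly; the talk's point is that far fewer samples may already suffice when the goal is only accurate prediction rather than exact recovery of the tree.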
This is joint work with Professor Guy Bresler.