Probabilistic graphical models compactly represent the structure of stochastic processes, and exploiting the sparsity of that structure often yields computationally efficient inference algorithms, which process realized observations to update a belief over unobservable latent states. Since observations may be costly to acquire or process, and are generally not equally informative about the underlying process, prioritizing them by informativeness can make better use of limited resources while improving expected inferential accuracy. The ability to efficiently quantify the information content of observations is therefore crucial for resource-constrained data acquisition (adaptive sampling) and data processing (active learning) systems. Given the combinatorial nature of the observation selection problem, the ability to bound the performance of suboptimal selection heuristics is also of interest for efficacious sensor management.
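To make the selection problem concrete, consider a minimal hedged sketch (not drawn from the work described here): a scalar latent state observed through independent noisy sensors, with candidates chosen greedily by marginal mutual information gain. The noise variances `s2` and the helper names `mi` and `greedy` are illustrative assumptions. In this simple setting the MI objective is monotone submodular, so the greedy heuristic enjoys the classic (1 - 1/e) approximation guarantee; the point of the bounds discussed below is precisely that such guarantees fail once nuisances break submodularity.

```python
import numpy as np

# Hypothetical setup: scalar latent X ~ N(0, 1) and candidate
# observations Y_i = X + n_i with independent noises n_i ~ N(0, s2[i]).
# The noise variances are illustrative assumptions.
s2 = np.array([0.5, 2.0, 1.0, 4.0, 0.25])

def mi(subset):
    """I(X; Y_S) = 0.5 * log(prior_var / posterior_var), in nats."""
    # Gaussian Bayes rule: precisions of independent observations add.
    post_prec = 1.0 + sum(1.0 / s2[i] for i in subset)
    return 0.5 * np.log(post_prec)  # prior variance is 1

def greedy(k):
    """Select k observations, each maximizing the marginal MI gain."""
    chosen, remaining = [], set(range(len(s2)))
    for _ in range(k):
        best = max(remaining, key=lambda i: mi(chosen + [i]) - mi(chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen

print(greedy(2))  # picks the two least-noisy sensors: [4, 0]
```

With independent noises the greedy rule simply favors low-variance sensors; the interesting regime, addressed by the results below, is when correlations and nuisance variables make marginal gains interact.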
We have developed new exact inference methods that efficiently quantify the informativeness of observations in large (high-dimensional) probabilistic models with nuisances: variables that are of no extrinsic importance, but act as intermediaries between those that are either observable or of inferential interest. For Gaussian trees, we have proved that local mutual information measures computed via message-passing algorithms (such as belief propagation) can be warped and reused to compute nonlocal mutual information (i.e., between nonadjacent nodes in the graph). Our method has complexity linear in the number of nodes in the graph, in contrast to the existing cubic-complexity method based on matrix inversion. We have recently extended these results, using iterative embedded-structure algorithms to efficiently quantify observation informativeness in general undirected Gaussian graphs with cycles and nuisances. We have additionally derived online-computable performance lower bounds for greedy selection on graphs with nuisances (a problem that is generally not submodular), based on the idea of submodular relaxations: synthetic alternate problems that satisfy simple Markov separation criteria on the graph and are computationally inexpensive to determine.
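The linear-versus-cubic contrast can be illustrated on the simplest Gaussian tree, a Markov chain. This is a hedged sketch of the underlying identity, not the message-passing algorithm itself: in a unit-variance Gaussian tree, the correlation between two nonadjacent nodes is the product of edge correlations along the unique path between them, so nonlocal mutual information follows from purely local quantities in O(n). The chain length and edge correlations `rho` are illustrative assumptions.

```python
import numpy as np

# Illustrative 4-node Gaussian Markov chain X1 - X2 - X3 - X4 with
# unit-variance nodes; rho[k] is the correlation on edge (k, k+1).
rho = np.array([0.9, 0.8, 0.7])
n = 4

# Cubic route: assemble the (tridiagonal) precision matrix of the chain
# X_{k+1} = rho_k X_k + w_k, w_k ~ N(0, 1 - rho_k^2), invert it with a
# dense solver (O(n^3)), and read MI off the marginal correlation.
J = np.zeros((n, n))
J[0, 0] = 1.0  # prior on the root, X1 ~ N(0, 1)
for k, r in enumerate(rho):
    v = 1.0 - r**2          # innovation variance keeps marginals at unit variance
    J[k, k] += r**2 / v
    J[k + 1, k + 1] += 1.0 / v
    J[k, k + 1] -= r / v
    J[k + 1, k] -= r / v
Sigma = np.linalg.inv(J)
r_dense = Sigma[0, -1] / np.sqrt(Sigma[0, 0] * Sigma[-1, -1])
mi_cubic = -0.5 * np.log1p(-r_dense**2)  # I(X1; X4) = -0.5 * log(1 - rho^2)

# Linear route: multiply edge correlations along the path, then apply
# the same scalar-Gaussian MI formula -- no matrix inversion needed.
r_path = np.prod(rho)
mi_linear = -0.5 * np.log1p(-r_path**2)

print(np.isclose(mi_cubic, mi_linear))  # the two routes agree
```

The warped-reuse results generalize this path-product intuition from chains to arbitrary Gaussian trees, with the local quantities supplied by belief propagation rather than computed in closed form.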