Tractable Computation of Expected Kernels

[Figure: An Example Kernel Circuit]


Computing the expectation of kernel functions is a ubiquitous task in machine learning, with applications ranging from classical support vector machines to kernel embeddings of distributions in probabilistic modeling, statistical inference, causal discovery, and deep learning. In all these scenarios, one typically resorts to Monte Carlo estimates, as expectations of kernels are intractable in general. In this work, we characterize the conditions under which expected kernels can be computed exactly and efficiently, by leveraging recent advances in probabilistic circuit representations. We first construct a circuit representation for kernels and propose an approach to such tractable computation. We then exploit tractable expected kernels to advance kernel embedding frameworks, deriving new algorithms for two challenging scenarios: 1) reasoning under missing data with kernel support vector regressors; and 2) devising a collapsed black-box importance sampling scheme. Finally, we empirically evaluate both algorithms and show that they outperform standard baselines on a variety of datasets.
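To make the quantity in question concrete: the expected kernel is E over x ~ p, y ~ q of k(x, y), and the Monte Carlo baseline the abstract refers to simply averages the kernel over paired samples. The sketch below is an illustrative NumPy implementation of that baseline (not the paper's circuit-based method); the RBF kernel, the Gaussian choices for p and q, and all function names are assumptions made for this example.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """RBF kernel k(x, y) = exp(-gamma * ||x - y||^2), applied row-wise."""
    return np.exp(-gamma * np.sum((x - y) ** 2, axis=-1))

def mc_expected_kernel(sample_p, sample_q, kernel, n=100_000, seed=0):
    """Monte Carlo estimate of E_{x~p, y~q}[k(x, y)] from n paired samples."""
    rng = np.random.default_rng(seed)
    xs = sample_p(rng, n)  # n samples from p, shape (n, d)
    ys = sample_q(rng, n)  # n samples from q, shape (n, d)
    return float(np.mean(kernel(xs, ys)))

# Hypothetical example: p = N(0, 1), q = N(0.5, 1) in one dimension.
est = mc_expected_kernel(
    lambda rng, n: rng.normal(0.0, 1.0, size=(n, 1)),
    lambda rng, n: rng.normal(0.5, 1.0, size=(n, 1)),
    rbf_kernel,
)
print(f"MC estimate of expected RBF kernel: {est:.4f}")
```

For this Gaussian pair the expectation is available in closed form (about 0.425), so the estimate can be sanity-checked; in general, though, such closed forms do not exist, which is what motivates the exact computation via probabilistic circuits.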

In Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence (UAI 2021)
Zhe Zeng
Ph.D. student in AI

My research interests lie at the intersection of machine learning (tractable probabilistic modeling, statistical relational learning, graphical models, Bayesian deep learning, kernel and non-parametric methods) and formal methods.