Lifted Probabilistic Inference in Relational Models

Slides

Download the Complete Tutorial as a single file [pdf],
or select individual parts:
Part 1: Motivation [pdf]
Part 2: Probabilistic Databases [pdf]
Part 3: Weighted Model Counting [pdf]
Part 4: Lifted Inference for WFOMC [pdf]
Part 5: Completeness [pdf]
Part 6: Query Compilation [pdf]
Part 7: Symmetric Complexity [pdf]
Part 8: Open-World Probabilistic Databases [pdf]
Part 9: Conclusions and References [pdf]

Description

The tutorial will cover probabilistic inference in statistical relational models (SRMs) and probabilistic databases (PDBs), two popular relational representations of uncertainty. Both fields have realized that efficient inference is an enormous challenge, but also an immense opportunity for a new kind of probabilistic reasoning, referred to in the AI literature as lifted inference and in the PDB literature as extensional query evaluation. The tutorial will focus on the big ideas that set probabilistic inference with relations apart from more classical inference problems.

There are several recent breakthrough results, within both AI and PDBs, that for the first time allow us to talk about SRM and PDB inference in a single coherent framework.

Moreover, these two fields have very recently started connecting through the common language of relational logic. We now understand the commonalities and differences between SRMs and PDBs. Typical inference tasks are rather different in nature, yet can be captured in the same weighted model counting framework, and theoretical complexity bounds from one field translate to the other.
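To make the shared framework concrete, here is a minimal weighted model counting sketch in Python. It brute-forces all truth assignments of a toy propositional formula and sums the weights of the satisfying ones; the formula, weights, and function names are illustrative assumptions, not material from the tutorial slides. The lifted algorithms discussed in the tutorial avoid exactly this exponential enumeration by exploiting relational symmetries.

    # Minimal weighted model counting (WMC) sketch, for illustration only.
    # Enumerate all truth assignments of a tiny propositional formula and
    # sum the weights of the satisfying ones. The formula and weights are
    # hypothetical examples.

    from itertools import product

    # Per-variable weights: (weight if True, weight if False).
    weights = {
        "rain":      (0.2, 0.8),   # P(rain) = 0.2
        "sprinkler": (0.1, 0.9),   # P(sprinkler) = 0.1
    }

    def formula(assignment):
        # "The grass is wet": rain OR sprinkler.
        return assignment["rain"] or assignment["sprinkler"]

    def wmc(formula, weights):
        names = list(weights)
        total = 0.0
        for values in product([True, False], repeat=len(names)):
            assignment = dict(zip(names, values))
            if formula(assignment):
                weight = 1.0
                for name, value in assignment.items():
                    weight *= weights[name][0] if value else weights[name][1]
                total += weight
        return total

    print(wmc(formula, weights))  # 0.28, i.e. 1 - 0.8 * 0.9

This brute-force routine runs in time exponential in the number of variables; the point of lifted inference is to compute the same quantity for relational models without grounding out and enumerating individual assignments.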

The focus of this tutorial is not on a specific lifted inference algorithm (many have been proposed). Instead, we explain why probabilistic inference with relations is different, and why it should be of interest to many IJCAI attendees. We focus on the fundamental ideas and insights that show why lifted inference algorithms work, which properties they exploit, and how they can reduce complexity, but also why inference is still hard in the worst case. A second goal of the tutorial is to explain the connections between the concepts used in probabilistic AI and those used in probabilistic databases. As an application of these general ideas, we also briefly discuss approximate lifted inference and lifted machine learning algorithms.