A Pseudo-Semantic Loss for Deep Generative Models with Logical Constraints

Kareem Ahmed, Kai-Wei Chang, and Guy Van den Broeck, in NeurIPS, 2023.

Abstract

Neuro-symbolic approaches bridge the gap between purely symbolic and neural approaches to learning. This often requires maximizing the probability of a symbolic constraint in the neural network's output. However, output distributions are typically assumed to be fully factorized, which prohibits the application of neuro-symbolic learning to more expressive output distributions, such as autoregressive deep generative models. There, such probability computation is #P-hard, even for simple constraints. Instead, we propose to locally approximate the probability of the symbolic constraint under the pseudolikelihood distribution – the product of its full conditionals given a sample from the model. This allows our pseudo-semantic loss function to enforce the symbolic constraint. Our method bears a close relationship to several classical approximation schemes, including hogwild Gibbs sampling, consistent pseudolikelihood learning, and contrastive divergence. We test our proposed approach on three distinct settings: Sudoku, shortest-path prediction, and detoxifying large language models. Experiments show that pseudo-semantic loss greatly improves the base model's ability to satisfy the desired logical constraint in its output distribution.
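
The central quantity is the probability of the constraint under the pseudolikelihood distribution centered at a model sample, i.e. the product of the full conditionals p(y_i | y~_{-i}). The short sketch below is not the authors' code; it only illustrates the idea for an assumed toy constraint ("at least one output variable is true") over binary variables, with the conditional probabilities standing in for the model's full conditionals.

  # Minimal sketch (assumptions: binary outputs, toy constraint "at least one y_i is true").
  import torch

  def pseudo_prob_at_least_one(cond_probs):
      # Under the fully-factorized pseudolikelihood distribution, the constraint
      # probability is 1 - prod_i (1 - p_i), where p_i = p(y_i = 1 | y~_{-i}).
      return 1.0 - torch.prod(1.0 - cond_probs)

  def pseudo_semantic_loss(cond_probs):
      # Negative log-probability of satisfying the constraint.
      return -torch.log(pseudo_prob_at_least_one(cond_probs).clamp_min(1e-12))

  # Toy usage: conditional probabilities obtained from some logits; in practice
  # these would come from the deep generative model evaluated around a sample y~.
  logits = torch.tensor([-2.0, 0.5, -1.0], requires_grad=True)
  loss = pseudo_semantic_loss(torch.sigmoid(logits))
  loss.backward()  # gradients push the model toward satisfying the constraint
  print(loss.item(), logits.grad)

Because the pseudolikelihood distribution is fully factorized around the sample, the constraint probability remains tractable even when it is #P-hard under the model's true autoregressive distribution.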


Bib Entry

@inproceedings{ahmed2023neuro,
  title = {A Pseudo-Semantic Loss for Deep Generative Models with Logical Constraints},
  author = {Ahmed, Kareem and Chang, Kai-Wei and Van den Broeck, Guy},
  booktitle = {NeurIPS},
  year = {2023}
}

Related Publications