Learning and Probabilistic Inference under Constraints in AI/ML

Abstract

Performing learning and probabilistic inference under constraints is crucial in machine learning and its applications. Such constraints can apply to model architectures, latent embeddings, or supervision signals. However, it is challenging to build models that guarantee constraint satisfaction, and incorporating constraints can be computationally expensive. In this talk, I will introduce my recent research on incorporating three fundamental kinds of constraints: 1) k-subset constraints, with applications in regularization and explainability; 2) count constraints, with applications in weakly supervised learning; and 3) logical combinations of linear arithmetic constraints, with applications in Bayesian deep learning. Finally, I will share my vision for advancing constrained learning and inference in other fields.
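To give a flavor of why such constraints can be tractable, consider a count constraint requiring that exactly k of n binary variables are positive. A minimal illustrative sketch (not the talk's method, just a standard dynamic program over independent Bernoulli variables) computes the probability of satisfying this constraint in O(nk) time rather than enumerating all 2^n assignments:

```python
def prob_exactly_k(probs, k):
    """Probability that exactly k of n independent Bernoulli
    variables are 1, via dynamic programming in O(n*k) time.

    probs: list of success probabilities, one per variable.
    """
    # dp[j] = probability that exactly j of the variables
    # processed so far are 1
    dp = [1.0] + [0.0] * k
    for p in probs:
        # iterate j downward so each variable is counted once
        for j in range(k, 0, -1):
            dp[j] = dp[j] * (1 - p) + dp[j - 1] * p
        dp[0] *= (1 - p)
    return dp[k]
```

For example, with three fair coins the probability of exactly two heads is 3/8; the same recurrence underlies efficient inference for k-subset and count constraints over richer models.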

Date
Jul 21, 2023
Event
Amazon Science
Location
Online
Zhe Zeng
Ph.D. student in AI

My research goal is to enable machine learning models to incorporate diverse forms of constraints into probabilistic inference and learning in a principled way, by combining machine learning (probabilistic modeling, neuro-symbolic AI, Bayesian deep learning) and formal methods.