Probabilistic Programming and Relational Learning

CS 267A - Fall 2018

Course Description

This course introduces computational models of probability and statistical models of relational data. It studies relational representations such as probabilistic databases, relational graphical models, and Markov logic networks, as well as various probabilistic programming languages. It covers their syntax and semantics, probabilistic inference problems, parameter and structure learning algorithms, and theoretical properties of representation and inference. The course teaches expressive statistical modeling: how to formalize and reason about complex statistical assumptions and how to encode knowledge in machine learning models. It also surveys key applications of relational learning.


This course requires basic computer science knowledge (logic, probability, programming, complexity).
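As a small taste of the kind of probabilistic reasoning the course covers, here is a minimal sketch of exact inference by enumeration in a two-variable model. The model and its numbers (a 0.2 prior on rain, a noisy "wet grass" observation) are purely illustrative and not taken from the course material.

```python
# Illustrative toy model (numbers are made up):
#   P(rain) = 0.2
#   P(wet | rain) = 0.9,  P(wet | not rain) = 0.1
# Query: P(rain | wet), computed by exhaustive enumeration,
# the simplest exact inference method discussed in such courses.

def joint(rain: bool, wet: bool) -> float:
    """Joint probability P(rain, wet) under the toy model."""
    p = 0.2 if rain else 0.8
    p *= (0.9 if wet else 0.1) if rain else (0.1 if wet else 0.9)
    return p

# Condition on the evidence wet=True:
# P(rain | wet) = P(rain, wet) / P(wet)
p_wet = sum(joint(r, True) for r in (True, False))   # 0.18 + 0.08 = 0.26
p_rain_given_wet = joint(True, True) / p_wet          # 0.18 / 0.26

print(round(p_rain_given_wet, 3))  # → 0.692
```

Probabilistic programming languages automate exactly this step: the user writes the generative model, and the system performs the conditioning and enumeration (or a more scalable inference algorithm) on the user's behalf.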

Class Attendance

Class attendance is not required. However, if you miss class, it is your responsibility to retrieve relevant information, material, announcements, etc., from friends and classmates. While slides and suggested readings will be provided, some information may only be communicated during the lectures.


Grading

Grading will be based on homework (30%), a midterm (30%), and a project (40%). The midterm will be closed book. There is no final exam.


Homework

Regular homeworks will be announced on CCLE, each with a one-week deadline. Late submissions will not be graded. However, at the end of the quarter there will be an optional homework that can replace one low grade on a previous homework. Homeworks must be submitted in PDF form, typeset in LaTeX, occasionally accompanied by source code in a zip file. All homeworks are subject to the honor code below.


Final Project

You will have two options for the final project:
  1. A self-selected, open-ended project related to the course content. This option is for motivated students who want to apply some of the course topics to their own research.
  2. A guided project with a well-defined rubric.
Detailed instructions for both of these project options will be provided on CCLE. All projects are subject to the honor code below.


Readings

We will refer to selected readings for more details on the material taught in class. The following books and publications are freely available.

  1. David Poole and Alan Mackworth. Artificial Intelligence: Foundations of Computational Agents
  2. Guy Van den Broeck and Dan Suciu. Query Processing on Probabilistic Data: A Survey
  3. Pedro Domingos and Daniel Lowd. Markov Logic: An Interface Layer for Artificial Intelligence
  4. Luc De Raedt, Kristian Kersting, Sriraam Natarajan, David Poole. Statistical Relational Artificial Intelligence: Logic, Probability, and Computation
  5. N. D. Goodman and A. Stuhlmueller. The Design and Implementation of Probabilistic Programming Languages.
  6. Various research papers referred to in the slides.
Readings 3 and 4 are free to download only from the UCLA network.

Optionally, students may also consult the following material, which is not freely available.

  1. Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. (3rd Edition)
  2. Adnan Darwiche. Modeling and Reasoning with Bayesian Networks
  3. Lise Getoor and Ben Taskar. Introduction to Statistical Relational Learning
  4. Fabrizio Riguzzi. Foundations of Probabilistic Logic Programming: Languages, Semantics, Inference and Learning


Honor Code

You are encouraged to work on your own or in groups in this class. If you or your group get stuck, you may discuss the problem with other students, PROVIDED THAT YOU SUBMIT THEIR NAMES ALONG WITH YOUR ASSIGNMENT. ALL SOLUTIONS MUST BE WRITTEN UP INDEPENDENTLY, HOWEVER. This means that you should never see another student's or group's solution before submitting your own. You may always discuss any problem with me or the TAs. YOU MAY NOT USE OLD SOLUTION SETS UNDER ANY CIRCUMSTANCES. Making your solutions available to other students, EVEN INADVERTENTLY (e.g., by keeping backups on GitHub), is aiding academic fraud and will be treated as a violation of this honor code.

You are expected to subscribe to the highest standards of academic honesty. This means that every idea that is not your own must be explicitly credited to its author. Failure to do this constitutes plagiarism. Plagiarism includes using ideas, code, data, text, or analyses from any other students or individuals, or any sources other than the course notes, without crediting these sources by name. Any verbatim text that comes from another source must appear in quotes with the reference or citation immediately following. Academic dishonesty will not be tolerated in this class. Any student suspected of academic dishonesty will be reported to the Dean of Students. A typical penalty for a first plagiarism offense is suspension for one quarter. A second offense usually results in dismissal from the University of California.