The following is a tentative list (a superset, actually) of the topics to be covered in the class.

Note: This list will change as the semester progresses. For a listing of lectures, visit the schedule page.

  1. Introduction
    • Why do we need neuro-symbolic methods? Motivating examples
    • A survey of how neural and symbolic methods can interact with each other
    • Technical challenges
  2. Review of neural networks
    • Computation graphs, loss functions
    • Transformer networks and language models
  3. Review of symbolic logic
    • Propositional logic and SAT
    • Tractable representations
    • Knowledge compilation
  4. An overview of different approaches for injecting symbolic knowledge into neural networks
    • Data augmentation with declarative knowledge
    • Logic-as-loss with weighted model counting
    • Logic-as-loss with soft/fuzzy logic
    • Structured prediction and probabilistic reasoning
    • Reinforcement learning with black-box programs
  5. Training models with logic via model counting
    • Circuits
    • Weighted model counting
    • Semantic loss (a brute-force sketch appears after this list)
    • Applications
  6. Training models with logic via soft logic
    • Multi-valued logic and t-norms (a product t-norm sketch appears after this list)
    • Using relaxed logic for designing models
    • Using relaxed logic for designing loss functions
    • Applications
  7. Symbolic methods for reasoning over model predictions
    • Structured prediction and inference (a constrained-decoding sketch appears after this list)
    • Statistical relational learning
    • MaxSAT-based methods
    • Integer programming-based methods
  8. Reinforcement learning for neuro-symbolic modeling
    • The REINFORCE algorithm (a score-function sketch appears after this list)
    • Using agents
  9. Case studies and applications
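
To make topic 5 concrete, here is a minimal, brute-force sketch (not course material) of the semantic loss for an exactly-one constraint: the loss is the negative log of the weighted model count of the constraint under the network's predicted probabilities. The names `exactly_one` and `semantic_loss` are illustrative; real implementations compile the constraint into a circuit (topic 3) so the count is tractable rather than enumerated.

```python
import itertools
import torch

def exactly_one(assignment):
    # The constraint alpha: exactly one of the variables is true.
    return sum(assignment) == 1

def semantic_loss(probs, constraint):
    # -log WMC(alpha, p): total probability mass of the satisfying
    # assignments, where variable i is true with probability probs[i].
    wmc = torch.zeros(())
    for assignment in itertools.product([0, 1], repeat=len(probs)):
        if constraint(assignment):
            literal_probs = torch.stack(
                [probs[i] if v else 1 - probs[i]
                 for i, v in enumerate(assignment)]
            )
            wmc = wmc + literal_probs.prod()
    return -torch.log(wmc)

logits = torch.randn(3, requires_grad=True)
loss = semantic_loss(torch.sigmoid(logits), exactly_one)
loss.backward()  # gradients flow through the model count to the logits
```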
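For topic 6, a sketch of the product t-norm relaxation: each logical connective is replaced by a differentiable operation on [0, 1]-valued truth degrees, and the relaxed truth value of a rule becomes a loss term. The rule "rainy implies wet" and the probabilities below are invented for illustration.

```python
import torch

# Product t-norm connectives over truth degrees in [0, 1].
def t_and(a, b): return a * b
def t_or(a, b):  return a + b - a * b
def t_not(a):    return 1.0 - a
def t_implies(a, b): return t_or(t_not(a), b)  # a -> b  ==  ~a v b

# In practice these would be two output heads of the same network.
p_rainy = torch.sigmoid(torch.randn((), requires_grad=True))
p_wet = torch.sigmoid(torch.randn((), requires_grad=True))

rule_truth = t_implies(p_rainy, p_wet)  # degree to which the rule holds
rule_loss = -torch.log(rule_truth)      # added to the usual supervised loss
rule_loss.backward()
```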
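For topic 7, a brute-force sketch of symbolic inference over model predictions: among all label assignments that satisfy a hard constraint, pick the one the model scores highest. A MaxSAT or integer-programming solver plays this role at scale; the scores and constraint here are made up.

```python
import itertools

# Illustrative per-variable scores from a trained model:
# log_probs[i][v] is the log-probability of assigning value v to variable i.
log_probs = [[-0.1, -2.3], [-1.2, -0.4], [-0.7, -0.7]]

def satisfies(assignment):
    # Hard constraint: exactly one variable may be labeled 1.
    return sum(assignment) == 1

best = max(
    (a for a in itertools.product([0, 1], repeat=len(log_probs))
     if satisfies(a)),
    key=lambda a: sum(log_probs[i][v] for i, v in enumerate(a)),
)
print(best)  # the highest-scoring constraint-consistent assignment
```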
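Finally, for topic 8, a sketch of the REINFORCE (score-function) estimator, which lets gradients pass around a non-differentiable symbolic component: sample a discrete output, run the black-box program on it, and weight the log-probability gradient by the resulting reward. `blackbox_reward` is a hypothetical stand-in for any such program.

```python
import torch

def blackbox_reward(symbol: int) -> float:
    # A hypothetical non-differentiable program, e.g. "execute the
    # predicted symbol and check whether the output is correct".
    return 1.0 if symbol == 2 else 0.0

logits = torch.randn(4, requires_grad=True)
dist = torch.distributions.Categorical(logits=logits)

symbol = dist.sample()                  # discrete: no gradient flows here
reward = blackbox_reward(symbol.item())
loss = -reward * dist.log_prob(symbol)  # its gradient estimates the
loss.backward()                         # gradient of the expected reward
```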