The following is a tentative list (a superset, actually) of the topics to be covered in the class.
Note: This list will change as the semester progresses. For a listing of lectures, visit the schedule page.
- Introduction
- Why do we need neuro-symbolic methods? Motivating examples
- A survey of how neural and symbolic methods can interact with each other
- Technical challenges
- Review of neural networks
- Computation graphs, loss functions
- Transformer networks and language models (attention is sketched after this section)
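To make the transformer review concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of transformer networks. The function names and toy dimensions are illustrative, not taken from any particular library.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d) arrays; returns attention-weighted values.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # pairwise query-key similarity
    weights = softmax(scores, axis=-1)  # each row is a distribution over positions
    return weights @ V

# Toy example: 3 tokens with 4-dimensional embeddings,
# used as queries, keys, and values (i.e., no learned projections).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(X, X, X).shape)  # (3, 4)
```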
- Review of symbolic logic
- Propositional logic and SAT
- Tractable representations
- Knowledge compilation (a brute-force model counter is sketched after this section)
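As a warm-up for the SAT and model-counting material, the following sketch counts the models of a small CNF formula by exhaustive enumeration. Real model counters and knowledge compilers are far more sophisticated; the DIMACS-style literal encoding below (positive integer for a true literal, negative for a false one) is an assumed convention for the example.

```python
from itertools import product

def count_models(cnf, n_vars):
    # cnf: list of clauses; each clause is a list of ints, where k means
    # "variable k is true" and -k means "variable k is false" (1-indexed).
    count = 0
    for assignment in product([False, True], repeat=n_vars):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # A model must satisfy at least one literal in every clause.
        if all(any(holds(lit) for lit in clause) for clause in cnf):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3): 4 of the 8 assignments are models.
print(count_models([[1, 2], [-1, 3]], n_vars=3))  # -> 4
```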
- An overview of different approaches for injecting symbolic knowledge into neural networks
- Data augmentation with declarative knowledge (see the sketch after this list)
- Logic-as-loss with weighted model counting
- Logic-as-loss with soft/fuzzy logic
- Structured prediction and probabilistic reasoning
- Reinforcement learning and black-box programs
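To illustrate the first approach in the list above, here is a toy sketch of data augmentation with declarative knowledge: a declared symmetry rule licenses new training triples that the raw data does not contain. The relation names and data are hypothetical.

```python
def augment(triples, symmetric_relations):
    # Apply the declarative rule: if r is symmetric and (a, r, b) is a
    # training example, then (b, r, a) is also a valid training example.
    augmented = set(triples)
    for head, relation, tail in triples:
        if relation in symmetric_relations:
            augmented.add((tail, relation, head))
    return sorted(augmented)

data = [("ann", "sibling_of", "bob"), ("bob", "parent_of", "carl")]
print(augment(data, {"sibling_of"}))
# [('ann', 'sibling_of', 'bob'), ('bob', 'parent_of', 'carl'),
#  ('bob', 'sibling_of', 'ann')]
```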
- Training models with logic via model counting
- Circuits
- Weighted model counting
- Semantic loss (sketched below)
- Applications
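A minimal NumPy sketch of the semantic loss, which is defined as the negative logarithm of the weighted model count of a constraint under the network's predicted probabilities. For the constraint "exactly one variable is true" the count has a simple closed form, so no circuit machinery is needed; this is an illustration, not the course's reference implementation.

```python
import numpy as np

def semantic_loss_exactly_one(p):
    # p: predicted probabilities that each of n binary variables is true.
    # The constraint "exactly one is true" has n models; the weighted
    # model count sums the probability mass of each model.
    p = np.asarray(p, dtype=float)
    wmc = sum(p[i] * np.prod(np.delete(1.0 - p, i)) for i in range(len(p)))
    return -np.log(wmc)  # semantic loss = -log WMC

# Predictions close to a valid one-hot output incur a small loss;
# predictions that spread mass over invalid states incur a larger one.
print(semantic_loss_exactly_one([0.9, 0.05, 0.05]))  # ~0.20
print(semantic_loss_exactly_one([0.5, 0.5, 0.5]))    # ~0.98
```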
- Training models with logic via soft logic
- Multi-valued logic and t-norms (see the sketch after this section)
- Using relaxed logic for designing models
- Using relaxed logic for designing loss functions
- Applications
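The sketch below shows a few standard t-norms and how a relaxed implication can be turned into a loss term. The implication used here is the Goguen implication, the residuum of the product t-norm; the function names are illustrative.

```python
def product_tnorm(a, b):      # Product t-norm: T(a, b) = a * b
    return a * b

def godel_tnorm(a, b):        # Goedel (minimum) t-norm
    return min(a, b)

def lukasiewicz_tnorm(a, b):  # Lukasiewicz t-norm
    return max(0.0, a + b - 1.0)

def implies_product(a, b):
    # Goguen implication (residuum of the product t-norm):
    # fully true when a <= b, otherwise degrades as b / a.
    return 1.0 if a <= b else b / a

def rule_loss(a, b, implies=implies_product):
    # Turn the rule a -> b into a loss: penalize a low truth degree.
    return 1.0 - implies(a, b)

# If the model believes the premise (0.9) but not the conclusion (0.3),
# the relaxed rule is violated and the loss is positive.
print(rule_loss(0.9, 0.3))  # ~0.667
```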
- Symbolic methods for reasoning over model predictions
- Structured prediction and inference
- Statistical relational learning
- MaxSAT-based methods (a brute-force variant is sketched after this section)
- Integer programming-based methods
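As a toy illustration of MaxSAT/ILP-style inference over model predictions, the sketch below finds the highest-scoring label assignment that satisfies a hard logical constraint by enumerating all assignments. A real system would hand the same objective to a MaxSAT or integer programming solver; the scores and constraint here are hypothetical.

```python
from itertools import product

def constrained_map(scores, constraint):
    # scores: per-variable pair (score_if_false, score_if_true),
    # e.g. log-probabilities from a neural model.
    # constraint: function from an assignment (tuple of bools) to bool.
    best, best_score = None, float("-inf")
    for assignment in product([False, True], repeat=len(scores)):
        if not constraint(assignment):
            continue  # hard constraint: skip invalid assignments
        total = sum(s[int(v)] for s, v in zip(scores, assignment))
        if total > best_score:
            best, best_score = assignment, total
    return best, best_score

# Two labels that must not both be true (mutual exclusion).
scores = [(-0.1, -2.3), (-0.2, -1.6)]  # hypothetical per-label log-probs
exclusion = lambda a: not (a[0] and a[1])
print(constrained_map(scores, exclusion))  # ((False, False), -0.3)
```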
- Reinforcement learning for neuro-symbolic modeling
- The REINFORCE algorithm (sketched after this section)
- Using agents
- Case studies and applications
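A compact NumPy sketch of the REINFORCE (score-function) gradient estimator, training an independent Bernoulli policy against a non-differentiable black-box reward. The reward function is a made-up stand-in for a symbolic program or checker; a running-average baseline is used to reduce variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def blackbox_reward(sample):
    # Non-differentiable reward, e.g. a symbolic program or checker.
    # Here (hypothetically): reward 1 iff exactly two bits are set.
    return 1.0 if sample.sum() == 2 else 0.0

theta = np.zeros(4)  # logits of an independent Bernoulli policy
lr, baseline = 0.1, 0.0
for step in range(2000):
    p = 1.0 / (1.0 + np.exp(-theta))    # sigmoid -> sampling probabilities
    sample = (rng.random(4) < p).astype(float)
    r = blackbox_reward(sample)
    # REINFORCE update: grad of log pi(sample) is (sample - p)
    # for a Bernoulli policy parameterized by logits theta.
    theta += lr * (r - baseline) * (sample - p)
    baseline = 0.9 * baseline + 0.1 * r  # running-average baseline

# The probabilities typically concentrate on a pattern with two bits set.
print(np.round(1.0 / (1.0 + np.exp(-theta)), 2))
```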