In this lecture, we will look at formal models of learnability. We have already seen one such model, namely the mistake bound model. Now, we will look at Probably Approximately Correct (PAC) learning.
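As a preview of the material below, the standard Occam's-razor sample-complexity bound says that for a finite hypothesis class H, any consistent learner needs at most (1/ε)(ln|H| + ln(1/δ)) examples to be probably (with probability 1 − δ) approximately (error at most ε) correct. The sketch below evaluates this bound for the running example from the conjunction-learning analysis, where each of n Boolean variables can appear positively, negatively, or not at all, so |H| = 3^n (the variable names and numbers here are illustrative, not from the lecture).

```python
import math

def pac_sample_bound(hyp_count, epsilon, delta):
    """Occam bound: ceil((ln|H| + ln(1/delta)) / epsilon) examples
    suffice for a consistent learner over a finite hypothesis class."""
    return math.ceil((math.log(hyp_count) + math.log(1.0 / delta)) / epsilon)

# Conjunctions over n Boolean variables: each variable is included
# positively, included negatively, or excluded, giving |H| = 3^n.
n = 10
m = pac_sample_bound(3 ** n, epsilon=0.1, delta=0.05)
print(m)  # → 140
```

Note that the bound grows only logarithmically in |H|, which is why the exponentially large class of conjunctions is still efficiently PAC-learnable.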
Lecture slides
 The Theory of Generalization
 An Analysis of Conjunction Learning
 PAC learning: Definition
 Occam’s Razor
 Positive and negative learnability results
 Agnostic Learning
 Shattering and the VC dimension
Lecture videos
 The theory of generalization
 PAC Learning: lecture 1, lecture 2
 Occam’s razor: lecture 1, lecture 2
 Positive and negative learnability results: lecture 1, lecture 2
 Agnostic learning
 Shattering and the VC dimension: lecture 1, lecture 2
Older videos
 The theory of generalization: [spring 2023], [fall 2018], [fall 2017]
 An analysis of conjunction learning: [fall 2018], [fall 2017]
 PAC learning definition: [spring 2023], [fall 2018], [fall 2017]
 Occam’s Razor: [spring 2023], [fall 2018], [fall 2017]
 Positive and negative learnability results: [spring 2023], [fall 2018], [fall 2017]
 Agnostic learning: [spring 2023 (1/2)], [spring 2023 (2/2)], [fall 2018], [fall 2017]
 Shattering and the VC dimension: [spring 2023 (1/2)], [spring 2023 (2/2)], [fall 2018], [fall 2017]
Links and Resources

Chapter 10 of Hal Daumé III, A Course in Machine Learning (available online)

Chapter 7 of Tom Mitchell’s book

Chapter 6 of Hopcroft and Kannan’s Foundations of Data Science (available online)

Chapters 3, 4, 6 of Shai Shalev-Shwartz and Shai Ben-David, Understanding Machine Learning: From Theory to Algorithms (available online)