This lecture generalizes an idea from the previous one: learning can be framed as an optimization problem, stated declaratively as minimizing the empirical risk.
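As a concrete illustration of this framing, here is a minimal sketch of empirical risk minimization for a one-parameter linear predictor under squared loss; the toy dataset and hypothesis class are my own assumptions, not the lecture's examples.

```python
# Empirical risk minimization (ERM) sketch: choose the hypothesis
# (here: a line y = w*x) minimizing average squared loss on the sample.
# Dataset and hypothesis class are illustrative assumptions.

def empirical_risk(w, data):
    """Average squared loss of the predictor x -> w*x over the sample."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def erm_linear(data):
    """Closed-form minimizer of the empirical squared risk for y = w*x."""
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    return sxy / sxx

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # noisy samples of y ≈ 2x
w_hat = erm_linear(data)  # ≈ 2.04, close to the true slope
```

The closed form exists only because squared loss over a linear class is convex; in general, ERM is solved numerically (e.g., by gradient descent).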

The lecture ends with a discussion of model selection, the bias-variance tradeoff, and cross-validation.
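To make the cross-validation idea concrete, here is a small k-fold sketch that compares two candidate model classes by their averaged held-out error; the data, the two models, and the fold scheme are illustrative assumptions, not the lecture's notation.

```python
# k-fold cross-validation sketch: estimate each model's risk by averaging
# held-out squared error across folds, then select the lower estimate.
# Data, models, and fold scheme are illustrative assumptions.

def kfold_cv_risk(fit, data, k=3):
    """Average held-out squared error of the model fitter `fit` over k folds."""
    total, count = 0.0, 0
    for i in range(k):
        held = data[i::k]  # every k-th point serves as the validation fold
        train = [p for j, p in enumerate(data) if j % k != i]
        predict = fit(train)
        total += sum((predict(x) - y) ** 2 for x, y in held)
        count += len(held)
    return total / count

# Two candidate model classes: a constant predictor and a line through the origin.
def fit_constant(train):
    c = sum(y for _, y in train) / len(train)
    return lambda x: c

def fit_linear(train):
    w = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
    return lambda x: w * x

# Noisy samples of y ≈ 2x; the linear class should win under CV.
data = [(0.5 * i, 2.0 * 0.5 * i + (-1) ** i * 0.1) for i in range(1, 13)]
risks = {name: kfold_cv_risk(f, data, k=3)
         for name, f in [("constant", fit_constant), ("linear", fit_linear)]}
best = min(risks, key=risks.get)
```

Because the estimate is computed only on points the model did not see, it penalizes overfitting in a way the training error cannot, which is what makes it a usable model-selection criterion.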

### Lectures

- Videos:
- Older videos:

### Links

- Chapter 7 of *The Elements of Statistical Learning* by Hastie, Tibshirani, and Friedman (available online).

### Additional reading

- Pedro Domingos, *A Unified Bias-Variance Decomposition*