Regression
Lecturer: Fabian Wauthier
Date: Sept 10
[Lecture slides in PDF]
- Hastie, T., Tibshirani, R. & Friedman, J. (2009). The Elements of
Statistical Learning: Data Mining, Inference, and Prediction (Second
Edition), NY: Springer.
This contains a very accessible discussion of linear regression and its extensions. It details the Gauss-Markov theorem, which states that the least squares solution is the Best Linear Unbiased Estimator (BLUE) of the regression parameter (a minimal sketch of the least squares estimator appears after this list).
- Bishop, C. M. (2006). Pattern Recognition and Machine Learning, NY: Springer.
Section 3.1 gives another very readable discussion of linear basis function models. It also covers the LMS algorithm and touches on regularised least squares (see the second sketch after this list).
- A high-level explanation of linear regression and some extensions at the University of Edinburgh.
- This short note contains another proof of the Gauss-Markov theorem.
- Many linear models can be kernelized using the so-called "kernel trick". This allows us to implicitly work with a very large feature representation without having to represent it explicitly. This note shows how to do this for ridge regression (see the kernel ridge regression sketch after this list). The notes on the author's website are quite useful as quick references in general.
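To make the Gauss-Markov discussion concrete, here is a minimal NumPy sketch of the closed-form least squares estimator. It is not taken from any of the readings; the data, true parameter, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = X @ w_true + zero-mean noise (assumed setup).
n, d = 100, 3
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

# Least squares estimator: w_hat = (X^T X)^{-1} X^T y.
# Under the Gauss-Markov assumptions (uncorrelated, zero-mean,
# equal-variance errors), w_hat is the Best Linear Unbiased
# Estimator (BLUE) of the regression parameter.
w_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(w_hat)  # should be close to w_true
```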
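The material from Bishop's Section 3.1 can likewise be sketched in a few lines. The following invented example contrasts LMS updates with the closed-form regularised least squares solution; the learning rate and regularisation strength are assumed values, not ones from the book.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

# LMS: stochastic gradient descent on the squared error,
# updating the weights after each example.
w = np.zeros(d)
eta = 0.01  # learning rate (assumed value)
for epoch in range(20):
    for i in range(n):
        err = y[i] - X[i] @ w
        w += eta * err * X[i]

# Regularised least squares (ridge): penalise ||w||^2 with
# weight lam; closed form w = (X^T X + lam I)^{-1} X^T y.
lam = 0.1  # regularisation strength (assumed value)
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
print(w, w_ridge)  # both close to w_true
```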
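Finally, a sketch of kernel ridge regression in the spirit of the last note above. The RBF kernel choice and all parameter values are assumptions for illustration. The point of the kernel trick is visible in the code: only the n x n Gram matrix K is ever formed, and the (possibly infinite-dimensional) feature map stays implicit.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Kernel ridge regression in the dual: instead of explicit
# features phi(x), solve for alpha = (K + lam I)^{-1} y and
# predict with f(x*) = k(x*, X) @ alpha.
lam = 0.1  # regularisation strength (assumed value)
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(n), y)

X_test = np.linspace(-3, 3, 5)[:, None]
f_test = rbf_kernel(X_test, X) @ alpha
print(f_test)  # smoothed estimates of sin(x) at the test points
```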