Instructor: Ben Recht
Time: Tu/Th 3:30-5:00 PM
Location: 306 Soda Hall
GSIs: Jessica Dai, Brian Lee
This course will explore how patterns in data support predictions and consequential actions. Starting with the foundations of prediction, we look at the optimization theory used to automate decision-making. We then turn to supervised learning, covering representation, optimization, and generalization as its key constituents. We will discuss datasets as benchmarks, examining their histories and scientific bases. We will then cover the related principles of statistical evaluation, drawing a through line from confidence intervals to A/B testing to bandits to reinforcement learning. Throughout the course, we will draw connections to historical context, contemporary practice, and societal impact.
Required background: The prerequisites are previous coursework in linear algebra, multivariate calculus, and probability and statistics. Some degree of mathematical maturity is also required. Numerical programming will be required for this course, though I am told anyone can do this with AI now. Let's find out.
Text: Patterns, Predictions, and Actions: A Story About Machine Learning by Moritz Hardt and Benjamin Recht. Available online. You can purchase a physical copy at the bookstore. You can also order online at Amazon or from Princeton University Press.
Blog: Ben will host a Class Live Blog on argmin.net.
For enrolled students: Detailed information regarding assignments, assessments, and logistics can be found on bCourses.
Homework 1. (Due September 12)
08-28-25 - Introduction, logistics, randomness
Readings: PPA Chapter 1, Mathematical Appendix
Slides (I'm not sure if these are helpful without my narration.)
09-02-25 - Making and evaluating predictions
Readings: Course notes on the rudiments of prediction
09-04-25 - From populations to samples (and back?)
Readings: Recht, Benjamin. (2025) “The Actuary’s Final Word on Algorithmic Decision-Making.” arXiv:2509.04546.
Course notes on prediction from samples (without features).
Blog: Your noise is my signal
Blog 2: The Actuary's Final Word
09-09-25 - Decision Theory
Readings: PPA Chapter 2.
Blog: Justify your answer
09-11-25 - Errors, Operating Characteristics, and Tradeoffs.
Readings: PPA Chapter 2.
Blog: Stuck in the middle
09-16-25 - Competing metrics and fairness
Readings: PPA Chapter 2.
Hardt et al. Equality of Opportunity in Supervised Learning.
Kleinberg et al. Inherent Trade-Offs in the Fair Determination of Risk Scores.
09-23-25 - The Perceptron
Readings: PPA Chapter 3.
Blog: Common Descent.
09-25-25 - Representing data.
Readings: PPA Chapter 4.
Blog: Boxes of numbers.
09-30-25 - Nonlinear Lifts, Kernels, and Neural Nets.
Readings: PPA Chapter 4.
Blog: Universal Cascades.
10-02-25 - Stochastic Gradient Descent.
Readings: PPA Chapter 5.
Blog: Highly optimized optimizers.
10-07-25 - Stochastic Gradient Descent Analysis.
Readings: PPA Chapter 5.
Blog: Minimal Theory.
10-09-25 - Generalization and Train-test Splits.
10-14-25 - Data sets and competitive testing
Readings: PPA Chapter 8
10-16-25 - The robustness and fragility of training on the test set
Readings: PPA Chapter 8
Blog: Desirable Sins
10-21-25 - Uncertainty Quantification
10-23-25 - Prediction Intervals
Blog: Maybe You're Wrong
The infamous Vovk paper on conditional conformal prediction.
11-04-25 - Randomized experiments
Readings: PPA Chapter 9
11-06-25 - Observational studies
Readings: PPA Chapter 9
Blog: Staging Interventions
Relevant old blog: Fractions or Laws of Nature?
Additional Reading: Recht, Benjamin. (2025) “A Bureaucratic Theory of Statistics.” Observational Studies 11(1), 77-84.
Some slides on the bureaucratic nature of RCTs.
11-13-25 - Adaptive Experiments
Readings: PPA Chapter 12
The algorithms in this class are adapted from: Auer, P., Ortner, R. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Period Math Hung 61, 55–65 (2010).
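For orientation, the UCB idea behind that paper is to pull the arm whose empirical mean plus a confidence bonus is largest, where the bonus shrinks as an arm is sampled more often. Below is a minimal Python sketch of such an index policy using the classic UCB1-style bonus sqrt(2 log t / n); it is an illustration only, not the refined variant analyzed by Auer and Ortner or necessarily the one used in lecture, and the two Bernoulli arms in the usage stub are made up.

    import math
    import random

    def ucb_pulls(arms, horizon):
        """Run a UCB-style index policy for `horizon` rounds.

        `arms` is a list of zero-argument callables, each returning a
        reward in [0, 1] when pulled (a stand-in for running one arm
        of an adaptive experiment).
        """
        counts = [0] * len(arms)   # number of times each arm was pulled
        means = [0.0] * len(arms)  # empirical mean reward of each arm
        history = []

        for t in range(1, horizon + 1):
            if t <= len(arms):
                # Pull each arm once to initialize its estimate.
                i = t - 1
            else:
                # Index = empirical mean + confidence bonus; the bonus
                # shrinks as an arm accumulates pulls.
                i = max(range(len(arms)),
                        key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
            r = arms[i]()
            counts[i] += 1
            means[i] += (r - means[i]) / counts[i]  # running mean update
            history.append((i, r))
        return history

    # Hypothetical usage: two Bernoulli arms with different success rates.
    if __name__ == "__main__":
        arms = [lambda: float(random.random() < 0.4),
                lambda: float(random.random() < 0.6)]
        plays = ucb_pulls(arms, horizon=2000)
        print("arm pulls:", [sum(1 for a, _ in plays if a == k) for k in range(2)])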
11-18-25 - Algorithms for online learning
Readings: PPA Chapter 12
Blog: Actions from Predictions
11-20-25 - Generative models
Blog: Digitally Twinning
12-02-25 - Reinforcement learning
Readings: PPA Chapter 12
Blog 1: Reformist Reinforcement Learning
Blog 2: Random Search for Random Search
Blog 4: There’s got to be a better way!
Davis, D. and Recht, B. “What is the objective of reasoning with reinforcement learning?” arXiv:2510.13651
12-04-25 - Course Reflections