BA in Mathematics, Section C
Hilary Term 2006
Nick Gould
Previous notes
Last time this course ran, Raphael Hauser produced the following set of notes and slides. These are excellent background reading.

Chapter I: Unconstrained Optimization
- Lecture 1: introduction and preliminaries. Notes, slides.
- Lecture 2: the descent method and line searches. Notes, slides.
- Lecture 3: steepest descent and Newton methods. Notes, slides.
- Lecture 4: quasi-Newton methods. Notes, slides.
- Lecture 5: conjugate gradients and the Fletcher-Reeves method. Notes, slides.
- Lecture 6: trust region methods. Notes, slides.
- Lecture 7: the dogleg and Steihaug methods. Notes, slides.
Chapter II: Constrained Optimization
- Lecture 8: the fundamental theorem of linear inequalities. Notes, slides.
- Lecture 9: first order necessary optimality conditions (KKT). Notes, slides.
- Lecture 10: second order optimality conditions. Notes, slides.
- Lecture 11: the method of Lagrange multipliers, examples. Notes.
- Lecture 12: Lagrangian Duality and Convex Programming. Notes, slides.
- Lecture 13: the penalty function method. Notes, slides.
- Lecture 14: the augmented Lagrangian method. Notes, slides.
- Lecture 15: the barrier method for nonlinear programming. Notes, slides.
- Lecture 16: primal-dual path-following for linear programming. Notes, slides.
Last updated 3 January 2006 at 08:35