Applications of Optimization Methods
- Department of Mathematics
The aim of this course is to deepen students' knowledge of optimization methods and show their practical applications. A number of methods are first applied to support-vector machines; subsequently, methods for large-scale problems and for training deep artificial neural networks are explained. Finally, advanced methods for regret minimization and sparsity-inducing optimization are covered. All methods are demonstrated on real problems.
- Syllabus of lectures:
1. Introduction to advanced optimization methods.
2. Support-vector machines.
3. Artificial neural networks in optimization.
4. Hessian inverse approximation, BFGS method.
5. Stochastic gradient descent.
6. Convex optimizations for regret minimization.
7. Sparsity-inducing optimization methods.
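To illustrate one of the techniques listed above, stochastic gradient descent (lecture 5) can be sketched in a few lines: instead of the full gradient, each step uses the gradient of the loss at a single randomly sampled data point. The function names and the toy least-squares problem below are illustrative, not part of the course materials.

```python
import random

def sgd(grad, w0, data, lr=0.01, steps=100, seed=0):
    """Minimise sum_i f(w, x_i) by stepping along the gradient
    of a single randomly chosen data point at each iteration."""
    rng = random.Random(seed)
    w = w0
    for _ in range(steps):
        x = rng.choice(data)       # sample one data point
        w = w - lr * grad(w, x)    # stochastic gradient step
    return w

# Toy example: fit w in the model y ~ w * x by minimising (w*x - y)^2 / 2.
points = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]
grad = lambda w, p: (w * p[0] - p[1]) * p[0]   # d/dw of the per-point loss
w_hat = sgd(grad, w0=0.0, data=points, lr=0.05, steps=500)
```

With noiseless data the iterates contract toward the true slope 3, which is why SGD scales to the large problems and neural-network training discussed later in the course: each step touches only one sample.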
- Syllabus of tutorials:
- Study Objective:
- Study materials:
 M. J. Kochenderfer, T. A. Wheeler, Algorithms for Optimization, The MIT Press, 2019.
 S. Sra, S. Nowozin, S. J. Wright, Optimization for Machine Learning, The MIT Press, 2012.
 D. P. Bertsekas, Convex Optimization Algorithms, Athena Scientific, 2015.
 I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, The MIT Press, 2016.
 C. C. Aggarwal, Neural Networks and Deep Learning, Springer, 2018.
- Further information:
- No timetable has been prepared for this course
- The course is a part of the following study plans: