Non-smooth non-convex optimization for training deep neural networks
| Code | Completion | Credits | Range | Language |
|---|---|---|---|---|
| B4M36NNO | Z,ZK | 6 | 2P+2C | Czech |
- Course guarantor:
- Jakub Mareček
- Lecturer:
- Allen Robert Gehret, Jakub Mareček
- Tutor:
- Adam Bosák, Allen Robert Gehret, Andrii Kliachkin, Jakub Mareček
- Supervisor:
- Department of Computer Science
- Synopsis:
- Requirements:
- Syllabus of lectures:
- Syllabus of tutorials:
- Study Objective:
- Study materials:
- Allen Gehret et al., Deep Learning as the Disciplined Construction of Tame Objects, https://arxiv.org/abs/2509.18025
- Lou van den Dries, Tame Topology and O-minimal Structures, London Mathematical Society Lecture Note Series, vol. 248, Cambridge University Press, Cambridge, 1998.
- Damek Davis, Dmitriy Drusvyatskiy, Sham Kakade, and Jason D. Lee, Stochastic Subgradient Method Converges on Tame Functions, Foundations of Computational Mathematics 20 (2020), no. 1, 119–154.
- A. D. Ioffe, An Invitation to Tame Optimization, SIAM Journal on Optimization 19 (2008), no. 4, 1894–1917.
- Note:
- Further information:
- No time-table has been prepared for this course
- The course is a part of the following study plans: