Non-smooth non-convex optimization for training deep neural networks
| Code | Completion | Credits | Range | Language |
|---|---|---|---|---|
| BE4M36NNO | Z,ZK (credit + exam) | 6 | 2P+2C (2 h lectures + 2 h tutorials) | English |
- Course guarantor:
- Jakub Mareček
- Lecturer:
- Allen Robert Gehret, Jakub Mareček
- Tutor:
- Adam Bosák, Allen Robert Gehret, Andrii Kliachkin, Jakub Mareček
- Supervisor:
- Department of Computer Science
- Synopsis:
- Requirements:
- Syllabus of lectures:
- Syllabus of tutorials:
- Study Objective:
- Study materials:
Allen Gehret et al., Deep Learning as the Disciplined Construction of Tame Objects, https://arxiv.org/abs/2509.18025
Lou van den Dries, Tame topology and o-minimal structures, London Mathematical Society Lecture Note Series, vol. 248, Cambridge University Press, Cambridge, 1998.
Damek Davis, Dmitriy Drusvyatskiy, Sham Kakade, and Jason D. Lee, Stochastic subgradient method converges on tame functions, Foundations of Computational Mathematics 20 (2020), no. 1, 119–154.
A. D. Ioffe, An invitation to tame optimization, SIAM Journal on Optimization 19 (2008), no. 4, 1894–1917.
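As a taste of the course topic covered in the readings above (in particular Davis et al.'s convergence result for the stochastic subgradient method on tame functions), the following is a minimal illustrative sketch, not material from the course itself: a stochastic subgradient iteration with diminishing step sizes on a simple non-smooth, non-convex function. The objective, the noise model, and all parameter choices here are assumptions made for illustration.

```python
import random

def f(x):
    # Non-smooth, non-convex "double well": the minimum of two V-shapes,
    # with kinks at x = 1 and x = -1 (the two global minimizers) and at x = 0.
    return min(abs(x - 1.0), abs(x + 1.0))

def clarke_subgradient(x):
    # One element of the Clarke subdifferential: the derivative of the
    # active (smaller) branch; at kinks we simply pick 0.
    if abs(x - 1.0) <= abs(x + 1.0):
        return 1.0 if x > 1.0 else (-1.0 if x < 1.0 else 0.0)
    return 1.0 if x > -1.0 else (-1.0 if x < -1.0 else 0.0)

random.seed(0)
x = 3.7  # arbitrary starting point (an assumption for the demo)
for k in range(1, 2001):
    # Noisy subgradient oracle: exact subgradient plus zero-mean noise.
    g = clarke_subgradient(x) + 0.1 * random.gauss(0.0, 1.0)
    # Diminishing steps 1/k: the step sizes sum to infinity while their
    # squares are summable, the standard condition in such convergence results.
    x -= g / k

print(f"final x = {x:.3f}, f(x) = {f(x):.3f}")
```

With these step sizes the iterate settles near one of the critical points (here x = 1, the minimizer on the side of the starting point); which well is reached depends on initialization, reflecting the non-convexity.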
- Note:
- Time-table for winter semester 2025/2026:
- Time-table is not available yet
- Time-table for summer semester 2025/2026:
- The course is a part of the following study plans: