CZECH TECHNICAL UNIVERSITY IN PRAGUE
STUDY PLANS
2025/2026

Deep Learning Essentials

The course is not on the list.
Without time-table.
Code:        BECM33DPL
Completion:  Z,ZK
Credits:     6
Range:       2P+2C
Language:    English
Relations:
During a review of study plans, the course B0B33DPL can be substituted for the course BECM33DPL.
It is not possible to register for the course BECM33DPL if the student is concurrently registered for or has previously completed the course B0B33DPL (mutually exclusive courses).
Course guarantor:
Karel Zimmermann
Lecturer:
Karel Zimmermann
Tutor:
Karel Zimmermann
Supervisor:
Department of Cybernetics
Synopsis:

The course teaches deep learning methods on well-known robotic problems, such as semantic segmentation or reactive motion control. The overall goal is timeless, universal knowledge rather than a list of all known deep learning architectures. Students are assumed to have working prior knowledge of mathematics (gradient, Jacobian, Hessian, gradient descent, Taylor polynomial) and machine learning (Bayes risk minimization, linear classifiers). The labs are divided into two parts: in the first, students solve elementary deep learning tasks from scratch (including reimplementing autograd backpropagation); in the second, they build on existing templates to solve complex tasks involving RL, vision transformers, and generative networks.
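
The first lab block asks students to reimplement autograd backpropagation from scratch. As a taste of what such an exercise involves, here is a minimal, hypothetical scalar autograd sketch (all names are illustrative, not the course's actual template): each operation records how to push gradients to its parents, and backward() replays the graph in reverse topological order.

class Value:
    """A scalar node in a computational graph with reverse-mode autodiff."""

    def __init__(self, data, parents=(), backward_fn=lambda: None):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = backward_fn

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward_fn():
            self.grad += out.grad        # d(a+b)/da = 1
            other.grad += out.grad       # d(a+b)/db = 1
        out._backward = backward_fn
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward_fn():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = backward_fn
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(node):
            if node not in seen:
                seen.add(node)
                for parent in node._parents:
                    visit(parent)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            node._backward()

x = Value(3.0)
y = x * x + x          # y = x^2 + x, so dy/dx = 2x + 1
y.backward()
print(y.data, x.grad)  # 12.0 7.0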

Requirements:
Syllabus of lectures:

1. Machine learning 101: model, loss, learning, issues, regression, classification

2. Under the hood of a linear classifier: two-class and multiclass linear classifier on RGB images

3. Under the hood of auto-differentiation: computational graph of a fully connected NN; the vector-Jacobian product (VJP) versus the chain rule as explicit multiplication of Jacobians (sketched numerically after this syllabus)

4. The story of the cat's brain surgery: cortex + convolutional layer and its Vector-Jacobian-Product (VJP)

5. The loss: MAP and ML estimates, KL divergence, and losses.

6. Why is learning prone to fail? Structural issues: layers and their pitfalls, batch norm, dropout

7. Why is learning prone to fail? Optimization issues: optimization vs. learning, KL divergence, SGD, momentum, convergence rate, Adagrad, RMSProp, Adam, vanishing/exploding gradients, oscillation, double descent (update rules sketched after this syllabus)

8. What can('t) we do with a deep net? Classification, segmentation, detection, regression

9. Reinforcement learning: Approximated Q-learning, DQN, DDPG, Derivation of the policy gradient, Reward shaping, Inverse RL, Applications

10. Memory and attention: recurrent nets, image transformers with an attention module (sketched after this syllabus)

11. Generative models: GANs and diffusion models

12. Implicit layers: backpropagation through unconstrained and constrained optimization problems (ODE solvers, root finders, fixed points) + existing end-to-end differentiable modules
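
Lecture 3's contrast between VJPs and explicit multiplication of Jacobians can be seen numerically. The following sketch (illustrative shapes and names, not course code) composes two linear layers, whose Jacobians are simply their weight matrices, and checks that sweeping the upstream gradient backwards one layer at a time matches multiplying the full Jacobians first:

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(512, 256))   # layer 1: h = W1 x
W2 = rng.normal(size=(10, 512))    # layer 2: y = W2 h
J1, J2 = W1, W2                    # Jacobians of linear layers

v = rng.normal(size=10)            # upstream gradient dL/dy

# Chain rule as explicit Jacobian multiplication: forms the full
# 10x256 composed Jacobian J2 @ J1 before ever touching v.
grad_naive = v @ (J2 @ J1)

# Reverse-mode VJP: two matrix-vector products, one per layer,
# never materializing the composed Jacobian.
grad_vjp = (v @ J2) @ J1

assert np.allclose(grad_naive, grad_vjp)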
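
The optimizers of lecture 7 differ only in their per-step update rule. A minimal side-by-side sketch, with illustrative hyperparameter defaults rather than the course's choices:

import numpy as np

def sgd(w, g, lr=0.1):
    return w - lr * g

def sgd_momentum(w, g, m, lr=0.1, beta=0.9):
    m = beta * m + g                    # decaying running sum of gradients
    return w - lr * m, m

def adam(w, g, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g           # first moment (momentum)
    v = b2 * v + (1 - b2) * g**2        # second moment (per-parameter scale)
    m_hat = m / (1 - b1**t)             # bias correction for zero init
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy quadratic loss L(w) = ||w||^2 / 2, whose gradient is simply w.
w, m, v = np.ones(3), np.zeros(3), np.zeros(3)
for t in range(1, 101):
    w, m, v = adam(w, w, m, v, t)
print(w)  # oscillates toward the minimizer at 0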
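
Lecture 10's attention module reduces to a few lines of linear algebra. A minimal sketch of scaled dot-product attention (shapes and names are assumptions, not the course's code):

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each query attends to all keys; output is a weighted sum of values."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # query-key similarities
    weights = softmax(scores, axis=-1)        # each row sums to one
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))      # 5 query tokens, dimension 8
K = rng.normal(size=(7, 8))      # 7 key tokens
V = rng.normal(size=(7, 16))     # one 16-dim value per key
print(attention(Q, K, V).shape)  # (5, 16)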

Syllabus of tutorials:
Study Objective:

The course teaches deep learning methods on well-known robotic problems, such as semantic segmentation or reactive motion control. The overall goal is timeless, universal knowledge rather than a list of all known deep learning architectures. Students are assumed to have working prior knowledge of mathematics (gradient, Jacobian, Hessian, gradient descent, Taylor polynomial) and machine learning (Bayes risk minimization, linear classifiers). The labs are divided into two parts: in the first, students solve elementary deep learning tasks from scratch (including reimplementing autograd backpropagation); in the second, they build on existing templates to solve complex tasks involving RL, vision transformers, and generative networks.

Study materials:

Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, 2016, http://www.deeplearningbook.org

François Fleuret, The Little Book of Deep Learning, Lulu.com, 2023.

Note:
Further information:
No time-table has been prepared for this course
The course is a part of the following study plans:
Data valid as of 2025-04-07
For updated information see http://bilakniha.cvut.cz/en/predmet8247706.html