CZECH TECHNICAL UNIVERSITY IN PRAGUE
STUDY PLANS
2025/2026

Deep Learning Essentials

The course is not on the list. Without time-table.
Code | Completion | Credits | Range | Language
BAB33DPL | KZ | 4 | 2P+2C | Czech
Relations:
It is not possible to register for the course BAB33DPL if the student is concurrently registered for or has already completed the course B0B33DPL (mutually exclusive courses).
It is not possible to register for the course BAB33DPL if the student is concurrently registered for or has already completed the course BECM33DPL (mutually exclusive courses).
The requirement for course BAB33DPL can be fulfilled by substitution with the course BECM33DPL.
The requirement for course BAB33DPL can be fulfilled by substitution with the course B0B33DPL.
Course guarantor:
Karel Zimmermann
Lecturer:
Karel Zimmermann
Tutor:
Karel Zimmermann
Supervisor:
Department of Cybernetics
Synopsis:

The course teaches deep learning methods on well-known robotic problems, such as semantic segmentation or reactive motion control. The overall goal is timeless, universal knowledge rather than a listing of all known deep learning architectures. Students are assumed to have working prior knowledge of mathematics (gradient, Jacobian, Hessian, gradient descent, Taylor polynomial) and machine learning (Bayes risk minimization, linear classifier). The labs are divided into two parts: in the first, students solve elementary deep ML tasks from scratch (including a reimplementation of autograd backpropagation); in the second, students build on existing templates to solve complex tasks including RL, vision transformers, and generative networks.
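The first part of the labs, reimplementing autograd backpropagation, can be sketched minimally as follows. This is an illustrative scalar-valued example, not the course's actual lab template; all class and variable names are my own.

```python
# Minimal scalar autograd sketch (illustrative; not the course template).
class Value:
    def __init__(self, data, parents=()):
        self.data = data          # scalar value
        self.grad = 0.0           # accumulated gradient dL/dself
        self.parents = parents    # nodes this value depends on
        self.grad_fn = None       # propagates self.grad to parents

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def grad_fn():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1
        out.grad_fn = grad_fn
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def grad_fn():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out.grad_fn = grad_fn
        return out

    def backward(self):
        # Topologically sort the computational graph, then sweep in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v.parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            if v.grad_fn:
                v.grad_fn()

x = Value(3.0)
y = Value(2.0)
z = x * y + x   # z = x*y + x, so dz/dx = y + 1 = 3, dz/dy = x = 3
z.backward()
```

Each operation records how to push the output gradient back to its inputs, which is exactly the vector-Jacobian-product view covered in the lectures, specialized to scalars.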

Requirements:
Syllabus of lectures:

1. Machine learning 101: model, loss, learning, issues, regression, classification
2. Under the hood of a linear classifier: two-class and multiclass linear classifiers on RGB images
3. Under the hood of auto-differentiation: computational graph of a fully connected NN; vector-Jacobian product (VJP) vs. the chain rule and multiplication of Jacobians
4. The story of the cat's brain surgery: cortex + the convolutional layer and its vector-Jacobian product (VJP)
5. The loss: MAP and ML estimates, KL divergence and losses
6. Why is learning prone to fail? Structural issues: layers + issues, batch norm, dropout
7. Why is learning prone to fail? Optimization issues: optimization vs. learning, KL divergence, SGD, momentum, convergence rate, Adagrad, RMSProp, Adam, diminishing/exploding gradients, oscillation, double descent
8. What can('t) we do with a deep net? Classification, segmentation, detection, regression
9. Reinforcement learning: approximated Q-learning, DQN, DDPG, derivation of the policy gradient, reward shaping, inverse RL, applications
10. Memory and attention: recurrent nets, image transformers with an attention module
11. Generative models: GANs and diffusion models
12. Implicit layers: backpropagation through unconstrained and constrained optimization problems, ODE solvers, root finders, and fixed points + existing end-to-end differentiable modules
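The optimization topics of lecture 7 (SGD, momentum) can be illustrated on a one-dimensional quadratic loss. This is a minimal sketch under my own naming and hyperparameters, not course material:

```python
# SGD with (heavy-ball) momentum minimizing f(w) = (w - 5)^2.
# The gradient of f is 2 * (w - 5), so the minimum is at w = 5.
def sgd_momentum(w0, lr=0.1, beta=0.9, steps=300):
    w, v = w0, 0.0
    for _ in range(steps):
        g = 2.0 * (w - 5.0)   # gradient of the loss at the current w
        v = beta * v + g      # velocity: exponentially weighted past gradients
        w = w - lr * v        # parameter update along the velocity
    return w

w_star = sgd_momentum(0.0)    # converges close to the minimizer w = 5
```

The velocity term damps oscillation across steep directions and accelerates progress along consistent ones, which is why momentum typically improves the convergence rate over plain SGD on ill-conditioned losses.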

Syllabus of tutorials:
Study Objective:
Study materials:

I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, 2016.

Note:
Further information:
No time-table has been prepared for this course
The course is a part of the following study plans:
Data valid to 2026-05-14
For updated information see http://bilakniha.cvut.cz/en/predmet8700806.html