CZECH TECHNICAL UNIVERSITY IN PRAGUE
STUDY PLANS
2025/2026

Neural Networks 1

Code: 18YNES1 | Completion: KZ | Credits: 5 | Range: 2P+2C | Language: English
Course guarantor:
Lecturer:
Zuzana Petříčková
Tutor:
Zuzana Petříčková
Supervisor:
Department of Software Engineering
Synopsis:

The aim of the course "Neural Networks 1" is to acquaint students with basic models of artificial neural networks, algorithms for their learning, and other related machine learning techniques. The goal is to teach students how to apply these models and methods to solve practical tasks.

Requirements:

1) Completion of a project and its in-person presentation during the practical session

2) Oral examination on the covered theory

3) Regular active participation in practical sessions and homework assignments is voluntary but also contributes to the final grade. In justified cases (e.g., timetable conflicts), independent preparation and individual consultations are possible, subject to prior agreement with the instructor.

Mandatory Deadlines

1) The student shall select a project topic and obtain the instructor's approval no later than 30 April 2026.

2) The student shall submit and present the project in front of other students during an exercise session on a date arranged in advance, but no later than the week commencing 18 May 2026.

3) Should the project not be completed on time, the student shall present its current state during the exercise session. The final version must subsequently be submitted and defended during an individual consultation no later than 11 September 2026.

4) The dates for the oral examination will be published in the KOS system.

Details: http://zuzka.petricek.net/vyuka_2025/YNES1_2026/credits.php

Syllabus of lectures:

1. Introduction to Artificial Neural Networks. History, biological motivation, learning and machine learning. Types of tasks; solving a machine learning task.

2. Perceptron. Mathematical model of a neuron and its geometric interpretation. Early models of neural networks: perceptrons with step activation function. Representation of logical functions using perceptrons and perceptron networks. Examples.

3. Perceptron. Learning algorithms (Hebb, Rosenblatt,...). Linear separability. Linear classification. Overview of basic transfer functions for neurons.
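As a small illustration of the Rosenblatt learning rule mentioned above, the following sketch trains a single step-activation perceptron on the linearly separable AND function. The function names, learning rate, and data are illustrative choices, not part of the course materials.

```python
# Minimal sketch of Rosenblatt's perceptron learning rule on the
# linearly separable AND function.

def step(z):
    # Step activation: fires 1 when the weighted sum reaches the threshold.
    return 1 if z >= 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    # samples: list of ((x1, x2), target) pairs; bias folded in as w[0].
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), t in samples:
            x = (1.0, x1, x2)          # prepend constant bias input
            y = step(sum(wi * xi for wi, xi in zip(w, x)))
            for i in range(3):         # w <- w + lr * (t - y) * x
                w[i] += lr * (t - y) * x[i]
    return w

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(and_data)
predictions = [step(w[0] + w[1] * x1 + w[2] * x2) for (x1, x2), _ in and_data]
print(predictions)  # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the rule reaches a separating hyperplane in finitely many updates; for XOR, which is not linearly separable, the same loop would never settle.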

4. Linear neuron and the task of linear regression. Learning algorithms (least squares method, pseudoinverse, gradient method, regularization). Linear neural network, linear regression, logistic regression.
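The gradient method for a linear neuron can be sketched in a few lines: batch gradient descent on the mean squared error of a one-dimensional fit. The dataset and hyperparameters here are toy values chosen for the example.

```python
# Gradient-descent fit of a single linear neuron y = w*x + b,
# minimising the mean squared error on a tiny exact dataset.

def fit_linear(xs, ys, lr=0.05, epochs=500):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of (1/n) * sum (w*x + b - y)^2 with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]      # data lie exactly on y = 2x + 1
w, b = fit_linear(xs, ys)
print(round(w, 2), round(b, 2))  # -> 2.0 1.0
```

For this small problem the least-squares solution could equally be written in closed form (normal equations or the pseudoinverse); the iterative version is shown because it is the one that generalises to the nonlinear networks later in the syllabus.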

5. Single-layer neural network. Model description, transfer and error functions, gradient learning method and its variants. Associative memories, recurrent associative memories. Types of tasks, training data.

6. Feedforward neural network. Backpropagation algorithm, derivation, variants, practical applications.
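A common way to verify a backpropagation derivation is to compare the analytic gradient with a finite-difference estimate. The sketch below does this for the output-layer weights of an assumed tiny 2-2-1 sigmoid network with squared error; the architecture and numbers are illustrative only.

```python
import math

# Gradient check for backpropagation on a 2-2-1 sigmoid network
# with squared-error loss (output-layer weights only, for brevity).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, b1, W2, b2):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def loss(x, t, W1, b1, W2, b2):
    _, y = forward(x, W1, b1, W2, b2)
    return 0.5 * (y - t) ** 2

def backprop_output_grad(x, t, W1, b1, W2, b2):
    # Chain rule: dL/dW2[j] = (y - t) * y * (1 - y) * h[j].
    h, y = forward(x, W1, b1, W2, b2)
    delta_out = (y - t) * y * (1 - y)
    return [delta_out * h[0], delta_out * h[1]]

W1 = [[0.3, -0.2], [0.5, 0.1]]
b1 = [0.1, -0.1]
W2 = [0.4, -0.6]
b2 = 0.2
x, t = [1.0, 0.5], 1.0

analytic = backprop_output_grad(x, t, W1, b1, W2, b2)

# Central finite difference on W2[0] should agree closely.
eps = 1e-6
numeric = (loss(x, t, W1, b1, [W2[0] + eps, W2[1]], b2)
           - loss(x, t, W1, b1, [W2[0] - eps, W2[1]], b2)) / (2 * eps)
print(abs(analytic[0] - numeric) < 1e-8)  # -> True
```

The same check extended to every weight is the standard sanity test before trusting a hand-written backpropagation implementation.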

7. Analysis of layered neural network model (learning rate and approximation capability, ability to generalize). Techniques that accelerate learning and techniques that enhance the model's generalization ability. Shallow vs. deep layered neural networks.

8. Clustering and self-organizing artificial neural networks. K-means algorithm, hierarchical clustering.
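The k-means algorithm alternates an assignment step and an update step; a minimal one-dimensional sketch (with hand-picked data and initial centres, purely for illustration) looks like this:

```python
# Toy k-means on one-dimensional data with two well-separated groups
# and fixed initial centres, so the run is fully deterministic.

def kmeans(points, centres, iters=10):
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre.
        clusters = [[] for _ in centres]
        for p in points:
            i = min(range(len(centres)), key=lambda c: abs(p - centres[c]))
            clusters[i].append(p)
        # Update step: each centre moves to the mean of its cluster.
        centres = [sum(c) / len(c) for c in clusters]
    return centres

data = [1.0, 1.5, 0.5, 9.0, 9.5, 8.5]
centres = kmeans(data, centres=[0.0, 10.0])
print(centres)  # -> [1.0, 9.0]
```

In practice the result depends on the initial centres, which is why library implementations restart from several random initialisations and keep the best clustering.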

9. Competitive models, Kohonen maps, learning algorithms. Hybrid models (LVQ, Counter-propagation, RBF model, modular neural networks).

10-11. Convolutional neural networks. Convolution operations. Architecture. Typical tasks. Transfer learning.
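The convolution operation a convolutional layer applies can be sketched in one dimension: a kernel slides over the input and a dot product is taken at each valid position. (As in most deep-learning libraries, the code below actually computes cross-correlation, i.e. convolution without flipping the kernel; the data and kernel are illustrative.)

```python
# 1-D "valid" convolution as used in convolutional layers: the kernel
# slides over the signal and a dot product is taken at each position.

def conv1d_valid(signal, kernel):
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A [-1, 1] kernel acts as a difference (edge) detector: it responds
# with +1 at the rising edge and -1 at the falling edge.
print(conv1d_valid([0, 0, 1, 1, 1, 0], [-1, 1]))  # -> [0, 1, 0, 0, -1]
```

The two-dimensional case used for images is the same idea with the kernel sliding over both axes, and a learned layer simply treats the kernel entries as trainable weights shared across all positions.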

12. Vanilla recurrent neural networks. Processing sequential data.

13. Probabilistic models (Hopfield network, simulated annealing, Boltzmann machine).
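Hopfield-network recall can be demonstrated in a few lines: a bipolar pattern is stored with the Hebb rule and then recovered from a corrupted probe by repeated sign updates. The pattern and network size below are arbitrary illustrative choices.

```python
# Hopfield network: store one bipolar pattern by the Hebb rule and
# recall it from a probe with one flipped bit.

def store(pattern):
    n = len(pattern)
    # Hebbian weights w_ij = x_i * x_j, with zero self-connections.
    return [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
            for i in range(n)]

def recall(W, probe, steps=5):
    s = list(probe)
    for _ in range(steps):
        # Synchronous update: each unit takes the sign of its local field.
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

stored = [1, -1, 1, -1, 1, -1]
W = store(stored)
noisy = [1, -1, 1, 1, 1, -1]       # one flipped bit
print(recall(W, noisy))  # -> [1, -1, 1, -1, 1, -1]
```

The stored pattern is a fixed point of the update, so the corrupted probe falls back into it; simulated annealing and the Boltzmann machine replace this deterministic update with a stochastic one to escape spurious minima.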

Syllabus of tutorials:

The syllabus corresponds to the structure of the lectures.

Study Objective:

Students shall learn various fundamental models of artificial neural networks, algorithms for their learning, and other related machine learning methods (perceptrons, linear models, support vector machines, feed-forward neural networks, clustering, self-organizing neural networks, associative networks, basics of deep learning).

They will learn how to implement and apply the discussed models and methods to solve practical tasks.

Prior requirements: Basic knowledge of algebra, calculus and programming techniques.

Study materials:

Recommended literature:

[1] F. Chollet, M. Watson: Deep Learning with Python, Second Edition, 2021 (Third Edition - 2025).

[2] M. Nielsen: Neural Networks and Deep Learning, 2019.

[3] R. Rojas: Neural Networks: A Systematic Introduction, Springer-Verlag, Berlin, 1996

[4] S. Haykin: Neural Networks, Macmillan, New York, 1994.

[5] L.V. Fausett: Fundamentals of Neural Networks: Architectures, Algorithms and Applications, Prentice Hall, New Jersey, 1994.

[6] I. Goodfellow, Y. Bengio, A. Courville: Deep Learning, MIT Press, 2016.

Note:
Further information:
http://zuzka.petricek.net/
Time-table for winter semester 2025/2026:
Time-table is not available yet
Time-table for summer semester 2025/2026:
Time-table is not available yet
The course is a part of the following study plans:
Data valid to 2026-02-17
For updated information see http://bilakniha.cvut.cz/en/predmet8301006.html