Neural Networks 1
| Code | Completion | Credits | Range | Language |
|---|---|---|---|---|
| 18NES1 | KZ | 5 | 2P+2C | Czech |
- Course guarantor:
- Lecturer: Zuzana Petříčková
- Tutor: Zuzana Petříčková
- Supervisor: Department of Software Engineering
- Synopsis:
The aim of the course "Neural Networks 1" is to acquaint students with the basic models of artificial neural networks, the algorithms used to train them, and related machine learning techniques. Students will also learn how to apply these models and methods to solve practical tasks.
- Requirements:
Basic knowledge of algebra, calculus, and programming techniques.
- Syllabus of lectures:
1. Introduction to Artificial Neural Networks. History, biological motivation, learning and machine learning. Types of machine learning tasks and the process of solving one.
2. Perceptron. Mathematical model of a neuron and its geometric interpretation. Early models of neural networks: perceptrons with step activation function. Representation of logical functions using perceptrons and perceptron networks. Examples.
3. Perceptron. Learning algorithms (Hebb, Rosenblatt, ...). Linear separability. Linear classification. Overview of basic transfer functions for neurons. (A minimal sketch of the Rosenblatt learning rule follows this syllabus.)
4. Linear neuron and the task of linear regression. Learning algorithms (least squares method, pseudoinverse, gradient method, regularization). Linear neural network, linear regression, logistic regression. (A gradient-descent sketch for the linear neuron follows this syllabus.)
5. Single-layer neural network. Model description, transfer and error functions, gradient learning method and its variants. Associative memories, recurrent associative memories. Types of tasks, training data.
6. Feedforward neural network. Backpropagation algorithm: derivation, variants, practical applications. (A compact backpropagation sketch follows this syllabus.)
7. Analysis of the layered neural network model (learning rate and approximation capability, ability to generalize). Techniques that accelerate learning and techniques that enhance the model's generalization ability. Shallow vs. deep layered neural networks.
8. Clustering and self-organizing artificial neural networks. K-means algorithm, hierarchical clustering. (A k-means sketch follows this syllabus.)
9. Competitive models, Kohonen maps, learning algorithms. Hybrid models (LVQ, Counter-propagation, RBF model, modular neural networks).
10-11. Convolutional neural networks. Convolution operations. Architecture. Typical tasks. Transfer learning.
12. Vanilla recurrent neural networks. Processing sequential data.
13. Probabilistic models (Hopfield network, simulated annealing, Boltzmann machine).
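As an illustration of the material in lecture 3, here is a minimal sketch of the Rosenblatt perceptron learning rule with a step activation function. It assumes NumPy; the function name, parameters, and the toy AND dataset are illustrative choices, not part of the course materials.

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=100):
    """Learn weights w and bias b so that sign(x @ w + b) matches labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            pred = 1.0 if xi @ w + b >= 0 else -1.0   # step activation function
            if pred != yi:
                # Rosenblatt update: move the separating hyperplane toward the misclassified point.
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:   # all training points classified correctly -> converged
            break
    return w, b

# Toy example: the logical AND function, which is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1], dtype=float)
w, b = train_perceptron(X, y)
print("weights:", w, "bias:", b)
```

On linearly separable data such as AND, the rule converges to a separating hyperplane; XOR, by contrast, is not linearly separable and requires the multi-layer networks of lecture 6.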
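Lecture 4 covers several ways to train a linear neuron for regression. Below is a brief sketch contrasting batch gradient descent on the mean squared error (with optional L2 regularization) against the closed-form pseudoinverse solution. It assumes NumPy; all names, defaults, and the toy data are illustrative only.

```python
import numpy as np

def train_linear_neuron(X, y, lr=0.01, epochs=2000, l2=0.0):
    """Fit y ~ X @ w + b by batch gradient descent on the mean squared error."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        err = X @ w + b - y
        grad_w = X.T @ err / n + l2 * w   # gradient of 0.5*mean(err^2) + 0.5*l2*||w||^2
        grad_b = err.mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def least_squares(X, y):
    """Closed-form solution via the Moore-Penrose pseudoinverse, for comparison."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append a constant bias column
    wb = np.linalg.pinv(Xb) @ y
    return wb[:-1], wb[-1]

# Toy data generated from y = 2*x0 - x1 + 3 with a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 2 * X[:, 0] - X[:, 1] + 3 + 0.01 * rng.normal(size=200)
print(train_linear_neuron(X, y))
print(least_squares(X, y))
```

Both routines should recover weights close to (2, -1) and a bias close to 3; with l2 > 0, the gradient method trades a small bias in the solution for better behaviour on noisy or correlated inputs.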
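Lecture 6's backpropagation algorithm is sketched below for the smallest interesting case: one sigmoid hidden layer and a sigmoid output unit trained on squared error, applied to XOR. It assumes NumPy; the architecture, learning rate, and epoch count are illustrative guesses, not recommended settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=4, lr=0.5, epochs=5000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: propagate the error signal from the output layer back.
        delta_out = (out - y) * out * (1 - out)      # squared-error derivative times sigmoid'
        delta_h = (delta_out @ W2.T) * h * (1 - h)
        # Gradient-descent updates, averaged over the batch.
        W2 -= lr * h.T @ delta_out / len(X)
        b2 -= lr * delta_out.mean(axis=0)
        W1 -= lr * X.T @ delta_h / len(X)
        b1 -= lr * delta_h.mean(axis=0)
    return W1, b1, W2, b2

# XOR: the classic task a single perceptron cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
W1, b1, W2, b2 = train_mlp(X, y)
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))
```

The same forward/backward pattern generalizes to deeper networks and to the acceleration and regularization techniques analyzed in lecture 7.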
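For the clustering part of lecture 8, a short sketch of the k-means algorithm is given below. It assumes NumPy; the initialization strategy (sampling k data points) and the defaults are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Alternate between assigning points to the nearest centroid and recomputing centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]   # pick k distinct points
    for _ in range(iters):
        # Assignment step: distance from every point to every centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its assigned points.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):   # no centroid moved -> converged
            break
        centroids = new_centroids
    return centroids, labels

# Two well-separated Gaussian blobs as toy data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centroids, labels = kmeans(X, k=2)
print(centroids)
```

Kohonen's self-organizing maps and the hybrid models of lecture 9 refine this competitive idea by arranging the units in a topology and updating the neighbours of the winning unit as well.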
- Syllabus of tutorials:
The syllabus corresponds to the structure of the lectures.
- Study Objective:
Students will learn the fundamental models of artificial neural networks, algorithms for training them, and other related machine learning methods (perceptrons, linear models, support vector machines, feed-forward neural networks, clustering, self-organizing neural networks, associative networks, basics of deep learning).
They will learn how to implement and apply the discussed models and methods to solve practical tasks.
- Study materials:
Recommended literature:
[1] R. Rojas: Neural Networks: A Systematic Introduction, Springer-Verlag, Berlin, 1996.
[2] S. Haykin: Neural Networks, Macmillan, New York, 1994.
[3] L.V. Fausett: Fundamentals of Neural Networks: Architectures, Algorithms and Applications, Prentice Hall, New Jersey, 1994.
[4] I. Goodfellow, Y. Bengio, A. Courville: Deep Learning, MIT Press, 2016.
[5] E. Volná: Neuronové sítě 1, Ostrava, 2008.
- Note:
- Time-table for winter semester 2024/2025:
- Time-table is not available yet
- Time-table for summer semester 2024/2025:
- Time-table is not available yet
- The course is a part of the following study plans:
- Aplikace informatiky v přírodních vědách (elective course)
- Aplikované matematicko-stochastické metody (elective course)
- Jaderná a částicová fyzika (elective course)
- Matematické inženýrství - Matematická informatika (elective course)
- Nuclear and Particle Physics (elective course)