Symbolic Machine Learning
| Code | Completion | Credits | Range | Language |
|---|---|---|---|---|
| B4M36SMU | Z,ZK | 6 | 2P+2C | Czech |
- Relations:
- It is not possible to register for the course B4M36SMU if the student is concurrently registered for or has already completed the course BE4M36SMU (mutually exclusive courses).
- The requirement for course B4M36SMU can be fulfilled by substitution with the course BE4M36SMU.
- Course guarantor:
- Lecturer:
- Tutor:
- Supervisor:
- Department of Computer Science
- Synopsis:
-
This course consists of four parts. The first part explains methods by which an intelligent agent can learn from interaction with its environment, known as reinforcement learning, including deep reinforcement learning. The second part focuses on Bayesian networks, specifically on methods for inference. The third part covers fundamental topics in natural language processing, starting from the basics and ending with state-of-the-art architectures such as the transformer. Finally, the last part provides an introduction to several topics from computational learning theory, including the online and batch learning settings.
- Requirements:
-
Students can earn a maximum of 100 points, which is the sum of the project score and the exam score.
A minimum of 25 (out of 50) exam points is required to pass the exam.
A minimum of 25 (out of 50) project points is required to obtain the assessment.
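The scoring rule can be summarised in a few lines of Python; this is only an illustrative sketch (the function name and the example values are assumptions, not part of the official requirements):

```python
# A minimal, unofficial sketch of the grading rule above; the function name
# and the example values are illustrative, not official course code.
def course_result(project_points: float, exam_points: float) -> tuple[float, bool]:
    """Both parts are scored out of 50; at least 25 points are required in each."""
    total = project_points + exam_points                  # at most 100 points in total
    passed = (project_points >= 25) and (exam_points >= 25)
    return total, passed

# Example: 30 project points and 27 exam points give 57 points and a pass.
print(course_result(30, 27))  # (57, True)
```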
- Syllabus of lectures:
-
1. Reinforcement Learning - Markov decision processes
2. Reinforcement Learning - Model-free policy evaluation
3. Reinforcement Learning - Model-free control
4. Reinforcement Learning - Deep reinforcement learning
5. Bayesian Networks - Intro
6. Bayesian Networks - Variable elimination, importance sampling
7. Natural Language Processing 1
8. Natural Language Processing 2
9. Natural Language Processing 3
10. Natural Language Processing 4
11. Computational Learning Theory 1
12. Computational Learning Theory 2
13. Computational Learning Theory 3
14. Course Wrap Up
- Syllabus of tutorials:
-
1. Reinforcement Learning - Markov decision processes
2. Reinforcement Learning - Model-free policy evaluation
3. Reinforcement Learning - Model-free control
4. Reinforcement Learning - Deep reinforcement learning
5. Bayesian Networks - Intro
6. Bayesian Networks - Variable elimination, importance sampling
7. Natural Language Processing 1
8. Natural Language Processing 2
9. Natural Language Processing 3
10. Natural Language Processing 4
11. Computational Learning Theory 1
12. Computational Learning Theory 2
13. Computational Learning Theory 3
14. Course Wrap Up
- Study Objective:
- Study materials:
-
R. S. Sutton, A. G. Barto: Reinforcement Learning: An Introduction. MIT Press, 2018.
D. Jurafsky, J. H. Martin: Speech and Language Processing, 3rd edition draft.
M. J. Kearns, U. Vazirani: An Introduction to Computational Learning Theory. MIT Press, 1994.
- Note:
- Further information:
- https://cw.fel.cvut.cz/wiki/courses/smu/start
- No time-table has been prepared for this course
- The course is a part of the following study plans:
-
- Medical electronics and bioinformatics (compulsory elective course)
- Open Informatics - Artificial Intelligence (compulsory course of the specialization)
- Open Informatics - Bioinformatics (compulsory course of the specialization)
- Open Informatics - Data Science (compulsory course of the specialization)