CZECH TECHNICAL UNIVERSITY IN PRAGUE
STUDY PLANS
2024/2025

# Dynamic Decision Making 1

The course is not on the list. No time-table.
Code: 01DYNR1
Completion: Z,ZK
Credits: 3
Range: 2P+1C
Language: Czech
Course guarantor:
Lecturer:
Tutor:
Supervisor:
Department of Mathematics
Synopsis:

Design, control, and analysis of intelligent agents (or systems) that behave appropriately in varied circumstances are in high demand across many fields (artificial intelligence and machine learning, data mining, financial modelling, natural language processing, bioinformatics, web search and information retrieval, algorithm design, system design, network analysis, and more). Such intelligent agents need to reason with uncertain information and limited computational resources. Effective decision making requires knowledge of:

- the agent's environment and its dynamics (including the presence of other intelligent agents),

- the agent's goals and preferences,

- the agent's abilities to observe and influence the environment.

This course introduces dynamic decision making under uncertainty and computational methods supporting decision-making. The course helps to develop the mathematical reasoning skills crucial for areas inherently involving uncertainty. These skills can serve as the foundation for further study in any application area you choose to pursue and may also help you to analyse the uncertainty in your everyday life.

Course objectives:

- Learn the basic ideas and techniques underlying the design of intelligent rational agents, with specific emphasis on the decision-theoretic modelling paradigm.

- Understand the state of the art in decision making (DM).

- Be able to formulate a decision-making or learning problem and select an appropriate method for a given task or application.

- Be able to understand research papers in the field (main conferences: IJCAI, NeurIPS, AAMAS, ICAART, ICM; main journals: AI, JAIR, JAAMAS, IJAR).

- Try out some ideas of your own.

Requirements:

Working knowledge of basic linear algebra (01LAA2, 01LAB2 or equivalent); basic probability and statistics (01PRSTB, 01PRST or equivalent).

Syllabus of lectures:

1. Agent and its environment. Main definitions. Environment classification. Closed-loop and open-loop decision making.

2. Uncertainty and its sources. Coping with uncertainty. Key notions from probability theory applied to AI reasoning. Maximum expected utility principle. Naïve Bayes classifier. Rational agent.
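The maximum expected utility principle named above can be illustrated with a deliberately tiny sketch (the scenario, probabilities, and utilities are all invented for illustration):

```python
# Maximum expected utility: choose the action whose utility, averaged
# over the uncertain outcomes, is highest.
# Hypothetical decision: carry an umbrella or not, under rain uncertainty.

p_rain = 0.3  # assumed probability of rain

# utility[action][outcome] -- invented numbers
utility = {
    "umbrella":    {"rain": 0.0,   "sun": -1.0},  # carrying is a small nuisance
    "no_umbrella": {"rain": -10.0, "sun": 0.0},   # getting soaked is bad
}

def expected_utility(action: str) -> float:
    return p_rain * utility[action]["rain"] + (1 - p_rain) * utility[action]["sun"]

best = max(utility, key=expected_utility)
# EU(umbrella) = -0.7 beats EU(no_umbrella) = -3.0, so "umbrella" is chosen
```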

3. Hypothesis testing. Bayesian inference.
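At its core, Bayesian inference is a prior-times-likelihood update; a minimal numeric sketch (all rates invented for illustration):

```python
# Bayes' rule: posterior = likelihood * prior / evidence.
# Hypothetical diagnostic test for a rare condition (assumed rates).

prior = 0.01            # P(disease)
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)

# Total probability of observing a positive result
evidence = sensitivity * prior + false_positive * (1 - prior)

# Posterior probability of disease given a positive result
posterior = sensitivity * prior / evidence
# ~0.161: even a good test gives a modest posterior when the prior is small
```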

4. Sequential DM. DM preferences and their properties. Bayesian networks. Decision trees. Decision networks.

5. Markov decision process (MDP) and its formalization. Transition function. Utility function. Policy. Value of a policy.

6. Solution to MDP. Bellman optimality principle. Dynamic programming. Value iteration and its convergence. Role of discount factor. Policy loss.
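The value-iteration update from this lecture can be sketched on a toy MDP (a minimal illustration only; the transition probabilities and rewards are invented):

```python
import numpy as np

# Value iteration: V_{k+1}(s) = max_a sum_{s'} P(s'|s,a) [R(s,a) + gamma V_k(s')]
gamma = 0.9
# P[a][s][s']: transition probabilities for a 2-state, 2-action toy MDP
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.6, 0.4]]])
# R[a][s]: expected immediate reward
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * P @ V          # Q[a][s]: one Bellman backup per action
    V_new = Q.max(axis=0)          # greedy maximization over actions
    if np.max(np.abs(V_new - V)) < 1e-8:
        break                      # contraction (factor gamma) has converged
    V = V_new

policy = Q.argmax(axis=0)          # greedy policy w.r.t. the converged V
```

Because the backup is a gamma-contraction, the loop converges geometrically, which is the convergence role of the discount factor mentioned above.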

7. Policy iteration. Policy evaluation. Policy improvement. Linear programming.
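Policy iteration, alternating exact policy evaluation with greedy policy improvement, can be sketched the same way (toy MDP with invented numbers):

```python
import numpy as np

gamma = 0.9
# P[a][s][s'] and R[a][s] for an invented 2-state, 2-action MDP
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
n_states = 2

policy = np.zeros(n_states, dtype=int)
while True:
    # Policy evaluation: solve the linear system (I - gamma P_pi) V = R_pi
    P_pi = P[policy, np.arange(n_states)]   # P_pi[s][s'] under the policy
    R_pi = R[policy, np.arange(n_states)]
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)

    # Policy improvement: act greedily with respect to V
    new_policy = (R + gamma * P @ V).argmax(axis=0)
    if np.array_equal(new_policy, policy):
        break                               # policy is its own greedy policy
    policy = new_policy
```

At termination the policy is greedy with respect to its own value function, so V satisfies the Bellman optimality equation.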

8. Partially observable Markov Decision Processes (POMDP). Solution to POMDP. Pruning and its importance.
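The core operation behind POMDP solutions is the belief-state update, sketched below for one fixed action (the transition and observation matrices are invented):

```python
import numpy as np

# Belief update: b'(s') ∝ O(o | s') * sum_s P(s'|s,a) b(s)
P_a = np.array([[0.7, 0.3],    # P_a[s][s']: transitions for one fixed action
                [0.2, 0.8]])
O = np.array([[0.9, 0.1],      # O[s'][o]: observation likelihoods
              [0.3, 0.7]])

def belief_update(b: np.ndarray, obs: int) -> np.ndarray:
    b_pred = b @ P_a             # predict: push the belief through the dynamics
    b_new = O[:, obs] * b_pred   # correct: weight by the observation likelihood
    return b_new / b_new.sum()   # normalize back to a probability distribution

b0 = np.array([0.5, 0.5])
b1 = belief_update(b0, obs=0)    # observing o=0 shifts belief toward state 0
```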

9. Supervised and unsupervised learning. Reinforcement learning.

10. Q-learning. SARSA algorithm. Adaptive dynamic programming.
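The tabular Q-learning update can be illustrated on a hypothetical two-state chain (dynamics and rewards invented; epsilon-greedy behaviour policy assumed):

```python
import random

# Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
random.seed(0)
gamma, alpha, eps = 0.9, 0.1, 0.1
n_states, n_actions = 2, 2

def step(s: int, a: int):
    """Invented deterministic dynamics: the rewarding path cycles 0 -> 1 -> 0."""
    if s == 0:
        return (1, 1.0) if a == 1 else (0, 0.0)
    return (0, 2.0) if a == 0 else (1, 0.0)

Q = [[0.0] * n_actions for _ in range(n_states)]
s = 0
for _ in range(20000):
    # Epsilon-greedy action selection
    if random.random() < eps:
        a = random.randrange(n_actions)
    else:
        a = max(range(n_actions), key=lambda x: Q[s][x])
    s2, r = step(s, a)
    # Temporal-difference update toward the bootstrapped target
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    s = s2

greedy = [max(range(n_actions), key=lambda x: Q[st][x]) for st in range(n_states)]
# Learned greedy policy follows the rewarding cycle: action 1 in state 0, 0 in state 1
```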

11. Exploration vs exploitation problem and its solutions. Value of information. Transfer learning.
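One standard answer to the exploration-vs-exploitation problem is the epsilon-greedy rule, sketched here on a made-up three-armed Bernoulli bandit:

```python
import random

# Epsilon-greedy: explore a random arm with probability eps,
# otherwise exploit the arm with the best empirical mean reward.
random.seed(1)
true_p = [0.2, 0.5, 0.8]   # hidden success probabilities (invented)
eps = 0.1
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]   # running empirical mean per arm

for t in range(5000):
    if random.random() < eps:
        arm = random.randrange(3)                      # explore
    else:
        arm = max(range(3), key=lambda a: values[a])   # exploit
    reward = 1.0 if random.random() < true_p[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best = max(range(3), key=lambda a: values[a])  # the best arm is identified
```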

12. Multi-agent systems. Cooperation and negotiation. Elements of game theory.

Syllabus of tutorials:
Study Objective:

Study materials:

Unless explicitly specified otherwise, the optional readings come from:

[1] S. Russell, P. Norvig: Artificial Intelligence: A Modern Approach, 3rd ed. (2009) or 4th ed. (2020).

[2] R. S. Sutton, A. G. Barto: Reinforcement Learning: An Introduction, 2nd ed., MIT Press, 2018.

[3] D. P. Bertsekas: Dynamic Programming and Optimal Control, vols. 1-2, Athena Scientific, 2005.

Note:
Further information:
No time-table has been prepared for this course
The course is part of the following study plans:
Data valid to 2024-05-28
Updates to the information above are available at https://bilakniha.cvut.cz/en/predmet7300706.html