CZECH TECHNICAL UNIVERSITY IN PRAGUE
STUDY PLANS
2025/2026

AI and Society

Code: BECM36AIS
Completion: ZK
Credits: 6
Range: 1P+1C
Language: English
Course guarantor:
Vít Střítecký
Lecturer:
Vít Střítecký
Tutor:
Vít Střítecký
Supervisor:
Department of Computer Science
Synopsis:

The course introduces students to topics that combine a technical understanding of ML/AI safety and security with the social and philosophical dimensions of ML/AI. The focus is on explaining the limitations of ML/AI in high-risk scenarios and on helping students understand how to design robust, fair, and accountable ML/AI lifecycles that address societal concerns over technology. The course will also show students how to navigate the complex regulatory environment emerging in response to rising concerns over the impacts of ML/AI on society.

Requirements:
Syllabus of lectures:

1. Open vs. closed development in ML/AI and its security implications

2. Learning from observations in the causal world: What does it mean for robustness?

3. Alignment of ML models and lessons learned: social choice theory

4. Fairness, bias, and other normative issues impacting social acceptability of ML/AI

5. Foundations for safety and security of ML: how to reason about the open world?

6. Sociotechnical vulnerabilities of ML/AI

7. ML/AI policy and regulatory approaches

8. Epistemology of inductive inference and ML/AI: a tale of two traditions

9. Ethics of ML/AI development practices

10. The real-world misusability of generative models

11. Social accountability of corporate ML/AI development

12. Philosophical origins of the AI existential-risk debate


Syllabus of tutorials:
Study Objective:
Study materials:

* Hendrycks, D., Carlini, N., Schulman, J., Steinhardt, J. (2022). Unsolved Problems in ML Safety. https://arxiv.org/abs/2109.13916.

* Ashmore, R., Calinescu, R., Paterson, C. (2021). Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges. ACM Computing Surveys 54(5).

* Casper, S. et al. (2023). Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. https://arxiv.org/pdf/2307.15217.

* Conitzer, V. et al. (2024). Social Choice for AI Alignment: Dealing with Diverse Human Feedback. https://arxiv.org/abs/2404.10271.

* Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H., Crawford, K. (2021). Datasheets for Datasets. Communications of the ACM 64(12).

* Barocas, S., Hardt, M., Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. Cambridge, MA: The MIT Press.

* Mökander, J., Axente, M., Casolari, F. et al. (2021). Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Minds & Machines. https://doi.org/10.1007/s11023-021-09577-4.

* Weidinger, L. et al. (2023). Sociotechnical Safety Evaluation of Generative AI Systems. https://arxiv.org/abs/2310.11986.

* Moynihan, T. (2020). Existential Risk and Human Extinction: An Intellectual History. Futures 116(102495).

Note:

The course is taught at the Faculty of Social Sciences of Charles University.

Time-table for winter semester 2025/2026:
Mon:
  Střítecký V., 12:45–14:15 (lecture parallel 1)
  Střítecký V., 14:30–16:00 (lecture parallel 1, parallel nr. 101)
Tue–Fri: no classes scheduled
Time-table for summer semester 2025/2026:
Time-table is not available yet
The course is a part of the following study plans:
Data valid to 2025-06-10
For updated information see http://bilakniha.cvut.cz/en/predmet8311106.html