AI and Society
| Code | Completion | Credits | Range | Language |
|---|---|---|---|---|
| BECM36AIS | ZK | 6 | 13P+13C | English |
- Course guarantor:
- Lecturer:
- Tutor:
- Supervisor:
- Department of Computer Science
- Synopsis:
-
The course introduces students to topics that combine technical understanding of ML/AI safety and security with social
and philosophical dimensions of ML/AI. The focus is on explaining limitations of ML/AI in high-risk scenarios and on
helping students understand how to design robust, fair, and accountable ML/AI lifecycles that address societal concerns
over technology. The course will also show students how to navigate the complex regulatory environment emerging in
response to rising concerns over impacts of ML/AI on society.
- Requirements:
- Syllabus of lectures:
-
1. Open vs. closed development in ML/AI and its security implications
2. Learning from observations in the causal world: What does it mean for robustness?
3. Alignment of ML models and lessons learned: social choice theory
4. Fairness, bias, and other normative issues impacting social acceptability of ML/AI
5. Foundations for safety and security of ML: how to reason about the open world?
6. Sociotechnical vulnerabilities of ML/AI
7. ML/AI policy and regulatory approaches
8. Epistemology of inductive inference and ML/AI: a tale of two traditions
9. Ethics of ML/AI development practices
10. The real-world misusability of generative models
11. Social accountability of corporate ML/AI development
12. Philosophical origins of the AI existential risk debate
- Syllabus of tutorials:
- Study Objective:
- Study materials:
-
* Hendrycks, D., Carlini, N., Schulman, J., Steinhardt, J. (2022). Unsolved Problems in ML Safety. https://arxiv.org/abs/2109.13916.
* Ashmore, R., Calinescu, R., Paterson, C. (2021). Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges. ACM Computing Surveys 54(5).
* Casper, S. et al. (2023). Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. https://arxiv.org/pdf/2307.15217.
* Conitzer, V. et al. (2024). Social Choice for AI Alignment: Dealing with Diverse Human Feedback. https://arxiv.org/abs/2404.10271.
* Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H., Crawford, K. (2021). Datasheets for datasets. Communications of the ACM 64(12).
* Barocas, S., Hardt, M., Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. Cambridge, MA: The MIT Press.
* Mökander, J., Axente, M., Casolari, F. et al. (2021). Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Minds & Machines. https://doi.org/10.1007/s11023-021-09577-4.
* Weidinger, L. et al. (2023). Sociotechnical Safety Evaluation of Generative AI Systems. https://arxiv.org/abs/2310.11986.
* Moynihan, T. (2020). Existential risk and human extinction: An intellectual history. Futures 116(102495).
- Note:
-
The course is taught at the Faculty of Social Sciences of Charles University.
- Further information:
- No time-table has been prepared for this course
- The course is a part of the following study plans:
-
- prg.ai Master (compulsory course in the program)