Big Data Technologies
| Code | Completion | Credits | Range | Language |
| --- | --- | --- | --- | --- |
| BE0M33BDT | Z,ZK | 4 | 2P+1C | English |
- Relations:
- It is not possible to register for the course BE0M33BDT if the student is concurrently registered for or has already completed the course B0M33BDT (mutually exclusive courses).
- During a review of study plans, the course B0M33BDT can be substituted for the course BE0M33BDT.
- Course guarantor:
- Jan Hučín
- Lecturer:
- Jan Hučín, Petr Paščenko, Marek Sušický
- Tutor:
- Alisa Benešová, Jan Hučín, Michal Janeček, Petr Paščenko, Sergii Stamenov, Marek Sušický
- Supervisor:
- Department of Computer Science
- Synopsis:
-
The objective of this elective course is to familiarize students with new trends and technologies for storing, managing, and processing Big Data. The course will focus on methods for data extraction and analysis, as well as on the selection of hardware infrastructure for managing persistent and streamed data, such as data from social networks. As part of the course we will show how to apply traditional artificial intelligence and machine learning methods to Big Data analysis.
- Requirements:
-
Seminars will be run in the standard way. We assume that students will bring their own computers for editing scripts. Calculations will be executed on a computer cluster with remote access. For practical exercises, students will use a pre-loaded text database. The seminars will focus on the practical application of the technology to specific examples. Two short tests of the subject matter are scheduled during the semester.
- Syllabus of lectures:
-
1. Introduction, Big Data processing motivation, requirements
2. Hadoop overview - all components and how they work together
i) Hadoop Common: The common utilities that support the other Hadoop modules.
ii) Hadoop Distributed File System (HDFS): A distributed file system that provides high-throughput access to application data.
iii) Hadoop YARN: A framework for job scheduling and cluster resource management.
iv) Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.
3. Introduction to MapReduce, how to use the pre-installed data. Basic skeleton for running a word histogram in Java (see the word-count sketch after this list)
4. HDFS, NoSQL databases, HBase, Cassandra, SQL access, Hive
5. What is Mahout, what are the basic algorithms
6. Streamed data - real-time processing
7. Twitter data processing, simple sentiment algorithm (see the sentiment sketch after this list)
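
The word histogram skeleton from lecture 3 follows the classic Hadoop MapReduce word-count pattern. Below is a minimal sketch in Java; the class name and the command-line input/output paths are illustrative only, not the course's actual assignment code:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in the input line
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the counts emitted for each word
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // local pre-aggregation on the map side
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input path
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output path
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The job is packaged into a jar and submitted with `hadoop jar`, with the HDFS input and output paths passed as arguments.

For the simple sentiment algorithm of lecture 7, a minimal lexicon-based sketch; the word lists and example tweets are purely illustrative, while the exercise itself works with real Twitter data:

```java
import java.util.*;

// Hypothetical, minimal lexicon-based sentiment scorer (illustrative word lists only).
public class SimpleSentiment {
  private static final Set<String> POSITIVE =
      new HashSet<>(Arrays.asList("good", "great", "love", "happy", "awesome"));
  private static final Set<String> NEGATIVE =
      new HashSet<>(Arrays.asList("bad", "hate", "awful", "sad", "terrible"));

  // Score = (#positive words - #negative words); > 0 positive, < 0 negative, 0 neutral
  static int score(String tweet) {
    int score = 0;
    for (String token : tweet.toLowerCase().split("\\W+")) {
      if (POSITIVE.contains(token)) score++;
      if (NEGATIVE.contains(token)) score--;
    }
    return score;
  }

  public static void main(String[] args) {
    System.out.println(score("I love big data, Hadoop is awesome"));       // prints 2
    System.out.println(score("This job is terrible and I hate waiting"));  // prints -2
  }
}
```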
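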
- Syllabus of tutorials:
-
1. Cloud computing cluster: OpenStack basic commands, virtualization.
2. Installing Hadoop, hardware and software requirements, administration (creating access), introduction to the basic setup on our cluster, monitoring. Running the word histogram in a single thread.
3. The bag-of-words model, TF-IDF, running SVD and LDA (see the TF-IDF sketch after this list).
4. Data manipulation, how to scale HDFS up and down, how to run and monitor computation progress, how to organize the computation (see the HDFS sketch after this list).
5. Running a random forest classification task with the Mahout algorithms, showing how much faster the MapReduce implementation is compared to a single thread on one machine.
6. Semester project presentation and credit (zápočet)
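
The TF-IDF weighting used in tutorial 3 assigns each term t in document d the weight tf(t, d) · log(N / df(t)), where tf is the term frequency in d, N is the number of documents, and df(t) is the number of documents containing t. A minimal single-machine sketch on a toy corpus (illustrative only; the tutorial runs the computation on the cluster):

```java
import java.util.*;

// Toy single-machine TF-IDF: tf-idf(t, d) = tf(t, d) * log(N / df(t)).
// The three "documents" below are illustrative, not the course's text database.
public class TfIdfDemo {
  public static void main(String[] args) {
    List<String[]> docs = Arrays.asList(
        "big data processing with hadoop".split(" "),
        "hadoop distributed file system".split(" "),
        "stream processing of twitter data".split(" "));

    // Document frequency: in how many documents each term occurs
    Map<String, Integer> df = new HashMap<>();
    for (String[] doc : docs) {
      for (String term : new HashSet<>(Arrays.asList(doc))) {
        df.merge(term, 1, Integer::sum);
      }
    }

    int n = docs.size();
    for (int i = 0; i < n; i++) {
      // Term frequency within the current document
      Map<String, Integer> tf = new HashMap<>();
      for (String term : docs.get(i)) {
        tf.merge(term, 1, Integer::sum);
      }
      for (Map.Entry<String, Integer> e : tf.entrySet()) {
        double weight = e.getValue() * Math.log((double) n / df.get(e.getKey()));
        System.out.printf("doc %d  %-12s %.3f%n", i, e.getKey(), weight);
      }
    }
  }
}
```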
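For the data manipulation in tutorial 4, basic HDFS operations can be driven either with the `hdfs dfs` command-line tool or from Java through the org.apache.hadoop.fs.FileSystem API. A minimal sketch with illustrative paths (not the tutorial's actual directories):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal HDFS manipulation sketch; all paths below are illustrative only.
public class HdfsDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);       // handle to the configured file system

    Path dir = new Path("/user/student/input"); // hypothetical working directory
    fs.mkdirs(dir);                             // create the directory if it does not exist

    // Upload a local file into HDFS (equivalent of `hdfs dfs -put`)
    fs.copyFromLocalFile(new Path("data/texts.txt"), new Path(dir, "texts.txt"));

    // List the directory contents with file sizes (equivalent of `hdfs dfs -ls`)
    for (FileStatus status : fs.listStatus(dir)) {
      System.out.println(status.getPath() + "  " + status.getLen() + " B");
    }

    fs.close();
  }
}
```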
- Study Objective:
-
The goal of the course is to demonstrate the basic methods for processing Big Data on practical examples. The examples will focus on statistical data processing.
- Study materials:
-
Hadoop: The Definitive Guide, 4th Edition, by Tom White
- Note:
- Further information:
- https://cw.fel.cvut.cz/wiki/courses/BE0M33BDT
- No time-table has been prepared for this course
- The course is a part of the following study plans: