Parallelism and Concurrency
Data is displayed for academic year 2023/2024.
Course Description
The course introduces students to the basic concepts of parallel computer systems, concurrency, and parallelism. It explains the types of parallelism at lower and higher abstraction levels: instruction-level, functional (task), data, and pipeline parallelism. Models of execution in parallel systems, parallel programming models, synchronization, coherence, and shared-memory concurrency are covered. The learned concepts are applied to design programs for parallel systems and to optimize their performance.
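For illustration only (not part of the official syllabus), the following minimal sketch shows the kind of shared-memory concurrency and synchronization issue the course addresses. It is written in C with OpenMP, which also appears in the reading list; the file name and iteration count are arbitrary.

```c
/* Illustrative sketch only, not official course material: a shared
 * counter updated by many threads. Without synchronization the
 * increment is a data race; the atomic directive makes the
 * read-modify-write indivisible.
 * Build with: gcc -fopenmp counter.c -o counter */
#include <stdio.h>

int main(void) {
    long counter = 0;

    #pragma omp parallel for
    for (long i = 0; i < 1000000; ++i) {
        #pragma omp atomic
        counter++;           /* indivisible update of shared state */
    }

    printf("counter = %ld (expected 1000000)\n", counter);
    return 0;
}
```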
Study Programmes
University undergraduate
[FER3-EN] Computing - study
Elective Courses
(5. semester)
[FER3-EN] Electrical Engineering and Information Technology - study
Elective Courses
(5. semester)
Learning Outcomes
- Recognize the types of parallelism in computer systems.
- Recognize the models of execution in parallel systems.
- Recognize the concept of concurrency and distinguish it from the concept of parallelism.
- Recognize the concepts of coherence, synchronization and memory models in parallel systems.
- Apply learned concepts to decompose simple problems for parallel execution.
- Apply learned concepts to performance optimization of programs.
Forms of Teaching
Lectures
Lectures, teaching materials available, theoretical and practical coverage of weekly topics.
Independent assignments
Project assignment covering the course topics.
Laboratory
Practical assignments covering specific course topics.
Grading Method
| Type | Threshold (Continuous Assessment) | Percent of Grade (Continuous Assessment) | Threshold (Exam) | Percent of Grade (Exam) |
| --- | --- | --- | --- | --- |
| Laboratory Exercises | 50 % | 30 % | 50 % | 30 % |
| Class participation | 0 % | 5 % | 0 % | 5 % |
| Seminar/Project | 50 % | 25 % | 50 % | 25 % |
| Final Exam: Written | 50 % | 40 % | | |
| Exam: Written | | | 50 % | 40 % |
Week by Week Schedule
- Multiple simultaneous computations, Goals of parallelism (e.g., throughput) versus concurrency (e.g., controlling access to shared resources)
- Parallelism, communication, and coordination, Goals and basic models of parallelism
- Shared Memory, Atomicity, Symmetric multiprocessing (SMP)
- Multicore processors, Shared vs. distributed memory
- SIMD, vector processing, GPU, co-processing
- Programming constructs for parallelism
- Task-based decomposition
- Midterm exam
- Data-parallel decomposition (see the illustrative sketch after this list)
- Programming errors not found in sequential programming
- Models for parallel program performance
- Evaluating communication overhead
- Load balancing
- Actors and reactive processes (e.g., request handlers)
- Final exam
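As a pointer to what the decomposition weeks above cover, here is a minimal, hedged sketch of data-parallel decomposition using OpenMP (one of the toolkits in the literature below). It is an illustrative example under assumed array sizes, not assigned course code.

```c
/* Illustrative sketch only: data-parallel decomposition of a dot
 * product. The iteration space is split across threads, and the
 * reduction clause provides the synchronization needed to combine
 * the per-thread partial sums.
 * Build with: gcc -fopenmp dot.c -o dot */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void) {
    const int n = 1 << 20;
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    for (int i = 0; i < n; ++i) { a[i] = 1.0; b[i] = 2.0; }

    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];

    printf("dot = %f using up to %d threads\n", sum, omp_get_max_threads());
    free(a);
    free(b);
    return 0;
}
```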
Literature
John L. Hennessy, David A. Patterson (2017), Computer Architecture, Morgan Kaufmann
Peter Pacheco (2011), An Introduction to Parallel Programming, Elsevier
Ruud van der Pas, Eric Stotzer, Christian Terboven (2017), Using OpenMP -- The Next Step, MIT Press
David R. Kaeli, Perhaad Mistry, Dana Schaa, Dong Ping Zhang (2015), Heterogeneous Computing with OpenCL 2.0, Morgan Kaufmann
Barbara Chapman, Gabriele Jost, Ruud Van Der Pas (2007), Using OpenMP, MIT Press
For students
General
ID 223093
Winter semester
5 ECTS
L0 English Level
L1 e-Learning
45 Lectures
0 Seminar
0 Exercises
15 Laboratory exercises
0 Project laboratory
0 Physical education exercises
Grading System
90 Excellent
75 Very Good
65 Good
50 Sufficient