Parallel Programming

Course Description

The course motivates the need for algorithms that can exploit the increasing availability of parallel computational power. It covers the principles of parallel algorithm development and parallel programming models, presents parallel programming tools, and enables students to apply the presented techniques.

General Competencies

Students will have a good understanding of existing parallel computer models and parallel programming models. Students will be able to design and implement a parallel algorithm while maintaining the desired quality properties. Students will have practical knowledge of basic programming tools for parallel program design and quantitative performance analysis.

Learning Outcomes

  1. describe parallel computation and parallel programming models
  2. describe the PRAM computer model
  3. apply the PRAM programming model in parallel programming
  4. apply MPI technology in parallel program development
  5. recognize the phases of parallel algorithm design
  6. combine parallel algorithm development elements
  7. evaluate the efficiency and scalability of parallel algorithms

Forms of Teaching


Teaching is organized into two cycles over a 15-week semester. The first cycle consists of 7 weeks of classes and the midterm exam; the second cycle consists of 6 weeks of classes and the final exam.


Exams: midterm exam and final exam.


Consultation hours will be announced at the first lecture.

Programming Exercises

Programming tasks.

Grading Method

                         Continuous Assessment          Exam
Type                     Threshold  Percent of Grade    Threshold  Percent of Grade
Homeworks                0 %        12 %                0 %        0 %
Attendance               0 %        3 %                 0 %        0 %
Midterm Exam: Written    0 %        35 %                -          0 %
Final Exam: Written      0 %        50 %                -          -
Exam: Written            -          -                   50 %       50 %
Exam: Oral               -          -                   -          50 %

All homework assignments must be handed in (regardless of the number of points earned) as a condition for taking the exam.

Week by Week Schedule

  1. Parallel computer models. Parallel programming paradigms. Properties of parallel programs. Sequential to parallel program conversion.
  2. MPI: the Message Passing Interface standard.
  3. Synchronous shared memory parallel computer model (PRAM).
  4. The prefix-sum ("sum of prefixes") algorithm.
  5. Asynchronous parallel computer (APRAM). Model and program complexity.
  6. Designing parallel algorithms. Design phases.
  7. Algorithm partitioning. Communication structure definition.
  8. (Midterm exam)
  9. Task agglomeration. Task to processor mapping.
  10. Examples of parallel algorithm design.
  11. Principles of evolutionary and genetic algorithms. Genetic algorithm parallelization. Parallel evolutionary algorithms.
  12. Quantitative analysis of parallel algorithms. Algorithm performance definitions.
  13. Parallel algorithm scalability analysis.
  14. Development of modular parallel programs. Modular development support in MPI.
  15. (Final exam)

Study Programmes

Software Engineering and Information Systems -> Computing (Profile)

Computer Engineering -> Computing (Profile)

Computer Science -> Computing (Profile)


Literature

I. Foster (1995), Designing and Building Parallel Programs, Addison-Wesley
J. Reif (ed.) (1993), Synthesis of Parallel Algorithms, Morgan Kaufmann
W. Gropp, E. Lusk, A. Skjellum (1999), Using MPI, 2nd ed., MIT Press

Course Details

English Level: L1
e-Learning Level: L1
Lectures: 30 hours
Exercises: 0 hours
Laboratory exercises: 0 hours

Grading System

Excellent: 88 %
Very Good: 75 %
Good: 63 %
Acceptable: 50 %