Parallelism is now ubiquitous in computing because all modern computers contain multicore processors. This lecture series focuses on the design of efficient parallel algorithms and on their practical implementation. The lectures cover the theoretical models used to design and analyze parallel algorithms. A number of classical algorithmic problems will be studied in detail: complexity analysis, algorithm design and analysis, approximation algorithms, etc. The lectures will be supplemented with six exercise sessions and six laboratory sessions. The aim of the lab sessions is to introduce students to parallel programming through message passing (MPI).

The main topics are:
  • The theoretical model of sorting networks
  • The theoretical model of PRAMs (Parallel Random-Access Machines)
  • Algorithms on processor rings and processor grids: vector-matrix product, matrix-matrix product, 2D stencils, etc.
  • Task graph scheduling, with and without communications, with and without resource constraints: complexity, approximation algorithms, and heuristics
  • Dependence analysis and automatic parallelization
  • Introduction to algorithms for GPUs

Students will be graded through a final examination and continuous assessment. The continuous assessment will comprise both a mid-term examination and a programming assignment.