Contents of the course.

Parallelism is now everywhere in computer science, since every modern processor
has multiple cores. This course teaches how to design efficient parallel and
distributed algorithms, and how to implement them in lab sessions. Lectures
cover the theoretical models used for the design and analysis of such
algorithms (complexity, approximation algorithms, …).
Tutorials and lab sessions reinforce the concepts seen in class; in particular,
they include an introduction to parallel and distributed programming with MPI
(Message Passing Interface).
The evaluation consists of written exams and a programming assignment.
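
As a first taste of the MPI programming introduced in the labs, here is a
minimal "hello world" sketch in C. It is purely illustrative and does not
correspond to any specific lab exercise:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        /* Initialize the MPI runtime before any other MPI call. */
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process      */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

        printf("Hello from process %d of %d\n", rank, size);

        /* Shut down the MPI runtime. */
        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, e.g., mpirun -np 4 ./hello, each of the
four processes prints its own rank.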

Topics covered include sorting networks, PRAMs, algorithms on processor
rings and grids, distributed algorithms, and task graph scheduling.

References.

– Parallel Algorithms. H. Casanova, A. Legrand, Y. Robert. Chapman and
Hall/CRC Press, 2008.

– Introduction to Distributed Algorithms. G. Tel. Second edition,
Cambridge University Press, 2000.

– Distributed Algorithms: An Intuitive Approach. W. Fokkink. MIT
Press, 2013.

– Scheduling and Automatic Parallelization. A. Darte, Y. Robert, F.
Vivien. Birkhäuser, 2000.

Prerequisites.

Basic knowledge of algorithm design and NP-completeness, and programming skills.