Information Theory

Course objectives:

This course will cover the basic questions of information theory as
well as applications of information theory in computer science. It
will cover the following topics:

  • Shannon theory:
    • Data compression
    • Channel coding
    • Applications of entropies in computer science
  • Error correcting codes:
    • Linear codes
    • Reed-Solomon codes
    • Capacity achieving codes with efficient encoding and decoding
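Entropy is the quantity underlying both compression and channel coding. As a quick illustration (a sketch for orientation, not course material), Shannon entropy in bits can be computed directly from a probability distribution:

```python
from math import log2

def entropy(dist):
    """Shannon entropy H(p) = -sum_i p_i * log2(p_i), in bits."""
    return -sum(p * log2(p) for p in dist if p > 0)

print(entropy([0.5, 0.5]))  # a fair coin carries exactly 1.0 bit
print(entropy([0.9, 0.1]))  # a biased coin carries about 0.47 bits
```

A fair coin is the unit of information; any bias strictly lowers the entropy, which is why biased sources compress.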


    Prerequisites: familiarity with basic probability and linear algebra. The necessary background on finite fields will be introduced.


    References:

    1. Cover and Thomas, Elements of Information Theory
    2. Guruswami, Rudra and Sudan, Essential Coding Theory

    See the course website, and the webpage for the TDs.

Optimisation and Approximation

The goal of this course is to present optimization techniques with a main focus on linear relaxation. Applications to approximation algorithms will be highlighted.

Contents of the course.

  1. Linear Programming.
    • Polytopes.
    • Simplex algorithm.
    • Duality and complementary slackness.
    • Sensitivity analysis.
    • Integer polytopes (flows, transportation, totally unimodular matrices).
    • Integrality gap, Erdős-Pósa property.
    • VC-dimension.
  2. Approximation via linear relaxation.
    • Primal-Dual algorithms.
    • Branch and bound.
  2. Nonlinear optimization.
    • Semidefinite programming.
    • Convex optimization.
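A fundamental fact behind the simplex algorithm is that a finite optimum of a linear program is attained at a vertex of the polytope. As a toy illustration of this (a brute-force sketch for two variables, not the simplex algorithm itself), one can enumerate all candidate vertices as intersections of constraint lines:

```python
from itertools import combinations

def lp_by_vertex_enumeration(A, b, c, eps=1e-9):
    """Solve max c.x subject to A x <= b for x in R^2 by brute force:
    every vertex of the polytope is the intersection of two constraint
    lines, and a finite optimum is attained at some vertex."""
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a1, b1), (a2, b2) = A[i], A[j]
        det = a1 * b2 - a2 * b1
        if abs(det) < eps:
            continue  # parallel constraint lines: no vertex here
        x = (b[i] * b2 - b[j] * b1) / det
        y = (a1 * b[j] - a2 * b[i]) / det
        # keep the intersection point only if it satisfies every constraint
        if all(ai * x + bi * y <= bk + eps for (ai, bi), bk in zip(A, b)):
            value = c[0] * x + c[1] * y
            if best is None or value > best[0]:
                best = (value, (x, y))
    return best

# max x + y  subject to  x <= 2, y <= 3, x >= 0, y >= 0
A = [(1, 0), (0, 1), (-1, 0), (0, -1)]
b = [2, 3, 0, 0]
print(lp_by_vertex_enumeration(A, b, (1, 1)))  # (5.0, (2.0, 3.0))
```

This enumeration is exponential in general; the simplex algorithm covered in the course walks between adjacent vertices instead of visiting them all.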


References:

  • Understanding and Using Linear Programming, Matoušek and Gärtner (2006).
  • Linear Programming, Chvátal (1983).
  • Applied Mathematical Programming, Bradley, Hax, and Magnanti (1977).
  • The Design of Approximation Algorithms, David P. Williamson and David B. Shmoys (2010).

Prerequisites for the course.
Students should know the basics from Calculus and Linear Algebra.

The webpage of the course.

Compilation and Program Analysis

Contents of the course.

This course covers basic topics in compilation and program analysis: the different steps of machine-language production, as well as program analysis for optimisation and verification:

  • lexical and syntactic analysis;
  • typing, semantics;
  • code optimisation and code generation;
  • imperative languages, functional languages;
  • abstract interpretation.
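To give a flavour of the last topic (an illustrative sketch, not course material), abstract interpretation over the interval domain over-approximates each value by an interval and lifts arithmetic pointwise:

```python
# Interval domain for abstract interpretation: a value is abstracted
# by a pair (lo, hi); operations are lifted so that the result interval
# contains every concrete result.
def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    # the product of two intervals is bounded by the extreme cross-products
    prods = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(prods), max(prods))

# If x in [-1, 2] and y in [3, 5], then x*y + x lies in:
x, y = (-1, 2), (3, 5)
print(add(mul(x, y), x))  # (-6, 12)
```

Such over-approximations let an analyser prove, e.g., the absence of out-of-range errors without running the program.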

Prerequisites: familiarity with formal languages (regular expressions, grammars) and reasoning by induction, as well as basic notions of computer architecture and Python programming.

The course has one lecture session (2 hours) and one lab session (2 hours) per week during the first semester of M1. Students will develop their own compiler as well as a basic abstract-interpretation tool.

Page for 2018-19 :

Parallel and Distributed Algorithms and Programs (PDAP/APPD)

Contents of the course.

Parallelism is everywhere in computer science: nowadays, every processor has multiple cores. In this course, we will teach how to design efficient parallel and distributed algorithms, and how to implement them through lab sessions. Classes discuss the theoretical models used for the design and analysis of algorithms (complexity, algorithm definition and analysis, approximation algorithms, …). There will also be tutorials and lab sessions to practise the concepts seen in class; in particular, there will be an initiation to parallel and distributed programming through MPI (Message Passing Interface). The evaluation is done through written exams and a programming homework.

The covered topics include sorting networks, PRAMs, algorithms on processor
rings or grids, distributed algorithms, and task graph scheduling.
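As a taste of the sorting-network topic (a sequential sketch, not course material), odd-even transposition sort is the classical network for a linear array of n processors: n rounds of compare-exchange on alternating neighbour pairs, where all comparisons within a round are independent and could run in parallel:

```python
def odd_even_transposition_sort(a):
    """Odd-even transposition sorting network, simulated sequentially.
    Each of the n rounds compare-exchanges disjoint neighbour pairs,
    so one round takes constant parallel time; total parallel time O(n)."""
    a = list(a)
    n = len(a)
    for r in range(n):
        start = r % 2  # alternate between even and odd pairs
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```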


References:

– Parallel Algorithms. H. Casanova, A. Legrand, Y. Robert. Chapman and Hall/CRC Press, 2008.

– Introduction to Distributed Algorithms. Gerard Tel. Second Edition,
Cambridge University Press, 2000.

– Distributed Algorithms: An Intuitive Approach. Wan Fokkink. MIT
Press, 2013.

– Scheduling and Automatic Parallelization. A. Darte, Y. Robert, F.
Vivien. Birkhäuser, 2000.


Prerequisites: basic knowledge of algorithm design and NP-completeness, and programming skills.

Here is a webpage for the course (2018 edition).

Performance Evaluation and Networks

This course focuses on performance evaluation in communication networks. “Performance evaluation” is a label that covers many topics, including the design of mathematical models (very often with probabilistic assumptions), the analysis of such models (mathematically or by simulation), the design of experimental protocols (measurement tools, data collection, validation of theoretical models), and the analysis of data (statistics on real or in silico experiments).
This course will address these different items in the framework of communication networks. Some illustrations will also be picked from related systems such as transport networks, logistics or automatic control.
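As a small illustration of such probabilistic models (a sketch, not course material), the M/M/1 queue — Poisson arrivals, a single exponential server — admits simple closed-form performance measures:

```python
def mm1(lam, mu):
    """Closed-form performance measures of the M/M/1 queue:
    Poisson arrivals at rate lam, exponential service at rate mu."""
    rho = lam / mu        # server utilisation, must satisfy rho < 1 for stability
    L = rho / (1 - rho)   # mean number of customers in the system
    W = L / lam           # mean sojourn time, by Little's law L = lam * W
    return rho, L, W

rho, L, W = mm1(lam=2.0, mu=3.0)
print(rho, L, W)  # rho = 2/3, L is about 2 customers, W about 1 time unit
```

Note the blow-up of L as rho approaches 1: a server running near saturation has unbounded expected queue length, a recurring theme in network dimensioning.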

References :

  • Performance Evaluation of Computer and Communication Systems. J.-Y. Le
    Boudec. EPFL Press, 2011.
  • Markov Chains, Gibbs Fields, Monte Carlo Simulation and Queues. P.
    Brémaud. Springer-Verlag, 1999.

Prerequisites: basic knowledge of probability. Some knowledge of networks may be useful but is not essential.

Current webpage :

OLD – M1 research internship, evaluation

[ this translation of the French version of this page is based on work by Raphaël Monat: thanks a lot! ]

Get in touch quickly with D. Hirschkoff (or your tutor) if you have difficulties during your internship.

A general remark: you should not assume that the people reading your report or attending your presentation are specialists in the field you worked on during your internship. This should not prevent you from providing technical details, but the latter should be explained. Do not focus on the technical material only. Experience shows that M1 students tend to put too much emphasis on technicalities, as if they were addressing their internship supervisor.

1 Report


20 pages at most. If needed, you may include an online appendix, containing proofs, numerical results, an article you wrote with your supervisor, some code…

It is highly recommended that your supervisor proofread your report (this means that you need to finish writing your draft early, to leave time for proofreading and corrections).

Concerning the content:

  • Introduce the domain

You need to explain the question you have studied, and the context/reasons of your study.
– Where does the problem you studied come from?
– Why is this question interesting/relevant?
– What was the state-of-the-art at the beginning of your internship?
– Be aware that you need to present relevant related work.

  • Present your work

– If you have 10 results, you will not have the space to present everything (or it will be compressed and really difficult to understand). You need to select what you will present.
– If you don’t have any results, you can write about what you tried, why it failed, and how the difficulties were handled with your supervisor over the course of the internship.

  • Comment on your approach

It is also interesting to know how your research went:

– What did not work?

– Your interactions with your supervisor

– Your collaborations with other members of the lab

– The initiatives you have taken
This shouldn’t be a significant part of your report/presentation, but it will help in assessing the quality of your work.

  • Experimental results (if applicable)

If you have developed non-trivial programs or used some tools, mention them!

– Size of the code
– Tools/languages used
– Expected use of the tool(s)
– Implementation choices

2 Presentation

  • Format: 20 minutes of presentation, 5 minutes of questions.

Rehearse your presentation at least twice! You will be interrupted if you exceed the given presentation time.

Your presentation will be a disaster if you are cut off while still in part 2 of 5. Conversely, finishing your presentation in 8 minutes is also a disaster.

Put pressure on yourself! Preparing a nice and crystal-clear presentation is a really difficult exercise, needing a lot of preparation.

– You should avoid mentioning too many technical details: you will lose your audience that way. But you need to strike a balance and still mention the key technical points.

– (Similar to the report) You need to present the question you worked on, and your contributions. Non-specialists (i.e., other researchers from the LIP, or other students) should be able to follow your presentation.

You may present in French or in English, but do not be pretentious: opt for English only if your level allows it.

3 Technical points

Have a backup of your presentation on a USB stick, in case of technical problems between your computer and the projector.
– You need a machine to project your slides. A single computer can be shared for all the presentations of a given session. It is highly recommended to test the projection setup before the presentation day.
– If you are using Beamer, pdfpc is a useful tool to project your slides.
– Use git to version your documents/work. This should prevent any loss of your work.

PROG – Programming Languages Theory (1st semester, 6 ECTS)

Cours : Daniel Hirschkoff

TD : Adrien Durier & Alexis Ghyselen & Julien Braine

Programming languages and programs are usually not built as mathematical entities, but rather as technological objects (whose properties can be described as rigorously as possible, in manuals or standards).

This course is about the semantics of programming languages, which provides mathematical tools that make it possible

  • to describe and analyse the statics of programs: what constructs the language makes available, what can be read off the code (about the usage of functions, the management of resources, the risk of errors like division by zero or entering an infinite loop);
  • to do the same for the dynamics of programs: how a program is supposed to run, what resources are involved at execution time, etc.;
  • to discuss translations from one language to another (which is called compilation) and, more generally, to describe programs that manipulate programs.
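To give a concrete taste of "dynamics" (a toy sketch, not the formalism used in the course; the tuple encoding is an assumption of this example), here is a small-step evaluator for arithmetic expressions, reducing one redex at a time:

```python
# Expressions are plain integers (values) or nested tuples
# ('add', e1, e2) / ('mul', e1, e2).
def step(e):
    """Perform one leftmost small-step reduction, or return None on a value."""
    if isinstance(e, int):
        return None  # values do not reduce
    op, l, r = e
    if isinstance(l, int) and isinstance(r, int):
        return l + r if op == 'add' else l * r  # reduce the redex
    if not isinstance(l, int):
        return (op, step(l), r)  # reduce inside the left subterm first
    return (op, l, step(r))      # otherwise reduce inside the right subterm

def evaluate(e):
    """Iterate small steps until reaching a normal form (an integer)."""
    while not isinstance(e, int):
        e = step(e)
    return e

print(evaluate(('add', ('mul', 2, 3), 4)))  # 10
```

The one-step relation makes the evaluation order explicit (leftmost-innermost here), which is exactly the kind of choice an operational semantics pins down.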


  • G. Winskel, The semantics of programming languages, MIT Press.
  • J. Reynolds, Theories of Programming Languages, CUP.
  • a web page dedicated to the course will be made available from

The course consists of 2 hours of lectures per week, together with 2 hours of exercises (on machine, or on paper).

The course follows this weekly schedule: 2h of lectures; 2h of practical work on machines; homework assignments to be done in pairs, a midterm, and a final exam.

Computer algebra

Contents of the course

  • GCD and extended GCD: Euclidean algorithm, extended Euclidean algorithm (complexity analysis, properties), quasi-linear GCD (Knuth-Schönhage).
  • Algorithms for polynomials: evaluation, interpolation. Product tree. Quasi-linear algorithms for multi-point evaluation and interpolation.
  • Algorithms for linear algebra: Gaussian elimination (over a field), applications (image, kernel, determinant, linear system solving); multimodular methods and Hensel lifting over Z and K[X].
  • Elimination theory and resultants. Applications to the computation of the intersection of two plane algebraic curves.
  • Polynomial factoring over Z/pZ: Berlekamp algorithm, Cantor-Zassenhaus algorithm, Hensel lifting over Z/p^k Z.
  • Application to error-correcting codes: Reed-Solomon codes. Decoding algorithms, list decoding algorithms.
  • (… to be completed)
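The first topic above can be sketched in a few lines (an illustrative sketch, not course material): the extended Euclidean algorithm returns, along with the GCD, the Bézout coefficients u and v with g = u*a + v*b:

```python
def extended_gcd(a, b):
    """Return (g, u, v) with g = gcd(a, b) = u*a + v*b."""
    if b == 0:
        return a, 1, 0
    g, u, v = extended_gcd(b, a % b)
    # g = u*b + v*(a % b) = u*b + v*(a - (a//b)*b) = v*a + (u - (a//b)*v)*b
    return g, v, u - (a // b) * v

g, u, v = extended_gcd(240, 46)
print(g, u, v)  # 2 14 -73, and indeed 14*240 - 73*46 = 2
```

The Bézout coefficients are what make the algorithm so useful: they yield modular inverses, and hence arithmetic in Z/pZ and K[X]/(f).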

See the French version of this page for information about the organisation of this course.

ER03: Constraint satisfaction problems

January 21-25, 2013, ENS Lyon.

Speakers: Manuel Bodirsky, Michael Pinsker.

A Constraint Satisfaction Problem (CSP) is a computational problem where the task is to decide for a given finite set of variables and constraints whether there is an assignment to the variables that satisfies all constraints. Many computational problems in various areas of theoretical computer science can be modeled in this way.

If the variables take their values in some finite domain, there is a rich universal-algebraic theory that describes those constraint languages whose CSP is NP-hard, and those whose CSP is in P. In this course, we present a powerful generalization of this approach to CSPs where the underlying domain is infinite.
After introducing the necessary background in model theory, the course will also cover recent applications of structural Ramsey theory that have led to complexity classification results for large classes of CSPs.
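As a concrete starting point (an illustrative sketch; the CSPs studied in this school are far more general), a finite-domain binary CSP can be solved by naive backtracking search, here applied to graph colouring:

```python
def consistent(var, val, assignment, constraints):
    """Check `var = val` against all constraints whose other endpoint is set."""
    for (u, w), pred in constraints.items():
        if u == var and w in assignment and not pred(val, assignment[w]):
            return False
        if w == var and u in assignment and not pred(assignment[u], val):
            return False
    return True

def solve_csp(variables, domains, constraints, assignment=None):
    """Naive backtracking: extend the assignment one variable at a time,
    abandoning a branch as soon as some constraint is violated."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for val in domains[var]:
        if consistent(var, val, assignment, constraints):
            result = solve_csp(variables, domains, constraints,
                               {**assignment, var: val})
            if result is not None:
                return result
    return None  # no value works: backtrack

# 3-colour a triangle plus a pendant vertex: each edge constrains
# its endpoints to take different colours.
edges = [('a', 'b'), ('b', 'c'), ('a', 'c'), ('c', 'd')]
constraints = {e: (lambda x, y: x != y) for e in edges}
colours = solve_csp(['a', 'b', 'c', 'd'], {v: [0, 1, 2] for v in 'abcd'},
                    constraints)
print(colours)
```

The universal-algebraic theory discussed in the course explains for which constraint languages such problems admit polynomial-time algorithms and for which they are NP-hard.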

Outline of the course

Monday: Examples of CSPs from Artificial Intelligence. Primitive positive formulas and NP-hardness proofs. Some efficient algorithms. The Feder-Vardi conjecture.

Tuesday: The universal-algebraic approach. The Inv-Pol Galois correspondence. The core of a structure. The tractability conjecture. Cyclic operations. Exercises.

Wednesday: CSPs on infinite domains. Fraïssé limits, the Ryll-Nardzewski theorem. The Inv-Pol correspondence on infinite domains. The topology of simple convergence.  Henson digraphs and other examples.

Thursday: Introduction to Ramsey theory. Canonical functions. The classification of Graph-SAT problems.

Friday: Open problems, proposals of research projects. Exam and discussion.

Local contact: Pascal Koiran


Registration is free, but there is a limit on the number of participants, and registration includes neither housing nor meals (though for lunch the attendees will be granted access to the student cafeteria). Registration should be made before Monday, December 3, by clicking on this link and filling in and sending the e-mail form. You will receive a confirmation as soon as possible.

Alternatively, you can copy/paste and fill the following form and send it by e-mail to, with subject “Registration form — research school 3”

First Name:
Last Name:
Position (MSc student, PhD student, researcher, etc.):
E-mail address:

wishes to attend the research school ‘Constraint satisfaction problems’, taking place at ENS Lyon, from Jan. 21 to Jan. 25.

ER02: Semantics and tools for low-level concurrent programming

January 14-18, 2013, ENS Lyon.


Speakers: Francesco Zappa Nardelli (INRIA), Mark Batty (University of Cambridge), Alastair Donaldson (Imperial College London), and Martin Vechev (ETH Zurich)


Available from here


Optimisations performed by the hardware or by high-level language compilers can reorder memory accesses in various subtle ways. These reorderings can sometimes be observed by concurrent programs, exacerbating the usual problems of concurrent programming. The situation is even worse when we abandon traditional shared-memory concurrency in favor of graphics processing units (GPUs) or exotic hardware. In these lectures we will cover recent progress in understanding and specifying the behaviour of some major processor architectures and the impact of optimisations performed by high-level programming language compilers, establishing a solid basis for shared-memory programming. We will then explore techniques which can help programmers develop reliable concurrent software by automatically discovering the necessary synchronization. We will also discuss techniques for the verification of multicore software, including GPU kernels.


  • Monday:
    • 10h30-12h30: Welcome + introduction
    • 14h: Batty: x86TSO
    • 15h30-16h30: TD: x86TSO
  • Tuesday
    • 9h-10h30: Batty + Zappa: ARM & Power
    • 11h-12h30: Zappa : Compiler Optimisations + DRF
    • 14h-15h: TD: ARM & Power
    • 15h30-16h30: TD: Compiler Optimisations + DRF
  • Wednesday
    • 9h-10h30: Batty : C11
    • 11h-12h30: Vechev (1)
    • 14h-15h: TD : C11
    • 15h30-16h30: TD : Vechev
  • Thursday
    • 9h-10h30: Vechev (2)
    • 11h-12h30: Alastair (1)
  • Friday
    • 9h-10h30: Alastair (2)
    • 11h-12h30: TD: Alastair
    • 14h-15h30: Mini-exam

For the practical sessions

Please bring your laptop with you. In case you do not have one, please get in contact with Filippo Bonchi.

Contact: Filippo Bonchi.


Registration is free, but there is a limit on the number of participants, and registration includes neither housing nor meals (though for lunch the attendees will be granted access to the student cafeteria). Registration should be made before Monday, January 8, by clicking on this link and filling in and sending the e-mail form. You will receive a confirmation as soon as possible.

Alternatively, you can copy/paste and fill the following form and send it by e-mail to, with subject “Registration form — research school 2”

First Name:

Last Name:


Position (MSc student, PhD student, researcher, etc.):

E-mail address:

wishes to attend the research school ‘Semantics and tools for low-level concurrent programming’, taking place at ENS Lyon, from Jan. 14 to Jan. 18.

ER01: Fault tolerance for high performance computing

George Bosilca and Thomas Hérault,

University of Tennessee Knoxville

December 10-14, 2012, ENS Lyon.

During this course, we will discuss fault tolerance techniques for high performance computing systems.


In June 2008, the LANL’s Road Runner computing system was the first to cross the hallmark of one Petaflop on the Linpack benchmark (reaching $1.026 \cdot 10^{15}$ double-precision floating-point operations per second). Today, only 4 years later, the fastest computing system, the Sequoia BlueGene/Q at LLNL, sustains 16.324 Petaflops. Continuing on the same path, it is expected that as early as 2022, computing systems will conquer the 1 Exaflop milestone ($10^{18}$ flops). After the multicore revolution of the 2000s, such performance became achievable only by multiplying the number of computing components (cores) in the parallel machine. While the 2008 Road Runner machine had 122,400 cores, distributed over 20,000 heterogeneous nodes, today’s Sequoia system exhibits no less than 1,572,864 cores, distributed over 100,000 nodes.

The drawback of such an exponential increase in the number of components is a disruptive decrease in the reliability of the entire system. Indeed, the Mean Time To Interruption (MTTI), defined as the average duration for which all the components of the platform behave as specified, is inversely proportional to the number of components in the platform. While in 2009 the MTTI of the largest supercomputers was estimated at close to 4 days, it sharply decreased to below a single day (19h04min) in 2011, and is projected to continue its descent in the following years (less than 4h by 2015, less than 2h by 2018). While technical breakthroughs will hopefully allow the MTTI to remain in a tolerable range, long-running applications need a more flexible execution environment, one allowing them to survive for longer periods of time, either algorithmically or by taking advantage of specialized capabilities of the programming model. Unfortunately, current programming paradigms do not account for such hardware instability; the failure of a single component (be it a memory or computing unit, a hard drive or a network card) has a drastic and lasting effect on the computation. Even without accounting for the implicit energy costs, given the evolving MTTI expectations for next-generation computing platforms, such approaches cannot reasonably lead to sustainable software solutions.
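The inverse proportionality between MTTI and component count lends itself to a quick back-of-the-envelope check; the sketch below uses illustrative figures, not the measured values quoted above:

```python
# Under independent failures, a platform of N identical components, each
# with mean time to failure mttf_component, interrupts on average every
# mttf_component / N (the simplest exponential-failure model).
def platform_mtti_hours(mttf_component_years, n_components):
    hours = mttf_component_years * 365 * 24
    return hours / n_components

# A component lasting 50 years on average, in a 100,000-node machine:
print(platform_mtti_hours(50, 100_000))  # 4.38 hours
```

Even very reliable individual components thus yield a platform that fails every few hours at exascale, which is precisely why the fault-tolerance techniques of this course are needed.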


This course will highlight the existing solutions, and present parallel algorithms designed to provide correct solutions despite the presence of failures. We will show how the programming environment can be modified to increase the system reliability and sustainability. The course will start with an introduction to High Performance Programming, its challenges and typical algorithms.

Then we present theoretical as well as practical studies of the many fault-tolerant approaches that have been proposed over the years.


The course assumes a general knowledge of algorithms and operating systems, as well as a basic understanding of machine architecture and network systems.

Update: bring your laptop! Some exercises on machine will be proposed during the course. Attendees are expected to bring a laptop able to establish a connection to a remote machine (typically via ssh): please write to the local contact (Yves Robert) if this is not possible for you; we should be able to lend a few laptops to participants.

Local contact: Yves Robert.


Registration is free, but there is a limit on the number of participants, and registration includes neither housing nor meals (though for lunch the attendees will be granted access to the student cafeteria). Registration should be made before Monday, December 3, by clicking on this link and filling in and sending the e-mail form. You will receive a confirmation as soon as possible.

Alternatively, you can copy/paste and fill the following form and send it by e-mail to, with subject “Registration form — research school 1”


First Name:

Last Name:


Position (MSc student, PhD student, researcher, etc.):

E-mail address:

wishes to attend the research school ‘Fault tolerance for high performance computing’, taking place at ENS Lyon, from Dec. 10 to Dec. 14.

Computational Geometry and Digital Images

Contents of the course

The objective of this course is to introduce fundamental notions of image processing, digital geometry, computational geometry and computer graphics. The first lectures will be dedicated to image processing (filtering, smoothing, mathematical morphology, …) and shape representation. Then, we will focus on theoretical and algorithmic issues involved in the analysis of digital shapes. During this analysis of digital geometry processing tools, we will present and integrate tools from various fields: discrete mathematics, combinatorics, arithmetic, computational geometry… The course ends with some basics of computer graphics, geometry processing and rendering.

Table of contents

  • Image/Shape representation
  • Image processing (filtering, segmentation)
  • Digital Geometry for shape analysis
  • Computational geometry and data structures
  • Computer Graphics / Rendering
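As a small taste of digital geometry (an illustrative sketch, not course material), Bresenham's algorithm computes a digital line: the set of integer pixels approximating a Euclidean segment using only integer arithmetic:

```python
def digital_line(x0, y0, x1, y1):
    """Bresenham's line algorithm: integer pixels approximating the
    segment (x0,y0)-(x1,y1), a basic object of digital geometry."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # running error term deciding when to step in y (or x)
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points

print(digital_line(0, 0, 5, 2))  # 6 pixels from (0, 0) to (5, 2)
```

The arithmetic structure of such digital lines (which pixel patterns can occur) is exactly the kind of question the digital-geometry lectures address.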


Prerequisites: basic notions of algorithms.


  • « Géométrie discrète et images numériques », D. Coeurjolly, A. Montanvert and J.-M. Chassery, Ouvrage collectif, Traité IC2, Hermès, 416 pages, 2007.
  • « Digital Geometry: Geometric Methods for Digital Picture Analysis », Reinhard Klette, Azriel Rosenfeld, Morgan Kaufmann, 2004.
  • « Computational Geometry: Algorithms and Applications », Mark de Berg, Otfried Cheong, Marc van Kreveld and Mark Overmars, Springer-Verlag.

Cryptography and Security

Course contents:

Cryptography aims at securing communications against malicious parties. This field enjoys numerous links with theoretical computer science (complexity theory, security proofs), but also has a very rich practical counterpart: cryptographic protocols are part of everyday life (electronic commerce, payment cards, electronic voting, etc.).
This course is an introduction to the different facets of modern cryptography. The following topics will be addressed:

  • Symmetric encryption
  • Asymmetric encryption
  • Cryptographic hashing
  • Authentication
  • Pseudo-random number generators
  • Zero-knowledge proofs
  • Public-key infrastructure
  • Cryptanalysis
  • Provable security
  • Secret sharing

We will also describe several practical applications, such as PGP, TLS/SSL and electronic voting.
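As a minimal taste of symmetric encryption (an illustrative sketch, not course material), the one-time pad XORs the plaintext with a fresh random key of the same length; with a truly random, never-reused key this is provably secure, while any key reuse breaks it completely:

```python
import secrets

def xor_bytes(data, key):
    """One-time pad: ciphertext = plaintext XOR key, byte by byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # fresh random key, used exactly once
ciphertext = xor_bytes(message, key)
# XOR is its own inverse, so decryption reuses the same operation:
assert xor_bytes(ciphertext, key) == message
```

Practical symmetric ciphers (block and stream ciphers covered in the course) exist precisely because distributing one-time-pad-sized keys is impractical.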

Evaluation sheet.

Teaching staff

Program analysis and systems verification

Contents of the course

  • Ordered structures and topologies. Scott domains. Denotational and axiomatic semantics. Abstract interpretation.
  • Principles of model checking. Kripke structures and Büchi automata. Temporal logics (LTL, CTL, …).
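As a toy taste of model checking (an illustrative sketch, not course material), a Kripke structure can be encoded as a labelled transition graph, and the CTL property EF p ("some reachable state satisfies p") reduces to a graph search:

```python
def ef(transitions, labels, start, prop):
    """Check EF prop: is some state labelled `prop` reachable from `start`?
    A depth-first search over the Kripke structure's transition graph."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s in seen:
            continue  # already explored (handles cycles)
        seen.add(s)
        if prop in labels.get(s, set()):
            return True
        stack.extend(transitions.get(s, []))
    return False

# Three states: 0 -> 1, 1 -> {0, 2}, 2 -> 2; only state 2 is labelled "error".
transitions = {0: [1], 1: [2, 0], 2: [2]}
labels = {2: {"error"}}
print(ef(transitions, labels, 0, "error"))  # True: an error state is reachable
```

Full model checkers handle richer temporal operators (e.g. liveness properties requiring Büchi automata), but reachability is the basic building block.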

Practical matters related to the organisation of the course will be presented at the first lecture.

People in charge:

ER 04 (2011) : Separation logics and applications

Separation logic is an extension of Floyd-Hoare logic that makes it possible to reason about programs manipulating pointers. Since its introduction at the beginning of this millennium, it has led to several developments, including tools for the automatic verification of properties of pointer-manipulating programs, as well as reasoning about concurrent programs in a shared-memory setting.

This research school will take place in the week Jan.31st-Feb. 4, 2011.

Courses will be taught in English.


– H. Yang: intro, sep-logic, hiding, automation

– A. Gotsman: concurrency, csl, lockfree, scheduler

– M. Parkinson: abstract predicates, rely-guarantee, RGSep, deny-guarantee

Code: reverse.c, create.c, dispose.c, push.c, pop.c

The lecture notes on Separation Logic, written by J. Reynolds and mentioned by H. Yang in his lecture, are available here.

The prerequisite to follow the course is a basic knowledge of first-order logic.

Tool exercises: we suggest that attendees bring a laptop for the sessions devoted to tool exercises. Exercises will be made using the Verifast Program Verifier (follow the link to download and install the tool on your computer).


  • Hongseok Yang (Lecturer, Queen Mary University of London, UK)
  • Alexey Gotsman (Assistant Professor, IMDEA, Spain)
  • Matthew Parkinson (Researcher, Microsoft Research, Cambridge, UK)

Topics (tentative description)

1) Basic separation logic.

  • [a] Basic proof rules and semantics of separation logic.
  • [b] Basic automatic verification algorithms based on separation logic.

2) Logic-based approaches to information hiding and data abstraction.

  • [a] Higher-order frame rules and abstract predicates.
  • [b] Automatic verification algorithm with abstract predicates.

3) Invariant-based approaches to the verification of concurrent programs.

  • [a] Concurrent separation logic.
  • [b] Automatic verification algorithm based on concurrent separation logic.

4) Rely-Guarantee.

  • [a] RGSep: the marriage of rely-guarantee and separation logic.
  • [b] Automatic verification with RGSep.
  • [c] Reasoning about liveness properties.
  • [d] Deny Guarantee.

Tentative schedule for the week. [L] means a lecture, and [E] means an exercise session.

Mon (5 hours)

  • [L] Introduction and basic semantics (HY)     10h30
  • [E] Introduction and basic semantics (HY)
  • [E] Concurrent programming (AG)      14h00
  • [L] Basic separation logic (HY)
  • [E] Basic separation logic (HY)

Tue (5 hours)

  • [L] Abstract predicate (MP)     9h00
  • [E] Abstract predicate (MP)
  • [E] Tool exercise 1 (AG)
  • [L] Information hiding (HY)   14h00
  • [E] Information hiding (HY)

Wed (7 hours)

  • [L] Concurrent separation logic (AG)     9h00
  • [E] Concurrent separation logic (AG)
  • [E] Tool exercise 2 (AG)
  • [L] RGSep (MP)              13h30
  • [E] RGSep (MP)
  • [L] Automation (HY)     16h00
  • [E] Automation (HY)

Thu (4 hours)

  • [L] Soundness of concurrent separation logic (AG)        9h00
  • [L] Deny/Guarantee (MP)
  • [E] Deny/Guarantee (MP)

Fri (3+1 hours)

  • [L] Liveness (AG)          9h00
  • [E] Liveness (AG)
  • [L] Concurrent abstract predicate (MP)
  • Exam     (end: 13h00)

Please contact Daniel Hirschkoff if you have any question regarding this course.