# 2017-2018

Nicolas Louvet (AriC)

July 12, 2018 at 10h15, room M7-315 (3rd floor, Monod)

A compensated dot product for vectors of floating-point expansions

Some computations require more precision than the 53 bits provided by the IEEE 754 double precision arithmetic (binary64), widely available in hardware. When arbitrarily large precision is required, multiple-precision floating-point libraries, such as MPFR, are particularly useful. However, MPFR is implemented using only integer arithmetic, which is not as highly optimized as floating-point arithmetic on modern processors. If only a few hundred bits are needed, faster alternatives using only floating-point arithmetic exist, such as the well-known qd library developed by Bailey and his co-authors. This library provides both double-double and quad-double arithmetics, which roughly simulate twice (106 bits) and four times (212 bits) the IEEE 754 double precision, respectively. Double-double and quad-double numbers are a special case of floating-point expansions, which represent a rational number as an unevaluated sum of floating-point numbers. Another alternative is the use of error-free transformations and compensation techniques to simulate an increased working precision, and this is the approach we follow in this work.

In this talk, we focus on the computation of dot products between vectors of floating-point expansions. We present alternatives to the algorithms obtained by directly using the double-double and quad-double arithmetics from the qd library: our aim is to gain speed at the price of a moderate loss of accuracy. Of course, we always keep this loss under control by providing forward error bounds for our algorithms. The algorithm we present achieves a speedup of 2 over the qd library, while maintaining roughly the same accuracy. This speedup is obtained using only basic programming techniques, and even higher speedups can be reached with the vector instruction sets (AVX2 or AVX512 in particular) available on modern processors.
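
The error-free transformations underlying this compensated approach can be sketched in a few lines of Python. The following is only an illustration of the classical compensated dot product Dot2 of Ogita, Rump and Oishi, not the expansion-based algorithm of the talk; `two_prod` uses Dekker's product with Veltkamp splitting, valid for binary64 in the absence of overflow and underflow.

```python
def two_sum(a, b):
    # Knuth's TwoSum: returns (s, e) with s = fl(a + b) and s + e = a + b exactly
    s = a + b
    t = s - a
    return s, (a - (s - t)) + (b - t)

def two_prod(a, b):
    # Dekker's product with Veltkamp splitting: p + e = a * b exactly
    # (binary64, no overflow/underflow)
    def split(x):
        c = 134217729.0 * x          # 2**27 + 1
        hi = c - (c - x)
        return hi, x - hi
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    return p, ((ah * bh - p) + ah * bl + al * bh) + al * bl

def dot2(x, y):
    # Compensated dot product: as accurate as if computed in doubled
    # working precision, then rounded back to binary64
    s = c = 0.0
    for xi, yi in zip(x, y):
        p, ep = two_prod(xi, yi)
        s, es = two_sum(s, p)
        c += es + ep
    return s + c
```

On the ill-conditioned data below, the naive dot product returns 0.0 while the compensated one returns the exact answer 1.0.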

Nicolas Brisebarre (AriC)

June 14, 2018 at 10h15, room M7-315 (3rd floor, Monod)

New results on the table maker’s dilemma for transcendental functions

On a computer, real numbers are usually represented by a finite set of numbers called floating-point numbers. When one performs an operation on these numbers, such as an evaluation by a function, one returns a floating-point number, hopefully close to the mathematical result of the operation. Ideally, the returned result should be the exact rounding of this mathematical value. If we are allowed only a single, fast evaluation (a constraint often met in practice), one knows how to guarantee such a quality of results for arithmetic operations like $$+, -, \times, /$$ and square root, but, as of today, it is still an issue when it comes to evaluating an elementary function such as cos, exp or cube root, for instance. This problem, called the Table Maker's Dilemma, is actually a Diophantine approximation problem. It was tackled, over the last fifteen years, by V. Lefèvre, J.-M. Muller, D. Stehlé, A. Tisserand and P. Zimmermann (LIP, ÉNS Lyon and LORIA, Nancy), using tools from algorithmic number theory. Their work made it possible to partially solve this question, but it remains an open problem. In this talk, I will present ongoing joint work with Guillaume Hanrot (ÉNS Lyon, LIP, AriC) that improves on part of the existing results.
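
The phenomenon behind the dilemma can be conveyed with a toy Python search (purely illustrative; it has nothing to do with the number-theoretic techniques mentioned above). Using Python's `decimal` module as an extended-precision oracle, we measure how close exp(x) comes to a rounding boundary of binary64:

```python
import math
from decimal import Decimal, getcontext

def distance_to_midpoint(x, prec=60):
    # Distance (in ulps) from exp(x) to the nearest rounding boundary of
    # binary64, i.e. to a midpoint between two consecutive doubles.
    # Correctly rounding exp(x) is hard precisely when this is tiny.
    getcontext().prec = prec
    y = Decimal(x).exp()                  # extended-precision oracle value
    f = float(y)                          # nearest binary64 number
    frac = (y - Decimal(f)) / Decimal(math.ulp(f))
    return float(Decimal("0.5") - abs(frac))

# scan a few arguments; the hard-to-round cases are the small values
hardest = min(distance_to_midpoint(k / 64.0) for k in range(1, 65))
```

An exhaustive search of this kind over all binary64 inputs is hopeless; the whole point of the works cited above is to find the worst cases much faster.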

Alice Pellet-Mary (AriC)

June 7, 2018 at 10h30, room M7-315 (3rd floor, Monod)

Approx-SVP in Ideal Lattices with Pre-processing

Finding a short nonzero vector in a Euclidean lattice is a well-studied problem which has proven useful for constructing many cryptographic primitives. The current best asymptotic algorithm for finding a relatively short vector in an arbitrary lattice is the BKZ algorithm. This algorithm recovers a vector which is at most $$2^{n^{\alpha}}$$ times longer than the shortest nonzero vector, in time $$2^{n^{1-\alpha}}$$, for any $$\alpha$$ between 0 and 1.

In order to gain efficiency, it is sometimes interesting to use structured lattices instead of general lattices. An example of such structured lattices is given by principal ideal lattices. One may then wonder whether, on the security front, it is easier to find short vectors in a structured lattice. Until 2016, there was no known algorithm which would perform better on principal ideal lattices than the BKZ algorithm (either classically or quantumly). In 2016, Cramer et al. proposed a quantum algorithm that finds a $$2^{\sqrt n}$$ approximation of the shortest nonzero vector in polynomial time. However, the BKZ algorithm remained the best algorithm in the classical setting, and for approximation factors smaller than $$2^{\sqrt n}$$ in the quantum setting.

In this talk, I will present an algorithm that generalizes the one of Cramer et al. and improves upon the BKZ algorithm for principal ideal lattices, both quantumly and classically. This algorithm is heuristic and non-uniform (or, equivalently, it needs an exponential pre-processing time).

Laurent Grémy (AriC)

May 24, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Assessing the security of discrete-logarithm-based cryptography

The security of most public-key cryptography relies on the difficulty of mathematical problems, among which is the discrete logarithm problem in finite fields. The number field sieve (NFS) is the most efficient known algorithm for this problem over a large range of finite fields. Since its introduction in the 1990s, NFS has evolved, both theoretically and practically. These improvements force cryptographers to reassess the key sizes of cryptosystems.

In this talk, we will first introduce the key notions of index-calculus algorithms, the family of algorithms to which NFS belongs. Then, we will describe recent progress related to NFS. Finally, we will conclude by outlining how the security of pairing-based cryptography is assessed, based on the works of Menezes-Sarkar-Singh and Barbulescu-Duquesne.

Dan Roche (US Naval Academy and LJK, Grenoble)

May 17, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Matrix multiplication and inverse with error correction

In some situations the result of a linear algebra computation may be known to be almost correct, that is, correct except in a few positions of the output matrix. One example is distributed computing, where some nodes in a cluster may return incorrect results; another is integer computations, where only a few entries may be large enough to require an additional prime's worth of Chinese remaindering. We present new algorithms that use the almost-correct result of a matrix product or inverse to recover the completely correct result, with running time softly linear in the size of the matrix when the number of errors is sufficiently small. Our algorithms build on prior work by Gasieniec et al. (Algorithmica 2017) on error-correcting matrix multiplication; we are now able to effectively handle sparsity in the inputs and outputs, as well as to incorporate fast matrix multiplication so that the cost scales well even with a large number of errors. The key underlying tools come from sparse polynomial interpolation and sparse low-rank linear algebra.
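
To make the setting concrete, here is a toy, integer-only sketch (our illustration, not the algorithm of the talk) of how a Freivalds-style check locates the rows of a claimed product that contain errors, using two matrix-vector products instead of recomputing A·B:

```python
import random

def suspect_rows(A, B, C):
    # Compare A*(B*x) with C*x for a random vector x: any row where the
    # two disagree contains an error in C (a row with a single wrong
    # entry is always caught, since its discrepancy is e * x_j != 0)
    x = [random.randrange(1, 2**31) for _ in range(len(C[0]))]
    Bx = [sum(b * v for b, v in zip(row, x)) for row in B]
    ABx = [sum(a * v for a, v in zip(row, Bx)) for row in A]
    Cx = [sum(c * v for c, v in zip(row, x)) for row in C]
    return [i for i, (u, w) in enumerate(zip(ABx, Cx)) if u != w]
```

With few errors, a correction step then only has to fix the flagged rows; the algorithms of the talk go further and recover the wrong entries themselves via sparse interpolation.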

Eva Darulova (MPI, Saarbrücken)

May 3, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Daisy – a framework for sound accuracy analysis and optimization of numerical programs

Floating-point or fixed-point computations are an integral part of many embedded and scientific computing applications, as are the roundoff errors they introduce. They expose an interesting tradeoff between efficiency and accuracy: the more precision we choose, the closer the results will be to ideal real arithmetic, but the more costly the computation becomes. Unfortunately, the unintuitive and complex nature of finite-precision arithmetic makes manual optimization infeasible, so that automated tool support is indispensable.

This talk presents an overview of Daisy, a framework for sound accuracy analysis and optimization of finite-precision programs. We will provide a high-level view of its main features: roundoff error analysis as well as rewriting and mixed-precision optimization.

AriC members

April 26, 2018 at 10h45, room M7-315 (3rd floor, Monod)

Back from conferences

Short talks (10’ + 5’ for questions) by people who are back from a conference, on talks they attended and found interesting. We will have, in alphabetical order:

• Elena Kirshanova on «A practical cryptanalysis of WalnutDSA» by Daniel Hart, Do Hoon Kim, Giacomo Micheli, Guillermo Pascual Perez, Christophe Petit, Yuxuan Quek; presented at PKC 2018;
• Fabrice Mouhartem on «Post-Quantum Security of Fiat-Shamir» by Dominique Unruh; presented at Asiacrypt 2017;
• Bruno Salvy on «Arctic curves in path models from the tangent method» by Philippe Di Francesco and Matthew F. Lapa; presented at a Workshop on “Computer Algebra in Combinatorics”, Vienna 2017;
• Wen Weiqiang on «Fully Homomorphic Encryption from the Finite Field Isomorphism Problem» by Yarkın Doröz, Jeffrey Hoffstein, Jill Pipher, Joseph H. Silverman, Berk Sunar, William Whyte, Zhenfei Zhang; presented at PKC 2018.

Bruno Salvy (AriC)

March 29, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Algorithmic tools for the asymptotics of linear recurrences

The asymptotic behaviour of a solution to a linear recurrence with polynomial coefficients can be decomposed into a not-so-difficult part that depends on the recurrence but not on the initial conditions, and a more difficult part depending on the initial conditions. For the first part, several algorithms exist. In general, the second part can be dealt with numerically, and in a certified way, rather efficiently. Diagonals of rational functions form an important class of sequences that are solutions of such recurrences and for which more is possible. This class contains and generalizes the class of coefficient sequences of algebraic power series. It enjoys very special arithmetic and analytic properties. Its elements appear frequently in combinatorics, notably as multiple binomial sums, as well as in number theory, e.g., Apéry’s sequences in his proof of the irrationality of $$\zeta(3)$$. The asymptotic analysis of diagonals, in simple but frequent cases, can be performed automatically using the technology of Analytic Combinatorics in Several Variables of Pemantle and Wilson, together with modern tools of computer algebra. The talk will give an overview of the context and of recent work in this area.
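
As a concrete, standard example of such a diagonal (our illustration, not taken from the talk): the diagonal of the rational function 1/(1-x-y) is the sequence of central binomial coefficients, whose first-order asymptotics 4^n/√(πn) is exactly the kind of result the ACSV machinery produces automatically.

```python
import math

def diagonal_coeff(n):
    # Coefficient of x^n y^n in 1/(1 - x - y): the central binomial
    # coefficient binom(2n, n), a typical multiple binomial sum
    return math.comb(2 * n, n)

def acsv_first_order(n):
    # First-order asymptotic estimate 4^n / sqrt(pi * n)
    return 4.0 ** n / math.sqrt(math.pi * n)
```

Already at n = 50 the first-order estimate is within a fraction of a percent of the exact coefficient.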

Fredrik Johansson (LFANT, Bordeaux)

March 22, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Numerical integration in arbitrary-precision ball arithmetic

We discuss arbitrary-precision numerical integration with rigorous error bounds in the framework of ball arithmetic. A new implementation has recently been added to the Arb library. Rapid convergence is ensured for piecewise complex analytic integrals by use of the Petras algorithm, which combines adaptive bisection with degree-adaptive Gaussian quadrature where error bounds are determined via complex magnitudes without evaluating derivatives. The code is general, easy to use, and efficient, often outperforming existing non-rigorous software. We discuss the pros and cons of this approach and consider some applications to special functions and computational number theory.
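
To give a flavour of what ball arithmetic is, here is a toy sketch (not how Arb works internally: Arb uses arbitrary-precision midpoints with rigorous directed rounding, while this toy merely inflates the radius by a relative epsilon per operation):

```python
class Ball:
    # Mid-rad interval ("ball") arithmetic, toy version: after each
    # floating-point operation the radius is padded by |mid| * EPS to
    # absorb the rounding error of the midpoint computation
    EPS = 2.0 ** -52

    def __init__(self, mid, rad=0.0):
        self.mid, self.rad = mid, rad

    def __add__(self, other):
        mid = self.mid + other.mid
        return Ball(mid, self.rad + other.rad + abs(mid) * Ball.EPS)

    def __mul__(self, other):
        mid = self.mid * other.mid
        rad = (abs(self.mid) * other.rad + abs(other.mid) * self.rad
               + self.rad * other.rad + abs(mid) * Ball.EPS)
        return Ball(mid, rad)

    def contains(self, x):
        return abs(x - self.mid) <= self.rad
```

A quadrature routine built on such balls returns an interval guaranteed to enclose the exact integral, which is what "rigorous error bounds" means here.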

Claude-Pierre Jeannerod (AriC)

March 8, 2018 at 13h30, room 115, Monod site

Algorithms for Matrices with Displacement Structure

The notion of displacement rank provides a uniform way to deal with various classical matrix structures (such as Toeplitz, Hankel, Vandermonde, Cauchy) and allows for various generalizations. In this talk, we will review some recent progress in the design of asymptotically fast algorithms for multiplying and inverting such matrices. In particular, for $$n$$ by $$n$$ matrices of displacement rank $$\alpha$$, we will see that the arithmetic cost of such operations can be reduced from about $$\alpha^2 n$$ (up to log factors) down to $$\alpha^{\omega-1} n$$, where $$\omega < 2.373$$ is the exponent of (dense, unstructured) matrix multiplication.

(Based on joint work with A. Bostan, C. Mouilleron, and E. Schost.)
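
A hypothetical mini-example of the displacement idea (standard material, not the algorithms of the talk): for a Toeplitz matrix T, the Stein displacement T - Z T Z^T, with Z the down-shift matrix, is supported on the first row and column only, hence has rank at most 2.

```python
from fractions import Fraction

def stein_displacement(T):
    # T - Z T Z^T with Z the down-shift: entry (i, j) becomes
    # T[i][j] - T[i-1][j-1], which vanishes for a Toeplitz matrix
    # whenever i, j >= 1
    n = len(T)
    return [[Fraction(T[i][j]) - (Fraction(T[i - 1][j - 1]) if i and j else 0)
             for j in range(n)] for i in range(n)]

def rank(M):
    # Rank over the rationals by plain Gaussian elimination
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            for j in range(c, len(M[0])):
                M[i][j] -= f * M[r][j]
        r += 1
    return r
```

Storing the two generator vectors instead of the full matrix is what lets the fast algorithms work with O(αn) data rather than n².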

Warwick Tucker (Uppsala University and AriC, LIP)

March 1st, 2018 at 10h15, room 116, Monod site

Validated numerics – part III

In this talk, we will continue our exposition of validated numerics, and focus on solving systems of nonlinear equations. We will state and prove the properties of the set-valued Newton operator, and show a nice variant of it (the Krawczyk operator). We will also talk about weaker methods based on the implicit function theorem that work in harder situations such as when a bifurcation of the solutions takes place. Again – the talk will be based upon concrete examples, and some small computer demo will illustrate the presented methods.
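
A one-dimensional sketch of the interval Newton step, for flavour (real validated code needs outward/directed rounding, which plain Python floats do not provide, so this is illustrative only):

```python
def interval_newton_step(f, dF, lo, hi):
    # One step of N([x]) = m - f(m) / F'([x]), m = midpoint of [x],
    # intersected with [x]; when N([x]) lies strictly inside [x], the
    # interval is proved to contain a unique zero of f
    m = 0.5 * (lo + hi)
    dlo, dhi = dF(lo, hi)            # enclosure of f' over [lo, hi]
    fm = f(m)
    a, b = m - fm / dlo, m - fm / dhi
    return max(lo, min(a, b)), min(hi, max(a, b))

# enclose sqrt(2) as the zero of x^2 - 2 on [1, 2]
f = lambda x: x * x - 2.0
dF = lambda lo, hi: (2.0 * lo, 2.0 * hi)   # f'([x]) = 2 * [x]
lo, hi = 1.0, 2.0
for _ in range(6):
    lo, hi = interval_newton_step(f, dF, lo, hi)
```

The Krawczyk operator replaces the interval division by a multiplication with an approximate inverse of the derivative, which extends to systems of equations.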

Gilles Villard (AriC)

February 22nd, 2018 at 10h15, room 116, Monod site

Computation of the resultant of two generic bivariate polynomials over a field

An algorithm is presented for computing the resultant of two generic bivariate polynomials over a field $$\mathbb{K}$$.

For such $$p$$ and $$q$$ in $$\mathbb{K}[x,y]$$ of degree $$d$$ in $$x$$ and $$n$$ in $$y$$, the algorithm computes the resultant with respect to $$y$$ using $$(n^{2 - 1/\omega} d) ^{1+o(1)}$$ arithmetic operations in $$\mathbb{K}$$, where two $$n\times n$$ matrices are multiplied using $$O(n^{\omega})$$ operations. Previous algorithms from the early 1970s required time $$(n^{2} d) ^{1+o(1)}$$.

We also discuss some extensions of the approach to the computation of characteristic polynomials of generic structured matrices and in univariate quotient algebras.
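
For reference, the object being computed can be illustrated in the univariate case over ℚ by taking the determinant of the Sylvester matrix, at cubic cost; this is purely a pedagogical baseline, far from the quasi-optimal algorithm of the talk.

```python
from fractions import Fraction

def resultant(p, q):
    # Resultant of univariate polynomials p, q (coefficient lists,
    # leading coefficient first) as the determinant of their Sylvester matrix
    m, n = len(p) - 1, len(q) - 1
    size = m + n
    S = [[Fraction(0)] * size for _ in range(size)]
    for i in range(n):                      # n shifted copies of p
        for j, c in enumerate(p):
            S[i][i + j] = Fraction(c)
    for i in range(m):                      # m shifted copies of q
        for j, c in enumerate(q):
            S[n + i][i + j] = Fraction(c)
    det = Fraction(1)                       # Gaussian elimination
    for col in range(size):
        piv = next((r for r in range(col, size) if S[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            S[col], S[piv] = S[piv], S[col]
            det = -det
        det *= S[col][col]
        for r in range(col + 1, size):
            f = S[r][col] / S[col][col]
            for c2 in range(col, size):
                S[r][c2] -= f * S[col][c2]
    return det
```

The resultant vanishes exactly when the two polynomials share a root, which is what makes it useful for elimination.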

Nick Trefethen (AriC and University of Oxford)

February 15th, 2018 at 10h15, room 116, Monod site

Minimax approximation of functions

My interest in minimax approximation started with my undergraduate thesis with Garrett Birkhoff at Harvard in 1977 and continues strong with the exciting new algorithm developed with Filip, Nakatsukasa, and Beckermann. This talk will collect observations about many aspects of this problem, both real and complex, polynomial and rational.

Koen de Boer (CWI, Amsterdam)

February 8th, 2018 at 10h15, room 116, Monod site

Attacks on the AJPS Mersenne-based Cryptosystem

Aggarwal, Joux, Prakash and Santha recently introduced a new potentially quantum-safe public-key cryptosystem, and suggested that a brute-force attack is essentially optimal against it. They consider but then dismiss both Meet-in-the-Middle attacks and LLL-based attacks. Very soon after their paper appeared, Beunardeau et al. proposed a practical LLL-based technique that seemed to significantly reduce the security of the AJPS system.

We do two things. First, we show that a Meet-in-the-Middle attack can also be made to work against the AJPS system, using locality-sensitive hashing to overcome the difficulty that Aggarwal et al. saw for such attacks. We also present a quantum version of this attack. Second, we give a more precise analysis of the attack of Beunardeau et al., confirming and refining their results.

Jean-Michel Muller (AriC team)

February 1st, 2018 at 10h15, room 116, Monod site

On the relative error of computing complex square roots in floating-point arithmetic

(joint work with Claude-Pierre Jeannerod)

We study the accuracy of a classical approach to computing complex square-roots in floating-point arithmetic.
Our analyses are done in binary floating-point arithmetic in precision $$p$$, and we assume that the (real) arithmetic operations $$+$$, $$-$$, $$\times$$, $$\div$$, $$\sqrt{ }$$ are rounded to nearest, so the unit roundoff is $$u = 2^{-p}$$.
We show that in the absence of underflow and overflow, the componentwise and normwise relative errors of this approach are at most $$\frac{7}{2}u$$ and $$\frac{\sqrt{37}}{2}u$$, respectively, and this without having to neglect terms of higher order in $$u$$. We then provide some input examples showing that these bounds are reasonably sharp for the three basic binary interchange formats (binary32, binary64, and binary128) of the IEEE 754 standard for floating-point arithmetic. Finally, we consider another, slightly more accurate algorithm.
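
The classical approach under study can be sketched as follows in Python (ignoring the underflow/overflow issues that the analysis explicitly excludes):

```python
import math

def complex_sqrt(a, b):
    # Square root x + iy of a + ib via one real square root and one
    # division, choosing the branch that avoids cancellation:
    #   x = sqrt((|a + ib| + a) / 2),  y = b / (2x)            if a >= 0
    #   y = sign(b) * sqrt((|a + ib| - a) / 2),  x = b / (2y)  otherwise
    if a == 0.0 and b == 0.0:
        return 0.0, 0.0
    l = math.hypot(a, b)
    if a >= 0.0:
        x = math.sqrt(0.5 * (l + a))
        return x, b / (2.0 * x)
    y = math.copysign(math.sqrt(0.5 * (l - a)), b)
    return b / (2.0 * y), y
```

The bounds quoted above say that each component of the result, and the result as a complex number, are within a few units in the last place of the exact square root.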

Tanguy Rivoal (Institut Fourier, CNRS and Université Grenoble Alpes)

December 21, 2017 at 10h15, room 116, Monod site

Exceptional algebraic values of E-functions

E-functions are power series with algebraic Taylor coefficients at the origin (satisfying certain growth conditions) and solutions of linear differential equations with polynomial coefficients. Siegel defined them in 1929 to generalize the diophantine properties of the exponential function, which takes transcendental values at any non-zero algebraic point. The situation is more complicated in general, as an E-function may sometimes return an algebraic value when evaluated at a non-zero algebraic point. In this talk, I will first explain the classical Diophantine results for E-functions. I will then present an algorithm which, given an E-function $$F(z)$$ as input, outputs the finite list of algebraic numbers $$A$$ such that $$F(A)$$ is algebraic.

This is a joint work with Boris Adamczewski (CNRS et Université Lyon 1).

Alexandre Wallet (AriC, LIP)

December 14, 2017 at 10h15, room 116, Monod site

On variants of Ring-LWE and Polynomial-LWE

Learning With Errors is a popular security assumption for post-quantum cryptographic schemes: informally, one must find an integer solution of a “noisy” linear system. It is based on hard problems on Euclidean lattices, and it enables many functionalities in cryptography. Structured variants based on algebraic number theory have been introduced to improve time and memory efficiency. However, the addition of new mathematical objects such as number fields and rings has led to several possible security assumptions, and the link between them is not always clear.

I will present recent results toward clarifying the situation. For a large family of number fields, we showed that all versions of the problem are equivalent (up to polynomial increases in the noise, and up to non-uniformity of some of the underlying reductions). Our reductions rely on mathematical tools, some of them new in the Ring-based LWE context, such as perturbation techniques for polynomials.

Based on a joint work with Miruna Rosca and Damien Stehlé.

LIP day on “Calcul”

November 30, 2017

See the web page dedicated to this day.

Warwick Tucker (Uppsala University and AriC, LIP)

November 23, 2017 at 10h15, room 116, Monod site

Validated numerics – part II

In this talk, we will cover the fundamentals of validated numerics, namely the inclusion principle and its consequences for mathematical proofs. In particular, we will talk about the set-valued Newton operator, and how it comes into play in many computer-assisted proofs. Again – the talk will be based upon concrete examples, and some small computer demo will illustrate the presented methods.

Nick Trefethen (AriC and University of Oxford)

Thursday November 16, 2017 at 10h15, room 116, Monod site

Numerical Computation with Rational Functions

I will begin with a review of rational functions in numerical computation, including twelve applications where they are useful and two famous problems of rational approximation theory. Then we find ourselves with an “inverse Yogiism”. Although it is mathematically correct that a rational function is a quotient $$r = p/q$$ of two polynomials, representations of this form are numerically unstable in precisely the cases where rational functions should be most powerful. A better alternative is a barycentric representation $$r = n/d$$ in which $$n$$ and $$d$$ are partial fraction sums, and this leads to the exciting new “AAA algorithm”, which we will explain and demonstrate. This is joint work with Silviu Filip, Yuji Nakatsukasa, and Olivier Sète.
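
The barycentric representation evaluates r = n/d as a ratio of two partial-fraction sums; here is a minimal sketch (with support points and weights chosen by hand, whereas AAA selects them adaptively):

```python
def bary_eval(x, nodes, values, weights):
    # r(x) = (sum w_j f_j / (x - z_j)) / (sum w_j / (x - z_j));
    # stable where the explicit p/q form would overflow or underflow
    num = den = 0.0
    for z, f, w in zip(nodes, values, weights):
        if x == z:                 # x hits a support point: interpolate
            return f
        c = w / (x - z)
        num += c * f
        den += c
    return num / den

# weights w_j = 1 / prod_{k != j} (z_j - z_k) reproduce the polynomial
# interpolant; here the interpolant of f(x) = x^2 at -1, 0, 1
nodes, values, weights = [-1.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.5, -1.0, 0.5]
```

With other weight choices, the same formula represents genuinely rational functions of type (m, m), which is what AAA exploits.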

Paola Boito, Elena Kirshanova, Benoît Libert, and Nathalie Revol (AriC, LIP)

Thursday November 9, 2017 at 10h15, room 116, Monod site

Back from Conferences

Short talks (10’ + 5’ for questions) by people who are back from a conference, on talks they attended and found interesting. We will have, in alphabetical order:

• Paola Boito on «Solving large scale Lyapunov and Sylvester equations with quasiseparable coefficients» by Stefano Massei at Structured Matrices in Numerical Linear Algebra. Abstract: let us consider a Sylvester matrix equation $$AX+XB=C$$ where the coefficients $$A$$, $$B$$ and $$C$$ are matrices with low-rank off-diagonal blocks. What can be said about the rank structure of the solution $$X$$? It turns out that, under suitable hypotheses, $$X$$ inherits a rank structure as well. The main idea of the proof relies on a recent result by Beckermann and Townsend on displacement structured matrices.
• Elena Kirshanova on «Grover Meets Simon – Quantumly Attacking the FX-construction» by Gregor Leander and Alexander May, to be presented at Asiacrypt 2017;
• Benoît Libert on «Asymptotically Compact Adaptively Secure Lattice IBEs and Verifiable Random Functions via Generalized Partitioning Techniques» by Shota Yamada at IACR-CRYPTO-2017;
• Nathalie Revol on «Utah Floating-Point Toolset» by Zvonimir Rakamaric at Dagstuhl Seminar 17352 – Analysis and Synthesis of Floating-point Programs.

Rencontres “Arithmétique de l’Informatique Mathématique”

October 24-26, 2017

Monthly Lattice Meeting

October 19-20, 2017 in room 115, Monod site

Nick Trefethen (University of Oxford and AriC, LIP)

Thursday October 19, 2017 at 10h15, room 116, Monod site

Chebyshev Series

Chebyshev series are the basic tool for working with functions on a real interval — the equivalent of Fourier series but without the assumption of periodicity. They are also the basis of the software system Chebfun. We review the properties of Chebyshev series, using Chebfun to illustrate them, and then discuss the surprisingly challenging problem of “chopping a Chebyshev series” to obtain a representation to a prescribed precision, which is an analogue for functions of the floating-point rounding of real numbers.
We close with a few words about Faber’s theorem of 1914 and “inverse Yogiisms”.
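
A small sketch of the two ingredients, deliberately naive (O(n²) cosine sums instead of the FFT, and a plain absolute tolerance instead of Chebfun's more careful plateau detection):

```python
import math

def cheb_coeffs(f, n):
    # Chebyshev coefficients a_0..a_n of the interpolant of f through the
    # Chebyshev points x_j = cos(j*pi/n) on [-1, 1]
    fx = [f(math.cos(j * math.pi / n)) for j in range(n + 1)]
    coeffs = []
    for k in range(n + 1):
        s = sum((0.5 if j in (0, n) else 1.0) * fx[j]
                * math.cos(j * k * math.pi / n) for j in range(n + 1))
        c = 2.0 * s / n
        coeffs.append(0.5 * c if k in (0, n) else c)
    return coeffs

def chop(coeffs, tol=1e-14):
    # "Chop" the series after the last coefficient above tol
    last = max((k for k, c in enumerate(coeffs) if abs(c) > tol), default=0)
    return coeffs[:last + 1]
```

For a smooth function such as exp, the coefficients decay geometrically, so the chopped series is both short and accurate to roughly the tolerance.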

Warwick Tucker (Uppsala University and AriC, LIP)

Thursday October 12, 2017 at 10h15, room 116, Monod site

Validated numerics – a brief overview

In this talk, I will give a short introduction to the field of validated numerics. We will take a top-down approach, and begin by studying some natural mathematical problems that arise in the study of dynamical systems. Such problems invariably lead to some numerical challenges, and we will see how they can be addressed in the framework of validated numerics. Everything will be explained from “first principles”, so no prior knowledge of dynamical systems is needed.

Vincent Lefèvre, Jean-Michel Muller, Bruno Salvy and Damien Stehlé (AriC, LIP)

Thursday September 28, 2017 at 10h15, room 116, Monod site

Back from Conferences

Short talks (10’ + 5’ for questions) by people who are back from a conference, on talks they attended and found interesting. We will have, in alphabetical order:

• Vincent Lefèvre on «Multiple-precision floating-point arithmetic on SIMD processors» by van der Hoeven at Arith;
• Jean-Michel Muller on «The rise of multi precision computations» by Higham at Arith;
• Bruno Salvy on «A universal ordinary differential equation» by Bournez & Pouly at FoCM;
• Damien Stehlé on «Gaussian Sampling over the Integers: Efficient, Generic, Constant-Time» by Micciancio & Walter at Crypto.

# 2016-2017

Thomas Prest (Thales)

Thursday May 11, 2017 at 9h45 (please note the unusual time), room 116, Monod site

Easier floating-point on a Rényi day

This talk is part of the Monthly Lattice Meetings.

Paola Boito (XLIM, U. Limoges and AriC)

Thursday April 20, 2017 at 10h15, room 116, Monod site

Efficient Solution of Parameter Dependent Quasiseparable Systems
and Computation of Meromorphic Matrix Functions

In this work we focus on the solution of shifted quasiseparable systems and of more general parameter dependent matrix equations with quasiseparable representations. We propose an efficient algorithm exploiting the invariance of the quasiseparable structure under diagonal shifting and inversion. This algorithm is applied to compute various functions of matrices. Numerical experiments show the effectiveness of the approach.
This is joint work with Luca Gemignani (Pisa) and Yuli Eidelman (Tel Aviv).

Alin Bostan (Inria Saclay)

Thursday April 13, 2017 at 10h15, room 116, Monod site

Proving algorithmically the transcendence of D-finite power series

Given a sequence represented by a linear recurrence with polynomial coefficients and sufficiently many initial terms, a natural question is whether the transcendence of its generating function can be decided algorithmically. The question is non-trivial even for sequences satisfying a recurrence of order one. An algorithm due to Michael Singer is sufficient, in principle, to answer the general case. However, this algorithm suffers from too high a complexity to be effective in practice. We will present a recent method that we have used to treat a non-trivial combinatorial example. It reduces the question of transcendence to a (structured) linear algebra problem.

Laurent Grémy (Loria)

Monday April 10, 2017 at 15h00, room 115, Monod site

Computing discrete logarithms: sieving for the number field sieve

Factoring large integers and computing discrete logarithms over finite fields are two difficult problems, for which the most efficient known algorithms belong to a family called the index calculus. The security of some cryptosystems relies on this difficulty.

Since 1982 and the use of the quadratic sieve by Pomerance to factor large integers, it is well known that an important step of these algorithms, called relation collection, can be done quickly using sieving algorithms, basically like the sieve of Eratosthenes to find primes. Among the index calculus algorithms, the number field sieve (NFS) reaches the best complexity. It requires a two-dimensional sieving. In 2005, Franke and Kleinjung described the most efficient algorithm to sieve over two dimensions.

The diversity of finite fields implies that different variants of the NFS, as the one described by Joux–Lercier–Smart–Vercauteren in 2006 or by Kim–Barbulescu in 2016, need to sieve in higher dimensions, basically from 3 to 6. In this talk, we will describe three sieving algorithms in dimension three, some computational results of their implementations and finally an algorithm to sieve in any small dimension.
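
The basic sieving idea referred to here, in its one-dimensional prototype form:

```python
def eratosthenes(limit):
    # Sieve of Eratosthenes: mark composites by stepping through the
    # multiples of each prime; visiting an arithmetic progression is
    # cheap, which is the principle that relation collection generalizes
    # to two and more dimensions
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [p for p, b in enumerate(is_prime) if b]
```

In NFS, the array indexes pairs (or tuples) of polynomial coefficients rather than integers, and "marking" records divisibility of norms by small primes.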

Lattice meeting

Wednesday April 5 and Thursday April 6, 2017

See the Lattice meetings page.

Jean-Michel Muller (AriC)

Thursday March 23, 2017 at 10h15, room 116, Monod site

Tight and rigorous error bounds for basic building blocks of double-word arithmetic

(Joint work with Mioara Joldes and Valentina Popescu)

We analyze several classical basic building blocks of double-word arithmetic (frequently called “double-double arithmetic” in the literature): the addition of a double-word number and a floating-point number, the addition of two double-word numbers, the multiplication of a double-word number by a floating-point number, the multiplication of two double-word numbers, the division of a double-word number by a floating-point number, and the division of two double-word numbers. For multiplication and division we get better relative error bounds than the ones previously published. For addition of two double-word numbers, we show that the previously published bound was incorrect, and we provide a new relative error bound. We introduce new algorithms for division. We also give examples that illustrate the tightness of our bounds.
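
As an illustration, here is one of these building blocks, the addition of a double-word number and a floating-point number, in Python (our reading of the standard algorithm; the talk's contribution is the error bounds, not the code itself):

```python
def two_sum(a, b):
    # error-free transformation: s + e = a + b exactly, s = fl(a + b)
    s = a + b
    t = s - a
    return s, (a - (s - t)) + (b - t)

def fast_two_sum(a, b):
    # error-free under the assumption |a| >= |b| (or a == 0)
    s = a + b
    return s, b - (s - a)

def dw_plus_fp(xh, xl, y):
    # add the float y to the double-word (xh, xl), where xh = fl(xh + xl);
    # the relative error of the result is O(u^2), with u = 2^-53 in binary64
    sh, sl = two_sum(xh, y)
    return fast_two_sum(sh, xl + sl)
```

The other operations (double-word + double-word, products, quotients) follow the same pattern with more error-free steps, and it is exactly these extra terms whose bounds the talk analyzes.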

Friday February 3, 2017 at 10h15, room 116, Monod site

Sigma Bases, Kernel Bases and their use in fast computation for polynomial matrices

In this talk we discuss sigma bases and kernel bases of polynomial matrices and illustrate their role in fast algorithms for arithmetic with polynomial matrices. In particular we focus on the use of kernel bases for triangularization, determinant computation, and Hermite normal form computation.

Joint work with Vincent Neiger (Lyon and TU Denmark) and Wei Zhou (Waterloo)

Marc Mezzarobba (CNRS, UPMC)

Thursday January 26, 2017 at 10h15, room 116, Monod site

Rigorous Multiple-Precision Evaluation of D-Finite Functions in SageMath

We present an implementation, on top of the SageMath computer algebra system, of a collection of tools for computing numerical values of D-finite functions. Recall that a complex analytic function is D-finite when it satisfies an ordinary differential equation whose coefficients are polynomial in the independent variable. D-finite functions can be viewed as a class of special functions analogous to those of algebraic or hypergeometric functions, but more general. They come up in areas such as analytic combinatorics and mathematical physics, and lend themselves well to symbolic manipulation by computer algebra systems.

The main task our package performs is the “numerical analytic continuation” of D-finite functions. This process results in a numerical approximation of the transition matrix that maps “initial values” of an ODE somewhere on the complex plane to “initial values” elsewhere that define the same solution. Numerical analytic continuation can then be used to compute things like values or polynomial approximations of D-finite functions anywhere on their Riemann surfaces, and monodromy matrices of differential operators. The code supports the important limit case where the (generalized) initial values are provided at regular singular points of the ODE, making it possible in particular to compute connection constants between regular singularities. It is rigorous in the sense that it returns interval results that are guaranteed to enclose the exact mathematical result.
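
In the simplest possible case, the transition "matrix" of y' = y from a to b is the 1×1 scalar e^(b-a); here is a hypothetical toy version of one continuation step, using exact rational Taylor arithmetic (the real package does this for arbitrary D-finite operators, with interval enclosures and rigorous tail bounds):

```python
from fractions import Fraction

def transition_scalar(a, b, order=30):
    # Truncated Taylor expansion of the solution of y' = y along the step
    # from a to b: sum_{k < order} (b - a)^k / k!; no tail bound here,
    # unlike the rigorous package
    h = Fraction(b) - Fraction(a)
    term = total = Fraction(1)
    for k in range(1, order):
        term *= h / k
        total += term
    return total
```

Chaining such steps along a path in the complex plane, avoiding singularities, is what "numerical analytic continuation" means.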

The new package can be considered a successor of NumGfun, a Maple package by the same author with broadly similar features.

Brice Minaud (RHUL)

Thursday January 19, 2017 at 10h15, room 116, Monod site

Cryptanalysis of the CLT15 Multilinear Map over the Integers

Multilinear maps are a powerful cryptographic primitive: a wide array of cryptographic schemes can be built from multilinear maps, including some for which there is no other known construction, such as non-interactive multipartite key exchange, or general program obfuscation. However to this day there is no provable construction of multilinear maps. Moreover both of the original constructions (proposed in 2013 by Garg, Gentry, and Halevi, and by Coron, Lepoint, and Tibouchi) have been broken for their most direct application, namely non-interactive multipartite key exchange. This has led Coron, Lepoint and Tibouchi to introduce an improved variant of their original construction at Crypto 2015. We propose a polynomial attack on this new construction, which fully breaks the scheme. The presentation will of course begin with a general introduction to multilinear maps and the CLT construction, which aims to be accessible to all; and will conclude with a brief overview of the current state of the topic.

Anastasia Volkova (UPMC Paris 6, LIP6)

Thursday January 5, 2017 at 10h15, room 116, Monod site

Towards reliable implementation of digital filters

Linear digital filters are frequently used in digital signal processing. We are interested in the implementation of such algorithms in fixed-point arithmetic. However, various issues arise at each step of an implementation. We propose an automated filter-to-code generator which encompasses the filter implementation flow and provides a consistent error analysis of the algorithms involved.
In this talk we give an overview of the filter-to-code generator and focus on determining the fixed-point formats of an implementation that is optimal with respect to certain criteria. We consider Linear Time Invariant filters in state-space representation. The computational errors in the intermediate steps of the filter evaluation, as well as their accumulation over time, are fully taken into account. Our approach is rigorous in that the output fixed-point formats are shown to be free of overflows, and in that we do not rely on any (non-exhaustive) filter simulation, only on proofs.
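
The format-determination step can be caricatured as follows (a toy: here the magnitude bound is given as input, whereas the actual tool derives it rigorously from the filter's worst-case behaviour and propagates the computational errors):

```python
import math

def fixed_point_format(max_abs, wordlength=16):
    # Two's-complement fixed-point format (msb, lsb), with
    # wordlength = msb - lsb + 1, chosen so that [-max_abs, max_abs]
    # fits without overflow; all remaining bits go to the fraction
    msb = int(math.floor(math.log2(max_abs))) + 1    # sign + integer part
    lsb = msb - wordlength + 1
    return msb, lsb
```

The hard part, which this toy hides, is bounding `max_abs` for every intermediate variable of the filter, including the effect of accumulated rounding errors.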

Crypto and Lattices Days

Thursday December 15, 2016 at 10h15, room 116, Monod site

TBA

Cf. the Web page of the Crypto and Lattices Days

Crypto and Lattices Days

Thursday November 24, 2016 at 10h15, room 116, Monod site

TBA

Cf. the Web page of the Crypto and Lattices Days

Paola Boito (U. Limoges and LIP, AriC team)

Thursday October 6, 2016 at 10h15, room 116, Monod site

Quasiseparable structure and polynomial rootfinding

Computing the roots of a univariate polynomial is a classical task in computer algebra and numerical analysis and it is often solved via a companion matrix approach. But can we exploit the matrix structures that arise in this problem to devise fast eigensolvers and rootfinding algorithms, without losing accuracy?

Motivated by this problem, we

• review some useful rank structures of Frobenius and Chebyshev companion matrices and pencils,
• introduce the quasiseparable representation for such matrices and pencils,
• show how to adapt suitable versions of the Francis/Kublanovskaya QR algorithm to the quasiseparable representation, so that the arithmetic complexity of each iteration is linear in the size of the matrix,
• comment on numerical results, applications, and perspectives for further work.

This is joint work with Luca Gemignani (University of Pisa) and Yuli Eidelman (University of Tel Aviv).
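The companion-matrix connection the talk starts from can be stated in a few lines: the Frobenius companion matrix of a monic polynomial p has p as its characteristic polynomial, so its eigenvalues are the roots of p. The sketch below (illustrative only; it uses none of the quasiseparable structure the talk exploits) checks this via Cayley-Hamilton, p(C) = 0, in exact arithmetic.

```python
from fractions import Fraction

def companion(p):
    """Frobenius companion matrix of x^n + p[n-1] x^(n-1) + ... + p[0]."""
    n = len(p)
    C = [[Fraction(0)] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = Fraction(1)        # subdiagonal of ones
    for i in range(n):
        C[i][n - 1] = -Fraction(p[i])    # last column carries the coefficients
    return C

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def poly_at_matrix(p, C):
    """Evaluate the monic polynomial with low-order coefficients p at C (Horner)."""
    n = len(C)
    Q = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # leading 1
    for c in reversed(p):
        Q = mat_mul(C, Q)
        for i in range(n):
            Q[i][i] += Fraction(c)
    return Q

# p(x) = x^3 - 2x - 5: its companion matrix satisfies p(C) = 0.
p = [-5, -2, 0]
Z = poly_at_matrix(p, companion(p))
```

The companion matrix is unitary-plus-rank-one, which is exactly the low-rank (quasiseparable) structure that lets the QR iteration of the talk run in linear time per step.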

Serge Torres (LIP, AriC team)

Thursday September 22, 2016 at 2:30 PM, in lecture theater B, on the 3rd floor of the GN1 building, Monod Campus.

PhD defense: Tools for the Design of Reliable and Efficient Function Evaluation Libraries

Can you tell me more about that?

The design of function evaluation libraries is a complex task that requires great care and dedication, especially when one wants to satisfy high standards of reliability and performance. In practice, it cannot be correctly performed, as a routine operation, without tools that not only help the designer find his way through a complex and extensive solution space but also guarantee that his solutions are correct and (possibly) optimal.

In the present state of the art, one has to think in terms of a “toolbox” from which to smartly mix and match the utensils that best fit one's goals, rather than expect an all-purpose automatic solver.

What is it that you have done?

The work presented here is dedicated to the design and implementation of such specialized tools in two realms:

• the radical consolidation of Ziv’s rounding test, which has been used in a more or less empirical way for ages in the implementation of function approximation;
• the development of an implementation of the SLZ algorithm in order to “solve” (in a specific sense that will be made clear) the Table Maker’s Dilemma; the suitability of this implementation is also assessed for the development of correctly rounded approximations of functions with quad-precision floating-point (aka IEEE 754 binary128 format) arguments and images.

About the defense, see the report on the intranet of ENS de Lyon.

Rémi Imbach (Inria Nancy – Grand Est, VEGAS Team)

Thursday September 15, 2016 at 10h15, room 116 (site Monod M1)

Certified numerical tools for computing the topology of projected curves

We are interested in computing the topology of the projection of an algebraic or analytic space curve onto the plane. Such a projection is not a smooth curve: it has singularities. State-of-the-art approaches to computing the topology of algebraic plane curves use symbolic computation, but in the case of a projection they suffer from the size of the implicit representation of the curve as a resultant polynomial.
Using numerical tools to compute the projected curve or its singularities is a challenging problem, since the projected curve is not smooth and state-of-the-art characterizations of its singularities use over-determined systems of equations. We will first propose a new characterization of the singularities of the projection of an algebraic curve by a square system of polynomials; its solutions are regular, and it can be solved numerically.
However, the equations of this system are coefficients of resultant polynomials and are still very large. The arithmetic precision required to carry out numerical computations with such polynomials makes classical solvers either incomplete or dramatically time-consuming. We will propose a multi-precision solver based on interval subdivision, specially designed to handle large polynomials, to compute the solutions of this system. We will then consider the more general case of projections of analytic space curves and propose a geometric approach to describe their singularities. It results in a square system of equations with only regular solutions, which does not involve resultant theory and can be solved with our certified numerical solver. Finally, we will present a new approach to compute the topology of the projected curve, i.e. to find a graph with the same topology. We use a certified numerical path tracker to enclose the space curve in a sequence of boxes, which both simplifies the search for singularities and allows us to compute the smooth branches linking them.
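The resultant-based implicit representation mentioned above can be illustrated on a tiny example (chosen for exposition, unrelated to the talk's solver): the space curve {f = g = 0} with f = x² + y² + z² − 1 and g = z − x projects onto the zero set of Res_z(f, g) = 2x² + y² − 1, obtained below as a Sylvester determinant in exact rational arithmetic.

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of f and g, given as coefficient lists [a0, ..., a_deg]."""
    m, n = len(f) - 1, len(g) - 1
    S = [[Fraction(0)] * (m + n) for _ in range(m + n)]
    for i in range(n):                        # n shifted copies of f
        for j, c in enumerate(reversed(f)):   # leading coefficient first
            S[i][i + j] = Fraction(c)
    for i in range(m):                        # m shifted copies of g
        for j, c in enumerate(reversed(g)):
            S[n + i][i + j] = Fraction(c)
    return S

def det(M):
    """Determinant by exact Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

def res_z(x, y2):
    """Res_z(x^2 + y^2 + z^2 - 1, z - x) at a point; y2 stands for y^2."""
    f = [x * x + y2 - 1, Fraction(0), Fraction(1)]   # coefficients in z
    g = [-x, Fraction(1)]
    return det(sylvester(f, g))
```

At x = 3/5, z = 3/5, y² = 7/25 the point lies on the curve, and the resultant indeed vanishes at its projection. Even in this toy case one sees the size issue: the resultant's coefficients grow quickly with the degrees of f and g.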

# 2015-2016

Philippe Dumas (Inria)

Thursday June 30, 2016 at 10h15, room 116 (site Monod M1)

Some solutions of linear Mahler equations

Mahler equations relate evaluations of the same function $$f$$ at iterated $$b$$th powers of the variable. They arise in particular in the study of automatic sequences and in the complexity analysis of divide-and-conquer algorithms. Recently, the problem of solving Mahler equations in closed form has occurred in connection with number-theoretic questions.
A difficulty in the manipulation of Mahler equations is the exponential blow-up of degrees when applying a Mahler operator to a polynomial. We present and analyze polynomial-time algorithms for solving linear Mahler equations for series, polynomials, and rational functions.
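The blow-up is easy to see on coefficient lists: substituting x by x^b simply spreads the coefficients out, multiplying the degree by b, so k applications give degree b^k·deg(p). A minimal sketch (the operator only, not one of the solving algorithms of the talk):

```python
def mahler(p, b):
    """Apply the Mahler operator p(x) -> p(x^b) to a dense coefficient list."""
    q = [0] * (b * (len(p) - 1) + 1)
    for i, c in enumerate(p):
        q[b * i] = c   # the coefficient of x^i moves to x^(b*i)
    return q
```

For instance, p = 1 + x + x² becomes 1 + x² + x⁴ under the operator with b = 2: the degree doubles at every application, which is the main obstacle the polynomial-time algorithms must work around.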

Svyatoslav Covanov (Université de Lorraine)

Thursday June 23, 2016 at 10h15, room 116 (site Monod M1)

Fast integer multiplication using generalized Fermat primes

For almost 35 years, Schönhage-Strassen’s algorithm has been the fastest algorithm known for multiplying integers, with a time complexity $$O(n \cdot \log n \cdot \log \log n)$$ for multiplying n-bit inputs. In 2007, Fürer proved that there exist a constant $$K>1$$ and an algorithm performing this operation in $$O(n \cdot \log n \cdot K^{\log^* n})$$. Recent work by Harvey, van der Hoeven, and Lecerf showed that this complexity estimate can be improved in order to get $$K=8$$, and, via a conjecture on Mersenne primes, $$K=4$$.
This presentation will describe an alternative algorithm using generalized Fermat primes for which $$K=4$$ conjecturally. This algorithm seems simpler than the one relying on Mersenne primes and can be seen as a fix to an approach proposed by Fürer in 1989 using Fermat primes.
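The evaluation-interpolation principle common to all these algorithms can be sketched with a plain complex-float FFT; the real algorithms use number-theoretic transforms modulo Mersenne or generalized Fermat primes precisely to avoid the rounding issues this naive sketch has for large inputs.

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = -1 if invert else 1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def fft_multiply(x, y):
    """Multiply nonnegative integers via an FFT on their decimal digit vectors."""
    dx = [int(d) for d in str(x)[::-1]]
    dy = [int(d) for d in str(y)[::-1]]
    n = 1
    while n < len(dx) + len(dy):
        n *= 2
    fa = fft([complex(d) for d in dx] + [0j] * (n - len(dx)))
    fb = fft([complex(d) for d in dy] + [0j] * (n - len(dy)))
    coeffs = fft([a * b for a, b in zip(fa, fb)], invert=True)
    digits = [round(c.real / n) for c in coeffs]   # undo the missing 1/n scaling
    result = 0
    for d in reversed(digits):
        result = result * 10 + d                   # carries are absorbed here
    return result
```

Pointwise multiplication of the transformed digit vectors replaces the quadratic schoolbook convolution; everything else in the fast algorithms is about choosing a ring where this transform is exact and cheap.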

AriC team, LIP

Thursday June 16, 2016 at 10h15, room 116 (site Monod M1)

Crypto’s results fair

Short presentations (10 min each) of the results submitted to a conference on cryptography.

• Shi Bai: A subfield lattice attack for overstretched NTRU.
Accepted to CRYPTO.
Joint work with Martin Albrecht and Léo Ducas.
• Benoît Libert: Fully Secure Functional Encryption for Inner Products from Standard Assumptions.
Accepted to CRYPTO.
Joint work with Shweta Agrawal and Damien Stehlé.
• Fabrice Mouhartem: Lattice-Based Group Encryption.
In progress.
Joint work with Benoît Libert.
• Willy Quach: Circular Security and Fully Homomorphic Encryption.
M2 internship.
• Somindu Ramanna: Functional Commitment Schemes: From Polynomial Commitments to Pairing-Based Accumulators from Simple Assumptions.
Accepted to ICALP.
Joint work with Benoît Libert and Moti Yung.

Pierre-Jean Spaenlehauer (Inria Nancy – Grand Est)

Thursday June 9, 2016 at 10h15, room 116 (site Monod M1)

Sparse polynomial systems with many positive solutions from bipartite simplicial complexes

We consider the problem of constructing multivariate polynomial systems with real coefficients, prescribed monomial support and many non-degenerate real solutions with positive coordinates. For some families of monomial supports, we use a version of Viro’s method to construct polynomial systems all complex solutions of which are real and positive. We also use this construction to investigate special cases of the following problem: how many non-degenerate solutions in the positive orthant can a system of $$n$$ equations in $$n$$ variables involving at most $$t$$ distinct monomials have? In particular, we study a family of fewnomial systems obtained by considering a simplicial complex contained in a regular triangulation of the cyclic polytope: using a symbolic-numeric approach to a matrix completion problem, we will present a method to construct a system of 5 equations in 5 variables, involving only 11 distinct monomials and having 38 positive solutions.

Joint work with Frédéric Bihan.

Weiqiang Wen (AriC team, LIP)

Thursday June 2, 2016 at 10h15, room 116 (site Monod M1)

Improved Reduction from the Bounded Distance Decoding Problem to the Unique Shortest Vector Problem in Lattices

We present a probabilistic polynomial-time reduction from the lattice Bounded Distance Decoding (BDD) Problem with parameter $$1/(\sqrt{2} \cdot \gamma)$$ to the unique Shortest Vector Problem (uSVP) with parameter $$\gamma$$ for any $$\gamma>1$$ that is polynomial in the lattice dimension $$n$$. It improves the BDD to uSVP reductions of [Lyubashevsky and Micciancio, CRYPTO, 2009] and [Liu, Wang, Xu and Zheng, Inf. Process. Lett., 2014], which rely on Kannan’s embedding technique. The main ingredient to the improvement is the use of Khot’s lattice sparsification [Khot, FOCS, 2003] before resorting to Kannan’s embedding, in order to boost the uSVP parameter.
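Kannan's embedding, which the reduction builds on, is just a basis extension; the toy numbers below are illustrative, not from the paper.

```python
import math

# Toy BDD instance: basis rows B and a target t close to the lattice point
# 1*(2,1) + 1*(1,3) = (3,4).
B = [[2.0, 1.0], [1.0, 3.0]]
t = [3.1, 3.9]
mu = 0.2   # embedding parameter, chosen of the order of the BDD distance

# Kannan's embedded basis: rows (b_i, 0), plus the extra row (t, mu).
B_emb = [row + [0.0] for row in B] + [t + [mu]]

# The combination 1*(b_1,0) + 1*(b_2,0) - (t,mu) is the planted short vector
# (e, -mu), where e is the BDD error; solving uSVP on B_emb reveals it,
# and hence the BDD solution.
v = [B_emb[0][i] + B_emb[1][i] - B_emb[2][i] for i in range(3)]
norm_v = math.sqrt(sum(c * c for c in v))
```

The planted vector has squared norm ‖e‖² + μ², much shorter than the basis vectors; the sparsification step of the paper is what allows the uSVP gap parameter to be boosted before this embedding is applied.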

Laurent Busé (Galaad, Inria Sophia Antipolis)

Thursday May 26, 2016 at 10h15, room 116 (site Monod M1)

Matrix representations of 3D algebraic parameterizations

3D algebraic parameterizations of curves and surfaces are intensively used in geometric modeling to provide piecewise descriptions of 3D objects. For many purposes, such as visualization or intersection problems, it is important to derive implicit representations from parameterizations. In this talk, I will discuss an elimination method to compute matrix-based implicit representations that adapt dynamically to the geometric properties of a given parameterization, typically its base locus. I will also show how these matrix representations turn several geometric problems into linear algebra problems, allowing a better control of their numerical treatment in the context of approximate data and/or computations.

Cindy Rubio-Gonzalez

Please note: unusual date and place!
Friday May 13, 2016, 9h00-11h00, St Germain au Mont d’Or, in the Domaine des Hautannes

Numerical precision

More information on the page of the thematic school on numerical simulation and polyhedral code optimisations

LIP

Thursday April 28, 2016

Journée “Structures Discrètes”

Thursday April 14, 2016 at 10h15, Amphi I on the 14th (main building, Level 0, North side) site Monod

Post-quantum key exchange — a new hope

At IEEE Security & Privacy 2015, Bos, Costello, Naehrig, and Stebila proposed an instantiation of Peikert’s ring-learning-with-errors-based (Ring-LWE) key-exchange protocol (PQCrypto 2014), together with an implementation integrated into OpenSSL, with the stated goal of providing post-quantum security for TLS. In this talk I will present joint work with Alkim, Ducas, and Pöppelmann, in which we revisit their instantiation and stand-alone implementation. Specifically, we propose new parameters and a better-suited error distribution, analyze the scheme’s hardness against attacks by quantum computers in a conservative way, introduce a new and more efficient error-reconciliation mechanism, and propose a defence against backdoors and all-for-the-price-of-one attacks. By these measures and for the same lattice dimension, we more than double the security parameter, halve the communication overhead, and speed up computation by more than a factor of 9 in a portable C implementation and by more than a factor of 24 in an optimised implementation targeting current Intel CPUs. These speedups are achieved with comprehensive protection against timing attacks.
Joint work with Erdem Alkim, Léo Ducas and Thomas Pöppelmann.

More information on the Web page of the “Monthly lattice and crypto meetings”

Wednesday 6 April – Friday 8 April, 2016, amphi Descartes, ENS de Lyon

HPC Days in Lyon

Fabrice Benhamouda (ENS Paris)

Thursday March 3, 2016 at 10h15, salle des thèses site Monod

Easing Coppersmith Methods using Analytic Combinatorics: Applications to Public-Key Cryptography with Weak Pseudorandomness

Coppersmith’s method is in fact a family of lattice-based techniques for finding small integer roots of polynomial equations. These techniques have found numerous applications in cryptanalysis and, in recent developments, we have seen applications where the number of unknowns and the number of equations are non-constant. In these cases, the combinatorial analysis required to settle the complexity and the success condition of the method becomes very intricate. We provide a toolbox based on analytic combinatorics for these studies. It uses the structure of the considered polynomials to derive their generating functions and applies complex analysis techniques to get asymptotics. The toolbox is versatile and can be used for many different applications, including multivariate polynomial systems with arbitrarily many unknowns (of possibly different sizes) and simultaneous modular equations over different moduli. To demonstrate the power of this approach, we apply it to recent cryptanalytic results on number-theoretic pseudorandom generators, for which we easily derive a precise and formal analysis. We also present new theoretical applications to two problems on RSA key generation and on the randomness generation used in padding functions for encryption.

Joint work with Céline Chevalier, Adrian Thillard, and Damien Vergnaud.

More information on the Web page of the “Monthly lattice and crypto meetings”

AriC team, LIP

Thursday February 18, 2016 at 10h15, room 116 (site Monod M1)

ISSAC’ and ARITH’ results fair

Short presentations (10 min each) of the results submitted either to ARITH 23 or to ISSAC 2016.

• Claude-Pierre Jeannerod, Nicolas Louvet, Jean-Michel Muller and Antoine Plet: A library for symbolic floating-point arithmetic
• Vincent Lefèvre: Correctly Rounded Arbitrary-Precision Floating-Point Summation
• Julien Le Maire, Nicolas Brunie, Florent de Dinechin and Jean-Michel Muller: Computing floating-point logarithms with fixed-point operations
• Clément Pernet: Computing with Quasi-Separable Matrices
• Stephen Melczer and Bruno Salvy: Symbolic-Numeric Tools for Analytic Combinatorics in Several Variables
• Jean-Guillaume Dumas, Erich Kaltofen, Emmanuel Thomé and Gilles Villard: Linear time interactive certificates for the minimal polynomial and the determinant of a sparse matrix

AriC team, LIP

Thursday February 11, 2016 at 10h15, room 116 (site Monod M1)

ISSAC’ and ARITH’ results fair

Short presentations (10 min each) of the results submitted either to ARITH 23 or to ISSAC 2016:

• Jean-Michel Muller, Valentina Popescu and Peter Tang: A new multiplication algorithm for extended precision using floating-point expansions
• Claude-Pierre Jeannerod, Vincent Neiger, Eric Schost and Gilles Villard: Fast computation of minimal interpolation bases in Popov form for arbitrary shifts
• Vincent Neiger: Fast computation of shifted Popov forms of polynomial matrices via systems of modular polynomial equations
• Alin Bostan, Gilles Christol, and Philippe Dumas: Fast computation of the Nth term of an algebraic series in positive characteristic
• Alin Bostan, Louis Dumont and Bruno Salvy: Efficient Algorithms for Mixed Creative Telescoping

Marie Paindavoine (AriC team, LIP and Applied Crypto Group, Orange Labs)

Thursday January 28, 2016 at 10h15, room 116 (site Monod M1)

Speeding up fully homomorphic encryption

Access to encrypted data is often “all-or-nothing”: either one has to decrypt everything, or one cannot read anything. Following Gentry’s work in 2009, fully homomorphic encryption has gained more and more attention, in particular because it allows more flexibility in accessing and processing encrypted data. It is now possible to evaluate functions on encrypted data and get only the result, encrypted as well. This possibility offers numerous applications, especially for building a trustworthy cloud provider where the server cannot access the users’ data. It theoretically reconciles a rich user experience with the protection of the user’s sensitive data. However, the efficiency of fully homomorphic encryption remains seriously undermined by a very costly procedure: bootstrapping. In this talk, we will show how to use graph problems and integer linear programming to determine the minimal number of bootstrappings necessary to correctly evaluate a circuit over encrypted data. Our method allows significant efficiency gains in the evaluation process, saving up to 70% of the bootstrapping calls. This is joint work with Bastien Vialla.
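The flavour of the optimisation problem can be conveyed by a toy model; the noise rules and the exhaustive search below are illustrative assumptions only, whereas the talk's method uses graph formulations and integer linear programming on real circuits.

```python
from itertools import combinations

BUDGET = 4   # maximal tolerable noise level before decryption fails (toy value)
# Gate list in topological order: (operation, input wires); "x" is a fresh input.
CIRCUIT = [("mul", ("x", "x")),   # gate 0
           ("mul", (0, 0)),       # gate 1
           ("mul", (1, 1)),       # gate 2
           ("mul", (2, 2))]       # gate 3

def feasible(bootstrapped):
    """Toy noise rules: fresh ciphertexts have level 1, addition takes the max
    of input levels, multiplication their sum; bootstrapping a gate's output
    resets its level to 1."""
    lvl = {"x": 1}
    for g, (op, ins) in enumerate(CIRCUIT):
        a, b = lvl[ins[0]], lvl[ins[1]]
        out = a + b if op == "mul" else max(a, b)
        if out > BUDGET:
            return False
        lvl[g] = 1 if g in bootstrapped else out
    return True

def min_bootstrappings():
    """Smallest set of gates to bootstrap, by exhaustive search over subsets."""
    for k in range(len(CIRCUIT) + 1):
        for subset in combinations(range(len(CIRCUIT)), k):
            if feasible(set(subset)):
                return k
```

For this depth-4 multiplication chain a single well-placed bootstrapping (after gate 1) suffices, while bootstrapping too early (after gate 0) does not; finding such placements at scale is exactly where the ILP formulation comes in.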

Thursday January 21, 2016 at 10h15: NO SEMINAR

Reminder: Journées du GDR IM, LIPN, Université Paris 13 (campus de Villetaneuse), 18-20 January 2016.
Lattice Meetings, 21 January 2016 at ENS Paris.

Sylvain Collange (ALF team, Inria Rennes / IRISA)

Thursday January 14, 2016 at 10h15, room 116 (site Monod M1)

General-purpose SIMT processors

We propose to generalize the SIMT execution model of GPUs to general-purpose processors, and use it to design new CPU-GPU hybrid cores. These hybrid cores will form the building blocks of heterogeneous architectures bringing together CPU-like cores and GPU-like cores that all share the same instruction set and programming model.
The SIMT execution model used on some GPUs binds together multiple threads of a parallel application and runs them in lockstep, so that multiple threads perform the same instruction at the same time on different data. This makes it possible to execute the resulting dynamic vector instructions on energy-efficient SIMD units.
Unfortunately, current GPU architectures lack the flexibility to work with standard instruction sets like x86 or ARM. Their implementation of SIMT requires special instruction sets with control-flow reconvergence annotations, and they do not support complex control flow like exceptions, context switches and thread migration. This prevents the use of a general-purpose system software stack and breaks compatibility with CPU environments.
We will see how we can overcome all of these limitations and extend the SIMT model to conventional instruction sets by using a PC-based instruction fetch policy. In addition to enabling SIMT execution of conventional instruction sets, this solution enables key improvements that were not possible in the traditional SIMD model, such as simultaneous execution of divergent paths. It also opens the way for a whole spectrum of new architectures, hybrids of latency-oriented superscalar processors and throughput-oriented SIMT GPUs.
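The core idea of SIMT predication can be mimicked in a few lines: both sides of a branch are executed by all lanes, each side under an execution mask. This is a deliberately naive model; real hardware adds reconvergence machinery, and the talk's contribution is precisely about removing the need for special reconvergence annotations.

```python
def simt_kernel(xs):
    """Lockstep execution of a branchy kernel over all lanes, via masks."""
    taken = [x % 2 == 1 for x in xs]          # per-lane branch outcome
    ys = list(xs)
    for i in range(len(xs)):                  # "if" path, under the mask
        if taken[i]:
            ys[i] = xs[i] * 3
    for i in range(len(xs)):                  # "else" path, under the ~mask
        if not taken[i]:
            ys[i] = xs[i] + 1
    return ys

def scalar_kernel(x):
    """The same kernel written as an ordinary per-thread program."""
    return x * 3 if x % 2 == 1 else x + 1
```

Every lane pays for both paths, which is the classic divergence cost; simultaneous execution of divergent paths, mentioned above, is one way the proposed architectures recover that lost throughput.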

No AriC seminar on Thursday December 17, 2015 at 10h15.

Talk by Alin Bostan at Séminaire SIESTE : Séminaire d’Informatique pour les Étudiant.e.s, Scientifiques, et Tous ceux que l’informatique intéresse à l’ENS Lyon, Tuesday December 15, 13h30, amphi B site Monod.

Journée Structures Discrètes on Thursday December 17.

Arnold Neumaier (University of Vienna, Austria, within AriC team, LIP, until the end of December 2015)

Thursday December 10, 2015 at 10h15, room 116 (site Monod M1)

New basis reduction algorithms for lattices

In this talk I first review the state of the art in basis reduction techniques for lattices (relevant among others in cryptography and in linear integer programming).
Then I describe improved analysis techniques discovered during my current sabbatical at ENS Lyon. They make it possible to prove, for new variations of traditional reduction algorithms, complexity bounds and reduced-basis guarantees that are both stronger than existing results.

Mohsin Javed (Numerical Analysis Group, University of Oxford)

Please note the unusual date and time: Tuesday December 8, 14h, room 116 (site Monod M1)

Polynomial and Rational Approximations in Chebfun and Applications to Digital Filters

In this talk we will look at algorithms designed in Chebfun to compute best approximations of continuous periodic and aperiodic functions. We will also discuss Chebfun’s capabilities to compute best approximations on compact subsets of an interval and how these methods can be used to design FIR digital filters. In addition, we will also look at rational and Fourier-Padé approximations and explore how these methods may be used to design recursive IIR filters.
The talk will also discuss the closely related Carathéodory-Fejér (CF) approximation of a function. The CF approximation is a near-best approximation that can be computed numerically by solving an eigenvalue problem. For a very modest degree, the CF approximation can be within machine precision of the best polynomial or rational approximation of a given smooth function. From a numerical point of view, it is much more interesting to consider the rational rather than the polynomial CF approximation problem. In particular, the rational CF approximation can be an excellent initial guess for the true best approximation. We will explore these topics during the talk; however, this is still very much a work in progress.
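The "near-best" polynomial workhorse behind such computations, interpolation in Chebyshev points evaluated by the barycentric formula, fits in a few lines. This is an illustrative pure-Python sketch; Chebfun itself is a MATLAB system, and its best-approximation and CF codes are far more elaborate.

```python
import math

def cheb_points(n):
    """Chebyshev points of the second kind on [-1, 1]."""
    return [math.cos(k * math.pi / n) for k in range(n + 1)]

def cheb_interp(f, n):
    """Barycentric interpolant of f in n+1 Chebyshev points (near-best)."""
    xs = cheb_points(n)
    fs = [f(x) for x in xs]
    # Barycentric weights for Chebyshev points: (-1)^k, halved at the ends.
    ws = [((-1) ** k) * (0.5 if k in (0, n) else 1.0) for k in range(n + 1)]
    def p(x):
        num = den = 0.0
        for xk, fk, wk in zip(xs, fs, ws):
            if x == xk:          # exactly at a node: return the data value
                return fk
            t = wk / (x - xk)
            num += t * fk
            den += t
        return num / den
    return p
```

For smooth functions such a degree-12 interpolant of exp on [-1, 1] is already accurate to about 13 digits, within a small factor of the true best approximation; that small gap is what "near-best" means, and what CF approximation quantifies so precisely.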

Ludovic Perret (UPMC)

Thursday November 26, 2015 at 10h15, lecture room 116 site Monod

Algebraic Algorithms for LWE

The Learning with Errors (LWE) problem, proposed by Regev in 2005, has become an ever-popular cryptographic primitive, due mainly to its simplicity, flexibility and convincing theoretical arguments regarding its hardness. Among the main proposed approaches to solving LWE instances (namely, lattice algorithms, combinatorial algorithms, and algebraic algorithms), the last is the one that has received the least attention in the literature, and is the focus of this talk. We present a detailed and refined complexity analysis of the original Arora-Ge algorithm, which reduces LWE to solving a system of high-degree, error-free polynomials. Moreover, we generalise their method and establish the complexity of applying Gröbner basis techniques from computational commutative algebra to solving LWE. As a result, we show that the use of Gröbner basis algorithms yields an exponential speed-up over the basic Arora-Ge algorithm. On the other hand, our results show that such techniques do not yield a subexponential algorithm for the LWE problem. We also apply our algebraic algorithm to the BinaryError-LWE problem, which was recently introduced by Micciancio and Peikert. We show that BinaryError-LWE in dimension $$n$$ can be solved in subexponential time given access to a quasi-linear number of samples $$m$$, under a regularity assumption. We also give precise complexity bounds for BinaryError-LWE as a function of the number of samples. The results in this work depend crucially on the assumption that the encountered systems have no special structure. We give experimental evidence that this assumption holds and also prove it in some special cases. In doing so, we also make progress towards proving Fröberg’s long-standing conjecture from algebraic geometry.

Joint work with Martin R. Albrecht, Carlos Cid and Jean-Charles Faugère.

More information on the Web page of the “Monthly lattice and crypto meetings”

Stephen Melczer (AriC team, LIP, ENS de Lyon and University of Waterloo)

Thursday November 19, 2015 at 10h15, lecture room 116 site Monod

Analytic combinatorics in several variables and lattice path enumeration

Recent work in the study of analytic combinatorics in several variables has shown how to derive asymptotics for the coefficients of certain families of D-finite functions by representing them as diagonals of multivariate rational functions. Although a theoretical framework has been laid over the last two decades (and before), there are still obstacles which have made the ultimate dream of an effective implementation of these techniques elusive. We begin with an overview of the classical (univariate) theory of analytic combinatorics, followed by a survey of the differences and difficulties present in the multivariate extension. Finally, we look at applications of these methods to the enumeration of lattice paths in restricted regions.

This is joint work with Mark Wilson, George Labahn, Bruno Salvy, and Marni Mishna.

Ignacio Garcia-Marco (Team MC2, LIP, ENS de Lyon)

Thursday October 22, 2015 at 10h15, lecture room 116 site Monod

Lower Bounds by Birkhoff Interpolation

In this talk we provide some lower bounds for the representation of real univariate polynomials as sums of powers of degree 1 polynomials. We present two families of polynomials of degree $$d$$ such that the number of powers that are required in such a representation must be at least of order $$d$$. This is clearly optimal up to a constant factor. Previous lower bounds for this problem were only of order $$\Omega(\sqrt{d})$$, and were obtained from arguments based on Wronskian determinants and “shifted derivatives”. We obtain this improvement thanks to a new lower bound method based on Birkhoff interpolation (also known as “lacunary polynomial interpolation”).

Preprint available here.

This is a joint work with Pascal Koiran.

Clément Pernet (Team AriC, LIP, on partial secondment from U. Grenoble Alpes)

Thursday October 8, 2015 at 10h15, lecture room 116 site Monod

Computing the Rank Profile Matrix.

The row (resp. column) rank profile of a matrix describes the staircase shape of its row (resp. column) echelon form.
We will first propose a recursive Gaussian elimination algorithm that can compute simultaneously the row and column rank profiles of a matrix, as well as those of all of its leading sub-matrices, in the same time as state-of-the-art Gaussian elimination algorithms. More generally, we will study the conditions under which a Gaussian elimination algorithm reveals this information. We propose the definition of a new matrix invariant, the rank profile matrix, summarizing all information on the row and column rank profiles of all the leading sub-matrices. We also explore the conditions for a Gaussian elimination algorithm to compute all or part of this invariant, through the corresponding PLUQ decomposition. As a consequence, we show that the classical iterative CUP decomposition algorithm can actually be adapted to compute the rank profile matrix. Used, in a Crout variant, as a base case of the recursive algorithm, it delivers a significant improvement in efficiency. Lastly, the row (resp. column) echelon form of a matrix is usually computed via dedicated triangular decompositions. We show here that, from some PLUQ decompositions, it is possible to recover the row and column echelon forms of a matrix, and of any of its leading sub-matrices, thanks to an elementary post-processing algorithm.

Joint work with Jean-Guillaume Dumas and Ziad Sultan.
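The invariant itself can be computed naively from the ranks of all leading submatrices: R[i][j] records whether the (i, j) leading submatrix gains rank simultaneously in its last row and last column. The brute-force sketch below is useful as a test oracle; the point of the talk is to obtain the same matrix from a single Gaussian elimination instead of O(mn) rank computations.

```python
from fractions import Fraction

def rank(M):
    """Rank over the rationals by plain Gaussian elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0]) if A else 0
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, rows):
            f = A[i][c] / A[r][c]
            for j in range(c, cols):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

def rank_profile_matrix(M):
    """0/1 matrix from the inclusion-exclusion of leading-submatrix ranks."""
    m, n = len(M), len(M[0])
    def lead(i, j):
        return rank([row[:j] for row in M[:i]]) if i and j else 0
    return [[lead(i + 1, j + 1) - lead(i, j + 1) - lead(i + 1, j) + lead(i, j)
             for j in range(n)] for i in range(m)]
```

For the anti-diagonal matrix the rank profile matrix is the anti-diagonal permutation itself, while for the all-ones matrix only the (1, 1) entry is set: the invariant refines both the row and column rank profiles at once.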

Khoa Nguyen (NTU, Singapore)

Thursday October 1st, 2015 at 10h15, lecture room 116 site Monod

A provably secure group signature scheme from code-based assumptions

We solve an open question in code-based cryptography by introducing the first provably secure group signature scheme from code-based assumptions. Specifically, the scheme satisfies the CPA-anonymity and traceability requirements in the random oracle model, assuming the hardness of the McEliece problem, the Learning Parity with Noise problem, and a variant of the Syndrome Decoding problem. Our construction produces smaller key and signature sizes than the existing post-quantum group signature schemes from lattices, as long as the cardinality of the underlying group does not exceed the population of the Netherlands (approx $$2^{24}$$ users). The feasibility of the scheme is supported by implementation results. Additionally, the techniques introduced in this work might be of independent interest: a new verifiable encryption protocol for the randomized McEliece encryption and a new approach to design formal security reductions from the Syndrome Decoding problem.

The talk will be based on this article. Joint work with Martianus Frederic Ezerman, Hyung Tae Lee, San Ling and Huaxiong Wang.

More information on the Web page of the “Monthly lattice and crypto meetings”

Fabrice Mouhartem (LIP, ENS Lyon)

Thursday September 24, 2015 at 10h15, lecture room 116 site Monod

Lattice-based group signatures for dynamic groups

Introduced by Chaum and Van Heyst in 1991, group signatures provide a way for a person to sign messages on behalf of a group he belongs to while remaining anonymous inside this group. At the same time, users remain accountable for their signatures: if necessary, an opening authority is able to trace any signature back to its issuer. The concept has many applications, ranging from trusted computing to anonymous access control (e.g., to public transportation systems or to the building of a company).
In 2010, Gordon et al. described a first realization based on lattice assumptions, which was subsequently improved in several works. For the time being, all known lattice-based constructions are for static groups, where no new member can be added after the setup phase. Designing a dynamic group signature based on lattices has remained an open problem for the last four years. In this work, we propose a first dynamic group signature based on standard lattice assumptions (in the random oracle model): the Learning With Errors (LWE) and the Short Integer Solution (SIS) problems. To achieve this, we adapt a construction due to Ling et al. (PKC’15) by introducing a joining mechanism, namely a protocol whereby new members can dynamically join the group, which provides security against framing attacks: a security notion proper to the dynamic case. We design this mechanism by combining the Boyen signature scheme (PKC’10) with the Böhl et al. signature (Eurocrypt’13). In particular, the latter technique makes it possible to introduce a so-called membership secret, which is only known to the introduced group member and corresponds to a public syndrome that gets certified by the group manager during the joining protocol. Our security proof requires an analysis based on the Rényi divergence (rather than the statistical distance) in order to enable a more efficient choice of parameters.

Sylvain Chevillard (APICS team, INRIA Sophia-Antipolis)

Thursday September 17, 2015 at 10h15, lecture room 116 site Monod

Minimax principle and lower bounds in $$H^2$$ rational approximation

I will present methods allowing one to derive lower bounds in rational approximation of given degree to functions in the Hardy space $$H^2$$ of the disk. The algorithms that solve the problem of best rational approximation to a function usually only find a “candidate”, in the sense that they find a local minimum of the criterion, with good hope that this minimum is indeed global. Providing a good lower bound for this problem is thus an interesting complement to such solvers, as it gives an interval of confidence for the true optimal value.
A first method is based on Adamjan-Arov-Krein theory and leads to a lower bound that is fairly easy to compute from a numerical point of view. This bound is also of theoretical interest, as it allowed us to derive asymptotic errors rates in approximation to Blaschke products and to Cauchy integrals on geodesic arcs.
A second method is based on linearized errors and leads to more involved numerical computations, less easily implemented. Yet, results on a few examples show that this method can compute lower bounds that are fairly sharp with respect to the true optimal error.
I will present both methods and discuss the difficulties in their practical implementation. They will be illustrated by numerical results on a few examples.

This is a joint work with Laurent Baratchart and Tao Qian.

Arnold Neumaier (U. Vienna, Austria and Team AriC, LIP, ENS de Lyon)

Thursday September 10, 2015 at 10h15, lecture room 116 site Monod

Integer least squares and basis reduction

The solution of integer least squares problems is closely related to certain lattice problems, in particular the closest vector problem and basis reduction. Applications in parameter estimation problems have different characteristics from those in number theory, computer algebra, or cryptography in that the initial basis is known only to limited accuracy, and the demands on the quality of the solution are therefore only moderate.
In this lecture I’ll give an introduction to the problem and outline methods and their properties. I will conclude with a list of lattice-related problems that I am interested in seeing solved.

# 2014-2015

Jean-Marc Robert (Team DALI-LIRMM, Université de Perpignan Via Domitia)

Please note the unusual time (a bit later than usual):
Thursday July 2, 2015 at 10h15, lecture room 116 site Monod

New Parallel Approaches for Scalar Multiplication on Elliptic Curves over Fields of Small Characteristic

We present two new strategies for parallel implementation of scalar multiplication over elliptic curves. For curves $$E(\mathbb{F}_{2^m})$$, we first introduce a Montgomery-halving algorithm which is a variation of the original Montgomery-ladder for point multiplication. This Montgomery-halving can be run in parallel with the original Montgomery-ladder in order to concurrently compute part of the scalar multiplication. We also present two point thirding formulas in some subfamilies of curves $$E(\mathbb{F}_{3^m})$$. We use these thirding formulas to implement scalar multiplication through (Third, Double)-and-add and (Third, Triple)-and-add parallel approaches. We also provide some implementation results of the presented parallel strategies which show a speed-up of 5%-14% on an Intel Core i7 processor and a speed-up of 8%-19% on a Qualcomm Snapdragon processor compared to non-parallelized approaches.

Keywords: Elliptic curve, small characteristic, scalar multiplication, Montgomery, thirding, parallel implementation.
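The Montgomery ladder that the halving variant above builds on processes the scalar bit by bit while maintaining two accumulators whose difference is always the base point; this uniform structure is what makes it amenable to parallel variants. A minimal generic sketch (an illustration only, not the authors' implementation; the group operations `add`, `dbl` and the identity element are supplied by the caller):

```python
def montgomery_ladder(k, P, add, dbl, identity):
    """Compute k*P with the Montgomery ladder, scanning the bits of k
    from most significant to least significant.
    Invariant maintained at every step: R1 = R0 + P."""
    R0, R1 = identity, P
    for bit in bin(k)[2:]:
        if bit == '0':
            R0, R1 = dbl(R0), add(R0, R1)
        else:
            R0, R1 = add(R0, R1), dbl(R1)
    return R0
```

With integer addition as a toy group (`add` = `+`, `dbl` = doubling, identity `0`), the ladder returns `k * P`; in the talk's setting the operations would be point addition and doubling on $$E(\mathbb{F}_{2^m})$$.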

Simone Naldi (MAC Team – LAAS/CNRS, Toulouse and PolSys Team – LIP6, Paris)

Thursday June 11, 2015 at 10h15, lecture room 116 site Monod

Exact algorithms for linear matrices towards semidefinite programming

Linear matrices and their loci of rank defects arise as natural algebraic structures in both theoretical and applied problems, and are of particular interest in real algebraic geometry, polynomial optimization, and control theory. For example, the feasible region of a semidefinite program, called a spectrahedron, is a convex set whose algebraic boundary is shaped by the determinant of a symmetric linear matrix. Hence the solution to such an optimization problem corresponds to a positive semidefinite linear matrix with rank defects.

In this talk I will show how to design symbolic algorithms whose input is a linear matrix A and an expected maximal rank r, and whose output is a sample set of real points lying in the corresponding locus of rank defect. These algorithms and their complexity take strong advantage of the determinantal structure and of additional relations among the entries of A. Moreover, I will show how they can be used to decide the emptiness of low-rank real loci and of spectrahedra, and hence to decide the feasibility of semidefinite programs. The techniques involved can represent a basis to build a stronger symbolic approach to semidefinite programming.

The results I will present are joint work with Didier Henrion and Mohab Safey El Din.

Dario Catalano (U. Catania)

Unusual date! Friday May 29, 2015 at 10h15, lecture room 116 site Monod

Programmable Hash Functions go Private: Constructions and Applications to (Homomorphic) Signatures with Shorter Public Keys

We introduce the notion of asymmetric programmable hash functions (APHFs, for short), which adapts Programmable Hash Functions, introduced by Hofheinz and Kiltz at Crypto 2008, with two main differences. First, an APHF works over bilinear groups. Moreover, it is asymmetric in the sense that, while only secretly computable, it admits an isomorphic copy which is publicly computable.
Second, in addition to the usual programmability, APHFs may have an alternative property that we call programmable pseudorandomness. In a nutshell, this property states that it is possible to embed a pseudorandom value as part of the function’s output, akin to a random oracle.
In spite of the apparent limitation of being only secretly computable, APHFs turn out to be surprisingly powerful objects. We show that they can be used to generically implement both regular and linearly-homomorphic signature schemes in a simple and elegant way.
More importantly, when instantiating these generic constructions with our concrete realizations of APHFs, we obtain:

(1) the first linearly-homomorphic signature (in the standard model) whose public key is sub-linear in both the dataset size and the dimension of the signed vectors.

(2) short signatures (in the standard model) whose public key is much shorter than those by Hofheinz, Jager, and Kiltz from Asiacrypt 2011, and essentially the same as those by Yamada, Hanaoka, and Kunihiro (CT-RSA 2012).

This is joint work with Dario Fiore and Luca Nizzardo (IMDEA Software Institute, Madrid).

Amal Khabou (Postale team, Parallel Systems group, LRI, U. Paris Sud)

Thursday May 28, 2015 at 10h15, lecture room 116 site Monod

Block LU factorization with panel Rank Revealing Pivoting and its Communication Avoiding version

We present a block LU factorization with panel rank revealing pivoting (block LU_PRRP), an algorithm based on strong rank revealing QR for the panel factorization.
Block LU_PRRP is more stable than Gaussian elimination with partial pivoting (GEPP), with a theoretical upper bound on the growth factor of $$(1+ \tau b)^{(n/ b)-1}$$, where $$b$$ is the size of the panel used during the block factorization, $$\tau$$ is a parameter of the strong rank revealing QR factorization, and $$n$$ is the number of columns of the matrix. For example, if the size of the panel is $$b = 64$$ and $$\tau = 2$$, then $$(1+2b)^{(n/b)-1} = (1.079)^{n-64} \ll 2^{n-1}$$, where $$2^{n-1}$$ is the upper bound on the growth factor of GEPP. Our extensive numerical experiments show that the new factorization scheme is as numerically stable as GEPP in practice, but it is more resistant to some pathological cases where GEPP fails. We note that the block LU_PRRP factorization performs only $$O(n^2 b)$$ additional floating-point operations compared to GEPP.
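The bounds quoted above are easy to check numerically. The following sketch (an illustration using the abstract's example parameters $$b = 64$$, $$\tau = 2$$, not code from the paper) compares the two growth-factor bounds:

```python
# Growth-factor bounds from the abstract, with b = 64 and tau = 2.
b, tau = 64, 2

def bound_lu_prrp(n):
    # Block LU_PRRP bound: (1 + tau*b)^((n/b) - 1)
    return (1 + tau * b) ** (n / b - 1)

def bound_gepp(n):
    # GEPP bound: 2^(n - 1)
    return 2.0 ** (n - 1)

# Per-column growth rate of LU_PRRP: (1 + tau*b)^(1/b) ~ 1.079,
# which is the 1.079 appearing in the abstract, far below GEPP's rate of 2.
rate = (1 + tau * b) ** (1 / b)
```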

Alexandre Chapoutot (ENSTA ParisTech)

Thursday May 21, 2015 at 10h15, lecture room 116 site Monod

Validated Explicit and Implicit Runge-Kutta Methods

The guaranteed solution of initial value problems for ordinary differential equations is well studied by the interval analysis community. In most cases, Taylor series are used in this context. In contrast, the numerical analysis community relies on other numerical integration methods, e.g., Runge-Kutta methods. Indeed, these methods have very good stability properties and can be applied to a wide variety of problems.
We propose a new method to validate the solution of initial value problems of ordinary differential equations, based on Runge-Kutta methods. The strength of our contribution is the ability to adapt any explicit or implicit Runge-Kutta method to make it guaranteed. We experimentally verify our approach against the Vericomp benchmark, extending our previous work on explicit Runge-Kutta methods.

Karim Bigou (CAIRN team, IRISA laboratory)

Tuesday May 5, 2015 at 10h15, lecture room B1 site Monod

RNS Modular Computations for Cryptographic Applications

The residue number system (RNS) is a number representation based on the Chinese remainder theorem (CRT), which splits large integers into small independent chunks in order to speed up the computations. This representation is more and more used for implementations of asymmetric cryptography, because RNS has various interesting properties for hardware. As a first step, this talk will recall the elementary properties of RNS and a few classical algorithms from the state of the art. Then, some recent algorithmic proposals are presented, on RNS modular inversion, modular multiplication and modular exponentiation, for specific cryptographic applications.
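As a concrete illustration of the splitting described above (a toy sketch, not the speaker's hardware-oriented algorithms): the forward conversion is a reduction modulo each channel, additions and multiplications act componentwise with no carries between channels, and the CRT reconstructs the integer result:

```python
from math import prod

def to_rns(x, moduli):
    """Forward RNS conversion: split x into small independent residues."""
    return [x % m for m in moduli]

def from_rns(residues, moduli):
    """CRT reconstruction of x mod M, where M = prod(moduli)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(.., -1, m) is the modular inverse
    return x % M
```

For pairwise coprime moduli such as (3, 5, 7), multiplying two RNS vectors channel by channel matches multiplication modulo 105; the channels never exchange carries, which is the source of the hardware parallelism mentioned above.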

Valentina Popescu (AriC team, LIP)

Thursday April 30, 2015 at 10h15, lecture room 116 site Monod

CAMPARY (CudA Multiple Precision ARithmetic librarY)

Nowadays, the most common way of representing real numbers in a computer is the binary floating-point (FP) format. The single (binary32) and double (binary64) formats are the most common, specified by the IEEE 754-2008 standard together with basic arithmetic operations (+, -, *, /, sqrt) and rounding modes. However, many numerical problems require a higher computing precision (several hundred bits) than that offered by these standard formats. One common way of extending the precision is to represent numbers in a multiple-term format. By using the so-called floating-point expansions, real numbers are represented as the unevaluated sum of standard machine precision FP numbers. Such a representation is possible thanks to the availability of error-free transforms. These are algorithms for computing the error of an FP addition or multiplication exactly, taking the rounding mode into account and using directly available, hardware implemented FP operations.

In this talk, we present CAMPARY (CudA Multiple Precision ARithmetic librarY), a multiple-precision floating-point arithmetic library developed in the CUDA programming language for the NVIDIA GPU platform, that uses the multiple-term approach. We provide support for all basic arithmetic operations, but here we will focus on a new normalization algorithm. This operation is very important when FP expansions are used, since it ensures certain precision related requirements and allows for a more compact representation. Finally, we present a new method for computing the reciprocal and the square root of a FP expansion based on an adapted Newton-Raphson iteration where the intermediate calculations are done using “truncated” operations (additions, multiplications) involving FP expansions.
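The basic building block behind expansions, shown here as a minimal illustration rather than CAMPARY's CUDA code, is Knuth's TwoSum: six ordinary FP operations that recover the exact rounding error of an addition:

```python
def two_sum(a, b):
    """Error-free transform (Knuth's TwoSum): returns (s, e) with
    s = fl(a + b) and a + b = s + e exactly (barring overflow),
    using only ordinary floating-point operations."""
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    b_err = b - b_virtual
    a_err = a - a_virtual
    return s, a_err + b_err
```

The pair (s, e) is precisely a length-2 expansion (a double-double): an unevaluated sum whose leading term is the rounded result and whose trailing term is the rounding error.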

Olga Kupriianova (Pequan team, LIP6)

Tuesday April 28, 2015 at 15h15, lecture room 116 site Monod

Code generation for mathematical functions

There are already several mathematical libraries (libms): collections of code to evaluate some mathematical functions. They differ in result accuracy, language, developer teams, etc. What all existing libms have in common is that they do not give the user enough choice. For example, one cannot choose between fast and accurate implementations, even though for applications on large data sets speed is crucial.
We try to rewrite the existing libms in a more flexible way. As there are plenty of function variations, it is impossible to write all of them manually, so we need a code generator. We pass the implementation parameters, e.g. the result accuracy and the implementation domain (as well as the function itself!), to the generator to get the corresponding implementation. Our black-box generator produces code for the functions from the standard libms, applies the needed argument reduction and/or splits the domain. In the latter case, there are some attempts to obtain the reconstruction code without branching.

Louis Dumont (École polytechnique)

Thursday April 9, 2015 at 10h15, lecture room 116 site Monod

Algebraic Diagonals and Walks

The diagonal of a multivariate power series F is the univariate power series Diag F generated by the diagonal terms of F. Diagonals form an important class of power series; they occur frequently in number theory, theoretical physics and enumerative combinatorics. We study algorithmic questions related to diagonals in the case where F is the Taylor expansion of a bivariate rational function. It is classical that in this case Diag F is an algebraic function. We propose an algorithm that computes an annihilating polynomial for Diag F. Generically, it is its minimal polynomial and is obtained in time quasi-linear in its size. We show that this minimal polynomial has exponential size with respect to the degree of the input rational function. We then address the related problem of enumerating directed lattice walks. The insight given by our study leads to a new method for expanding the generating power series of bridges, excursions and meanders. We show that their first N terms can be computed in quasi-linear time in N, without first computing a very large polynomial equation.
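A toy example of the phenomenon (illustrative only, not the authors' algorithm): for $$F = 1/(1-x-y)$$, the relation $$(1-x-y)F = 1$$ gives a simple recurrence for the Taylor coefficients, and the diagonal comes out as the central binomial coefficients, whose generating function $$1/\sqrt{1-4t}$$ is indeed algebraic:

```python
from math import comb

def diag_coeffs(N):
    """First N diagonal terms of F(x, y) = 1/(1 - x - y).
    The identity (1 - x - y) F = 1 gives the recurrence
    c[i][j] = c[i-1][j] + c[i][j-1] with c[0][0] = 1,
    and the diagonal c[n][n] is the central binomial C(2n, n)."""
    c = [[0] * N for _ in range(N)]
    c[0][0] = 1
    for i in range(N):
        for j in range(N):
            if i or j:
                c[i][j] = (c[i - 1][j] if i else 0) + (c[i][j - 1] if j else 0)
    return [c[n][n] for n in range(N)]
```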

Thierry Dumont (ICJ, U. Lyon 1 et CNRS)

Thursday April 2, 2015 at 10h15, lecture room 116 site Monod

How to make scientific computations (a bit more) secure?
A numerical analysis point of view.

Many time and space dependent problems from physics, biology etc. have a strong multiscale character. Classical numerical approaches using fixed discretization steps in space and in time cannot solve them, or can give wrong results. I will explain how automatic adaptive methods overcome (partially) these difficulties. I will put the emphasis on modern Runge-Kutta methods for multiscale (aka. stiff) Ordinary Differential Equations and divide and conquer methods (operator splitting).

Fredrik Johansson (LFANT, INRIA, IMB)

Thursday March 26, 2015 at 10h15, lecture room 116 site Monod

Special functions in the Arb library

We discuss methods used for rigorous evaluation of special functions in Arb, a C library for arbitrary-precision real and complex ball (interval) arithmetic. We have implemented many of the familiar special functions such as erf, gamma, zeta, polylogarithms, generalized exponential integrals, Bessel functions, theta and elliptic functions. In most cases, we support complex values of all inputs as well as computation of power series expansions (high-order derivatives). Our implementations are competitive with existing non-interval high-precision software, and often significantly faster. One highlight is a new algorithm for elementary functions, giving up to an order of magnitude speedup at “medium” precision (approximately 200-2000 bits).
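To fix ideas, ball arithmetic represents a real number as a midpoint-radius pair, and every operation enlarges the radius enough to absorb both input uncertainties and its own rounding error. A deliberately crude Python sketch (Arb's actual bounds are far tighter and use a dedicated radius type; this is only a model of the idea):

```python
import math

def ball_add(a, b):
    """Add two balls (m, r), each representing the interval [m - r, m + r].
    The output radius covers both input radii plus one ulp of the midpoint
    sum, a safe over-estimate of the rounding error of m = a[0] + b[0]."""
    m = a[0] + b[0]
    r = a[1] + b[1] + math.ulp(m)
    return (m, r)
```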

Laure Gonnord (LIP et U. Lyon 1)

Thursday March 12, 2015 at 10h15, lecture room 116 site Monod

Validation of Memory Accesses Through Symbolic Analyses

Joint work with F. Pereira and his colleagues at the Federal University of Minas Gerais, Brazil. Published at OOPSLA’14.

The C programming language does not prevent out-of-bounds memory accesses. There exist several techniques to secure C programs; however, these methods tend to slow down these programs substantially, because they populate the binary code with runtime checks. To deal with this problem, we have designed and tested a succession of static analyses to remove the majority of these guards.

We validate these static analyses by incorporating our findings into AddressSanitizer: we generate SPEC CINT 2006 code that is 17% faster and 9% more energy efficient than the code produced originally by this tool.

In this talk I will present some of the static analyses that we designed in this paper, focusing on our new “symbolic range” analysis and on the techniques we use to scale well.

Brice Boyer (LIP6)

Thursday February 26, 2015 at 10h15, lecture room 116 site Monod

Numerical Linear System Solving With Parametric Entries By Error Correction

We consider the problem of solving a full rank consistent linear system $$A(u)x = b(u)$$ where $$A \in K[u]^{m \times n}$$ and $$b \in K[u]^m$$. Our algorithm computes the unique solution $$x = f(u)/g(u)$$ (a vector of rational functions) by evaluating the parameter $$u$$ at distinct points. Those points where the matrix $$A$$ evaluates to $$A(x_\lambda)$$ of lower rank, or in the numeric setting to an ill-conditioned matrix, are not identified but accounted for by error-correcting code techniques. We also correct true errors, where the evaluation at some $$u = x_\lambda$$ results in an erroneous, possibly full rank consistent and well-conditioned scalar linear system. We have implemented our algorithm in Maple with floating-point arithmetic. For the determination of the exact numerator and denominator degrees and the number of errors we use SVD-based numeric rank computations. The arising linear systems for the error-corrected parametric solution are shown to be well-conditioned even when the input scalars have noise.

Joint work with Erich L. Kaltofen.

Thomas Prest (ENS Paris)

Thursday January 29, 2015 at 10h15, LUG meeting room.

Algorithms for ideal lattices: Gram-Schmidt orthogonalization in quadratic time and Gaussian sampling in linear space

Ideal lattices are widely used in lattice-based cryptography. In such lattices, a basis can be obtained from $$n$$ “translations” of a single vector. This structure typically saves a factor $$\tilde{O}(n)$$ in space and time on many cryptographic operations.

Until now, however, the computation of the Gram-Schmidt orthogonalization of bases of such lattices did not take advantage of this structure. Neither did the Gaussian sampling procedure described by Gentry, Peikert and Vaikuntanathan, which is also the one that outputs the shortest vectors.

The vectors of the bases used for ideal lattices are related by an isometry. More precisely, there exist an isometry $$r$$ and a vector $$b$$ such that a basis of an ideal lattice can be written in the form $$(b,r(b),…,r^{n-1}(b))$$. This talk shows that if this isometry can be computed in time $$O(n)$$, then it is possible to compute the orthogonalization, as well as the QR decomposition, of such a basis in time $$O(n^2)$$ instead of $$O(n^3)$$.

This improvement also benefits Gaussian sampling. Indeed, it becomes possible to compute on the fly the $$n$$ orthogonalized vectors needed for this operation, instead of storing them as was previously necessary. The Gaussian sampling described by Gentry, Peikert and Vaikuntanathan can then be performed in space $$O(n)$$ instead of $$O(n^2)$$, while keeping an $$O(n^2)$$ running time.

This is joint work with Vadim Lyubashevsky.

Bastien Vialla (LIRMM).

Thursday December 18, 2014 at 10h15, LUG meeting room.

Efficient Sparse Matrix Vector Product over Finite Fields

The key operation in iterative algorithms of sparse linear algebra is the sparse matrix vector product (SpMV) or sparse matrix multi-vector product (SpMM), as most of the computation time is spent in those operations.
Nowadays, the peak performance of CPUs is increased by adding more cores and more vector instructions (SIMD), so it is crucial to take these aspects into account when designing algorithms and data structures.

In this talk, I will show how to modify current sparse matrix storage and modular arithmetic to take SIMD operations into account. I will also talk about the use of the Residue Number System (RNS) to design efficient medium-precision arithmetic (< 2048 bits), with application to the sparse matrices arising from the discrete logarithm problem over finite fields.

Roman Iakymchuk (LIP6).

Thursday December 11, 2014 at 10h15, LUG meeting room.

Numerical Reproducibility of BLAS routines for ExaScale Computing

As Exascale computing ($$10^{18}$$ operations per second) is likely to be reached within a decade, getting accurate results in floating-point arithmetic on such computers will be a challenge. However, another challenge will be the reproducibility of the results — meaning getting a bitwise identical floating-point result from multiple runs of the same code — due to the non-associativity of floating-point operations and dynamic scheduling on parallel computers.

In this talk, I will present reproducible and accurate (correctly rounded to nearest) algorithms for the fundamental linear algebra operations, such as the ones included in the BLAS library, in parallel environments such as Intel server CPUs, Intel Xeon Phi, and both NVIDIA and AMD GPUs. I will show that the performance of our algorithms is comparable with the classic BLAS routines.
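The root cause is easy to exhibit: floating-point addition is not associative, so any change in reduction order (from dynamic scheduling, a different thread count, etc.) can change the bit pattern of the result. A one-line illustration:

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one reduction order
right = a + (b + c)  # another reduction order

# The two results differ in the last bit, which is why a parallel sum
# whose reduction tree varies between runs is not bitwise reproducible,
# even though both results are accurate to within a rounding error.
```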

Gottfried Herold (Ruhr-Universität Bochum).

Thursday November 20, 2014 at 10h15, LUG meeting room.

Matrix Assumptions and Polynomial Spaces.

One of the most important cryptographic assumptions is arguably the Decisional Diffie-Hellman (DDH) Assumption, which asks to tell whether a group element is of the form $$g^{ab}$$, given $$g$$, $$g^a$$, $$g^b$$. Unfortunately, this assumption is known not to hold in groups that allow a (symmetric) pairing, so we need to replace DDH by other assumptions like the Linear Assumption or, more generally, any matrix assumption. In general, the type of assumption we consider states that some elementary problem from linear algebra, like telling whether a vector is in the image of a matrix, is infeasible if we are only given the inputs “in the exponent”.

For assumptions of cryptographic interest, the matrices involved are usually structured. This allows us to identify the vector in the exponent with a polynomial in a meaningful way. This additional structure arising from polynomials then leads to some interesting applications, which we will use both for improvements in efficiency and to gain new insights into existing constructions.

Julien Langou (University of Colorado Denver).

Thursday October 30, 2014 at 10h15, LUG meeting room.

Numerical stability of the Gram-Schmidt algorithm.

We survey results of the numerical stability of the Gram-Schmidt algorithm. We will consider a variety of variants: Modified Gram-Schmidt, Classical Gram-Schmidt, Iterated Gram-Schmidt, Iterated Gram-Schmidt with a criterion, Gram-Schmidt with a non-Euclidean inner product. We will see various results along the years starting in 1967 and ending in 2014.
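As a reference point for the variants above, here is a compact Modified Gram-Schmidt in plain Python (a textbook sketch, not tied to the talk's analysis). The only difference from Classical Gram-Schmidt is that each projection is removed from the *remaining* vectors immediately, which is what gives MGS its better numerical behaviour:

```python
def mgs(vectors):
    """Modified Gram-Schmidt: orthonormalize a list of linearly
    independent vectors (given as lists of floats).
    At step j, the j-th vector is normalized, then its component is
    subtracted from every remaining vector right away."""
    v = [list(x) for x in vectors]
    q = []
    for j in range(len(v)):
        norm = sum(c * c for c in v[j]) ** 0.5
        qj = [c / norm for c in v[j]]
        q.append(qj)
        for k in range(j + 1, len(v)):
            r = sum(a * b for a, b in zip(qj, v[k]))
            v[k] = [a - r * b for a, b in zip(v[k], qj)]
    return q
```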

Alon Rosen (IDC Herzliya).

Thursday October 16, 2014 at 14h00, LUG meeting room.

An Algebraic Approach to Non-Malleability.

In their seminal work on non-malleable cryptography, Dolev, Dwork and Naor showed how to construct a non-malleable commitment with logarithmically-many “rounds”/”slots”, the idea being that any adversary may successfully maul in some slots but would fail in at least one. Since then new ideas have been introduced, ultimately resulting in constant-round protocols based on any one-way function. Yet, in spite of this remarkable progress, each of the known constructions of non-malleable commitments leaves something to be desired.

In this talk I will present a new technique for constructing non-malleable protocols with only a single “slot”, improving in at least one aspect over each of the previously proposed protocols. Two direct byproducts of our new ideas are a four-round non-malleable commitment and a four-round non-malleable zero-knowledge argument, the latter matching the round complexity of the best known zero-knowledge argument (without the non-malleability requirement). The protocols are based on the existence of one-way functions and admit very efficient instantiations via standard homomorphic commitments and sigma protocols.

Our analysis relies on algebraic reasoning, and makes use of error correcting codes in order to ensure that committers’ tags differ in many coordinates. One way of viewing our construction is as a method for combining many atomic sub-protocols in a way that simultaneously amplifies soundness and non-malleability, thus requiring much weaker guarantees to begin with, and resulting in a protocol which is much trimmer in complexity compared to the existing ones.

Joint work with Vipul Goyal, Silas Richelson and Margarita Vald.

Paola Boito (Université de Limoges)

Thursday October 16, 2014 at 10h15, LUG meeting room

Fast eigenvalue computation for a class of structured matrices

A classical approach to the numerical computation of the roots of a given polynomial consists in computing the eigenvalues of its companion matrix. This method comes in several variants, depending on the chosen polynomial basis, and can also be formulated in terms of matrix pencils.

The matrices involved in this approach generally have a particular rank structure: they are quasiseparable matrices. The quasiseparable structure makes it possible to develop fast versions of the QR/QZ algorithm for eigenvalue computation; the complexity drops from $$O(n^3)$$ floating-point operations for the classical QR/QZ to $$O(n^2)$$ operations for the structured version.

In this talk, I will present the results obtained on this topic since 2008, in collaboration with Yuli Eidelman and Luca Gemignani. We will look in particular at the theoretical aspects of the problem, the properties and representation of quasiseparable matrices, the implementations carried out, open questions, and some future directions.

Martin Albrecht (University of London).

Thursday October 9, 2014 at 10h15, LUG meeting room.

Algebraic Algorithms for LWE Problems.

We analyse the complexity of algebraic algorithms for solving systems of linear equations with noise. Such systems arise naturally in the theory of error-correcting codes as well as in computational learning theory. More recently, linear systems with noise have found application in cryptography. The Learning with Errors (LWE) problem has proven to be a rich and versatile source of innovative cryptosystems, such as fully homomorphic encryption schemes. Despite the popularity of the LWE problem, the complexity of algorithms for solving it is not very well understood, particularly when variants of the original problem are considered. Here, we focus on and generalise a particular method for solving these systems, due to Arora & Ge, which reduces the problem to non-linear but noise-free system solving. Firstly, we provide a refined complexity analysis for the original Arora-Ge algorithm for LWE. Secondly, we study the complexity of applying algorithms for computing Gröbner bases, the fundamental tool in computational commutative algebra, to solving Arora-Ge-style systems of non-linear equations. We show positive and negative results. On the one hand, we show that the use of Gröbner bases yields an exponential speed-up over the basic Arora-Ge approach. On the other hand, we give a negative answer to the natural question whether the use of such techniques can yield a subexponential algorithm for the LWE problem. Under mild assumptions – which we experimentally verified – we show that it is highly unlikely that such an improvement exists.

We also consider a variant of LWE known as BinaryError-LWE introduced by Micciancio and Peikert recently. By combining Gröbner basis algorithms with the Arora-Ge modelling, we show – under a mild, experimentally verified assumption – that BinaryError-LWE can be solved in subexponential time as soon as the number of samples is quasi-linear, e.g. $$m=O(n\log\log n)$$. We also derive precise complexity bounds for BinaryError-LWE with $$m=O(n)$$, showing that this new approach yields better results than the best currently-known generic (exact) CVP solver as soon as $$m/n \geq 6.6$$. More generally, our results provide a good picture of the hardness degradation of BinaryError-LWE for a number of samples ranging from $$m = n(1 + \Omega(1/\log n))$$ (a case for which BinaryError-LWE is as hard as solving some lattice problem in the worst case) to $$m=O(n^2)$$ (a case for which it can be solved in polynomial-time). This addresses an open question from Micciancio and Peikert. Whilst our results do not contradict the hardness results obtained by Micciancio and Peikert, they should rule out BinaryError-LWE for many cryptographic applications.

Matei Istoan (Laboratoire CITI, Lyon).

Thursday October 2, 2014 at 10h15, LUG meeting room.

Sums of products computing just right.

Many digital filters and signal-processing transforms can be expressed as a sum of products with constants (SPC). In this presentation we will address the automatic construction of low-precision but high-accuracy SPC architectures: these architectures are specified as last-bit accurate with respect to a mathematical definition. They behave as if the computation was performed with infinite accuracy, then rounded only once to the low-precision output format. This eases the task of porting double-precision code (e.g. Matlab) to low-precision hardware or FPGA. We will also discuss the construction of the most efficient architectures obeying such a specification, introducing several architectural improvements to this purpose. This approach is demonstrated in a generic, open-source architecture generator tool built upon the FloPoCo framework. It is evaluated on Finite Impulse Response filters for the ZigBee protocol.
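The specification "as if computed with infinite accuracy, then rounded once" can be modeled in software by evaluating the sum of products exactly over the rationals before a single final rounding. A toy reference model (the generated hardware of course does nothing so expensive; the function name is hypothetical):

```python
import math
from fractions import Fraction

def spc_round_once(coeffs, xs, lsb):
    """Toy last-bit-accurate sum of products with constants:
    evaluate sum(c_i * x_i) exactly over the rationals, then round
    ONCE to a fixed-point grid of step 2**lsb (nearest, ties to even)."""
    exact = sum(Fraction(c) * Fraction(x) for c, x in zip(coeffs, xs))
    step = Fraction(2) ** lsb
    n = exact / step                  # position on the output grid
    lo = math.floor(n)
    frac = n - lo
    if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and lo % 2):
        lo += 1                       # round half to even
    return lo * step
```

Any finite-precision architecture claiming last-bit accuracy must return exactly this value; the interesting engineering problem, addressed in the talk, is doing so with as little internal precision as possible.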

Victor Magron (LAAS-CNRS, Toulouse).

Thursday September 18, 2014 at 10h15, LUG meeting room.

NLCertify: A Tool for Formal Nonlinear Optimization.

NLCertify is a software package for handling formal certification of nonlinear inequalities involving transcendental multivariate functions. The tool exploits sparse semialgebraic optimization techniques with approximation methods for transcendental functions, as well as formal features. Given a box and a transcendental multivariate function as input, NLCertify provides OCaml libraries that produce nonnegativity certificates for the function over the box, which can be ultimately proved correct inside the Coq proof assistant.

Frédéric Chyzak (INRIA).

Thursday September 11, 2014 at 10h15, LUG meeting room.

The ABC of creative telescoping: algorithms, bounds, complexity.

Creative telescoping is an algorithmic principle developed since the 1990s in combinatorics and computer algebra, notably since the work of Doron Zeilberger, for computing with parametrized sums and integrals, whether to find explicit forms or to prove integral or summation identities. The process is particularly well suited to a large family of functions and sequences given by linear differential and recurrence equations, be they special functions of analysis, sequences from combinatorics, or families of orthogonal polynomials.

In this talk, I will retrace the evolution of the algorithms, and of my contributions, in adapting the process to ever more general classes of functions: from the initial setting of hypergeometric sequences, given by recurrences of order 1, to functions given by equations of higher order, and on to functions given by non-zero-dimensional ideals.

The difficulty of obtaining fast implementations in all these cases lies in the computation of a certificate justifying the application of creative telescoping, a certificate that is by nature of large size. This motivated my study of the complexity of the process. Several avenues of improvement have been explored: first by trying to keep this certificate compact, then by obtaining validated algorithms that do not compute it at all. As is often the case, estimating the arithmetic sizes of the objects involved in creative telescoping both guided the development of new, more efficient algorithms and made their theoretical complexity analysis possible.

To conclude, I will briefly indicate the direction my recent work on the subject has taken, towards formal proof, which brings out avenues for a better justification of the application of creative telescoping.

Amit Deshpande (Microsoft Research India).

Tuesday September 2, 2014 at 15h15, LUG meeting room.

Sampling-based approaches to low-rank approximation and their applications.

Row/column sampling techniques were initially introduced to compute low-rank approximations of matrices faster than the computation of the SVD. In this talk, I will give a quick overview of squared-length sampling, adaptive sampling, relevance-score sampling, and so on, survey their connections to other topics such as determinantal point processes, rounding Lasserre SDP solutions, graph sparsification, and convex geometry, and discuss interesting open problems in these areas.
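The simplest of these schemes, squared-length sampling, picks row i of a matrix A with probability proportional to its squared Euclidean norm. A minimal sketch of the distribution (illustrative only; the function name is hypothetical):

```python
def squared_length_probs(rows):
    """Distribution for squared-length row sampling: row i is chosen
    with probability ||A_i||^2 / ||A||_F^2, so rows carrying more of
    the Frobenius mass of A are sampled more often."""
    sq_norms = [sum(x * x for x in row) for row in rows]
    total = sum(sq_norms)  # squared Frobenius norm of A
    return [s / total for s in sq_norms]
```

Sampling a few rows from this distribution and rescaling them yields, in expectation, a good low-rank sketch of A without ever computing an SVD.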

# 2013-2014

Jung Hee Cheon (Seoul National University).

Thursday July 10, 2014 at 10h15, LUG meeting room.

Ciphertext Compression of Asymmetric Homomorphic Encryption.

Fully homomorphic encryption is an encryption scheme which allows arbitrary
computations on encrypted data without decryption. In spite of significant
improvements, FHEs are still far from practical for most applications. In particular,
the large ciphertext size is a big bottleneck for transmission and storage. In this talk,
we introduce a method to reduce the ciphertext size of public-key homomorphic
encryption by combining the ciphertext compression technique of symmetric homomorphic
encryption with the key switching technique.

Aurélien Greuet (LIFL, Université Lille 1).

Thursday July 3, 2014 at 10h15, LUG meeting room.

Polynomial optimization and polar varieties: theory, algorithms, and implementation.

Computing the global infimum f* of an n-variate polynomial subject to
constraints is a central question that appears in many areas of
engineering science. For some applications, it is important to obtain
reliable results. Many techniques have been developed for the case where
the constraints are given by polynomial inequalities.

In this talk, we focus on the problem of optimizing an n-variate
polynomial subject to constraints defined by n-variate polynomial
equations. The goal is to obtain efficient and reliable tools,
algorithms, and implementations to solve these optimization problems.

The strategy is to reduce the optimization problem, posed under
constraints that define algebraic sets of arbitrary dimension, to an
equivalent problem under new constraints whose dimension is under
control. The algebraic variety defined by these new constraints is the
union of the critical locus of the objective polynomial and of an
algebraic set of dimension at most 1. To this end, we use geometric
objects defined as critical loci of linear projections.

Thanks to this control of the dimension, we prove the existence of
certificates for lower bounds on f* over our new varieties. These
certificates are given by sums of squares, and we do not assume that
f* is attained.

Likewise, we use the properties of our geometric objects to design an
exact algorithm for computing f*. If one exists, the algorithm also
returns a minimizer. For a problem with s constraints and polynomials of
degree at most D, the complexity is essentially cubic in (sD)^n and
linear in the evaluation complexity of the input. The implementation,
available as a Maple library, reflects this complexity. It has made it
possible to solve problems out of reach of the other exact algorithms.

The work presented comes from collaborations with Feng Guo, Mohab
Safey El Din, and Lihong Zhi.

Thursday June 26, 2014 at 10h15, LUG meeting room.

How not to generate random numbers.

Randomness is essential to cryptography: cryptographic
security depends on private keys that are unpredictable to an
attacker. But how good are the random number generators that are
actually used in practice? In this talk, I will discuss several
large-scale surveys of cryptographic deployments, including TLS, SSH,
Bitcoin, and secure smart cards, and show that random number
generation flaws are surprisingly widespread. We will see how many of
the most commonly used public key encryption and signature schemes can
fail catastrophically if used with faulty random number generators,
and trace many of the random number generation flaws we
encountered to specific implementations and vulnerable implementation
patterns.

Frédéric Chyzak (Inria).

(seminar postponed).

The ABC of creative telescoping: algorithms, bounds, complexity.

Igor Shparlinski (The University of New South Wales, Sydney).

Thursday June 5, 2014 at 10h15, LUG meeting room.

Noisy Polynomial Interpolation and Approximation in Finite Fields.

We consider a modification of the noisy polynomial interpolation
problem of recovering an unknown polynomial f(X) in Z[X]
from approximate values of the residues of f(t) modulo a
prime p at polynomially many points t taken from a short interval.
Our results are based on lattice algorithms and also on a variety
of other number-theoretic techniques, including bounds on
the number of small values of polynomials for arguments from
short intervals. This work is motivated by a recently introduced HIMMO key
distribution scheme.

David Lutz (ARM, Austin TX).

Thursday May 15, 2014 at 10h15, LUG meeting room.

Low Latency Is Low Energy.

For an out-of-order CPU, the multi-cycle execution units (e.g., multipliers or dividers or floating-point adders)
consume only a small fraction of the area and power used by the core. By reducing the latency of these units, we
can often complete a task early, saving large amounts of total energy. We describe a simple formula to gauge the
relationship between execution unit power, core power, execution unit latency, and overall energy required for a task.
We then use that formula to argue for lower latency execution, even if the power of the execution units has to be
increased.
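
The talk's exact formula is not reproduced in this abstract; the following back-of-envelope model, with made-up numbers, illustrates the argument that a hotter but lower-latency execution unit can reduce total task energy:

```python
# Hypothetical model (not the talk's formula): a task running for time t
# with the whole core active costs E = (P_core + P_unit) * t, so a
# shorter t can outweigh a higher execution-unit power P_unit.
def task_energy(p_core, p_unit, t):
    return (p_core + p_unit) * t

baseline = task_energy(p_core=1000.0, p_unit=50.0, t=1.0)  # 1050 units
# Lower-latency unit: 60% more unit power, but the task finishes in 0.7 t.
faster = task_energy(p_core=1000.0, p_unit=80.0, t=0.7)    # ~756 units
assert faster < baseline
```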

David Galindo (LORIA).

Thursday April 24, 2014 at 10h15, LUG meeting room.

Implementation and Evaluation of a Leakage-Resilient ElGamal Key Encapsulation Mechanism.

Leakage-resilient cryptography aims at extending the guarantees delivered by the provable
security paradigm to physical cryptography. The constructions provided by this new approach
persistently present an Achilles heel: a bounded leakage assumption is needed. A gap hence
exists between the promises and the minimal requirements of leakage-resilient designs. Several
important recent works have contributed to closing this gap. In this work we take a different
step forward in the same direction, by implementing and analyzing a leakage-resilient pairing-based
ElGamal key encapsulation mechanism (BEG-KEM) proposed by Kiltz and Pietrzak (ASIACRYPT 2010). We show,
empirically, that under the most favorable interpretation of the leakage model (namely, that every bit
leaked corresponds to a single bit – without noise – of the secret input), a naive implementation
of exponentiation in the pairing base group makes it impossible to meet the leakage bound. Instead of
tackling the difficult problem of finding the right exponentiation algorithm that meets the leakage
bound, we propose a variation of BEG-KEM decryption that does not use exponentiation, but rather an
encoding into the base group, due to Fouque and Tibouchi (LATINCRYPT 2012). We then argue that this
new implementation is likely to meet the required leakage bound, and support this by a careful
examination of the state-of-the-art research. To our knowledge, ours is the first implementation
in an embedded processor of a leakage-resilient primitive using public-key cryptography.

Guillaume Chèze (Institut de Mathématiques de Toulouse, Université Paul Sabatier Toulouse 3).

Thursday April 17, 2014 at 10h15, LUG meeting room.

On the computation of rational first integrals.

This talk deals with differential systems of the form X’ = A(X,Y), Y’ = B(X,Y),
where X’ and Y’ denote derivatives with respect to time t, and A, B are in Q[X,Y].

In this context, we wish to obtain (when possible) a rational function whose level
sets correspond to the solution trajectories of the differential system under study.
This then provides a symbolic representation of the solutions of the differential system.

We will see both old and more recent algorithms for solving this problem.
In particular, I will present an algorithm, obtained in collaboration with A. Bostan,
T. Cluzeau, and J.-A. Weil, that reduces this problem to the solution of a linear system.

Nick Trefethen (University of Oxford).

Thursday April 10, 2014 at 10h15, lecture hall K (ground floor of the GN1 building).

Polynomial and rational approximations in Chebfun

The Chebfun project, based at Oxford University, applies ideas from polynomial and rational approximation theory to enable practical computation with functions of a real variable. This talk will survey some of the mathematics and algorithms, with a demonstration of Chebfun.
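
The core Chebfun idea, replacing a smooth function by a polynomial interpolant in Chebyshev points, can be sketched with NumPy's Chebyshev class (an illustration of the principle, not Chebfun itself):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Represent cos by its degree-20 interpolant in Chebyshev points of
# [-1, 1]; subsequent computations then operate on the polynomial.
f = Chebyshev.interpolate(np.cos, deg=20, domain=[-1, 1])

# For a smooth function, the interpolant is accurate to near machine
# precision at this degree.
assert abs(f(0.5) - np.cos(0.5)) < 1e-12
```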

Thursday April 3, 2014 at 10h15, LUG meeting room.

High-precision computation (… and sometimes low): Mathematical physics and dynamics.

At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most
scientific applications. However, for a rapidly growing body of important scientific computing
applications, a higher level of numeric precision is required. Such calculations are facilitated
by high-precision software packages that include high-level language translation
modules to minimize the conversion effort. On the other hand, the use of massive GPU cards
allows extremely fast calculations in low precision simulations
where high precision is unnecessary. This talk presents an overview of recent
applications of these techniques, especially in computational dynamics, and provides
some mathematical analysis of their numerical requirements.
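
As an illustration of such packages (mpmath here, a choice of mine; the abstract does not name one), the working precision can be raised far beyond the ~16 significant decimal digits of binary64:

```python
import math
from mpmath import mp

# binary64 carries about 16 significant decimal digits; mpmath's
# working precision is set freely.
mp.dps = 40                  # 40 significant decimal digits
pi40 = +mp.pi                # evaluate the constant at current precision

print(math.pi)               # 3.141592653589793 (binary64)
print(pi40)                  # pi to 40 digits
assert str(pi40).startswith("3.14159265358979323846264338327950288")
```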

Bruno Salvy (Aric, LIP).

Thursday March 27, 2014 at 10h15, LUG meeting room.

Multiple binomial sums.

Multiple binomial sums have a great deal of structure, which makes it possible to develop specific
algorithms for proving or finding closed forms or recurrences. In particular, these sums can be
represented as diagonals of rational functions, and recurrences for the corresponding sequences can
then be found by computing a differential equation for a certain multiple integral of a rational
function. We study this approach both from the complexity point of view and in practice, and compare
it with the methods derived from Zeilberger’s algorithm. This is joint work with Alin Bostan and Pierre Lairez.

Pierre-Jean Spaenlehauer (INRIA Nancy-Grand Est).

Thursday February 27, 2014 at 10h15, LUG meeting room.

A Newton-like iteration and algebraic methods for Structured Low-Rank Approximation.

Given a linear-affine space of matrices E with real entries, a data
matrix U in E, and a target rank r, the Structured Low-Rank
Approximation problem consists in computing a matrix M in E which is
close to U (with respect to the Frobenius norm) and has rank at most r.
This problem appears with different flavors in a wide range of
applications in Engineering Sciences and symbolic-numeric
computations.

We propose an SVD-based numerical iterative method which converges
locally towards such a matrix M. This iteration combines features of
the alternating projections algorithm and of Newton’s method, leading
to a proven local quadratic rate of convergence under mild
transversality assumptions. We also present experimental results which
indicate that, for some range of parameters, this general algorithm is
competitive with numerical methods for approximate univariate GCDs and
low-rank matrix completion (which are particular instances of Structured Low-Rank
Approximation).

In a second part of the talk, we focus on the algebraic structure and
on exact methods to compute symbolically the nearest structured
low-rank matrix M to a given matrix U in E with rational entries. We
propose several techniques to recast this problem as a system of polynomial
equations in order to solve it with Gröbner basis software.

The first part of the talk is a joint work with Eric Schost, the second
part is a joint work with Giorgio Ottaviani and Bernd Sturmfels.

Denis Arzelier (CNRS-LAAS, Toulouse).

Thursday February 20, 2014 at 10h15, LUG meeting room.

Linearized Fuel-Optimal Space rendezvous: Some theoretical and numerical aspects.

The optimal fuel impulsive time-fixed rendezvous problem is briefly reviewed in the context of the more general problem of impulsive optimal control. In a linear setting, it may be reformulated as a non-convex polynomial optimization problem for a pre-specified fixed number of velocity increments. Some theoretical results are first recalled and different algorithms are presented. In particular, when addressing the problem of a free number of maneuvers, one has to resort to a mixed iterative algorithm involving the solution of a system of polynomial equations at each step. Different numerical issues are finally pointed out. Examples of realistic rendezvous missions will illustrate this talk.

Johan Nielsen (University of Ulm).

Thursday February 13, 2014 at 10h15, LUG meeting room.

Solving 2D Padé approximations: the what, why and hows.

We consider a type of Padé approximation over polynomial rings over fields:
2D Padé approximations. These approximations occur frequently in the decoding
of algebraic codes (most recently in a preprint coauthored by several members
of the AriC team).

We’ll see how such approximations can be computed using row reduction of polynomial
matrices. We take a closer look at the Mulders–Storjohann algorithm for doing this
and show that its complexity is linked to the orthogonality defect of the input
matrix, implying that it is faster than initially expected when applied to 2D
Padé approximation problems.
We then present a new algorithm, which essentially carries out the computations
performed by Mulders–Storjohann in a demand-driven manner, yielding an
algorithm which is often even faster. This algorithm has a very
Berlekamp–Massey-like quality, but is derived entirely in the module language.

Thomas Pöppelmann (Ruhr-Universität Bochum).

Thursday January 23, 2014 at 10h15, LUG meeting room.

Practical Lattice-Based Signatures.

Novel public-key cryptosystems beyond RSA and ECC are urgently required
to ensure long-term security in the era of quantum computing. One
alternative to such established schemes is lattice-based cryptography
which offers elegant security reductions and versatile tools such as the
learning with errors (LWE) problem. However, a critical issue in the
construction and implementation of such alternative cryptosystems is to
achieve security and practicality at the same time.

In this talk we will give an overview on current research dealing with
the implementation and optimization of efficient ideal lattice-based
signatures and underlying arithmetic. Ideal lattices are used to reduce
the generally very large key sizes of standard lattice construction and
also enable operations in quasi-linear runtime due to the applicability
of the number theoretic transform (NTT). First results are promising and
show that practical ideal lattice-based signatures can achieve high
speed on platforms such as FPGAs and desktop CPUs, especially when
equipped with vector register extensions (AVX).

Sébastien Tavenas (MC2, LIP, Lyon).

Thursday January 16, 2014 at 10h15, LUG meeting room.

The number of real roots for a system of sparse polynomials: The Fewnomials Theory and Sevostyanov’s problem.

A non-zero degree-d polynomial has at most d complex roots. But if we consider a “nice” high-degree polynomial, for example x^2000 - 1,
we notice that the number of real roots is small. This idea has been known since Descartes’ work: he showed that if a polynomial has t monomials
(in the example, t = 2), then the number of real roots is bounded by 2t - 1 (independently of the degree).
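
Descartes' bound from the paragraph above can be checked mechanically: only the signs of the nonzero coefficients matter, not the degree. A minimal sketch:

```python
# Descartes' rule of signs: the number of positive real roots is at most
# the number of sign changes in the (sparse) coefficient sequence.
def sign_changes(coeffs):
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# x^2000 - 1 stored sparsely as its t = 2 nonzero coefficients:
pos_bound = sign_changes([1, -1])   # at most 1 positive root
# For negative roots, substitute x -> -x: (-x)^2000 - 1 = x^2000 - 1.
neg_bound = sign_changes([1, -1])   # at most 1 negative root
assert pos_bound == 1 and neg_bound == 1   # the actual roots are x = ±1
```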

What happens now if we consider systems of polynomials? Bézout’s Theorem states that the number of non-degenerated complex roots is bounded
by the product of the degrees. Can we get a similar result for the number of real roots of a system of sparse polynomials?

For solving these problems, Khovanskii developed the Fewnomials Theory. However, the order of magnitude of the bounds is still unknown.
We will review this theory and the results around these problems, and then we will focus on a particular case, Sevostyanov’s problem:
what is the maximal number of real solutions of a system of one sparse polynomial and one dense polynomial?

Nicolas Mascot (Institut de Mathématiques de Bordeaux, Université Bordeaux I).

Thursday January 9, 2014 at 10h15, LUG meeting room.

Computing modular Galois representations.

Modular forms are Fourier series whose coefficients carry a wealth of
(notably arithmetic) information. We will see how computing an object
attached to each modular form (more precisely, a Galois representation)
allows these coefficients to be computed efficiently modulo a small prime.

Maximilian Jaroschek (RISC, Johannes Kepler University, Linz, Austria).

Thursday December 19, 2013, LUG meeting room.

Removable Singularities of Ore Polynomials.

Ore algebras are an algebraic structure used to model many different
kinds of functional relations like differential and recurrence equations. The
elements of an Ore algebra are polynomials whose multiplication is, in
general, non-commutative. As a consequence, Gauß’ lemma
does not hold in many Ore polynomial rings and hence the product of two
primitive Ore polynomials is not necessarily primitive. This observation
leads to the distinction of non-removable and removable singularities. We
present some of the effects of these singularities on computations with Ore
operators.

The set of operators of an Ore algebra that give zero when applied to
a given function forms a left ideal. The cost of computing an element of
this ideal depends on the size of the coefficients (the degree) and the order
of the operator. We show how to derive an order-degree curve based on
the study of removable singularities. For a given Ore operator, this is a
curve in the (r, d)-plane such that for all points (r, d) above this curve,
there exists a left multiple of order r and degree d of the given operator.
When computed for the generator of an operator ideal from applications
like physics or combinatorics, the resulting bound is usually sharp.

The generator of a left ideal in an Ore polynomial ring is the greatest
common right divisor of the ideal elements, which can be computed by the
Euclidean algorithm. Polynomial remainder sequences like the subresultant
sequence contain the intermediate results of the Euclidean algorithm
when applied to (non-)commutative polynomials. The running time of
the algorithm depends on the size of the coefficients of the remainders.
We show how information about the non-removable singularities can be
used to generalize an improvement of the subresultant sequence to Ore
polynomials.

Benoit Lopez (PEQUAN team, LIP6, Paris).

Thursday December 12, 2013, LUG meeting room.

Optimisation of digital filter implementation in fixed-point arithmetic.

The great majority of embedded signal processing algorithms are implemented using
digital devices such as DSPs or FPGAs. For power consumption or area reasons, the
computation on these devices is mainly based on integer arithmetic (rather than
floating-point arithmetic), with fixed-point arithmetic used as an approximation
of real numbers. Unfortunately, the numerical implementation of such algorithms suffers
from a deterioration in performance and characteristics, due to the quantization of
the embedded coefficients and the roundoff errors occurring in the computations.
The main objective of my PhD thesis is to optimize and automate the implementation
of a given filter in fixed-point arithmetic for these devices. In this talk I
present the main work I carried out during the first two years of my PhD thesis. After
a short introduction to fixed-point numbers and digital filter processing, I introduce
the ordered sum-of-products (oSoP), which represents the different orderings for the
computation of a sum-of-products, and then I describe ongoing work on removing
the “useless” bits of an N-term sum in fixed-point arithmetic.
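
The fixed-point representation referred to above can be sketched in a few lines (the Q-format parameters here are illustrative, not from the thesis):

```python
# A real x is stored as the integer round(x * 2^f) for a fixed number f
# of fractional bits; the quantization error is at most 2^(-f-1).
def to_fixed(x, frac_bits):
    return round(x * (1 << frac_bits))

def to_real(q, frac_bits):
    return q / (1 << frac_bits)

# Quantizing a filter coefficient to 8 fractional bits:
q = to_fixed(0.1, 8)                 # 26
err = abs(to_real(q, 8) - 0.1)       # |26/256 - 0.1| = 0.0015625
assert q == 26 and err <= 2 ** -9    # within half an lsb
```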

Olga Kupriianova (PEQUAN team, LIP6, Paris).

Thursday November 28, 2013 at 10h15, LUG meeting room.

The FMA operation, which computes a multiplication and an addition with
only one rounding, is required by the IEEE 754-2008 standard. However,
the standard only requires the inputs and the output to be of the same
radix. A full set of mixed-radix floating-point operations does not
exist at the moment, although several approaches have already been
published [1,2]. This work presents approaches to implement a
mixed-radix FMA and algorithms for computing the worst cases.

References:

[1] N. Brisebarre, C. Lauter, M. Mezzarobba, and J.-M. Muller, “Comparison
between binary64 and decimal64 floating-point numbers,” in Proceedings
of the 2013 IEEE 21st Symposium on Computer Arithmetic, April 2013.

[2] O. Kupriianova, Ch. Lauter, J.-M. Muller, “Radix Conversion for
IEEE754-2008 Mixed Radix Floating-Point Arithmetic,” in Proceedings of
the 47th Asilomar Conference on Signals, Systems, and Computers,
November 2013.
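
The radix mismatch that makes mixed-radix operations delicate is easy to exhibit with Python's decimal module: a radix-10 literal and the radix-2 floating-point constant parsed from it are different numbers.

```python
from decimal import Decimal

# 0.1 has no finite binary expansion, so the binary64 constant 0.1 and
# the decimal value 0.1 differ; Decimal(float) converts exactly.
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
assert Decimal(0.1) != Decimal("0.1")

# 0.25 = 2^-2 is exactly representable in both radices.
assert Decimal(0.25) == Decimal("0.25")
```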

Martin Albrecht (Technical University of Denmark, Copenhagen).

Thursday November 21, 2013 at 10h15, LUG meeting room.

Solving LWE with BKW.

The Learning with Errors Problem (LWE) is a generalisation of
Learning Parity with Noise (LPN) to larger moduli (and different error
distributions). LWE is at the heart of many recent advances in cryptographic
constructions such as fully homomorphic encryption which means that
establishing the concrete hardness of LWE instances is a pressing issue. Three
families of algorithms are known in the literature for solving LWE: (a)
lattice reduction, (b) reduction to noise-free polynomial system solving, and (c)
combinatorial algorithms. In this talk we will first revisit the most
prominent of the combinatorial algorithms – the Blum-Kalai-Wasserman (BKW)
algorithm – and connect it to more traditional linear algebra algorithms
such as PLE/CUP decomposition using Gray code tables. In a second part
we then adapt this algorithm to the special but popular case where the secret
we wish to recover is “small”.

Joint work with C. Cid, J.-C. Faugère, R. Fitzpatrick, and L. Perret.

Olivier Blazy (Ruhr-Universität Bochum).

Thursday November 7, 2013 at 10h15, LUG meeting room.

Errorless Smooth Projective Hash Function based on LWE, From worst-case Assumptions up to Universal Composability.

Smooth Projective Hash Functions (SPHF) are one of the base tools to build interactive protocols; and this notion
has led to the construction of numerous protocols enjoying strong security notions, such as security
in the Bellare-Pointcheval-Rogaway (BPR) model or even Universal Composability (UC).

Yet, the construction of SPHF has been almost limited to discrete-logarithm or pairing type assumptions up to now.
This stands in contrast with domains such as homomorphic encryption or functional encryption, where Lattice Based
Cryptography has already caught up with and overtaken discrete-log/pairing-based cryptography.

So far, work in the direction of UC based on lattices is almost restricted to a paper from Peikert,
Vaikuntanathan, and Waters (Crypto 2008) dealing with Oblivious Transfer in the UC framework, and work
in the direction of password-authenticated key exchange protocols (PAKE) to one from Katz and Vaikuntanathan
(Asiacrypt 2009) on a 3-round Password-Authenticated Key Exchange, but restraining itself to the BPR model.

It seems that dealing with errors in those contexts is not as easy as it is for encryption.
In this talk, I will show the source of the problem, namely the lattice version of the
Diffie-Hellman key exchange protocol: the key agreement is only approximate. With a simple
trick, we obtain a true, errorless, one-round key exchange from LWE. I will then show that
this trick can be adapted to various lattice encryption schemes, leading, with some
technicalities, to errorless SPHFs.

From there, we derive three new results, namely the first lattice-based versions of the following
protocols: a one-round PAKE secure in the BPR model, a 3-round PAKE secure in the UC model, and a
UC commitment scheme, all of them based on the SIS and LWE assumptions.

This is a joint work with Céline Chevalier, Léo Ducas and Jiaxin Pan.

Marc Baboulin (Université Paris-Sud, Inria Saclay – Île-de-France).

Thursday October 17, 2013 at 10h15, LUG meeting room.

Using condition numbers to assess numerical quality in HPC applications.

We explain how condition numbers of problems can be used to assess the quality
of a computed solution. We illustrate our approach by considering the example of
overdetermined linear least squares (linear systems being a special case of the latter).
Our method is based on deriving exact values or estimates for the condition number
of these problems. We describe algorithms and software to compute these quantities
using standard parallel libraries. We present numerical experiments in a physical
application and we propose performance results using new routines on top of the
multicore-GPU library MAGMA.
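
The rule of thumb underlying this approach, forward error roughly bounded by condition number times backward error, can be illustrated on a small linear system (linear systems being the special case mentioned above; the example is mine, not from the talk):

```python
import numpy as np

# A mildly ill-conditioned 2x2 system whose exact solution is [1, 1].
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-8]])
kappa = np.linalg.cond(A)          # about 4e8
x = np.linalg.solve(A, A @ np.array([1.0, 1.0]))

# Forward error consistent with kappa * machine epsilon (~4e8 * 2e-16):
assert kappa > 1e8
assert np.linalg.norm(x - 1.0) < kappa * np.finfo(float).eps * 100
```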

Tancrède Lepoint (CryptoExperts – ENS Paris).

Thursday October 3, 2013 at 10h15, LUG meeting room.

(Fully) Homomorphic Encryption: From Theory to Practice.

Since the breakthrough of Gentry presenting the first plausible candidate
construction, the past few years have seen intensive work on new
constructions achieving fully homomorphic encryption. However, all these
primitives have a major downside: they are orders of magnitude slower than
usual public-key primitives.

In this talk, we will tackle the practicality of (fully) homomorphic
encryption, its potential and limitations. We will also discuss the inherent
problem of noise growth in the ciphertexts, the intricate parameter
derivation process (using BKZ-2.0), and some promising recent results and
improvements.

Emmanuel Thomé (INRIA Nancy, projet CARAMEL).

Thursday September 19, 2013 at 10h15, LUG meeting room.

A quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic.

We present a new discrete logarithm algorithm, in the same vein as recent works by Joux, using an asymptotically more
efficient descent approach. The main result gives a quasi-polynomial heuristic complexity for the discrete logarithm problem
in finite fields of small characteristic. By quasi-polynomial, we mean a complexity of type n^(O(log n)), where n is the
bit-size of the cardinality of the finite field. Such a complexity is smaller than any L(ε) for ε > 0.
It remains super-polynomial in the size of the input, but offers a major asymptotic improvement compared to L(1/4+o(1)).
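
Spelling out the comparison (standard L-notation conventions, with n the bit-size of the field cardinality as in the abstract):

```latex
n^{O(\log n)} = \exp\bigl(O(\log^2 n)\bigr),
\qquad
L(\varepsilon) = \exp\bigl(O\bigl(n^{\varepsilon} (\log n)^{1-\varepsilon}\bigr)\bigr).
```

Since log² n = o(n^ε) for every fixed ε > 0, the first bound is eventually below any L(ε); it remains super-polynomial because log² n grows faster than the log n appearing in n^O(1) = exp(O(log n)).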

Louise Huot (LIP6).

Thursday September 12, 2013 at 10h15, LUG meeting room.

Polynomial systems solving by fast linear algebra.

Joint work with Jean-Charles Faugère, Pierrick Gaudry and Guénaël Renault.

Polynomial system solving is a classical problem in mathematics with a
wide range of applications. Therefore, its complexity is a central
object of study in theoretical computer science. Depending
on the context, solving has different meanings. To handle the
most general case, we consider a representation of the solutions
that allows us to easily recover the exact solutions
and/or a certified approximation.

Under some genericity assumptions, such a representation is given by the
lexicographical Gröbner basis of the system and consists of a set of
univariate polynomials. The best known algorithm for computing the
lexicographical Gröbner basis requires O(nD^3) arithmetic
operations, where n is the number of variables and D is the number of
solutions of the system. We improve this bound to
Õ(D^ω), where 2 ≤ ω < 2.3727 is the
exponent in the complexity of multiplying two dense matrices and the
Õ notation means that we neglect logarithmic factors. To
achieve this result we propose new algorithms that rely on fast
linear algebra. When the degree of the equations is bounded we
propose a deterministic algorithm. In the unbounded case we present a
Las Vegas algorithm.

# 2012-2013

Jérôme Ducoat (Nanyang Tech. Univ., Singapour)

Thursday July 18 at 10h15, LUG meeting room:

On the Design of Wiretap Codes

Wiretap codes are families of codes that promise
both reliability between the legitimate users Alice and Bob, and
confidentiality in the
presence of an eavesdropper Eve. We survey an error-probability point
of view on the design of wiretap codes for several continuous
channels, and discuss the resulting code design criteria. These give
rise to a series of interesting problems,
such as the understanding of a new lattice invariant, or the search
for number fields with particular discriminant and regulator.

Jean-Claude Belfiore (Télécom Paris Tech)

Thursday July 11 at 10h15, LUG meeting room:

Some problems of communications involving lattices

Lattices have become an important tool to construct
codes for communication systems impaired by Gaussian noise. We first
review the construction of Euclidean codes for the Gaussian
channel, based on lattices. After that, we present lattices endowed
with a multiplicative structure, which have become very important in the
case of wireless fading channels and wireless multiple-antenna
channels. These lattices are in fact ideal lattices coming either from
the canonical embedding of a number field or from the matrix
representation of a central division algebra.

Other communications problems involving lattice codes will be very
briefly presented to show the importance of that tool.

Stef Graillat (LIP6)

Thursday July 4 at 10h15, LUG meeting room:

Accurate evaluation of the k-th derivative of a
polynomial and its application to the convergence of Newton’s method.

In this talk we will present a compensated
algorithm for the evaluation of the k-th derivative of a polynomial
in the power basis. The proposed algorithm makes direct evaluation
possible without explicitly forming the expression of the k-th
derivative of the polynomial, and yields a very accurate result for
all but the most ill-conditioned evaluations. Forward error analysis
and running error analysis are performed by an approach based on the
data dependency graph. Numerical experiments illustrate the accuracy
and efficiency of the algorithm. We will use this algorithm to improve
the accuracy of Newton’s method for simple roots that are
ill-conditioned.
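
The error-free transformations underlying this compensated approach are easy to state in code. Below is a minimal Python sketch of a compensated Horner evaluation (the classical CompHorner scheme, not the k-th-derivative algorithm of the talk itself), with TwoProd implemented through Veltkamp splitting since no FMA is assumed:

```python
def two_sum(a, b):
    # Error-free transformation (Knuth): a + b = s + e exactly.
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def two_prod(a, b):
    # Error-free transformation (Dekker/Veltkamp): a * b = p + e exactly,
    # assuming binary64 round-to-nearest and no over/underflow.
    p = a * b
    split = 134217729.0  # 2**27 + 1
    ca = split * a
    ah = ca - (ca - a); al = a - ah
    cb = split * b
    bh = cb - (cb - b); bl = b - bh
    e = al * bl - (((p - ah * bh) - al * bh) - ah * bl)
    return p, e

def comp_horner(coeffs, x):
    # Compensated Horner scheme: the result is as accurate as if the
    # polynomial had been evaluated in twice the working precision.
    # coeffs[i] is the coefficient of x**i.
    s = coeffs[-1]
    c = 0.0
    for a in reversed(coeffs[:-1]):
        p, pi = two_prod(s, x)
        s, sg = two_sum(p, a)
        c = c * x + (pi + sg)  # accumulate the rounding errors
    return s + c
```

On the ill-conditioned example p(x) = (x-1)^5 evaluated near x = 1, plain Horner loses essentially all significant digits, while the compensated result is accurate to nearly full double precision.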

Bogdan Pasca (Altera)

Thursday, June 20 at 10h15 in the LUG meeting room:

Challenges and solutions for efficient arithmetic IP
design on modern FPGAs.

The adoption of FPGAs as mainstream accelerators
requires the existence of efficient high-level design tools (DSP
Builder Advanced, Altera OpenCL SDK), efficient arithmetic libraries
and an easy/automatic integration (handled automatically by OpenCL
SDK) with the host operating system. Building the mathematical library
for OpenCL (as specified by the Khronos Group) requires the
implementation of approximately 70 floating-point functions. In this talk
I will discuss the way we have implemented some of these functions,
focusing on division, tangent and arctangent functions. I will also
cover macro operators, presenting an FPGA-specific implementation of a
fused floating-point polynomial evaluator. Last, I will present a
low-cost implementation of a multiplicative-based FPU targeting
industrial applications.

Jean-Guillaume Dumas (LIG)

Thursday, June 13 at 10h15 in the LUG meeting room:

Sparse approaches for the distribution of patterns in long sequences

We present computations of pattern distributions in long sequences
generated by Markovian sources.

Our approach takes into account the compactness of the pattern
relative to the length of the sequence. In particular, we will present
an approach based on a fast Taylor expansion of a bivariate rational
reconstruction of the distribution.

The interest of our approach will be illustrated on two biological
applications: the transcription factors of human chromosome 5 and the
PROSITE signature of functional protein patterns. Joint work with
Grégory Nuel, CNRS-MAP5.

Mioara Joldes (LAAS)

Thursday, May 30 at 10h15 in the LUG meeting room:

Searching for sinks of the Hénon map using a
multiple-precision GPU arithmetic library.

Nowadays, GPUs are an important hardware platform
for many scientific computing applications that demand massive
parallel computations, but GPU-tuned multiple-precision arithmetic
libraries are currently scarce. We have developed a multiple-precision
floating-point arithmetic library in the CUDA programming language for
the NVidia GPU platform. We are currently using this library to locate
invariant sets of chaotic dynamical systems. I will detail the
numerical study, the GPU implementation, and the a posteriori
validation of the existence of stable periodic orbits (sinks) for the
Hénon map, for parameter values close to the canonical ones. This is a
joint work with V. Popescu and W. Tucker.

Léo Ducas (LIENS)

Thursday, May 23 at 14h00 in the LUG meeting room (note: this is not the usual time!):

Sampling Discrete Gaussians: the Bottleneck of Lattice
Based Cryptography.

Lattice-based cryptography has many advantages
over elliptic-curve or factorization-based constructions. On the
theoretical side, the hardness of the underlying problems is better
understood, it remains resistant to quantum algorithms, and new
constructions have been unlocked, for example with the breakthrough of
Fully Homomorphic Encryption. On the practical side, it seems well
adapted to small and parallel architectures, most operations being
linear algebra with a very small modulus.

However, early schemes such as NTRUSign were broken by statistical
attacks, the reason being that the behavior of Babaï’s algorithms
reflects the geometry of the secret keys. In 2008, Gentry et al.
showed that it is possible to prevent such statistical leakage by
using Discrete Gaussian distributions. However, sampling such
distributions is quite slow in practice; in particular, it requires
computations with real numbers at high precision.

This presentation will focus on the algorithmic and implementation
aspects of Discrete Gaussian sampling, and on cryptographic
applications. Our first topic will be the analysis of existing
algorithms, and improvements using laziness techniques, shrinking the
overall complexity from quadratic to quasi-linear. Then we will focus
on a less general case of Gaussian sampling, for which we can tailor
bit-level algorithms that require no floating-point arithmetic at
all. These last algorithms are part of a new signature scheme, BLISS,
that outperforms RSA signatures by a speed factor of more than 10.
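
As a point of comparison for the optimized samplers discussed in the talk, a naive rejection sampler for a discrete Gaussian over the integers fits in a few lines (a sketch only; the tail-cut parameter and the use of double-precision exp are illustrative assumptions, and real implementations need the high-precision care described above):

```python
import math
import random

def sample_discrete_gaussian(sigma, center=0.0, tail=10, rng=random):
    """Naive rejection sampler for a discrete Gaussian over Z.

    The target weight is proportional to exp(-(x-center)^2/(2*sigma^2))
    on the integers, restricted to a tail-cut window of tail*sigma
    around the center.
    """
    lo = int(math.floor(center - tail * sigma))
    hi = int(math.ceil(center + tail * sigma))
    while True:
        x = rng.randint(lo, hi)  # uniform integer candidate
        rho = math.exp(-((x - center) ** 2) / (2.0 * sigma * sigma))
        if rng.random() < rho:   # accept with probability rho
            return x
```

The rejection loop makes the running time variable and the transcendental evaluation is exactly where the high-precision cost appears, which is what the laziness techniques of the talk address.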

Arnaud Tisserand (CNRS/projet CAIRN, IRISA, Lannion)

Thursday, April 25 at 10h15 in the LUG meeting room:

Arithmetic countermeasures for protecting ECC
crypto-processors against certain observation attacks.

I will give a brief tour of the various arithmetic
techniques we currently use for protection against attacks based on
measuring power consumption, electromagnetic radiation, and/or
computation time.

Alin Bostan (projet SpecFun, INRIA Saclay)

Thursday, March 14 at 10h15 in the LUG meeting room:

Computer Algebra for Lattice Path Combinatorics

Classifying lattice walks in restricted lattices is
an important problem in enumerative combinatorics. Recently, computer
algebra methods have been used to explore and solve a number of
difficult questions related to lattice walks. In this talk, we will
give an overview of recent results on structural properties and
explicit formulas for generating functions of walks in the quarter
plane, with an emphasis on the algorithmic methodology.

Mehdi Tibouchi (NTT, Tokyo)

Wednesday, March 6 at 10h15 in the LUG meeting room (note: this is not
the usual day)

Structure-preserving signatures on symmetric
bilinear groups cannot have a single verification
equation

A “structure-preserving” signature (SPS) scheme is
a public-key digital signature scheme in which the public key, the
messages and the signatures are all tuples of bilinear group elements,
and signature verification is carried out by evaluating products of
pairings. This primitive has many applications to the construction of
cryptographic protocols. Over asymmetric bilinear groups (type III
pairings), optimal SPS schemes are known: one can show that a minimal
number of verification equations and a minimal signature size are
necessary for such a scheme to be (provably) secure, and constructions
attaining this lower bound are available. In contrast, SPS
constructions over symmetric bilinear groups (type I pairings) are
much less well understood. In this talk, we present the first lower
bound for SPS over a symmetric bilinear group, by establishing that
such a scheme with a single verification equation cannot be secure. To
do so, we reduce the problem of obtaining a known-message attack on
such a scheme to the search for a rational curve in the intersection
of an explicit family of quadrics in dimension 10, which we carry out
through a Gröbner basis computation. The same method also yields an
analogous lower bound for “one-time” structure-preserving signatures,
which provides a separation result between symmetric and asymmetric
bilinear groups. Joint work with Masayuki Abe (NTT, Japan) and Miyako
Ohkubo (NICT, Japan).

Matthias Messner (HIEPACS/Inria Bordeaux)

Thursday, February 21 at 10h15 in the LUG meeting room:

Improving the Running Time of a Chebyshev
Interpolation based Fast Multipole Method

A fast multipole method (FMM) for asymptotically
smooth kernel functions (1/r, 1/r^4, Gauss and Stokes kernels, radial
basis functions, etc.) based on a Chebyshev interpolation scheme has
been introduced in [1]. The method has been extended to oscillatory
kernels (e.g., the Helmholtz kernel) in [2]. We present optimizations
of the multipole-to-local (M2L) operator (the costliest FMM operator)
for these formulations and address both potential uses of the FMM: (1)
a single matrix-vector product requires a quick precomputation, and
(2) many matrix-vector products (iterative solution of a system of
linear equations) require a quick application of the M2L operator.
First, we propose an improvement of the optimizations introduced in
[1, 2]: an individual recompression of the M2L operators leads to a
provably smallest computational cost among the class of dense M2L
operators. The second bottleneck of the existing optimizations is
their precomputation time. We introduce a new set of optimizations
that exploit all symmetries in the arrangement of the far-field
interactions. This allows us to express all M2L operators as
permutations of a small subset of operators only. Besides drastically
reducing precomputation time and memory requirements, this approach
also paves the road for various algorithmic improvements, such as a
more efficient use of the processor memory cache through the use of
matrix-matrix products instead of matrix-vector products. We conclude
the talk with numerical benchmarks of the optimizations of the M2L
operator and an in-depth analysis of the best-performing variants.

References

[1] W. Fong and E. Darve. The black-box fast multipole method. J. of
Comput. Phys., 228(23):8712–8725, 2009.

[2] M. Messner, M. Schanz, and E. Darve. Fast directional multilevel
summation for oscillatory kernels based on Chebyshev interpolation. J.
of Comput. Phys., 231(4):1175–1196, 2012.

Javier Segura (Universidad de Cantabria, Santander,
Spain)

Thursday, January 31 in room 116:

Computation and inversion of cumulative chi-squared
distributions

Both the direct computation and the inversion of
the cumulative $\chi^2$ distributions (also called gamma
distributions) are used in many problems in statistics, applied
probability and  engineering. Algorithms for the numerical evaluation
and inversion of the incomplete gamma function ratios  (central
$\chi^2$ distributions) $P(a,x)=\gamma(a,x)/\Gamma(a)$ and
$Q(a,x)=\Gamma(a,x)/\Gamma(a)$  are described for positive values of
$a$ and $x$. The method for computing the ratios $P(a,x)$ and
$Q(a,x)$ uses different approximations (Taylor series, continued
fractions and uniform asymptotic expansions) in different parameter
regions; it is similar to Gautschi’s method, but with the addition of
uniform asymptotic expansions for large values of $a$ and
$x$. Non-central gamma distribution functions (also called Marcum
functions) are functions of three variables $P_{\mu}(x,y)$,
$Q_{\mu}(x,y)$ which reduce to  central gamma distributions when
$x=0$. We describe algorithms for the computation and inversion of the
non-central cumulative distributions based on series expansions,
integral representations,   asymptotic expansions and the use of
three-term homogeneous recurrence relations.
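
The Taylor-series region of such a scheme can be sketched as follows (an illustrative fragment only, assuming the standard series $P(a,x)=x^a e^{-x}\sum_{n\ge 0} x^n/\Gamma(a+n+1)$; the algorithms of the talk switch to continued fractions and uniform asymptotic expansions in the other parameter regions):

```python
import math

def gamma_p(a, x, tol=1e-16):
    """Lower regularized incomplete gamma ratio P(a,x) via its Taylor
    series P(a,x) = x^a e^{-x} * sum_{n>=0} x^n / Gamma(a+n+1),
    which converges well for x not much larger than a.
    The complementary ratio is Q(a,x) = 1 - P(a,x)."""
    if x < 0 or a <= 0:
        raise ValueError("need x >= 0 and a > 0")
    if x == 0:
        return 0.0
    # First term x^a e^{-x} / Gamma(a+1), computed in log form.
    term = math.exp(a * math.log(x) - x - math.lgamma(a + 1.0))
    total = term
    n = 0
    while term > tol * total:
        n += 1
        term *= x / (a + n)   # ratio of consecutive series terms
        total += term
    return total
```

A convenient sanity check is the identity $P(1/2, x) = \operatorname{erf}(\sqrt{x})$, which relates the central $\chi^2$ distribution with one degree of freedom to the error function.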

Pierre Lairez (Inria Saclay)

Thursday, January 24 in room 116:

Computing the periods of multiple integrals

Multiple integrals of rational functions depending
on parameters satisfy differential equations. I will detail some
applications of these equations in arithmetic and in combinatorics.
The question that then arises is how to compute these equations. I
will present a recent algorithm that we developed with Alin Bostan and
Bruno Salvy, starting from a method due to Griffiths and Dwork. From a
concrete point of view, the computations reduce to manipulating
matrices with polynomial coefficients. From a more theoretical
computer-algebra point of view, we obtain new bounds on the size of
these differential equations, as well as a complexity bound that
significantly improves on what was previously known. One factor in
this improvement is that this algorithm, for the first time, avoids
the computation of a certificate, which allows a posteriori
verification of the computation but whose size can be much larger than
that of the desired equation.

Cyril Bouvier (LORIA/Caramel)

Thursday, January 17 in room 116:

ECM on GPU

In this talk, I will present a new implementation of
the Elliptic Curve Method (ECM) factoring algorithm for graphics
cards (GPUs). This implementation uses Montgomery curves, like
GMP-ECM, but with a different implementation and a different algorithm
for the scalar multiplication. As there is no modular arithmetic
library (like GMP) available for GPUs, it was necessary to implement
modular arithmetic on arrays of unsigned integers from scratch, while
taking into account the constraints of GPU programming. The code,
written for NVIDIA GPUs using CUDA, was optimized for factoring
1024-bit integers.
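
A toy model of the kind of limb-level routine this requires is sketched below (in Python with 32-bit limbs; the actual CUDA kernels are organized quite differently, but the carry-propagation logic is the same):

```python
# Toy model of multi-limb modular addition on arrays of 32-bit words,
# little-endian limb order, the kind of primitive a GPU modular
# arithmetic layer must provide.
MASK32 = 0xFFFFFFFF

def limbs_add(a, b):
    """Schoolbook addition with carry propagation; returns n+1 limbs."""
    carry, out = 0, []
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & MASK32)
        carry = s >> 32
    out.append(carry)
    return out

def limbs_sub(a, b):
    """a - b, assuming a >= b; returns len(a) limbs."""
    borrow, out = 0, []
    for x, y in zip(a, b):
        s = x - y - borrow
        out.append(s & MASK32)
        borrow = 1 if s < 0 else 0
    return out

def limbs_ge(a, b):
    """Compare equal-length limb arrays, most significant limb first."""
    for x, y in zip(reversed(a), reversed(b)):
        if x != y:
            return x > y
    return True

def add_mod(a, b, m):
    """(a + b) mod m for equal-length limb arrays, assuming a, b < m."""
    s = limbs_add(a, b)          # n+1 limbs, since a + b < 2m
    m1 = m + [0]                 # align lengths for the comparison
    if limbs_ge(s, m1):
        s = limbs_sub(s, m1)     # single conditional subtraction
    return s[:len(m)]
```

The conditional subtraction is the branch that GPU code typically replaces by a predicated (branch-free) select, to keep all threads of a warp in lockstep.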

Ron Steinfeld (Monash University)

Wednesday, January 16 at 10h15 in room 116 (note: this is not the
usual day):

Secure multiparty computation from graph coloring

A secure multiparty computation protocol allows a
set of parties, each holding a private input, to jointly compute a
function of these private inputs, while preserving the secrecy of the
private inputs. Such protocols have numerous applications in settings
where mutually distrusting parties need to collaborate in a
computation (e.g. auctions). The classical approach to constructing
such protocols for general functions (in the computationally
unbounded, passive attack model) reduces the task to a computation
over a finite field.

In this talk, we introduce the topic of secure multiparty computation,
and then present an alternative to the classical approach for general
functions, one that reduces the task to a computation over a
non-Abelian group. We then show how to implement such protocols over
black-box groups via a reduction to a graph colouring problem, and
present some constructions for such colourings.

This talk is based on joint works with (subsets of) Yvo Desmedt, Josef
Pieprzyk, Xiaoming Sun, Christophe Tartary, Huaxiong Wang and Andrew
Chi-Chih Yao.

Romain Lebreton (LIRMM)

Thursday, December 20 in room 116:

Relaxed algorithms, p-adic lifting and polynomial
system solving

This talk is about p-adic lifting by relaxed algorithms. In a first
part, we introduce relaxed algorithms and their application to the
computation of recursive p-adic numbers. To compute the solutions of
systems of equations, one has to transform the given implicit
equations into recursive ones. We detail this transformation for
linear equations, possibly differential, and also for the lifting of
solutions of polynomial systems. In all cases, these new relaxed
algorithms are compared to existing algorithms, both in theory and in
practice. This talk sums up different works in collaboration with J.
Berthomieu, A. Bostan, M. Chowdhurry, B. Salvy and É. Schost.

Vincenzo Innocente and Danilo Piparo (CERN)

Thursday, December 6 in room 116:

Computational challenges in Experimental High Energy
Physics

Data analysis in High Energy Physics, from
acquisition to publication, is computationally intensive. We will
describe the process that transforms raw data into physics results at
the LHC, with a focus on computational challenges in terms of data
volume, speed, and numerical accuracy. This process involves many
different types of software applications, ranging from simple signal
processing to multivariate data analysis, with very different
requirements on the precision and accuracy of numerical computations.
In particular, the need for both high-precision numerical libraries
and faster algorithms with tunable precision will be underlined.

Paul Zimmermann (LORIA/Caramel)

Thursday, November 15 at 14h in the LIP council room (note: this is
neither the usual time nor the usual room):

Non-linear polynomial selection for NFS (joint work
with Thomas Prest and Nathan Lecoanet).

Abstract: polynomial selection for factoring an
integer N with the Number Field Sieve (NFS) consists in finding two
polynomials f(x) and g(x) with integer coefficients having a common
root m modulo N. In other words, we want Res(f(x),g(x)) = k*N, with k
an integer as small as possible (k=1 or k=-1 are optimal). The best
known algorithms use a linear polynomial g(x) (of the form p*x-q). For
instance, the factorization of RSA768 used f(x) of degree 6 and g(x)
of degree 1. Yet experiments show that if we knew how to find optimal
pairs of non-linear polynomials (for example of degrees 5 and 2), we
could speed up the sieving phase, which is the most expensive one
(1500 CPU years for RSA768). The talk will describe the state of the
art in non-linear polynomial selection, and will give some possible
directions for finding optimal polynomials.
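
The linear case mentioned above is typically obtained by the classical base-m construction: writing N in base m yields f of degree d with f(m) = N and g(x) = x - m, so that Res(f,g) = ±f(m) = ±N, i.e. k = ±1. A quick sketch of this standard construction (function names are ours):

```python
def inth_root(N, k):
    """Floor of the k-th integer root of N (float guess + correction)."""
    x = int(round(N ** (1.0 / k)))
    while x ** k > N:
        x -= 1
    while (x + 1) ** k <= N:
        x += 1
    return x

def base_m_selection(N, d):
    """Base-m polynomial selection: pick m just above N^(1/(d+1)) and
    write N = sum c_i m^i.  Then f = sum c_i x^i and g = x - m share
    the root m modulo N, and Res(f, g) = +- f(m) = +- N."""
    m = inth_root(N, d + 1) + 1      # guarantees m**(d+1) > N
    coeffs, v = [], N
    for _ in range(d + 1):
        coeffs.append(v % m)         # c_i, coefficient of x**i
        v //= m
    assert v == 0
    return coeffs, m

def horner_int(coeffs, x):
    r = 0
    for c in reversed(coeffs):
        r = r * x + c
    return r
```

In practice the digits are then re-centered into [-m/2, m/2) and the pair is further optimized; the talk is about escaping this linear-g setting altogether.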

Philippe Théveny (LIP/Aric)

Thursday, October 25 in room 115 (note: this is not the usual room):

Interval matrix multiplication algorithms:
Implementation issues and accuracy comparison.

More than ten years ago, S. Rump proposed to use
the midpoint-radius (midrad) representation when multiplying interval
matrices. With this representation, interval matrix products can be
performed through successive calls to the point matrix product
routines of an optimized BLAS library. In 2011, H.D. Nguyen and Rump
improved the original algorithm from the point of view of both
efficiency and accuracy. This talk will deal with the implementation
issues of these algorithms, especially parallelization issues on
multicores. The differences in accuracy will also be presented,
through both theoretical error analysis and numerical experiments.
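
The core of the midrad product can be sketched as follows (an exact-arithmetic version only, in plain Python; a rigorous floating-point implementation additionally needs the directed-rounding adjustments that Rump and Nguyen describe):

```python
def mat_mul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def mat_abs(A):
    return [[abs(x) for x in row] for row in A]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def midrad_mul(mA, rA, mB, rB):
    """Enclosure of [A]*[B] for midrad interval matrices <mA, rA> and
    <mB, rB>: midpoint mA*mB and radius |mA|*rB + rA*(|mB| + rB),
    which dominates |mA|*rB + rA*|mB| + rA*rB entrywise."""
    mC = mat_mul(mA, mB)
    rC = mat_add(mat_mul(mat_abs(mA), rB),
                 mat_mul(rA, mat_add(mat_abs(mB), rB)))
    return mC, rC
```

Each of the three products here is an ordinary point matrix product, which is why the scheme maps directly onto optimized BLAS calls.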

Guillaume Quintin (LIX)

Thursday, October 18 in room 116

Implementations of decoding algorithms: from Mathemagix
to a dedicated library.

An error-correcting code with minimum distance d
can, in theory, unambiguously correct up to (d - 1) / 2 errors. This
natural bound is almost never attained in polynomial time. There
nevertheless exists a family of codes for which a polynomial-time
algorithm is known that corrects more than (d - 1) / 2 errors.
However, no implementation dedicated to this algorithm exists. I will
start with a few quick reminders on error-correcting codes and their
decoding algorithms. I will then briefly present the Mathemagix
system and some of its packages relevant to error-correction
problems. I will give some details on the variant mechanism, which
makes heavy use of C++ templates, and explain the consequences of this
mechanism, which in particular allows algorithms to be written
generically. Finally, I will explain why I started writing a library
dedicated to decoding algorithms in C rather than C++, and give some
implementation details to explain the choices I made to achieve a
certain genericity with respect to the alphabet of the codes.

Pierre-Jean Spaenlehauer (LIP6, Paris)

Thursday, October 11 in room 116:

Solving Boolean quadratic systems and
applications in cryptology

I will first briefly present the main problems
and contributions of my PhD work on certain families of structured
polynomial systems (determinantal systems, multi-homogeneous systems,
critical-point systems). Their connection with various application
domains (cryptology, real geometry, minimization problems) will also
be discussed.

I will then speak more specifically about Boolean quadratic systems,
which appear frequently in cryptology. We give a probabilistic (Las
Vegas) algorithm that combines exhaustive search and sparse linear
algebra on Macaulay matrices in order to find their solutions, with
complexity bounded by O(2^(0.792n)) under precise algebraic
assumptions. A deterministic variant (of complexity O(2^(0.841n))) is
also proposed. Experiments on random systems show that these
assumptions hold with probability close to 1. Joint work with
Magali Bardet, Jean-Charles Faugère and Bruno Salvy.
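
As a baseline for the hybrid algorithm above, plain exhaustive search over F_2^n looks as follows (a toy sketch only; the talk's algorithm enumerates only part of the variables and handles the rest by sparse linear algebra on Macaulay matrices):

```python
from itertools import product

def solve_boolean_quadratic(n, equations):
    """Exhaustive search over F_2^n.  Each equation is a function
    mapping an n-tuple of bits to an integer; a solution makes every
    equation vanish mod 2.  Cost O(2^n) evaluations, which is the
    baseline that the O(2^(0.792n)) algorithm of the talk improves."""
    return [x for x in product((0, 1), repeat=n)
            if all(eq(x) % 2 == 0 for eq in equations)]
```

For example, the system {x0*x1 + x2 = 0, x0 + x1 + 1 = 0} over F_2 forces x0 and x1 to differ, hence x0*x1 = 0 and x2 = 0.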

# 2011-2012

Benoît Libert (Université Catholique de Louvain)

Thursday, July 5, 10h15-11h15 in Amphi A (note: this is not the usual
location):

Zero-knowledge databases with short proofs and
extensions

Commitment schemes are a cryptographic primitive
that can be seen as the digital equivalent of a safe or a sealed
envelope. Namely, the content remains secret until the envelope is
opened. At the same time, everyone is convinced that the sender has
defined a unique message and cannot change his mind about it before
the opening phase.

In 2003, Micali, Rabin and Kilian (MRK) defined a primitive, called
zero-knowledge sets (ZKS), which allows a prover to commit to a secret
set S – or even an elementary database – so as to be able to prove
statements of the form “element x belongs to S” or “element x does
not belong to S” without revealing anything else, not even the size
of S.

Chase et al. (Eurocrypt 2005) showed that ZKS protocols are underlain
by a cryptographic primitive called mercurial commitment. A mercurial
commitment scheme has two commitment procedures. At committing time,
the sender can choose not to commit to a specific message and rather
generate a dummy value which it will be able to softly open to any
message without being able to completely open it. Hard commitments, on
the other hand, can be hardly or softly opened to only one message. At
Eurocrypt 2008, Catalano, Fiore and Messina (CFM) introduced an
extension called trapdoor q-mercurial commitment (qTMC), which allows
committing to a vector of q messages. These qTMC schemes are
interesting since their openings w.r.t. specific vector positions can
be short (ideally, the opening length should not depend on q), which
provides zero-knowledge sets with much shorter proofs when such a
commitment is combined with a Merkle tree of arity q. The CFM
construction notably features short proofs of non-membership as it
makes use of a qTMC scheme with short soft openings. However, hard
openings still have size O(q), which prevents proofs of membership
from being as compact as those of non-membership. In this talk, we
present a solution to this problem and describe a qTMC scheme where
hard and soft position-wise openings, both, have constant size. For
suitable parameters, our system allows compressing the proofs of the
MRK solution to 13% of their initial length. We also show that our
scheme is amenable to constructing independent ZKS schemes, which
prevent cheating provers from correlating their set to the sets of
honest provers.

Ioana Pasca (LIP/Aric)

Thursday, June 28, 10h15-11h15 in room 117:

Formal Proofs for Taylor Models in Coq

The aim of this talk is to give an idea of how to
formally certify Taylor models in the Coq proof assistant. The concept
of Taylor model for a real function is an instance of a more general
concept, a rigorous polynomial approximation (RPA) for that function.
A RPA is defined as a pair formed of a polynomial and of an interval
which contains the approximation error between the polynomial and the
function. In the case of Taylor models for elementary functions the
polynomial is the Taylor polynomial and the error interval is given by
the Lagrange remainder. For composite functions we introduce an
arithmetic on Taylor models.

We implemented an abstract interface for RPAs in Coq. We then provide
an abstract interface for Taylor models as a special kind of RPAs. We
prove the correctness of our Taylor models at the abstract interface
level, therefore every instance of this interface will benefit from
the proofs. The difficulty in doing the proofs at this level is to
find the minimal assumptions on the abstract structures that are
needed to ensure correctness and that are verified by the concrete
structures.

In our implementation we use a uniform approach to compute the Taylor
models of elementary functions: we use recurrence relations to compute
the Taylor polynomial as well as the remainder. All functions we are
interested in satisfy recurrence relations of order 1 or of order 2.
For each kind of these recurrences we provide one generic proof that
can be then specialized to the desired function. We also provide
proofs of correctness for Taylor models of composite functions.

Besides the specific proofs for Taylor models, we will discuss some
general techniques that are used in the formal certification of
computation inside proof assistants.
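
The RPA contract is concrete enough to illustrate numerically: a pair (polynomial, error interval) such that the approximation error stays in the interval over the whole domain. A small sketch for exp on [0, b] (in Python, with a numerical check standing in for the Coq proof):

```python
import math

def taylor_model_exp(n, b):
    """Degree-n Taylor model of exp at 0 on [0, b]: the Taylor
    polynomial together with an interval [0, r] containing the error,
    from the Lagrange remainder
        exp(x) - p(x) = exp(xi) * x**(n+1) / (n+1)!,  xi in [0, x],
    which is nonnegative and at most exp(b)*b**(n+1)/(n+1)! on [0, b]."""
    coeffs = [1.0 / math.factorial(k) for k in range(n + 1)]
    r = math.exp(b) * b ** (n + 1) / math.factorial(n + 1)
    return coeffs, (0.0, r)

def horner(coeffs, x):
    s = 0.0
    for c in reversed(coeffs):
        s = s * x + c
    return s
```

The formal development proves this containment once at the abstract interface level, so that every concrete instance (here, exp) inherits the guarantee.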

Anja Becker (PRISM, Univ. Versailles St Quentin)

Thursday, June 21, 10h15-11h15 in room 117:

How to improve information set decoding exploiting that
1+1 equals 0 over F_2.

At Eurocrypt 2010, Howgrave-Graham and Joux
described an algorithm for solving hard knapsacks of n elements and
density close to 1 using a new technique that we call the
representation technique. This improved a 30-year-old algorithm by
Shamir and Schroeppel. In this talk we will present the following:
using the simple observation that the solution to the knapsack
problem, a binary vector, can be represented by two overlapping
vectors with coefficients in {-1,0,1}, we can improve on the above
result. Furthermore, the technique can be applied to improve
information set decoding for attacking linear codes such as those used
in McEliece public-key encryption. We can improve on recent results on
half-distance decoding, such as ball-collision decoding and
May-Meurer-Thomae ISD, in terms of asymptotic time complexity.
Previous attempts split the solution vector into disjoint parts. We
search for the solution as a decomposition into two overlapping
vectors, which permits a reduction of the running time in the
asymptotic case.

The talk is based on the publications:

http://eprint.iacr.org/2011/474

http://eprint.iacr.org/2012/026

joint work with Jean-Sébastien Coron, Antoine Joux, Alexander
Meurer and Alexander May.

Vincent Lefèvre (LIP/AriC)

Thursday, June 14, 10h15-11h15 in room 117:

Title to be announced

Abstract to be announced

Thibault Hilaire (LIP6, Univ. P. et M. Curie)

Thursday, June 7, 10h15-11h15 in room 117:

From filter/controller to code : the optimal
implementation problem

The implementation of signal processing algorithms
(or control algorithms) in embedded digital devices is a difficult,
error-prone and time-consuming task, essentially due to
finite-precision problems. The fixed-point arithmetic used in that
context implies a deterioration in the performance and characteristics
of the filters/controllers, due to the quantization of the embedded
coefficients and the roundoff errors occurring in the computations.

In this presentation, we propose to detail the “optimal implementation
problem” and to list the various possible actions required to deal
with the precision/performance/implementation-cost tradeoff. A global
method to transform real filters/controllers into fixed-point
implementations will be presented, and compared to the floating-point
function evaluation problem.
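
The coefficient-quantization effect mentioned above is easy to exhibit: rounding a coefficient to a fixed-point grid with f fractional bits perturbs it by at most 2^-(f+1). A toy illustration (per-coefficient rounding only; the method of the talk optimizes the whole filter realization, not each coefficient in isolation):

```python
def quantize(x, f):
    """Round x to the fixed-point grid with f fractional bits
    (step 2**-f); the quantization error is at most 2**-(f+1)."""
    step = 2.0 ** -f
    return round(x / step) * step

def quantize_coeffs(coeffs, f):
    """Quantize every coefficient of a filter/controller."""
    return [quantize(c, f) for c in coeffs]
```

Choosing f is exactly the precision/implementation-cost side of the tradeoff: each extra fractional bit halves the worst-case coefficient perturbation but widens every multiplier in the datapath.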

Claude-Pierre Jeannerod (LIP/Aric)

Thursday, May 31, 10h15-11h15 in room 117:

On Kahan’s algorithm for the accurate computation of 2
by 2 determinants.

Determinants of 2 by 2 matrices are basic numerical
kernels, used for example for implementing complex arithmetic or
geometric predicates, whose naive evaluation in floating point can
suffer severe cancellations and eventually produce inaccurate answers.
Kahan gave an algorithm that is known to reduce the effect of such
cancellations. In this talk we will see that Kahan’s algorithm is in
fact *always* highly accurate. Specifically, we shall present a
complete rounding error analysis of this algorithm and establish, for
radix r and unit roundoff u, that its absolute error is at most
(r+1)/2 ulps of the exact result, that its relative error is at most
2*u, and that these bounds are essentially optimal.

This is joint work with Nicolas Louvet and Jean-Michel Muller.
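
Kahan's algorithm recovers the rounding error of one of the two products exactly with an FMA. A Python sketch (the FMA is emulated here through exact rational arithmetic, since math.fma only appeared in Python 3.13):

```python
from fractions import Fraction

def fma(x, y, z):
    # Correctly rounded x*y + z, emulated exactly via rationals
    # (math.fma provides this natively in Python >= 3.13).
    return float(Fraction(x) * Fraction(y) + Fraction(z))

def kahan_det2x2(a, b, c, d):
    """Kahan's accurate evaluation of det [[a, b], [c, d]] = a*d - b*c.
    Per the analysis discussed above, the relative error is at most 2u."""
    w = b * c
    e = fma(b, c, -w)   # exact rounding error: e = b*c - w
    f = fma(a, d, -w)   # rounded a*d - w
    return f - e        # (a*d - w) - (b*c - w) = a*d - b*c
```

On a nearly singular matrix such as a = 1 + 2^-27, d = 1 - 2^-27, b = c = 1, the naive formula a*d - b*c returns exactly 0.0, while Kahan's algorithm returns the exact determinant -2^-54.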

Assia Mahboubi (INRIA Saclay et LIX)

Thursday, May 24, 10h15-11h15 in room 117:

Computer algebra and formal proofs, in Coq

Computer algebra systems and proof assistants are
two families of software whose purpose is to use a computer to do
mathematics. While computer algebra is mainly concerned with
algorithms and computational efficiency, formal proof strives for a
very high guarantee of the correctness of the formalized statements.
One of the distinctive features of the Coq system is the place that
the notion of computation occupies in the logical formalism underlying
this proof assistant. One consequence is that this system provides a
genuine functional programming language, which can be used to model
but also to execute (relatively efficiently) programs, thereby
guaranteeing the correctness of the computed values. This use of
computation, combined with the recent development of a substantial
number of libraries of formalized mathematics, is particularly well
suited to the implementation and certification in Coq of algorithms
from computer algebra. In this talk, we will present some experiments
around the marriage of computer algebra and formal proofs, which
strive to exploit as well as possible the respective advantages of
these two programming styles by making them cooperate.

David Saunders (Univ. Delaware, USA)

Thursday, May 10, 10h15-11h15 in room 117:

15 years of LinBox

LinBox is a C++ software library for high
performance exact linear algebra (dense, sparse, and structured
matrices over integers, finite fields).  It is developed now within
the HPAC project. I will briefly review the history and central ideas
of LinBox, and then discuss several current opportunities. The central
ideas include blackbox algorithms (iterative methods in surprising
places), efficient dense matrix operations (use of numeric BLAS for
exact results), and generic design for effective combination of
general and specialized algorithms.  As I am a visitor to LIP until
July, I will then emphasize several areas where collaborations might
be possible either to advance LinBox capabilities or exploit current
ones.  Current opportunities include several in the area of
symbolic/numeric hybrid computation and blackbox/dense combinations.

Christiane Frougny (Liafa)

Thursday, May 3, 10h15-11h15 in room 117:

Minimal alphabets for parallel addition

It is well known that addition in a standard number
system has linear complexity, which is too slow for an efficient
arithmetic processor. In 1961, Avizienis gave a parallel addition
algorithm in base 10 with digits {-6, -5, ..., 5, 6} in which carries
do not propagate; this yields constant (parallel) complexity. In the
language of symbolic dynamics, such an addition is a local function
(sliding block code). This technique is used to speed up
multiplication and division algorithms.

In this work we are interested in number systems whose base is an
algebraic number beta, |beta|>1. When all the algebraic conjugates of
beta have modulus different from 1, one can find a symmetric alphabet
of signed integer digits on which addition can be performed in
parallel. This algorithm is a generalization of that of Avizienis.

The addition algorithm is simple, but the resulting alphabet is in
general larger than necessary. We give lower bounds on the minimal
cardinality of an alphabet allowing parallel addition, and then
present a series of cases where the bound is attained.

Joint work with Edita Pelantová and Milena Svobodová
(CTU, Prague)
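
Avizienis's base-10 scheme can be written down directly (a sketch; digits are stored little-endian, and the position-wise loop is exactly what runs in parallel in hardware, since the carry into position i+1 depends only on the digit sum at position i):

```python
def avizienis_add(a, b):
    """Carry-free addition in base 10 on signed digits {-6, ..., 6}.
    Every position is processed independently of all the others
    (constant parallel time): the carry t[i+1] depends only on s[i]."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    s = [a[i] + b[i] for i in range(n)]   # digit sums, in {-12, ..., 12}
    w = [0] * n                           # interim digits, |w[i]| <= 5
    t = [0] * (n + 1)                     # carries, in {-1, 0, 1}
    for i in range(n):                    # independent per position
        if s[i] >= 6:
            t[i + 1], w[i] = 1, s[i] - 10
        elif s[i] <= -6:
            t[i + 1], w[i] = -1, s[i] + 10
        else:
            w[i] = s[i]
    # w[i] + t[i] stays in {-6, ..., 6}: no further carry is created.
    return [w[i] + t[i] for i in range(n)] + [t[n]]

def digits_value(d):
    """Value of a little-endian signed-digit string in base 10."""
    return sum(x * 10 ** i for i, x in enumerate(d))
```

The key invariant is that the interim digit satisfies |w[i]| <= 5, so adding the incoming carry never leaves the alphabet, which is why the carry chain has length one.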

David Thomas

Thursday, April 26 in room 117:

Notions of quality and accuracy for FPGA accelerated
Monte-Carlo applications

Monte-Carlo simulations have slow convergence and
so require ridiculous numbers of random trials in order to provide an
accurate answer, but for many high-dimensional problems they are the
only numerical tool available. Fortunately Monte-Carlo simulations are
also embarassingly parallel, so there is huge scope for acceleration
using devices such as GPUs and FPGAs, particularly if one is willing
to take advantage of low-precision number formats. However, this
becomes dangerous: how can one guarantee (or even define) quality and
accuracy for a massively parallel simulation utilising many trillions
of independent low-precision random trials?

This talk will explore two specific areas of concern when using
hardware-accelerated Monte-Carlo: are the sources of uniform random
bits “random” enough, and do the processes which shape them into
non-uniform random samples actually produce the correct distribution?
The first part will examine the use of GF2-based linear recurrences
for generating random bits, and question whether traditional software
notions of lattice-based randomness make sense for hardware. The
second part will consider the role of accuracy and ULP-ness in
statistical code, showing that (sometimes) a cheap low-precision table
lookup is more accurate than a traditional high-precision
function-approximation with rounding guarantees.
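As a concrete example of a GF(2)-linear recurrence of the kind discussed in the first part (Marsaglia's well-known xorshift32, not a generator from the talk), here is a minimal Python sketch; every output bit is a GF(2)-linear function of the seed bits, which is exactly the structure whose hardware suitability and lattice quality the talk questions.

```python
def xorshift32(seed):
    """Generator yielding Marsaglia's xorshift32 sequence.

    The state update is linear over GF(2): xor-ing the state with
    shifted copies of itself is multiplication by a fixed binary
    matrix. With the (13, 17, 5) triple the period over the nonzero
    states is 2**32 - 1.
    """
    x = seed & 0xFFFFFFFF
    assert x != 0, "the all-zero state is a fixed point"
    while True:
        x ^= (x << 13) & 0xFFFFFFFF
        x ^= x >> 17
        x ^= (x << 5) & 0xFFFFFFFF
        yield x
```

From seed 1 the first output is 270369; the sequence is fully deterministic, which is what makes lattice-style quality analysis possible in the first place.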

Mohab Safey el Din

Friday, April 20, amphitheater B:

Polynomial systems and real solutions:
algorithms, complexity, implementations and applications.

Many applied problems can be modeled by polynomial
systems. In many situations, information about the real solutions of
these systems is expected. In this talk, we will focus on three
important specifications: testing the existence of solutions and
computing algebraic certificates of positivity, eliminating a block of
quantifiers, and connectivity testing.

The challenge is to obtain algorithms for these problems that are
efficient both in practice and in theory. We will survey recent
advances on these three topics, emphasizing the mostly intuitive and
geometric ideas that made them possible. Particular attention will also
be paid to the interactions between symbolic and numerical computation
in this context.

This talk is based on joint work with B. Bank, S. Basu, J.-C. Faugère,
M. Giusti, A. Greuet, J. Heintz, M.-F. Roy, E. Schost and L. Zhi.

George Labahn (Cheriton School of Computer Science, University of Waterloo)

Tuesday, April 3, 14h, amphitheater I (ground floor, GN1 Nord):

Fast Computation of Minimal Nullspace Bases

In this talk I describe a new deterministic
algorithm for the computation of a minimal nullspace basis of an
$m\times n$ input matrix of univariate polynomials over a field
$\mathbb{K}$ with $m\le n$. This algorithm computes a minimal
nullspace basis of a degree $d$ input matrix with a cost of
$O^{\sim}\left(n^{\omega}\left\lceil md/n\right\rceil \right)$ field
operations in $\mathbb{K}$. The same algorithm also works in the more
general situation of computing a shifted minimal nullspace basis, with
a given degree shift $\vec{s}\in\mathbb{Z}^{n}$ whose entries bound
the corresponding column degrees of the input matrix. In this case an
$\vec{s}$-minimal right nullspace basis can be computed with a cost of
$O^{\sim}(n^{\omega}\rho/m)$ field operations, where $\rho$ is the sum
of the $m$ largest entries of $\vec{s}$.

This is joint work with Wei Zhou and Arne Storjohann of Waterloo.
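As a tiny hand check of the object being computed (not the fast algorithm of the talk), the following Python sketch verifies that a single polynomial vector spans the right nullspace of a small 2x3 degree-1 matrix; the degrees of such a basis vector are exactly what a minimal nullspace basis keeps small.

```python
# Polynomials over Q are coefficient lists, constant term first.
def p_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def p_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def mat_vec(A, v):
    """Product of a polynomial matrix with a polynomial vector."""
    rows = []
    for row in A:
        acc = [0]
        for a, b in zip(row, v):
            acc = p_add(acc, p_mul(a, b))
        rows.append(acc)
    return rows

def is_zero(p):
    return all(c == 0 for c in p)

x, one, zero = [0, 1], [1], [0]
# A = [[1, x, 0], [0, 1, x]] has m = 2 < n = 3, so its right nullspace
# has dimension 1; it is spanned by v = (x^2, -x, 1).
A = [[one, x, zero], [zero, one, x]]
v = [p_mul(x, x), [0, -1], one]
```

The vector v here has degree 2; the shifted column-degree bounds in the cost statement above control how large these basis degrees can get for general inputs.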

Erik Martin-Dorel (LIP/Aric)

Thursday, March 29, 10h15-11h15, room B1:

Rigorous polynomial approximation: Taylor models inside
the Coq proof assistant

One of the most common and practical ways of
representing a real function on machines is by using a polynomial
approximation. It is then important to properly handle the error
introduced by such an approximation. A typical way to provide rigorous
polynomial approximations relies on the concept of Taylor model, which
consists of a pair combining a Taylor approximation polynomial with an
interval remainder that bounds the approximation errors.

Our goal is to provide an efficient implementation of Taylor models
with formally guaranteed error bounds. A key requirement was to be
able to perform all the computations at stake inside the formal proof
checker. After a brief introduction to the Coq proof assistant, we
will thus present how we carry out this work in Coq, with a special
focus on genericity and efficiency for our implementation. On some
examples inspired by the TaMaDi project, we will compare the
performance of our implementation in Coq with that of the Sollya
tool, which contains an implementation of Taylor models written in C.

The talk is based on joint work with N. Brisebarre, M. Joldes,
M. Mayero, J.-M. Muller, I. Pasca, L. Rideau and L. Théry that will be
presented at NASA Formal Methods 2012.

Key-Words: Coq proof assistant, rigorous polynomial approximation,
guaranteed error bounds, Taylor models, interval arithmetic.
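To make the pair (polynomial, interval remainder) concrete, here is a toy Taylor model for exp on [0, 1/2] in plain Python floats (an illustrative sketch only; the Coq development additionally accounts for the rounding of the computations themselves): the Lagrange bound e^(1/2) (1/2)^5 / 5! encloses the approximation error over the whole interval.

```python
import math

# Taylor model for exp on [0, 0.5]: degree-4 Taylor polynomial at 0
# together with a remainder bound valid on the whole interval.
DEG = 4
COEFFS = [1 / math.factorial(k) for k in range(DEG + 1)]
# Lagrange remainder: |exp(x) - p(x)| <= max(exp) * x^5 / 5! on [0, 0.5].
REMAINDER = math.exp(0.5) * 0.5 ** (DEG + 1) / math.factorial(DEG + 1)

def taylor_model_exp(x):
    """Return (p(x), bound) with |exp(x) - p(x)| <= bound on [0, 0.5]."""
    assert 0.0 <= x <= 0.5
    p = 0.0
    for c in reversed(COEFFS):      # Horner evaluation
        p = p * x + c
    return p, REMAINDER
```

The bound here is about 4.3e-4, while the worst actual error on the interval is about 2.8e-4, illustrating that the interval remainder is a rigorous but not tight enclosure.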

Jérémie Detrey (Inria Lorraine/Loria)

Thursday, March 22, 10h15-11h15, room 117:

Automatic search for formulas computing bilinear
forms

In this talk, we consider the problem of bilinear
rank: given a bilinear map (for instance, the product of polynomials,
of finite-field elements, or of matrices), what is the minimal number
of multiplications over the base field needed to evaluate this map?

For example, Karatsuba's method computes the product of two linear
polynomials using only three multiplications instead of four.

We give a formalization of the bilinear rank problem, which is known
to be NP-hard, and propose a general algorithm for efficiently
computing exact solutions, which allow us to prove the optimality of,
or even to improve, some bounds from the literature.
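The Karatsuba example can be spelled out concretely; a minimal Python sketch multiplying a0 + a1·x by b0 + b1·x with three base-field multiplications instead of four:

```python
def karatsuba_linear(a0, a1, b0, b1):
    """Product (a0 + a1*x)(b0 + b1*x) with only 3 multiplications.

    Returns the coefficients (c0, c1, c2) of c0 + c1*x + c2*x^2.
    The middle coefficient is recovered from the three products
    using additions and subtractions only.
    """
    m0 = a0 * b0                   # multiplication 1
    m2 = a1 * b1                   # multiplication 2
    m1 = (a0 + a1) * (b0 + b1)     # multiplication 3
    return m0, m1 - m0 - m2, m2
```

Three is optimal here: the bilinear rank of the product of two linear polynomials is 3, which is the kind of statement the exact-search algorithm of the talk certifies.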

Brigitte Vallée (Greyc)

Thursday, March 15, 10h15-11h15, room 117:

On the importance of the “discipline” of sources

In algorithmics, one very often manipulates words,
notably in text algorithms; even more often, one manipulates what are
commonly called keys, notably in sorting and searching algorithms, or
in databases. From a more realistic perspective, a key must really be
viewed as a finite sequence of symbols, that is, as a word. The unit
cost of a comparison between two keys is then replaced by the cost of
a comparison between two words, equal to the number of symbols
compared. General algorithmics then becomes text algorithmics, and the
process that produces the words, called the source, acquires great
importance.

Our program, described in the talk, is thus the following:

(a) Give a meaning to what is called a “general source”, and classify
sources, exhibiting interesting subclasses related in particular to
dynamical systems.

(b) Define a generating series (of Dirichlet type) canonically
associated with the source, and relate the classification of sources
to the (analytic) classification of their Dirichlet series.

(c) Exhibit an important property of sources, called “discipline”,
and, in the case of dynamical sources, give sufficient conditions that
imply discipline.

Thursday, March 8, 10h15-11h15, room 117:

Variants of the Learning With Errors problem

Lattice-based cryptography is currently developing
rapidly. It has many attractive features, such as its simplicity, its
potential efficiency, its apparent resistance to quantum attacks and,
above all, its security proofs under hardness assumptions on
well-understood algorithmic problems. One of these is the average-case
Learning With Errors (LWE) problem. Many cryptographic primitives are
built from this problem and from its structured variant RLWE. The
(R)LWE problems are known to be as hard to solve as worst-case lattice
problems.

This talk will start by presenting LWE and its connection to
cryptography. We will then present its structured variant RLWE, and
relate the two problems through a problem denoted MLWE. The hardness
of MLWE will be studied by generalizing the existing hardness proofs
for LWE and RLWE. The last part will address a parameter of LWE,
denoted q, which defines the ring in which the problem is set.
Existing reductions prove the hardness of (R)LWE only under very
restrictive conditions on q, which for instance prevent a more
efficient adaptation to RLWE of LWE-based cryptographic constructions.
Our work, combined with existing results, shows that the arithmetic
form of q has no influence on the hardness of the LWE, RLWE and MLWE
problems.

Paolo Ienne (Ecole Polytechnique Fédérale de
Lausanne)

Thursday, March 1, 10h15-11h15, room 117:

The Irresistible Charm of Automation

For decades, researchers in computer arithmetic have
explored empirically the available design space and have identified
through their intuition some incredibly cunning solutions which have
progressively enriched the tools of the trade of the designers. With
complexity growing exponentially and short design time becoming
increasingly critical to market dominance, it is high time to begin
automating the design process at these relatively high levels of
abstraction. Despite the impressive progress of logic synthesis in the
past decades, finding the best architecture for a given circuit still
remains an open and largely unsolved problem. In most arithmetic
circuits, the outcome of the synthesis tools depends on the input
description of the circuit. In other words, logic synthesis
optimizations hardly change the architecture of the given circuit.
Achieving useful results in the synthesis of arithmetic components is
far from trivial because the space of possible solutions is enormous,
often the problem constraints are ill defined or complex to capture,
human intuition is incredibly powerful to cope with such broad
constraints, and more than half a century of human research is hard to
challenge using only dumb computers. Nevertheless, the temptation for
design automation to penetrate these classically empirical domains is
irresistible and some tangible advances are possible: even reproducing
some results known in literature, fully automatically and without
embedding any prior knowledge, is a significant achievement. And,
occasionally, automation can lead to unexpected rewards.

Eleonora Guerrini (LIP)

Thursday, February 9, 10h15-11h15, room 117:

An algebraic characterization of systematic
error-correcting codes

In this talk, an algebraic model for the
characterization of systematic error-correcting codes is presented.
Interest in this family comes from the wish to generalize linear codes
in order to search for codes that are optimal from the point of view
of distance. Indeed, it can be shown that every linear code is
equivalent to a systematic code, but there exist non-linear systematic
codes whose distance is larger than that of any linear code sharing
the same parameters n (length) and k (dimension). This approach rests
on the fact that every systematic code corresponds to the reduced
Groebner basis of its vanishing ideal. Thanks to this correspondence,
we describe an algorithm which, given integers n, k, d, provides the
characterization of the systematic codes with parameters n, k and
distance at least d. The central point of the algorithm is the
computation of the Groebner basis of a certain ideal B(n,k,t), which
is invariant under the action of a permutation group and enjoys
inclusion properties with respect to other ideals of the same type
(e.g. B(n+1,k,t+1) and B(n+1,k+1,t)). With similar techniques, it is
also possible to formulate an algorithm for computing the distance
distribution of a systematic code, as well as a new bound on the
distance of such codes.

Philippe Langlois (DALI/LIRMM, Univ. Perpignan)

Thursday, February 2, 10h15-11h15, room 117:

On accurate and correctly rounded summation algorithms
and their real-world performance

Abstract: It is fascinating that a problem as
seemingly simple as computing the sum of N floating-point numbers is
the subject of so many algorithms. The challenge is (at least)
twofold: being accurate while being fast. Regarding the accuracy of
the computed sum, one distinguishes backward-stable algorithms, those
that improve the accuracy obtained but still depend on the condition
number (simulating an arbitrary computing precision), and finally
those that (almost always) compute the best possible result. Recent
publications offer several examples of this last category, which is in
some sense the Holy Grail of the field. Once the best accuracy is
attained, speed becomes the discriminating factor, and we will focus
on this aspect. We will explain why published results aiming to “prove
that my algorithm is faster than the previous ones” are not entirely
satisfactory. We will present directions, tools and results that allow
a better appreciation of the real performance potential of these
accurate algorithms.
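As a concrete instance of the "improved accuracy, still condition-dependent" category (standard textbook material, not an algorithm from the talk), Kahan's compensated summation tracks the rounding error of each addition in a single extra float:

```python
def kahan_sum(xs):
    """Compensated summation of a list of floats.

    The compensation term c recovers the low-order bits lost when
    t = s + y is rounded, giving an error bound essentially
    independent of the number of terms (but still proportional to
    the condition number of the sum).
    """
    s = 0.0
    c = 0.0                  # running compensation
    for x in xs:
        y = x - c            # apply the previous correction
        t = s + y
        c = (t - s) - y      # the part of y that was lost in t
        s = t
    return s
```

On a sum like one million copies of 0.1, the naive recursive sum drifts by about 1e-6 while the compensated sum stays within a couple of ulps of the exact result.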

Florent de Dinechin (LIP)

Thursday, January 26, 10h15-11h15,
room 115 (note: this is not the usual room):

All you ever wanted to know about division by 3 (for
various values of 3)

Quite often, application-specific architectures have to divide by a
small constant, for example when computing a running average over a
window of size 3 or 9, or for binary/decimal conversion. There has
been a lot of research on multiplication by a constant, and of
course these works can also be applied to division by a constant.
We will present two original methods specific to division by a small
constant.

The first observes that a constant like 1/3 is periodic, and tries
to exploit it systematically in several classical constant
multiplication techniques. This can be generalized to multiplication
by arbitrary rational constants.

The second is based on the classical division iteration, which when
the constant is small can be implemented as a small look-up table.
This is a perfect match for the fine structure of an FPGA. In both
cases we will present architectures for integer and floating-point
operators implemented in FloPoCo. For floating-point, we will also
show that correct rounding to the nearest comes for free with both
algorithms.
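The reciprocal-multiplication idea that such methods refine can be sketched in a few lines of Python (the classic 32-bit software trick, simulated with integer arithmetic; the FPGA architectures of the talk are different): the binary expansion of 1/3 is periodic (0.010101...), and rounding 2^33/3 up gives a "magic" constant.

```python
MAGIC3 = 0xAAAAAAAB   # (2**33 + 1) // 3: the periodic bits of 1/3, rounded up

def div3_u32(n):
    """Unsigned 32-bit division by 3 via one multiply and one shift.

    n * MAGIC3 = (n * 2**33 + n) / 3, so the shifted value is
    floor(n/3 + n / (3 * 2**33)); the extra term is below 1/6 for
    n < 2**32, too small to push the floor past floor(n / 3).
    """
    assert 0 <= n < 2 ** 32
    return (n * MAGIC3) >> 33
```

In hardware, the periodicity of the constant is what the first method of the talk exploits: the repeating bit pattern lets partial products be shared instead of recomputed.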

Valérie Berthé (CNRS/Liafa)

Thursday, January 12, 10h15-11h15, room 117:

Title: Discrete lines and gcd computations: a
dynamical viewpoint

Abstract: This talk is motivated by the effective
construction of approximations of discrete lines in space. We present
a construction method that relies on unimodular gcd algorithms. Among
these, one finds methods defined from lattice reduction algorithms, as
well as Euclid-type algorithms that approximate a direction by
sequences of nested cones generated by the columns of unimodular
matrices with integer entries. We will focus in particular on the
latter methods and on their associated dynamical systems, in order to
attempt to compare them.
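A minimal sketch of the Euclid-type construction (standard continued-fraction material, not the talk's algorithms): multiplying elementary unimodular matrices yields columns p, q whose cones close in on a given direction, and every product keeps determinant ±1.

```python
import math

def cf_cone_matrices(alpha, steps):
    """Unimodular matrices whose columns give nested cones around (alpha, 1).

    This is the matrix form of Euclid's algorithm / continued fractions:
    the product [[a0,1],[1,0]] * ... * [[ak,1],[1,0]] is unimodular and
    its first column (p_k, q_k) is the k-th convergent of alpha.
    """
    M = [[1, 0], [0, 1]]
    out = []
    x = alpha
    for _ in range(steps):
        a = int(x)                       # partial quotient
        # Multiply M on the right by the elementary matrix [[a, 1], [1, 0]].
        M = [[M[0][0] * a + M[0][1], M[0][0]],
             [M[1][0] * a + M[1][1], M[1][0]]]
        out.append([row[:] for row in M])
        x = 1.0 / (x - a)
    return out
```

For alpha = sqrt(2) the convergents 1/1, 3/2, 7/5, 17/12, … appear as first columns; the determinant alternating between +1 and -1 is the unimodularity that the lattice viewpoint of the talk relies on.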

Sylvie Boldo (INRIA/Proval et LRI)

Thursday, January 5, 10h15-11h15, room 117:

Title: Proving numerical analysis programs

Abstract: This talk will present the results of the
FOST project: we have formally proved a C program implementing a
simple numerical scheme for solving the one-dimensional acoustic wave
equation. We annotated this program and proved it completely. This
includes the absence of runtime errors, the usual mathematical bound
on the method error due to the numerical scheme and, of course, a
bound on the rounding errors.

Jingyan Jourdan-Lu (LIP/Arénaire and
STMicroelectronics)

Thursday, December 15, 10h15-11h15, room 117:

Title: Software for squaring floats on ST231: a case
study in bringing floating-point to VLIW integer processors

Abstract: In this talk we will consider the problem
of computing IEEE floating-point squares by means of integer
arithmetic. We will show how the specific properties of squaring can
be exploited in order to design and implement algorithms that have
much lower latency than those for general multiplication, while still
guaranteeing correct rounding.

Our algorithm descriptions are parameterized by the floating-point
format, aim at high instruction-level parallelism (ILP) exposure, and
cover all rounding modes.

We will show further that their C implementation for the binary32
format yields efficient codes for targets like the ST231 VLIW integer
processor from STMicroelectronics, with a latency at least 1.75x
smaller than that of general multiplication in the same context. We
will conclude by examining the impact of such special operators on
some FFT-related floating-point applications. The speedup is around
1.28x to 1.46x.

Key words: squaring, binary floating-point arithmetic, correct
rounding, IEEE 754, instruction-level parallelism, C software
implementation, VLIW integer processor.
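One property that makes binary32 squaring comfortable can be checked directly in Python (a numerical observation about the format, not the ST231 integer implementation of the talk): since 2 x 24 <= 53, the exact square of a binary32 number fits in binary64, so rounding the binary64 product once to binary32 yields the correctly rounded square with no double-rounding issue.

```python
import struct

def f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def square_binary32(x):
    """Correctly rounded binary32 square of a binary32 input x.

    x * x is exact in binary64 (at most 48 significand bits needed,
    53 available), so a single final rounding to binary32 gives the
    correctly rounded result.
    """
    return f32(x * x)
```

The same argument fails for general binary32 multiplication only in that it still holds (24 + 24 <= 53); it is the narrower input range and sign symmetry of squaring that the talk's integer algorithms additionally exploit for latency.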

Sylvain Collange

Thursday, December 8, 10h15-11h15, room 117:

Title: Single-Instruction, Single-Data,
Multiple-Thread: how to kill 32 birds with one stone.

Abstract: Parallel applications running on GPUs show that control
flow, computations and memory accesses are often highly uniform across
threads, which frequently perform the same computations on close values.

This effect can be leveraged in many ways to improve the energy
efficiency of parallel architectures, thanks to hardware mechanisms
such as SIMD execution, memory access coalescing, instruction sharing,
cache and memory compression.

In particular, we will examine how execution units and memory access
units can be optimized when the high-order bits or the floating-point
exponents of their parallel inputs are the same.

Christoph Lauter (LIP6, Péquan team)

Thursday, December 1, 10h15-11h15, room 117:

Title: Faithful and correct rounding of the L2 norm of a
vector of length n (joint work with S. Graillat, S. Oishi and N.
Yamanaka)

Abstract:

Starting from the work presented by Brisebarre et al. at ARITH 20 on
the correct rounding of the norm of vectors of length 2, we will
analyze the feasibility of a faithful, or even correct, rounding of
the L2 norm of floating-point vectors of length n. Clearly, because of
the variable length of the vector, this work cannot rely on a
worst-case search; it must instead exploit the algebraic nature of the
“norm” function.

An analysis shows that the correctly rounded square root

* of the correct rounding of sum x_i^2 would give a faithful rounding
of the norm,

* while that of a faithful rounding of sum x_i^2 does not give a
faithful rounding of the norm.

However, it is not necessary to extract the square root of a
double-double approximation of sum x_i^2 to obtain a faithful rounding
of the norm: the intermediate sum of a compensated summation algorithm
can be rounded to a single floating-point number before the square
root extraction.

This analysis then allows us to present several algorithms for
computing the sum sum x_i^2 that are adapted to the norm problem.
These compensated algorithms behave more or less well on current
pipelined and parallel processors, which we investigate.
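A minimal Python sketch of one such compensated sum of squares (the generic error-free-transformation recipe, not necessarily the exact algorithm of the talk): the rounding error of each square and each addition is captured exactly, the errors are accumulated in one float, and the compensated sum is rounded to a single float before the square root.

```python
import math

SPLITTER = 134217729.0           # 2**27 + 1, Dekker's splitting constant

def two_square(x):
    """x*x = p + e exactly, with p = fl(x*x) (Dekker/Veltkamp split)."""
    p = x * x
    c = SPLITTER * x
    hi = c - (c - x)
    lo = x - hi
    e = ((hi * hi - p) + 2.0 * hi * lo) + lo * lo
    return p, e

def two_sum(a, b):
    """a + b = s + e exactly, with s = fl(a + b) (Knuth's 2Sum)."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def compensated_norm(xs):
    """Faithfully rounded sqrt(sum x_i^2) via a compensated sum of squares."""
    s = 0.0
    err = 0.0
    for x in xs:
        p, e1 = two_square(x)
        s, e2 = two_sum(s, p)
        err += e1 + e2           # low-order terms, folded into one float
    return math.sqrt(s + err)    # round the compensated sum, then sqrt
```

Note that `s + err` is exactly the "intermediate sum rounded to a single float before the square root extraction" described above; no double-double square root is needed.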

Faithful rounding is the basis for correct rounding of the norm of a
vector. We show that a naive algorithm is easily obtained by simply
composing known summation algorithms. This algorithm would however be
very inefficient: it would require up to three passes over the sum of
the x_i^2, which is made prone to cancellation by adding the opposite
of the square of the faithfully rounded norm.

To reduce this number of passes, we present a modification of Ziv's
rounding test, traditionally used for elementary functions. The
modified test not only decides whether correct rounding is already
possible, but also computes the midpoint between the two
floating-point numbers at which the rounding changes. This makes it
possible to reduce the number of passes over the sum to one in most
cases, and to two in the few hard-to-round cases.

Marc Mezzarobba (LIP/Arénaire)

Thursday, November 24, 10h15-11h15, room 117:

Title: On the numerical evaluation of D-finite
functions

Abstract: I will give an overview of my PhD work,
whose common theme is the numerical evaluation of so-called D-finite
functions. D-finite functions are the solutions of linear differential
equations with polynomial coefficients. The goal is to develop tools
for their evaluation (and for various related problems) that work from
a differential equation together with a suitable set of initial
conditions.

The leitmotiv is to propose methods that are at once general (covering
as far as possible all D-finite functions), guaranteed (rigorous error
bounds) and efficient (both in practice and from a complexity
standpoint). The work presented explores three main directions.

1. The computation of *tight bounds* on sequences that satisfy linear
recurrences with polynomial coefficients, which include as a special
case the power series expansions of D-finite functions. The bounds
obtained are used repeatedly to guarantee the accuracy of the
numerical computations.

2. The practical implementation of a high-precision *numerical
analytic continuation* algorithm, proposed by Chudnovsky and
Chudnovsky at the end of the 1980s and later extended by van der
Hoeven. While not the fastest at low precision, this algorithm is the
basis of an efficient and guaranteed approach for evaluating any
D-finite function at any point of its domain of definition, at
precisions reaching millions of decimal digits.

3. The computation of *approximation polynomials*. The question here
is to obtain, at low cost, a polynomial of prescribed degree that
approximates a given D-finite function well, in the sense of the
uniform norm on an interval, together with a bound on the
approximation error. Such polynomials are useful, for example, to
evaluate the function at moderate precision at several points of the
interval.

Regarding the first two points, care is taken to cover the case of
regular singular points of the differential equations. Most of the
algorithms involved are implemented, notably in the Maple module
NumGfun presented to the Arénaire seminar at the end of 2010. Some of
this work was carried out in collaboration with Alexandre Benoit,
Mioara Joldeş or Bruno Salvy.

Nicolas Brunie (LIP/Arénaire and Kalray)

Thursday, November 17, 10h15-11h15, room 117:

Title: Deep integration of reconfigurable fabrics

Abstract: FPGAs have triggered great interest in
reconfigurable computing. However, their integration into complete
systems often appears as the new bottleneck. To overcome this
difficulty, deeply integrated reconfigurable fabrics have been
proposed. Such fabrics are located closer to the CPU than external
FPGAs, and can use it for control and memory access purposes.

Our work focuses on a massively parallel architecture and proposes a
new reconfigurable architecture for high performance dataflow-oriented
computing on this platform.

We will study the system (work in progress) with the low-level
compilation framework, the reconfigurable structure and its
application to the Advanced Encryption Standard, focusing on the
arithmetic specificities of the reconfigurable fabric needed to
efficiently support this cryptographic application.

Markus Püschel (ETH Zürich)

Friday, November 4, 10h15-11h15, room 116:

Spiral: Can We Teach Computers to
Write Fast Libraries?

Writing fast software has become extraordinarily
difficult. For optimal performance, programs and their underlying
algorithms have to be adapted to take full advantage of the platform’s
parallelism, memory hierarchy, and available instruction set. To make
things worse, the best implementations are often platform-dependent
and platforms are constantly evolving, which quickly renders libraries
obsolete.

In this talk we present Spiral (www.spiral.net), a domain-specific
program generation system for important functionality used in signal
processing, communication, and scientific computing including linear
transforms and filters, Viterbi decoders, and basic linear algebra
routines. Spiral completely replaces the human programmer. For a
desired function, Spiral generates alternative algorithms, optimizes
them, compiles them into programs, and “intelligently” searches for
the best match to the computing platform. The main idea behind Spiral
is a mathematical, declarative, domain-specific framework to represent
algorithms and the use of rewriting systems to generate and optimize
algorithms at a high level of abstraction. Optimization includes
parallelization for vector architectures, shared and distributed
memory platforms, and even FPGAs. Experimental results show that the
code generated by Spiral competes with, and sometimes outperforms, the
best available human-written code. Spiral has been used to generate
part of Intel’s commercial libraries IPP and MKL.

Laurent-Stéphane Didier (LIP6, on CRCT research leave
within Arénaire)

Thursday, October 20, 10h15-11h15, room 117:

Estimating dynamic ranges
by statistical methods

In practice, designers of embedded applications
build their numerical models, test them and simulate them in floating
point, and then port them to their target architectures. These are
generally hardware platforms without a floating-point unit, so in this
context the most suitable representation is fixed point. In order to
size the fixed-point representation correctly, two steps are
necessary: the first consists in determining the dynamic ranges within
which the variables will evolve, the second the precision with which
the computations can be carried out. In this task, designers of
embedded applications rarely have the means to estimate these two
elements quickly and simply.

In this talk, a statistical model for estimating the dynamic ranges of
variables is proposed, along with some practical examples.

Arne Storjohann (University of Waterloo)

Thursday, October 13, 10h15-11h15, room 117:

Inverting integer and polynomial matrices

The exact inverse of an integer or polynomial matrix
of dimension n can require a factor of n more space to write down than
the input matrix. Traditional inversion methods, such as Chinese
remaindering or Newton iteration, cost a factor of n more than
this.  In this talk I survey three recent improvements in computing
and representing the inverse.  The first is the double-divide and
conquer method of Jeannerod & Villard (2005), the first nearly
optimal algorithm in the case of generic polynomial matrices.  The
second is the sparse inverse expansion formula, which gives a
representation as a straight line formula whose size is only a
logarithmic factor larger than the input matrix.  Finally, I will
discuss the outer product adjoint formula, which gives a Las Vegas
algorithm to compute the exact inverse of a well conditioned integer
matrix in nearly optimal time.

Jean-Michel Muller (CNRS, LIP/Arénaire)

Thursday, October 6, 10h15-11h15, room 117:

Some issues related to double roundings

Double rounding is a phenomenon that may occur when
different floating-point precisions are available on the same system, or
when performing scaled operations whose final result is subnormal.
Although double rounding is, in general, innocuous, it may change the
behavior of some useful small floating-point algorithms. We analyze
the potential influence of double roundings on the Fast2Sum and 2Sum
algorithms, and on some summation algorithms. We also show how to
handle possible double roundings when performing scaled Newton-Raphson
division iterations (to avoid possible underflow problems).
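The two algorithms under study are short enough to quote (here in Python, where every operation is a correctly rounded binary64 operation, so no double rounding occurs); in that setting both satisfy a + b = s + t exactly, which is the property double roundings can break:

```python
def fast2sum(a, b):
    """Dekker's Fast2Sum: requires |a| >= |b|; then a + b = s + t exactly."""
    s = a + b
    z = s - a
    t = b - z
    return s, t

def two_sum(a, b):
    """Knuth's 2Sum: no condition on a and b; a + b = s + t exactly."""
    s = a + b
    ap = s - b
    bp = s - ap
    return s, (a - ap) + (b - bp)
```

The exactness can be verified with rational arithmetic, since every binary64 value is a rational number.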

Didier Henrion (LAAS-CNRS, University of Toulouse,
France, and Faculty of Electrical Engineering, Czech Technical
University in Prague, Czech Republic)

Tuesday, September 27, 10h15-11h15, room 117:

Polynomial optimisation, LMI and dynamical systems

The first part of the talk surveys achievements of
the last decade on polynomial optimisation and semidefinite
programming (linear matrix inequalities, LMI) with a focus on the
generalised problem of moments and existing software tools. The second
part of the talk deals with a recent extension of these techniques to
analysis of nonlinear ordinary differential equations and polynomial
optimal control.

Jean-François Biasse (University of Calgary).

Thursday, September 15, 10h15-11h15, room 117:

Module basis reduction and an application to list
decoding.

We consider the computation of a pseudo-basis of a
finite-dimensional R-module. Such a computation can be carried out by
bringing a pseudo-matrix into pseudo-HNF form, as described by Cohen
in “Advanced topics in computational number theory”. After defining
this generalization of the Hermite normal form, together with the
algorithms for computing it, we will present a recent algorithm due to
Fieker and Stehlé for computing reduced bases. We will study the link
between these algorithms and the list decoding of codes based on
number fields. Encoding with these codes proceeds by reducing the
message modulo pairwise coprime ideals, analogously to codes based on
the Chinese remainder theorem (CRT codes), Reed-Solomon codes, and
algebraic-geometry codes. Unlike their analogues, number-field codes
could not previously be decoded in polynomial time. I will present an
algorithm, obtained in collaboration with Guillaume Quintin, that
meets this challenge.

# 2010-2011

Xiao-Wen Chang (McGill University).

Tuesday, July 12, 13h30-14h30, room 117:

Lecture 1: Characterizing Matrices That Are Consistent
with Given Solutions.

This lecture is about ‘backward’ problems for linear
systems Fy = b. Specifically, given two vectors y and b we want to
find the sets of all matrices F such that y is the exact solution, the
ordinary least squares (OLS) solution, the data least squares (DLS)
solution, and the scaled total least squares (STLS) solution. A unified
unitary transformation approach to handling these problems will be
presented.

Thursday, July 21, 10h15-11h15, room 117:

Lecture 2: Backward Perturbation Analysis of Linear
Systems.

Given an approximate solution to the linear
equations, the OLS problem, the DLS problem or the STLS problem, this
lecture gives backward perturbation analyses to find a perturbation in
the data of minimal size such that the approximate solution is the
exact solution of the corresponding perturbed problem, using the
results derived in Lecture 1. The results are useful in two respects.
One is to check if a computed solution is a backward stable solution.
The other is to design effective stopping criteria for the iterative
solution of large sparse problems.

Thursday, June 16, 2011 at 10h15, room 118:

Marine Minier (INSA de Lyon, CITI laboratory).

New LFSRs and FCSRs representations for stream ciphers
hardware and software design.

In this talk, we will sum up our recent research results concerning
the introduction of a new representation for FCSRs based upon a
known LFSRs representation. This matrix-based representation makes it
possible to construct FCSRs with a more compact hardware representation and a
quicker diffusion while preserving the usual and proven good
properties (good periods, l-sequences, good statistical behaviors,
etc.). Moreover, this new approach circumvents the weaknesses of the
Fibonacci and Galois representations. We also show how to extend the
LFSRs representation to a particular LFSR case called the windmill
case.

LFSRs are well-known primitives used in cryptography, especially
for stream cipher design. However, they have some drawbacks
regarding their resistance against algebraic attacks because of
their linearity. FCSRs, on the contrary, are inherently resistant to
algebraic attacks due to the non-linearity of the update function.
Using the new representation, we propose two new stream ciphers
based on the so-called “ring” FCSR representation. The first
proposal called F-FCSR is dedicated to hardware applications whereas
the second proposal called X-FCSR is designed for software purposes
but is also efficient in hardware.
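As a small side illustration of the LFSR primitives discussed above (my own sketch, not the talk's ring FCSR construction), a Fibonacci LFSR with a primitive feedback polynomial attains the maximal period 2^n - 1:

```python
def lfsr_step(state, taps):
    # One step of a Fibonacci LFSR: shift left, feeding back the XOR
    # of the tapped cells (state is a tuple of bits).
    fb = 0
    for t in taps:
        fb ^= state[t]
    return state[1:] + (fb,)

def lfsr_period(state, taps):
    # Length of the state cycle starting from `state`.
    start, n = state, 0
    while True:
        state = lfsr_step(state, taps)
        n += 1
        if state == start:
            return n

# x^4 + x + 1 is primitive over GF(2): the 4-bit LFSR with taps {0, 1}
# visits all 2^4 - 1 nonzero states before returning to the start.
print(lfsr_period((1, 0, 0, 0), (0, 1)))  # → 15
```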

Thursday, June 9, 2011 at 10h15, room 118:

Laurent Fousse (Université Joseph Fourier,
Grenoble, CASYS team, LJK).

Benaloh's "dense" homomorphic encryption scheme, revised
and corrected.

In 1994, Josh Benaloh described a probabilistic "dense" homomorphic
encryption scheme that notably improves on the expansion factor of
the Goldwasser-Micali cryptosystem. Numerous papers have since used
this scheme in various applications (electronic voting, trust
computation in a network, non-interactive secret sharing…). I will
show that this scheme, as described in Benaloh's paper, is in fact
incorrect: some unfortunate choices of public key can lead to
ambiguous decryption. I will illustrate the impact of this error on
concrete applications, give an analysis of the problem by computing
the probability that incorrect parameters arise, and present a
corrected (and proven) version of the scheme.

This is joint work with Pascal Lafourcade and Mohamed Alnuaimi,
to be presented at AFRICACRYPT 2011.

Tuesday, May 31, 2011 at 15h, room 117:

Bruno Salvy (INRIA, EPI Algorithms).

Combinatorial Newton iteration and applications.

Many families of recursively defined discrete objects can be
modeled by systems of combinatorial equations. Newton's iteration
extends to this setting. It leads to a quadratically convergent
iterative method for computing structures. As a consequence, these
objects can be enumerated in quasi-optimal complexity. Moreover,
this iteration translates into a numerical scheme that is important
for efficient random generation. The talk will give an introduction
to this area and its ideas. This is joint work with Carine Pivoteau
and Michèle Soria.

Thursday, May 26, 2011 at 10h15, room 118:

Alexandre Kouyoumdjian (Arénaire).

Compressed caches on GPUs.

Graphics processing units (GPUs) run massively parallel
applications. They keep more than 10,000 concurrent threads in
flight to hide memory latencies. This extreme parallelism severely
limits the amount of local data available to each thread, in
particular as far as caches are concerned.

We observe, however, that the temporary data of general-purpose GPU
applications exhibit regular patterns: affine vectors. This
regularity can be exploited to compress GPU caches and thereby
virtually increase their capacity. Doing so reduces traffic to
external memory, improving GPU performance and/or energy
consumption.

Thursday, May 12, 2011 at 10h15, room 118:

San Ling (School of Physical and Mathematical
Sciences, Nanyang Technological University, Singapore).

On the Construction of Asymmetric Quantum
Error-Correcting Codes.

Quantum error-correcting codes were introduced to protect quantum
information from decoherence and quantum noise. Ever since their
introduction in the mid 1990’s, much progress has been made on their
construction. A rather prevalent idea that has proved to be very
useful is to construct quantum codes using classical
error-correcting block codes.

More recently, it was also established from experiments that, in
many quantum systems, phase-flip errors happen more frequently than
bit-flip errors. Asymmetric quantum error-correcting codes, which
take advantage of this asymmetry, have been introduced as a result
(as opposed to the more established symmetric ones which do not
distinguish between these two types of errors).

As in the case of classical error-correcting block codes, there
are known bounds constraining the parameters of quantum codes. One
such bound is the quantum Singleton bound, and the codes whose
parameters satisfy this bound are known as maximum distance
separable (MDS) quantum codes. An interesting problem is to
construct optimal quantum codes, in the sense that codes of better
parameters can be proved not to exist. Of course, a code that
attains some known bound on the parameters is therefore naturally
optimal.

In this talk, we will focus on the construction of optimal
asymmetric quantum codes using suitable classical error-correcting
block codes, via purely mathematical approaches (i.e., no physics!).
In particular, many MDS quantum codes and some other optimal ones
are constructed. No pre-requisite knowledge of error-correcting
codes will be assumed. Essential ideas and facts in coding theory
will be introduced.

Abstract available in PDF format.

Thursday, March 31, 2011 at 10h15, room 118:

Stéphane Gaubert (INRIA and CMAP, École
Polytechnique).

Tropical aspects of eigenvalue problems: a survey

We will give a survey of max-plus, or "tropical", spectral theory
in finite dimension, highlighting its relations with classical
spectral theory. Tropical eigenvalues indeed appear when studying
eigenvalue perturbation problems, or "zero-temperature" asymptotics
(transfer matrices). They make it possible to obtain, through
algorithms of a combinatorial nature (solving a parametric
assignment problem), the orders of magnitude (valuations of Puiseux
series) of the eigenvalues of perturbed matrices or pencils. This
approach solves singular cases in Lidskii's perturbation theory,
which deals with more or less generic perturbations of Jordan
blocks. The same approach yields scalings that can be exploited to
compute numerically the eigenvalues of matrices or matrix
polynomials while reducing the backward error (thus improving a
scaling proposed by Fan, Lin and Van Dooren). We will also give an
overview of the correspondences between spectral problems, possibly
nonlinear ones, and zero-sum games. This talk is based on joint
work with Marianne Akian and Ravindra Bapat (for the part on
perturbations) and with Meisam Sharify (for the application to
numerical computation).
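As a toy illustration of the tropical spectral theory surveyed above (my own sketch, not from the talk): the max-plus eigenvalue of an irreducible matrix equals its maximum cycle mean, which can be brute-forced for tiny matrices.

```python
from itertools import permutations

def max_cycle_mean(A):
    # Tropical (max-plus) eigenvalue of a matrix given as a list of rows:
    # the maximum, over all elementary cycles, of the mean arc weight.
    # Brute force over cycles on distinct nodes; fine for tiny matrices.
    n = len(A)
    best = max(A[i][i] for i in range(n))  # cycles of length 1
    for length in range(2, n + 1):
        for cyc in permutations(range(n), length):
            w = sum(A[cyc[i]][cyc[(i + 1) % length]] for i in range(length))
            best = max(best, w / length)
    return best

A = [[0, 3],
     [-1, 1]]
# Cycles: (0) has mean 0, (1) has mean 1, (0,1) has mean (3 + (-1))/2 = 1.
print(max_cycle_mean(A))  # → 1
```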

Thursday, March 10, 2011 at 10h00, room 118:

Adaptive precision and lattice basis reduction in
algebraic geometry

There is a growing trend to use numerical (or
symbolic-numeric) methods to compute quantities and structures of
interest to algebraic geometers. The core computation of this sort is
the use of homotopy methods to approximate solutions of polynomial
systems. In this talk, I will briefly describe how these methods work
(without assuming any prior knowledge of algebraic geometry),
including the use of adaptive precision linear algebra and parallel
computation. I will also briefly indicate two places where lattice
basis reduction methods have an impact in algebraic geometry. I will
not make any assumptions about prior knowledge of algebraic geometry
or numerical algebraic geometry.
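The homotopy idea can be sketched in one dimension (an illustrative toy of my own, not the adaptive-precision machinery of the talk): deform an easy polynomial into the target one and re-converge with Newton's method at each step.

```python
def newton(f, df, x, iters=20):
    # Plain Newton iteration for a scalar function.
    for _ in range(iters):
        x -= f(x) / df(x)
    return x

def track_root(x, steps=100):
    # Homotopy H(x, t) = (1 - t) * (x^2 - 1) + t * (x^2 - 3):
    # start at a root of the easy system x^2 - 1 and follow it,
    # step by step in t, to a root of the target x^2 - 3.
    for k in range(1, steps + 1):
        t = k / steps
        c = (1 - t) * 1 + t * 3          # H(x, t) = x^2 - c
        x = newton(lambda x: x * x - c, lambda x: 2 * x, x)
    return x

root = track_root(1.0)
print(abs(root - 3 ** 0.5) < 1e-12)  # → True
```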

Takeshi Ogita (Tokyo Woman’s Christian University)

Robust inverse matrix factorizations

Recently, several algorithms for inverse matrix
factorizations have been proposed by the author, including inverse
LU, QR, Cholesky and SVD. In this talk, a framework for such
algorithms and concrete algorithms are introduced. Numerical results
are presented showing the effectiveness of the proposed algorithms.

Thursday, February 24, 2011 at 10h15, room 118:

Xavier Pujol, Arénaire

Analysis of BKZ

Strong lattice reduction is the key element for most attacks
against lattice-based cryptosystems. Between the strongest but
impractical HKZ reduction and the weak but fast LLL reduction, there
have been several attempts to find efficient trade-offs. Among them,
the BKZ algorithm introduced by Schnorr and Euchner in 1991 seems to
achieve the best time/quality compromise in practice. However, no
reasonable time complexity upper bound was known so far for BKZ. We
give a proof that after Õ(n^3/k^2) calls to a k-dimensional HKZ
reduction subroutine, BKZ_k returns a basis such that the norm of
the first vector is at most about γ_k^(n/(2(k-1))) · (det L)^(1/n).
The main ingredient of the proof is the analysis of a linear dynamic
system related to the algorithm.
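Written out in display form, the bound from the abstract reads (b_1 the first vector of the returned basis, γ_k the Hermite constant in dimension k, L the lattice):

```latex
\|\mathbf{b}_1\| \;\lesssim\; \gamma_k^{\,n/(2(k-1))}\,(\det L)^{1/n}
\qquad \text{after } \tilde{O}\!\left(n^3/k^2\right) \text{ calls to HKZ reduction.}
```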

Thursday, February 3, 2011 at 10h15, room 118:

Martin Langhammer and Steven Perry, Altera

Thursday, January 20, 2011 at 10h15, room 118:

Thi Minh Tuyen NGUYEN (INRIA Saclay – Île-de-France,
EPI PROVAL).

Hardware-independent proofs of numerical programs

On recent architectures, a numerical program may give different
answers depending on the execution hardware and the compilation. Our
goal is to formally prove properties about numerical programs that
are true for multiple architectures and compilers. We propose an
approach that states the rounding error of each floating-point
computation whatever the environment and the compiler choices. This
approach is implemented in the Frama-C platform for static analysis
of C code. A small case study using this approach will be presented.

Wednesday, January 19, 2011 at 10h15, room 118:

Siegfried M. Rump (Institute for Reliable Computing,
Hamburg University of Technology).

Structured perturbations for matrices and pseudospectra

Important structured perturbations of matrices are, for example,
symmetric or Toeplitz or circulant perturbations. We show that in
many cases the unstructured and structured condition numbers
coincide. In particular an extension of the famous
Eckart/Young/Gastinel/Kahan Theorem is proved, namely that the
reciprocal of the (structured) condition number and the (structured)
distance to the nearest singular matrix coincide. This is true for
normwise perturbations, but changes completely for componentwise
perturbations. In the latter case a problem can be perfectly
well-conditioned (condition number one) for structured
perturbations, whereas it is arbitrarily sensitive to unstructured
perturbations.

Thursday, January 13, 2011 at 10h15, room 118:

Fabien Laguillaumie (GREYC, Université de Caen).

NICE: an imaginary cryptanalysis and a real
cryptanalysis

NICE is an encryption scheme based on the arithmetic of quadratic
fields, whose main interest is decryption in quadratic time. In this
talk, we will present total breaks of both the imaginary and the
real versions. In particular, we will present an original algorithm
for factoring integers of the form pq^2 that uses classical results
on quadratic forms and techniques for finding small roots of
polynomials. This algorithm is deterministic and its complexity,
exponential in general, depends essentially on the regulator of the
quadratic field of discriminant p. The NICE systems provide data
that make the complexity of our algorithm polynomial on those
inputs, and thus allow the secret key to be recovered within a few
seconds on a standard PC.

Joint work with G. Castagnos, A. Joux and P. Nguyen.

Thursday, January 6, 2011 at 10h15, room 118:

Nicolas Estibals (CARAMEL, LORIA).

A compact hardware accelerator for computing the
Tate pairing over supersingular curves at the 128-bit security
level

Pairings appear in a growing number of cryptographic protocols,
such as identity-based encryption or short signatures. Providing an
efficient implementation of cryptographic pairings on a wide range
of targets, and especially on embedded systems, has therefore
become an interesting challenge.

We present a novel method for designing compact hardware
accelerators for the computation of the Tate pairing over
supersingular curves in small characteristic. Because of their
limited embedding degree, these curves are usually not considered
for reaching the standard 128-bit security level: indeed, the
finite field over which such a curve must be defined at this
security level is very large. To mitigate this effect, we consider
supersingular curves defined over finite fields with a moderately
composite extension degree (F_{p^{nm}} with n "small" and m prime).
These curves then become vulnerable to attacks based on Weil
descent, but a careful analysis of those attacks allows us to show
that their impact remains limited and that security can be kept
above 128 bits.

We then apply this method to a supersingular curve defined over
F_{3^{5·97}}, and describe an FPGA implementation of an accelerator
computing the pairing at the 128-bit security level. On a mid-range
FPGA (Xilinx Virtex-4), this accelerator computes the pairing in
2.2 ms on an area limited to 4755 slices.

Thursday, December 16, 2010 at 10h15, room 118:

Ioana Pasca (Arénaire, LIP).

Formal verification for numerical methods.

In this talk, I will present the main contributions of my PhD
thesis, prepared under the supervision of Yves Bertot in the
Marelle team at INRIA Sophia Antipolis and defended on November 23,
2010.

The thesis revolves around the formalization of mathematics in the
Coq proof assistant with the goal of verifying numerical methods.
More precisely, it concentrates on formalizing the concepts that
arise when solving systems of linear and nonlinear equations.

In this setting, we analyzed Newton's method, which is commonly
used to approximate the solutions of an equation or of a system of
equations. The goal was to formalize Kantorovich's theorem, which
establishes the convergence of Newton's method towards a solution,
the uniqueness of the solution in a neighborhood, the convergence
speed, and the local stability of the method. The study of this
theorem required formalizing concepts of multivariate analysis.
Building on these classical results on Newton's method, we showed
that rounding at each step preserves the convergence of the method,
with a well-determined correlation between the accuracy of the
input data and that of the result. In joint work with Nicolas
Julien, we also formally studied computations with Newton's method
carried out within a library for exact real arithmetic.
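A minimal numerical sketch of the rounded-Newton phenomenon described above (my own illustration, unrelated to the Coq formalization itself): round each Newton iterate for sqrt(2) to a fixed number of digits and observe that convergence is preserved up to that precision.

```python
from fractions import Fraction

def rounded_newton_sqrt2(digits, iters=30):
    # Newton iteration x <- (x + 2/x) / 2 for sqrt(2) over exact
    # rationals, rounding every iterate to `digits` decimal places to
    # model a finite-precision computation.
    scale = 10 ** digits
    x = Fraction(3, 2)
    for _ in range(iters):
        x = (x + 2 / x) / 2
        x = Fraction(round(x * scale), scale)  # the rounding step
    return x

approx = rounded_newton_sqrt2(digits=12)
print(abs(float(approx) - 2 ** 0.5) < 1e-11)  # → True
```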

For linear systems of equations, we considered systems whose
associated matrix has interval coefficients. When solving such
systems, an important question is whether the associated matrix is
regular. We provided the formal verification of a collection of
regularity criteria for interval matrices.

See also the link to the manuscript.

Thursday, December 9 at 10h15, room 118:

Alexandre Benoit and Marc Mezzarobba (Algorithms,
INRIA Rocquencourt).

Generalized Fourier Series for Solutions of Linear
Differential Equations (A. Benoit)

Chebyshev polynomials, Hermite polynomials, Bessel functions and
other families of special functions each form a basis of some
Hilbert space. A Generalized Fourier Series is a series expansion in
one of these bases, for instance a Chebyshev series. When such a
series solves a linear differential equation, its coefficients
satisfy a linear recurrence equation. We interpret this equation as
the numerator of a fraction of linear recurrence operators. This
interpretation lets us give a general algorithm for computing this
recurrence, and a simple view of existing algorithms for several
specific function families.

Joint work with Bruno Salvy.

NumGfun: numerical and analytic computations with
D-finite functions (M. Mezzarobba)

The purpose of this talk is to present NumGfun, a Maple module
devoted to the "analytic" manipulation of solutions of linear
differential equations with polynomial coefficients (so-called
D-finite, or holonomic, functions). The main features of NumGfun
concern the numerical evaluation of D-finite functions and the
symbolic computation of "asymptotically tight" bounds controlling
their behavior. In particular, NumGfun contains what seems to be
the only general implementation of the algorithms for the numerical
analytic continuation of D-finite functions given by D.V. and G.V.
Chudnovsky at the end of the 1980s. I will demonstrate the use of
NumGfun and come back to some of the algorithms it relies on,
notably those ensuring the numerical connection between ordinary
points and regular singular points of differential equations.

Thursday, December 2, 2010 at 10h15, room 118:

Romain Cosset (CARAMEL, LORIA).

Computing isogenies between genus-2 curves

Isogenies are a very important tool for curve-based cryptography.
Among other things, they make it possible to transfer a discrete
logarithm computation to a group where it is easier. Vélu's
formulas allow their computation for genus-1 curves. In this talk,
I will describe an algorithm for computing them between arbitrary
abelian varieties, and in particular between Jacobians of genus-2
hyperelliptic curves.

This is joint work with Gaëtan Bisson, David Lubicz and Damien
Robert.

Thursday, November 25, 2010 at 10h15, room 118:

David Monniaux (CNRS).

On using sums-of-squares for exact computations without
strict feasibility.

In order to prove that a system of real polynomial inequalities has
no solution, the usual method is to compute a cylindrical algebraic
decomposition (CAD), for instance using Mathematica's Reduce
function. CAD
implementations, especially those that implement optimizations
necessary to go beyond toy examples, are notoriously complex and
trusting such a massive amount of code for applications such as
proving mathematical theorems or proving safety-critical programs is
methodologically disputable.

Instead of trusting a complex implementation, or building a
certified version thereof, we would prefer the complex algorithm
produces an easily checkable “witness” of its answer. For systems of
polynomials inequalities, the Positivstellensatz theorems ensure the
existence of witnesses made of sums of squares of polynomials. As a
particular case, a nonnegative polynomial can often be expressed as
a sum of squares of polynomials, and can always be expressed as a
quotient of sums of squares of polynomials.
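A tiny sketch of such a witness check (my own illustration, not the talk's algorithm): given candidate polynomials, verify that the sum of their squares equals the target polynomial, which is essentially what a proof checker would do with an emitted certificate.

```python
def poly_mul(p, q):
    # Multiply dense coefficient lists (index = degree).
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def check_sos(target, witnesses):
    # Check that target equals the sum of squares of the witnesses.
    acc = [0]
    for w in witnesses:
        acc = poly_add(acc, poly_mul(w, w))
    n = max(len(acc), len(target))
    acc = acc + [0] * (n - len(acc))
    tgt = list(target) + [0] * (n - len(target))
    return acc == tgt

# x^4 + 2x^2 + 1 = (x^2 + 1)^2, so [1, 0, 1] is an SOS witness.
print(check_sos([1, 0, 2, 0, 1], [[1, 0, 1]]))  # → True
```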

Such sums of squares are solutions of a semidefinite programming
feasibility problem (that is, given F_0, …, F_n symmetric
matrices, finding y_i such that -F_0 + \sum_i y_i F_i is
semidefinite positive, that is, has no negative eigenvalues).
Unfortunately, direct application of semidefinite programming
numerical methods is only possible if the solution set has nonempty
interior. This is not the case in general.

We propose a workaround for this problem, based on LLL lattice
reduction.

Thursday, November 18 at 10h15, room 118:

Guillaume Melquiond (INRIA, EPI Proval, LRI).

Flocq: A Unified Library for Proving Floating-point
Algorithms in Coq.

Several formalizations of floating-point arithmetic have been
designed for the Coq system, a generic proof assistant. Their
different purposes have favored some specific applications: program
verification, high-level properties, automation. Based on our
experience using and/or developing these libraries, Sylvie Boldo and
I have built a new system that is meant to encompass the other ones
in a unified framework. It offers a multi-radix and multi-precision
formalization for various floating- and fixed-point formats. This
fresh setting has been the occasion for reevaluating known
properties and generalizing them. This talk will present the Flocq
system: a library easy to use, suitable for automation yet
high-level and generic.

Friday, October 8, 2010 at 10h15, room 117:

Peter Kornerup (SDU/Odense University).

Floating Point Arithmetic on Round-to-Nearest
Representations.

Joint work with Jean-Michel Muller and Adrien Panhaleux

During any composite computation there is a constant need for
rounding intermediate results before they can participate in further
processing. Recently a class of number representations denoted
RN-Codings were introduced, allowing an un-biased
rounding-to-nearest to take place by a simple truncation. In this
paper we investigate a particular encoding of the binary
representation which is essentially an ordinary 2’s complement
representation with an appended round-bit, allowing rounding to
nearest by truncation. Not only is this round-to-nearest a
constant-time operation, but so is sign inversion; both are at best
log-time operations on ordinary 2's complement representations. A
significant benefit of this representation is that problems with
“double-roundings” are avoided, i.e., the effect of two (or more)
subsequent roundings is identical to a single rounding to the final
precision. Addition and multiplication on such fixed-point
representations are defined in such a way that rounding information
can be carried along in a meaningful way, at minimal cost. Based on
the fixed-point encoding we define a floating point representation,
and describe details of a possible implementation of a floating
point arithmetic unit employing this representation.
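The double-rounding problem the abstract refers to can be seen in a few lines (my own sketch, not from the paper): rounding to 6 fractional bits and then to 2 differs from rounding directly to 2.

```python
from fractions import Fraction

def round_bits(x, k):
    # Round x to a multiple of 2^-k, ties to even (IEEE-style nearest;
    # Python's round() on a Fraction already breaks ties to even).
    scale = Fraction(2) ** k
    return Fraction(round(x * scale), 1) / scale

x = Fraction(1) + Fraction(1, 8) + Fraction(1, 128)  # 1.0010001 in binary

once  = round_bits(x, 2)                 # direct rounding:   1.01b = 1.25
twice = round_bits(round_bits(x, 6), 2)  # via 6 bits first:  1.00b = 1.0

print(float(once), float(twice))  # → 1.25 1.0
```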

Thursday, September 23, 2010 at 10h15, room 118:

Pascal Molin (IMB, Université Bordeaux 1).

Fast and proven numerical integration with doubly
exponentially decreasing functions.

The so-called double-exponential numerical integration method is an
extremely efficient quadrature scheme, used in most computer
algebra systems (Maple, Mathematica, Sage, and above all PARI/gp).
We will briefly present a theory of this method and proven
convergence results that make it relevant to number theory, in
particular as the underlying algorithm for computing certain
functions defined by integrals. We will mainly try to give an
intuition of its scope of application and its limits, along with
various techniques for improving convergence.
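A bare-bones sketch of the double-exponential (tanh-sinh) rule (my own illustration, with ad-hoc parameters and none of the proven error bounds the talk is about):

```python
import math

def tanh_sinh(f, h=0.1, n=40):
    # Approximate the integral of f over (-1, 1) using the change of
    # variable x = tanh((pi/2) * sinh(t)) followed by the trapezoidal
    # rule in t. The weights decay doubly exponentially in |t|, hence
    # the method's name.
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        u = (math.pi / 2) * math.sinh(t)
        x = math.tanh(u)
        w = (math.pi / 2) * math.cosh(t) / math.cosh(u) ** 2
        total += w * f(x)
    return h * total

# Integral of 1/(1+x^2) over [-1, 1] is pi/2.
approx = tanh_sinh(lambda x: 1.0 / (1.0 + x * x))
print(abs(approx - math.pi / 2) < 1e-9)  # → True
```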

Thursday, September 16, 2010 at 10h15, room 116:

Vadim Lyubashevsky (Foundations of Computing group at
Tel-Aviv University).

Public-Key Cryptographic Primitives Provably as Secure
as Subset Sum.

Building a provably-secure public key cryptosystem based on the
hardness of the subset sum problem has been an intriguing problem
since the late 1970’s. In this work, we give a very simple
construction of such a scheme. In addition to the simplicity of the
construction and the security proof, the hardness assumption of our
scheme is weaker than that of all other existing subset-sum based
encryption schemes, namely the lattice-based schemes of Ajtai and
Dwork (STOC ’97), Regev (STOC ’03, STOC ’05), and Peikert (STOC
’09). This is joint work with Adriana Palacio and Gil Segev that
appeared at the Theory of Cryptography Conference (TCC) 2010.

# 2009-2010

Thursday, June 17, 2010 at 10h15:

Pascal Giorgi (Université de Montpellier 2, LIRMM,
ARITH).

On Polynomial Multiplication using Chebyshev basis.

In this talk, we will present a novel method for multiplying
polynomials represented in the Chebyshev basis. The main concern of
this work is to reduce the complexity of polynomial multiplication
in the Chebyshev basis to that of polynomial multiplication in the
monomial basis. Another aim is to assess the practical benefit of
this reduction for achieving better performance.
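The key identity behind such products is T_m · T_n = (T_{m+n} + T_{|m-n|}) / 2; a direct quadratic-time sketch (illustrative only, not the talk's reduction to monomial-basis multiplication):

```python
def cheb_mul(a, b):
    # Multiply two polynomials given by their Chebyshev coefficients
    # (a[i] is the coefficient of T_i), using the product identity
    # T_m * T_n = (T_{m+n} + T_{|m-n|}) / 2.
    c = [0.0] * (len(a) + len(b) - 1)
    for m, am in enumerate(a):
        for n, bn in enumerate(b):
            c[m + n] += am * bn / 2
            c[abs(m - n)] += am * bn / 2
    return c

# T_1 * T_1 = (T_2 + T_0) / 2:
print(cheb_mul([0, 1], [0, 1]))  # → [0.5, 0.0, 0.5]
```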

Thursday, June 10, 2010 at 10h15, room 118:

Jean-Michel Muller (CNRS, LIP, Arénaire).

Some aspects of decimal-binary floating-point
conversion.

We consider conversion from a binary floating-point format to a
decimal one (and conversely), with two different goals: either the
conversion for its own sake (correctly rounded), or the
implementation of a decimal function by calling programs written in
binary arithmetic. This requires examining several problems:
"midpoints" (numbers of one format that are the exact middle of two
consecutive numbers of the other format), "hard-to-round" values,
etc. We also propose a fast conversion algorithm that requires the
availability of a fused multiply-add instruction.
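Python's standard library already illustrates the issues at stake (my own example, not from the talk): the decimal string "0.1" is not exactly representable in binary64, yet the correctly rounded shortest decimal output round-trips to the same binary value.

```python
from decimal import Decimal

# The exact value of the binary64 number nearest to the decimal "0.1":
print(Decimal(0.1))
# → 0.1000000000000000055511151231257827021181583404541015625

# The shortest correctly rounded decimal output round-trips:
print(float(repr(0.1)) == 0.1)  # → True
```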

Thursday, June 3, 2010 at 10h30, room 118:

Andrew Novocin (INRIA, LIP, Arénaire).

State of the art univariate polynomial factorization.

This talk presents some recent and on-going work on factoring
polynomials in Z[x]. The goal of this work is to arrive at a
practical, implementable, and efficient algorithm for which we can
prove a tight complexity bound (which should be the best existing
complexity bound). We give the overall structure of our algorithm
with insights into the practical and theoretical issues which guide
the design. We also present the behavior of the algorithm in theory
and give some early running times of our implementation.

Thursday, May 6, 2010 at 10h15, room 118:

Alfredo Buttari (CNRS, IRIT, Toulouse).

Mixed Precision Iterative Refinement Techniques for the
Solution of Linear Systems.

On modern architectures, the performance of 32-bit operations is
often at least twice as fast as the performance of 64-bit
operations. By using a combination of 32-bit and 64-bit floating
point arithmetic, the performance of many dense and sparse linear
algebra algorithms can be significantly enhanced while maintaining
the 64-bit accuracy of the resulting solution. The approach
presented here can apply not only to conventional processors but
also to other technologies such as Field Programmable Gate Arrays
(FPGA), Graphical Processing Units (GPU), and the STI Cell BE
processor. Results on modern processor architectures and the STI
Cell BE are presented.
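The refinement loop itself is short; here is a sketch under stated simplifications (my own illustration: float32 is simulated by struct round-tripping, and a 2x2 Cramer's-rule solve stands in for a real low-precision factorization):

```python
import struct

def f32(x):
    # Round a Python float to binary32, simulating 32-bit arithmetic.
    return struct.unpack('f', struct.pack('f', x))[0]

def solve2_low(A, b):
    # Cramer's rule on a 2x2 system with every operation rounded to
    # float32: this plays the role of the cheap low-precision solver.
    det = f32(f32(A[0][0] * A[1][1]) - f32(A[0][1] * A[1][0]))
    x0 = f32(f32(f32(b[0] * A[1][1]) - f32(b[1] * A[0][1])) / det)
    x1 = f32(f32(f32(A[0][0] * b[1]) - f32(A[1][0] * b[0])) / det)
    return [x0, x1]

def refine(A, b, steps=3):
    x = solve2_low(A, b)
    for _ in range(steps):
        # Residual computed in full (64-bit) precision...
        r = [b[i] - A[i][0] * x[0] - A[i][1] * x[1] for i in range(2)]
        # ...correction from the cheap low-precision solver.
        d = solve2_low(A, r)
        x = [x[0] + d[0], x[1] + d[1]]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = refine(A, b)  # exact solution: [1/11, 7/11]
print(abs(x[0] - 1 / 11) < 1e-13 and abs(x[1] - 7 / 11) < 1e-13)  # → True
```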

Friday, April 30, 2010 at 14h, room 118:

Friedrich Eisenbrand (EPFL, Lausanne).

Diophantine Approximation and Real-time Scheduling.

Real-time scheduling problems deal mostly with arranging
periodically released jobs so that their deadlines are respected.
importance since many safety and liveness critical control
mechanisms are replaced by software-driven systems that are based on
principles from Real-Time scheduling. In this talk I will address
the similarities of some problems in Real-Time scheduling and
Diophantine approximation. I will outline how this similarity could
be exploited to provide proofs of longstanding open complexity
problems in the area of Real-Time Scheduling.

The talk is based on joint work with Thomas Rothvoss.

Thursday, April 8, 2010 at 10h15, room 118:

(Processor Architecture Laboratory, EPFL, Lausanne).

Reducing the Existing Gap Between FPGAs and ASICs in
Arithmetic Oriented Circuits.

Due to high non-recurring engineering costs, Application Specific
Integrated Circuits (ASICs) are not economically feasible for
low-volume product lines. Moreover, long time to market in ASIC
designs makes them less attractive for fast growing and
ever-changing products. One potential alternative to ASICs is the
Field Programmable Gate Array (FPGA). FPGAs offer many advantages
including reduced non-recurring engineering and shorter time to
market. However, these advantages come at the cost of a decrease in
performance, an increase in silicon area, and an increase in power
consumption when designs are implemented on FPGAs. This talk will
summarize some of our contributions in reducing the mentioned gaps
for arithmetic oriented circuits. My presentation will have two
parts. First, I will discuss a new synthesis method targeting FPGAs
as they exist today. Then, I will present a number of FPGA
architectural improvements including a new FPGA logic block with
carry chains that are specialized for carry-save arithmetic and a
new DSP block specialized for the same purpose.

The speaker received his degrees in Computer Engineering and
Computer Architecture from the University of Tehran in 2001 and
2004, respectively. Since 2007, he has been a
doctoral scholar with the Processor Architecture Laboratory (LAP) at
the Ecole Polytechnique Federale de Lausanne (EPFL), one of the two
Swiss Federal Institutes of Technology. His research interests span
a number of fields, including: reconfigurable computing,
application-specific processors and their design tools, computer
arithmetic and computer architecture. He was the recipient of the
best paper award at the International Conference on Field
Programmable Logic and Applications (FPL), 2009.

Thursday, March 25, 2010 at 10h30, room 118:

Clément Pernet (LIG laboratory, INRIA MOAIS
project, Université Joseph Fourier).

Adaptive decoding for redundant multi-modular
systems.

In the context of distributed computation over unreliable resources
(global computing architectures, peer-to-peer, …), malicious
computing nodes can introduce Byzantine errors that must be
detected and corrected, through fault-tolerant algorithms (ABFT:
algorithm-based fault tolerance), to guarantee the security and
reliability of a computation.

In exact computation (over the ring of integers or rings of
polynomials with coefficients in a field), parallelization relies
heavily on the Chinese remainder algorithm, where each modular
computation is an independent task. By adding a few extra modular
computations, arithmetic codes introduce a redundancy that makes it
possible to reconstruct the result despite unlocated errors in some
of the modular computations. These codes can be presented in a
unified way for redundant residue systems over polynomials and over
the integers, generalizing Reed-Solomon codes.

We will present several variants of the application of the extended
Euclidean algorithm that make the correction rate adaptive. Various
termination criteria allow these codes to be used without a priori
knowledge of their parameters, and hence to decode as well as the
available redundancy permits. They also make it possible to combine
fault tolerance with early-termination approaches.
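A toy version of the redundant-residue idea (my own sketch; it locates a single error by brute force rather than with the adaptive extended-Euclidean decoder of the talk):

```python
from functools import reduce

def crt(residues, moduli):
    # Chinese remainder reconstruction for pairwise coprime moduli.
    M = reduce(lambda a, b: a * b, moduli)
    x = sum(r * (M // m) * pow(M // m, -1, m)
            for r, m in zip(residues, moduli))
    return x % M

def decode(residues, moduli, bound):
    # The true value is known to be < bound, and the moduli are
    # redundant. Try discarding each residue in turn: a reconstruction
    # below the bound identifies the error-free subset (an incorrect
    # subset reconstructs a value above the bound with overwhelming
    # probability, and does so in this example).
    for skip in range(len(moduli)):
        rs = [r for i, r in enumerate(residues) if i != skip]
        ms = [m for i, m in enumerate(moduli) if i != skip]
        x = crt(rs, ms)
        if x < bound:
            return x
    raise ValueError("too many errors")

moduli = [7, 11, 13, 17, 19]           # 7 * 11 * 13 = 1001 bounds the result
value = 987
residues = [value % m for m in moduli]
residues[1] = (residues[1] + 5) % 11   # corrupt one modular computation
print(decode(residues, moduli, 1001))  # → 987
```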

Thursday, March 11, 2010 at 10h30, room 118:

Mioara Joldes (Arénaire, LIP).

Chebyshev Interpolation Polynomial-based Tools for
Rigorous Computing

Performing numerical computations, yet being able to provide
rigorous mathematical statements about the obtained result, is
required in many domains like global optimization, ODE solving or
integration. Taylor models, which associate to a function a pair
made of a Taylor approximation polynomial and a rigorous remainder
bound, are a widely used rigorous computation tool. This approach
benefits from the advantages of numerical methods, but also gives
the ability to make reliable statements about the approximated
function.

A natural idea is to try to replace Taylor polynomials with better approximations, such as the minimax approximation, Chebyshev truncated series or interpolation polynomials. Despite their advantages, an analogue of Taylor models based on such polynomials has not yet been well established in the field of validated numerics. In this talk we propose two approaches for computing such models, based on interpolation polynomials at Chebyshev nodes.
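As a rough illustration of why Chebyshev nodes are attractive, the sketch below (mine, not the models from the talk, and without the rigorous remainder bounds the talk is about) interpolates exp at Chebyshev nodes on [-1, 1] and measures the error on a fine grid:

```python
# Interpolation at Chebyshev nodes of the first kind; the error decays
# rapidly with the number of nodes. Plain float arithmetic, no rigor claimed.
import math

def cheb_nodes(n):
    return [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]

def interp_eval(xs, ys, x):
    """Naive Lagrange form; fine for small n."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += yi * w
    return total

def max_err(n, f=math.exp, samples=1000):
    """Max |p - f| on a uniform grid, p interpolating f at n Chebyshev nodes."""
    xs = cheb_nodes(n)
    ys = [f(x) for x in xs]
    grid = [-1 + 2 * t / samples for t in range(samples + 1)]
    return max(abs(interp_eval(xs, ys, x) - f(x)) for x in grid)

for n in (4, 8, 12):
    print(n, max_err(n))
```

A grid maximum is of course only a heuristic; turning it into a certified supremum-norm bound is exactly what the models discussed in the talk provide.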

We compare the quality of the obtained remainders and the performance of the approaches to those provided by Taylor models. We also present two practical examples where this tool can be used: supremum norm computation of approximation errors and rigorous integration.

This talk is based on joint work with N. Brisebarre.

Thursday, March 4, 2010 at 10h30, room 118:

Jordan Ninin (IRIT, Toulouse).

An affine reformulation technique applied to global optimization.

Interval branch-and-bound algorithms have made enormous progress over the last thirty years, notably demonstrating their efficiency in solving, exactly and robustly, non-convex global optimization problems arising in the design of electrical machines.

Our affine reformulation technique (ART) is a linear relaxation method applicable to any non-convex problem (whose expressions are explicit); ART robustly generates a small linear program without any added variables.

Numerous tests confirm our approach, together with a comparison with Kearfott's GlobSol algorithm (which integrates another linear relaxation method).

Thursday, February 11, 2010 at 10h15, room 118:

Alin Bostan (Algorithms project, INRIA Paris-Rocquencourt).

The complete generating function for Gessel walks is
algebraic.

The aim of the talk is to show how a difficult combinatorial
problem has been recently solved using an experimental-mathematics
approach combined with (rather involved) computer algebra
techniques. More precisely, let g(n,i,j) denote the number of
lattice walks in the quarter plane which start at the origin, end at
the point (i,j), and consist of n unit steps going either west,
south-west, east, or north-east. In the early nineties, Ira Gessel
conjectured that the sequence of excursions g(n,0,0) is holonomic.
We will present the computer-driven discovery and proof of the
following generalization, obtained in August 2008 together with
Manuel Kauers: the trivariate generating series of the sequence
(g(n,i,j))_{n,i,j} is an algebraic function.

Thursday, January 21, 2010 at 10h30, room 118:

Bogdan Pasca (Arénaire, LIP).

Integer addition is an elementary building block of complex arithmetic operators (as is multiplication). This study presents a binary integer addition operator generator. Based on resource estimations, our generator selects the best design out of an extensible set of architectures. It was developed in the context of FloPoCo, a core generator for FPGAs, and serves as a building block for more complex operators: floating-point addition, multiplication, squaring, square root and others.

Thursday, December 17 at 10h30, room 118:

Bruno Grenet (MC2, LIP).

Hardness of the multivariate resultant.

The resultant of a (generic) system of polynomials f_1, …, f_n is a polynomial in the coefficients of the f_i that vanishes if and only if the f_i admit a common root. In the case of two univariate polynomials, the resultant is the determinant of the Sylvester matrix and is therefore computable in polynomial time.
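The easy univariate case can be made concrete in a short, purely illustrative Python sketch: build the Sylvester matrix of two polynomials and evaluate its determinant exactly.

```python
# Resultant of two univariate polynomials via the Sylvester matrix
# (illustration of the easy case mentioned in the abstract).
from fractions import Fraction

def sylvester(f, g):
    """f, g: coefficient lists, highest degree first; returns an (m+n) matrix."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = []
    for i in range(n):                       # n shifted copies of f
        rows.append([0] * i + f + [0] * (size - m - 1 - i))
    for i in range(m):                       # m shifted copies of g
        rows.append([0] * i + g + [0] * (size - n - 1 - i))
    return rows

def det(a):
    """Exact Gaussian elimination over the rationals."""
    a = [[Fraction(x) for x in row] for row in a]
    n, d = len(a), Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if a[r][c] != 0), None)
        if p is None:
            return Fraction(0)
        if p != c:
            a[c], a[p] = a[p], a[c]
            d = -d
        d *= a[c][c]
        for r in range(c + 1, n):
            k = a[r][c] / a[c][c]
            for cc in range(c, n):
                a[r][cc] -= k * a[c][cc]
    return d

def resultant(f, g):
    return det(sylvester(f, g))

# (x - 1)(x - 2) and (x - 1)(x + 3) share the root 1: resultant vanishes.
assert resultant([1, -3, 2], [1, 2, -3]) == 0
# x^2 + 1 and x - 1 share no root: resultant is nonzero.
assert resultant([1, 0, 1], [1, -1]) != 0
```

The multivariate, homogeneous case discussed next is precisely where no such simple determinant formula with polynomial-time evaluation is known.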

The case of interest here is that of a system of n homogeneous polynomials in n variables. In 1987, Canny gave a polynomial-space algorithm to compute this resultant, but its exact complexity remains unknown. In this talk we will see that the associated decision problem (deciding whether the resultant vanishes) lies in the polynomial hierarchy (in the class AM, at the second level of the hierarchy), and that it is NP-hard.

Time permitting, I will also discuss a related problem: computing the determinant of large matrices.

Thursday, December 10 at 10h30, room 118:

David Coeurjolly (LIRIS, CNRS, Université de Lyon).

Arithmetic Discrete Geometry

Discrete geometry is a field of computer science at the crossroads of image analysis, computational geometry, arithmetic, combinatorics and number theory. The goal of this seminar is to give a short introduction to these problems, with a particular focus on the use of elementary arithmetic principles to speed up analysis algorithms.

Thursday, December 3 at 10h30, room 118:

Javier Bruguera (Grupo de Arquitectura de

Trying to improve the performance of floating-point
computations

The floating-point representation is the standard representation for real numbers, widely used in today's processors and in engineering and scientific applications. It outperforms alternative representations, which remain limited to specific applications. However, some computations can result in a loss of accuracy because of the accumulation of rounding errors in the floating-point operations. This loss of accuracy can be mitigated by increasing the precision of the representation, but this requires more storage and more complex and slower operations. We explore a modification of the floating-point representation to handle some of the above-mentioned issues.

Alternatively, the error introduced can be computed and used to determine whether the result is accurate enough for the application at hand. Traditional approaches to error estimation are software-based, which results in complex implementations. We introduce a hardware error estimate that can be computed concurrently with the standard floating-point operations.

Thursday, November 26 at 10h30, room 118:

Michel Kieffer (L2S – CNRS – Supélec – Université
Paris-Sud).

Efficient 16-bit floating point interval processor for
embedded applications

Abstract available in PDF format.

Tuesday, November 24 at 15h00, room 115:

William B. Hart (Mathematics Institute, University of
Warwick, Coventry, UK).

FLINT : Fast Library for Number Theory

I will describe the library FLINT which is being developed to make
low level routines such as polynomial arithmetic and linear algebra
fast. I will give some indication of the recent developments, both
algorithmic and computational, in FLINT, and give some comparisons
with other similar projects.

Thursday, November 19 at 10h30, room 118:

Vincent Lefèvre (Arénaire).

Proofs by exhaustive testing in low precision

I will present examples of proofs designed with the help of exhaustive tests carried out in low precision. I will first show how to prove the optimality (under certain conditions) of the TwoSum algorithm in a fixed precision p ≥ 12 by testing all potential algorithms, and how this proof makes it possible to deduce the optimality of this algorithm for any precision p ≥ 12. I will then present the SIPE (Small Integer Plus Exponent) library, which I wrote in order to carry out this kind of test efficiently. I will mention a second application of SIPE: conjecturing an optimal error bound for the DblMult algorithm, with the aim of subsequently proving this bound (work in progress).
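For reference, TwoSum is the following error-free transformation: s is the correctly rounded sum and t the exact rounding error, so s + t equals a + b exactly. A minimal Python sketch (Python floats are IEEE binary64):

```python
# Knuth's TwoSum: 6 floating-point operations, no branch, valid for any a, b.
def two_sum(a, b):
    s = a + b
    ap = s - b          # approximation of a recovered from s
    bp = s - ap         # approximation of b recovered from s
    da = a - ap         # rounding error on the a part
    db = b - bp         # rounding error on the b part
    return s, da + db   # (rounded sum, exact error)

s, t = two_sum(1.0, 2.0 ** -60)
assert s == 1.0 and t == 2.0 ** -60   # the tiny addend survives in t
```

The optimality result discussed in the talk concerns exactly this kind of algorithm: no branch-free sequence with fewer operations can compute the same pair.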

Thursday, November 5 at 10h30, room 118:

Christophe Mouilleron (Arénaire).

Fast computations with dense structured matrices.

Many problems can be solved by means of matrix computations. Sometimes the matrices that appear while solving a problem can be represented with far fewer than n² coefficients. This is notably the case for Toeplitz, Vandermonde and Cauchy matrices.

In this talk, we will present a general framework covering these so-called dense structured matrices. We will then concentrate on the problem of matrix-vector multiplication, and then on inversion. After a survey of the fast algorithms found in the literature, we will exhibit a new inversion algorithm that is more direct and uses less memory for the usual structures.

Thursday, October 22 at 10h30, salle des thèses:

Laurent Théry (EPI Marelle, INRIA Sophia).

Computing efficiently in order to prove.

Examples such as the four-color theorem or the Kepler conjecture (proved by Thomas Hales) illustrate how the computing power of machines can be harnessed to establish new mathematical results. Such examples therefore look very attractive for proof assistants like Coq, which make it possible to formalize mathematics. Unfortunately, the computing power offered by these assistants is often very limited. For instance, Coq only allows programs to be written in a very restrictive programming setting: pure functional programming. In this talk, we will present our efforts to increase the computing power of Coq while preserving the safety of the results obtained.

Thursday, October 15 at 10h30, room 118:

Hong Diep Nguyen (Arénaire, LIP-ENS Lyon).

An efficient implementation of interval matrix products.

Abstract: In this talk, we will recall the definition and the cost of the interval matrix product computed by the “naive” algorithm. The idea behind an efficient implementation of interval matrix products is to take full advantage of libraries that are well optimized for floating-point matrix products. We will study a few particular cases, which we will use to devise an algorithm that relies on few products of point matrices (that is, matrices whose entries are reals or, in the implementation, floating-point numbers). Under certain particular conditions, this algorithm yields an exact result (in exact arithmetic). In general, it yields a result that overestimates the exact one; the worst-case overestimation factor will also be given in this talk.

Thursday, October 8 at 10h30, room 118:

Álvaro Vázquez Álvarez (Arénaire, LIP-ENS Lyon).

Hardware units for decimal arithmetic.

Current financial, e-commerce and user-oriented applications make intensive use of integer and fractional numbers represented in decimal radix. This makes it an attractive opportunity for microprocessor manufacturers to provide a dedicated DFU (decimal floating-point unit) in their new high-end microprocessors. This interest is supported by the recent approval of the revision of the IEEE 754 floating-point standard (IEEE 754-2008), which incorporates a specification for decimal arithmetic.

In this context, this presentation deals with the research and design of new algorithms and high-performance architectures for DFX (decimal fixed-point) and DFP (decimal floating-point) units that evaluate the basic arithmetic operations in hardware, that is, addition/subtraction, multiplication (and fused multiply-add) and division. Moreover, a new decimal CORDIC unit for computing transcendentals is also detailed. These units are suitable for implementation in high-end VLSI microprocessors. A discussion of the benefits and costs of decimal VLSI hardware vs. FPGA and software implementations of the IEEE 754-2008 standard is also provided.

The structure of this presentation is as follows:

1. The new demands for decimal arithmetic.
2. Hardware solutions vs. software libraries.
3. Decimal specification in the IEEE 754-2008 standard.
4. Decimal fixed- and floating-point hardware units for the basic arithmetic operations (multiplication, division).
5. Decimal CORDIC unit for the evaluation of elementary functions.
6. Conclusions and open questions.

Thursday, October 1st at 10h15, room 118:

Nicolas Louvet (UCBL, EPI Arénaire, LIP-ENS Lyon).

Computing or estimating the condition number of a problem using automatic differentiation.

The condition number of a problem measures the sensitivity of its solution to perturbations of its data. When an approximate solution is computed in floating-point arithmetic, combining a backward error analysis with the condition number of the problem makes it possible, in particular, to deduce a first-order error bound. Even though computing the condition number does not, in general, certify the error affecting the approximate solution, in the sense that the resulting bound is only valid to first order, it is an indispensable tool for understanding the quality of certified error bounds.

In this talk, we will present the state of our ongoing work on computing condition numbers with automatic differentiation. When an exact algorithm is known for computing the solution of a given problem, we will see that automatic differentiation makes it possible to compute, or enclose very precisely, the condition number of that problem. But this computation can turn out to be several orders of magnitude more expensive than the algorithm used to solve the problem. We will therefore also show how to estimate the condition number at a cost that is a small multiple of the cost of the solution algorithm.

# 2008-2009

Monday, July 6 at 10h15, amphitheater B:

Jean-Paul Allouche (CNRS, LRI, Orsay).

Ubiquity of the Thue-Morse sequence.

The (Prouhet-)Thue-Morse sequence can be defined as the limit of the sequence of words obtained by starting from 0 and iterating the substitution 0 → 01, 1 → 10:

0

01

0110

01101001
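The iteration above reads, in a short Python sketch:

```python
# Iterate the Thue-Morse substitution 0 -> 01, 1 -> 10 starting from "0".
def thue_morse(steps):
    w = "0"
    for _ in range(steps):
        w = "".join("01" if c == "0" else "10" for c in w)
    return w

assert thue_morse(3) == "01101001"
# Equivalent closed form: the n-th letter is the parity of the number of
# 1-bits in the binary expansion of n.
assert all(thue_morse(5)[n] == str(bin(n).count("1") % 2) for n in range(32))
```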

This sequence has ubiquity properties that we propose to survey. It and its cousins will serve as a guiding thread for a stroll through several areas of mathematics and physics: number theory (transcendence, Dirichlet series), iteration of continuous real functions, combinatorics on words, discrete Schrödinger operators and quasicrystals, …

We will also allude to certain musical compositions, to Indian Kolams (ritual drawings also found in the sand-drawing tradition of the Vanuatu islands), as well as to recent results on expansions in non-integer bases.

An Indian Kolam

See also: http://www.lri.fr/~allouche/lyon.html.

Friday, July 3 at 12h45, amphitheater B:

Ping Tak Peter Tang (D. E. Shaw Research, LLC).

A Novel Approach to Tight Bounds and Statistical
Information of Rounding Errors.

Obtaining bounds on rounding errors is difficult in practice, in the sense that the bounds can be overly pessimistic or simplistic. In this talk, I will present some work on automatically generating error information for fixed-point computations of linear transforms. The resulting information includes not only worst-case error bounds but also probability distributions of the errors.

Thursday, June 25 at 10h15, amphitheater A:

Damien Stehlé (CNRS, currently seconded to the
Universities of Sydney and Macquarie).

Encrypting with ideal lattices.

A little over ten years ago, the NTRU cryptosystem demonstrated the efficiency of cryptography based on structured lattices. Unfortunately, the security of NTRU is only heuristic. Building an efficient cryptosystem that uses structured lattices and admits a security proof resting on the hardness of an established problem was therefore an important open question; all the more so as this is one of the most promising alternatives to modern cryptography in a post-quantum setting.

In this talk, I will describe a public-key encryption scheme resistant to chosen-plaintext attacks, whose security rests rigorously on the hardness of the hardest instances of the shortest vector problem restricted to a certain type of lattices, called ideal lattices. Under the assumption that solving these instances requires exponential time, even on a quantum computer, our encryption scheme withstands any subexponential attack and has (quasi-)optimal efficiency: if n is the security parameter, the keys have size Õ(n), and the amortized cost of encryption and decryption is Õ(1) per message bit.

This is joint work with Ron Steinfeld, Keisuke Tanaka and Keita Xagawa.

Thursday, June 18 at 10h15, LIP council room:

Midpoints and exact-points of some usual functions

When implementing a mathematical function f with correct rounding in floating-point arithmetic, it is often interesting to know whether there exist floating-point numbers x for which f(x) is either equidistant from two consecutive floating-point numbers (i.e., f(x) is a “midpoint”), or exactly a floating-point number (i.e., f(x) is an “exact-point”). Indeed, knowing the midpoints and exact-points of a function often simplifies the correct-rounding test, and thus leads to a more efficient implementation. In the case of rounding to nearest, for instance, the absence of midpoints considerably simplifies the final rounding test. The goal of this work is to study the midpoints and exact-points of some usual functions. We have gathered the known results, valid essentially in radix 2. We extend these results by considering new functions, by handling the radix-10 case, and also by taking into account different input and output formats. The talk will concentrate on two functions (the reciprocal and the reciprocal 2D norm), which will allow us to present most of the proof mechanisms used.
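The flavor of such studies can be conveyed by a toy exhaustive check (my own sketch, in a hypothetical p-bit binary format with unbounded exponent, not the proofs from the talk): scanning the significands of one binade shows that the reciprocal has a single exact-point there and no midpoints.

```python
# Exhaustive scan of one binade: for which x in [1, 2) is 1/x an
# exact-point (a p-bit float) or a midpoint (a (p+1)-bit but not p-bit value)?
from fractions import Fraction

p = 8  # toy precision

def is_fp(q, prec):
    """Is the positive rational q of the form m * 2^e with odd m < 2^prec?"""
    num, den = q.numerator, q.denominator
    while num % 2 == 0:
        num //= 2
    while den % 2 == 0:
        den //= 2
    return den == 1 and num < 2 ** prec

exact, mid = [], []
for k in range(2 ** (p - 1)):                 # significands 1, 1 + 2^(1-p), ...
    x = 1 + Fraction(k, 2 ** (p - 1))
    r = Fraction(1) / x                        # exact reciprocal
    if is_fp(r, p):
        exact.append(x)
    elif is_fp(r, p + 1):                      # representable with one extra bit
        mid.append(x)

assert exact == [1]   # only x = 1 gives an exact reciprocal in this binade
assert mid == []      # and no midpoints occur
```

Arguing that this pattern holds for every precision and every binade is exactly the kind of proof mechanism the talk presents for the reciprocal.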

Thursday, June 4, 2009 at 10h15, LIP council room:

Matthieu Martel (Equipe DALI, Laboratoire ELIAUS,
Université de Perpignan).

Improving the implementation of mathematical formulas in fixed- and floating-point arithmetics

This talk deals with techniques for estimating and improving the numerical quality of computations carried out in various computer arithmetics. The proposed method is independent of the arithmetic used and will be applied to fixed-point and floating-point systems. We will first be interested in globally measuring the quality of the implementation of a formula with respect to a given measure. For floating-point arithmetic, this measure will be the worst-case distance between the exact result and the rounded result. For fixed-point arithmetic, it will be the number of bits needed to represent all intermediate results. We will then see how the operations responsible for the main degradations in the quality of an implementation can be identified automatically. This information is useful for improving the quality of programs by hand. Finally, we will propose a fully automatic technique for transforming a formula into another one that is mathematically equivalent to the original and of better quality with respect to the chosen criterion.

Monday, May 18, 2009 at 13h00, amphitheater B:

Mioara Joldes (Arénaire)

Certified and fast computation of supremum norms of
approximation errors

In many numerical programs there is a need for a high-quality floating-point approximation of useful functions f, such as exp, sin, erf. In the actual implementation, the function is replaced by a polynomial p, which leads to an approximation error (absolute or relative) eps = p-f or eps = p/f-1. The tight yet certain bounding of this error is an important step towards safe implementations. The problem is difficult, mainly because the approximation error is very small and the difference p-f is subject to severe cancellation. Previous approaches for computing the supremum norm in this degenerate case have proven to be unsafe, not sufficiently tight or too tedious in manual work. We present a safe and fast algorithm that computes tight lower and upper bounds for the supremum norm of the approximation error. The algorithm is based on a combination of several techniques, including enhanced interval arithmetic, automatic differentiation and isolation of the roots of a polynomial. We have implemented our algorithm and give timings on several examples.

Thursday, May 14, 2009 at 10h15, amphitheater B (3rd floor of the GN1 building):

Richard Brent (Australian National University)

Factoring and testing irreducibility of sparse
polynomials over small finite fields

We consider algorithms for testing irreducibility
and (if they are not irreducible) finding factors of polynomials over
finite fields. In the talk we consider the case of characteristic 2,
but the algorithms generalise to other small positive characteristics.
The algorithms are efficient for sparse polynomials of high degree.
They have been applied to find primitive trinomials whose degree is
the exponent of a Mersenne prime, including the (largest known) case
of degree 43112609. [Joint work with Paul Zimmermann.]

Thursday, May 7, 2009 at 10h15, LIP council room:

Joris van der Hoeven (Harmonic analysis team, Mathematics department, Université Paris 11, Orsay)

Fast and reliable computation with high-precision power series

In this talk, we will first recall the technique of relaxed computation with formal power series, which makes it possible to obtain a large number of coefficients of a series in almost linear time. We will focus in particular on series with floating-point coefficients, which requires additional techniques to ensure good numerical stability. Finally, we will consider the case where the coefficients are balls (or intervals), and we will present methods for bounding the remainders of the series on balls.

Thursday, April 30, 2009 at 10h15, LIP council room:

Development and comparison of single-precision floating-point libraries

The floating-point standard was mainly designed for software environments and the fixed word length of the memories used there. However, when implementing floating-point operators on FPGAs, we can take advantage of FPGA flexibility to obtain better performance than by applying the standard directly. We have thus developed a series of libraries of single-precision floating-point operators (exponential, logarithm and powering) in which we introduced some simplifications to the standard. In the first library we implemented the floating-point standard without any substantial simplification, while in the other three libraries we gradually introduced the following simplifications:

1. Denormalized numbers simplification.
2. Truncation rounding.
3. Hardware representation.

A final comparison of the results obtained for the four libraries (and, when possible, with the Xilinx floating-point core) shows the very significant improvements obtained for the final library, which contains the three simplifications, with respect to the standard library:

1. Improvements in speed and resource usage.
2. Decrease in the number of pipeline stages.
3. General simplification of the architectures needed for each operator.

Inverse CDF function for FPGAs with quintic Hermite interpolation

The inverse cumulative distribution function (ICDF) is very important in the field of random number generators (RNGs), as it makes it possible to implement an RNG with any distribution using a uniform RNG and the ICDF of the target distribution.

The main problem of this approach is how to implement the ICDF. One very interesting possibility is quintic Hermite interpolation, a high-quality interpolation method for implementing ICDFs. In this method, the input range [0,1] of an ICDF is segmented, and on each segment the function is interpolated by a degree-five polynomial with coefficients specific to that segment. We have proposed how to implement this type of function, targeting FPGAs in particular, to obtain high performance with a throughput of one datum per cycle. Our implementation includes tailored arithmetic, non-uniform non-hierarchical segments and an adapted search algorithm (needed to find the segment corresponding to an input) that avoids a multicycle software search.

Thursday, April 23, 2009 at 10h00, amphitheater K:

Siegfried M. Rump (Hamburg University of Technology)

Ultimately fast algorithms for summation and dot
products

In the past years we developed fast algorithms for
the summation of floating-point numbers as well as the dot product of
two vectors. Such problems are ubiquitous in scientific computations.
We have algorithms producing a result as if computed with some
specified precision as well as algorithms producing a result with a
specified accuracy. These seem to be the fastest known algorithms for
these tasks. They extensively use so-called error-free transformations
of floating-point operations. Now we can present a further improvement of these algorithms, saving up to 25 % of computing time. Numerical examples, in particular the inversion of extremely ill-conditioned matrices and large sparse matrix-vector multiplication, will demonstrate the behaviour of our algorithms.
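A minimal Python sketch in the spirit of these algorithms (my own illustration, not Rump's optimized implementations): compensated summation built on the TwoSum error-free transformation, whose result is as accurate as if the sum had been computed in twice the working precision and then rounded.

```python
# Compensated summation: accumulate the exact per-step rounding errors
# produced by TwoSum and add them back at the end.
def two_sum(a, b):
    """Error-free transformation: returns (fl(a+b), exact rounding error)."""
    s = a + b
    ap = s - b
    return s, (a - ap) + (b - (s - ap))

def sum2(xs):
    s = 0.0
    e = 0.0
    for x in xs:
        s, t = two_sum(s, x)
        e += t           # exact error of this addition, accumulated
    return s + e

data = [1.0, 2.0 ** -53, 2.0 ** -53]    # the tail is lost by naive summation
assert sum(data) == 1.0                  # naive left-to-right sum
assert sum2(data) == 1.0 + 2.0 ** -52    # compensated sum recovers it exactly
```

The algorithms from the talk refine this basic scheme (and its dot-product analogue) to reach a prescribed precision or accuracy.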

Stef Graillat (PEQUAN, LIP6, Univ. Paris 6)

Accurate computation of simple roots of polynomials in finite precision

In this talk, we will first present the compensated Horner algorithm, which evaluates a polynomial with an accuracy similar to what would be obtained by carrying out the computation in doubled precision. We will then use this algorithm to compute the residual in Newton's iteration for a simple root. We show that the accuracy of the computed simple root is similar to what would be obtained by running Newton's iteration in doubled precision.
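The compensated Horner scheme can be sketched as follows (my own Python illustration built on the classical error-free transformations; the constant 134217729 = 2^27 + 1 is Veltkamp's splitting constant for binary64):

```python
# Compensated Horner evaluation: run Horner's rule, capture the exact
# error of every multiplication and addition, and evaluate the error
# polynomial alongside the main recurrence.
def two_sum(a, b):
    s = a + b
    ap = s - b
    return s, (a - ap) + (b - (s - ap))

def split(a):
    """Veltkamp splitting of a binary64 number into two half-width parts."""
    c = 134217729.0 * a
    h = c - (c - a)
    return h, a - h

def two_prod(a, b):
    """Dekker's product: returns (fl(a*b), exact rounding error)."""
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    return p, ((ah * bh - p) + ah * bl + al * bh) + al * bl

def comp_horner(coeffs, x):
    """coeffs: highest degree first."""
    r = coeffs[0]
    c = 0.0                      # running correction term
    for a in coeffs[1:]:
        p, ep = two_prod(r, x)
        r, es = two_sum(p, a)
        c = c * x + (ep + es)    # Horner on the error terms
    return r + c

# (x - 1)^5 expanded; evaluation near the root x = 1 is ill-conditioned.
coeffs = [1.0, -5.0, 10.0, -10.0, 5.0, -1.0]
x = 1.0 + 2.0 ** -10             # exact value of the polynomial is 2^-50
assert abs(comp_horner(coeffs, x) - 2.0 ** -50) <= 1e-10 * 2.0 ** -50
```

Plain Horner evaluation can lose most significant digits on this example; the compensated version recovers an accuracy comparable to doubled working precision, which is what makes it suitable as the residual computation in Newton's iteration.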

Friday, March 20, 2009 at 10h15, LIP council room:

Florent de Dinechin et Honoré Takeugming (Arénaire)

Signal processing for 40 Gb/s optical transmissions

To increase to 40 Gb/s, and perhaps 100 Gb/s, the information rate that can be carried over optical fibers of previous generations, the most relevant approach at present is polarization multiplexing. The ANR TCHATER project aims at building a demonstrator for such transmissions. We will mainly present the signal-processing part, which is currently being implemented on FPGAs, and the challenges (not only arithmetic ones) that we are facing.

Wednesday, March 11, 2009 at 13h30, LIP council room. Talk by Jean-Luc Beuchat (Univ. Tsukuba, Japan): Hardware Operators for Pairing-Based Cryptography.

Abstract. Originally introduced in cryptography by Menezes, Okamoto &
Vanstone (1993) then Frey & Rück (1994) to attack the discrete
logarithm problem over a particular class of elliptic curves, pairings
have since then been put to a constructive use in various useful
cryptographic protocols such as short digital signature or
identity-based encryption. However, evaluating these pairings relies
heavily on finite field arithmetic, and their computation in software is
still expensive. Developing hardware accelerators is therefore crucial.

So far, we have proposed a hardware co-processor designed to accelerate the
computation of the Tate pairing in characteristics 2 and 3. We tried to
reduce the silicon footprint (or in our case the usage of FPGA
resources) of the circuit to ensure scalability, while trying to
minimize the impact on overall performance. In this talk, we will
focus on the other end of the hardware design spectrum. We will describe
another co-processor architecture, designed to achieve much lower
computation times, at the expense of hardware resources.

Thursday, March 5, 2009 at 10h15, room 118 (1st floor of the GN1 building at ENSL). Talk by Richard Leroy (IRMAR, Univ. Rennes 2): Positivity certificates in the multivariate Bernstein basis.

Abstract. In this talk we will study the positivity of real polynomials over a simplex. Being better suited to this context, the Bernstein basis will be used rather than the traditional monomial basis. The geometric properties of the Bernstein basis yield an algorithm that decides whether a given polynomial f is positive and, when it is, produces a positivity certificate, i.e., an expression of f that makes its positivity trivial. Two methods will be presented: degree elevation and subdivision of the simplex. Recent approximation results guarantee the correctness of the algorithm, which is moreover certified. We will also give a bound on its complexity.

Thursday, February 26, 2009 at 13h30, room A2 (4th floor of the GN1 building at ENSL). Talk by Ivan Morel (Arénaire): H-LLL: a vectorial floating-point LLL.

Abstract. Lattice reduction has important applications in various fields, such as cryptology, algorithmic number theory, computer arithmetic, etc. The LLL algorithm reduces an arbitrary basis of a lattice to a 'good' basis in polynomial time. However, the size of the integers manipulated in the algorithm makes its exact version unusable in practice as the dimension or the size of the inputs grows. It is therefore natural to turn to floating-point computation, in particular for the computation of the Gram-Schmidt coefficients, whose cost dominates the total cost of the algorithm. I will present a new floating-point variant of the LLL algorithm, H-LLL, which uses the floating-point Householder algorithm to compute the QR matrix decomposition and thereby obtain the Gram-Schmidt coefficients. I will introduce the stakes of such an approach, as well as its advantages and drawbacks compared with existing techniques.

Thursday, February 19, 2009 at 13h, amphitheater B (3rd floor of the GN1 building). Talk by Micaela Mayero (LIPN, Univ. Paris 13): Of proofs and numbers.

Abstract. Manipulating numbers seems natural to us as soon as they are associated with the notion of computation. Formal proofs raise the additional issue of formalizing these numbers and preserving their properties. We will focus in particular on natural numbers, integers, reals and floating-point numbers in various proof assistants; then, relying on several examples, we will try to present different approaches for handling these numbers.

Thursday, February 5, 2009 at 10h15, LIP council room. Talk by Sylvain Collange (DALI, ELIAUS, Université de Perpignan Via Domitia): GPU arithmetic in 2009.

Abstract. In the space of ten years, graphics processors (GPUs) have seen their arithmetic units evolve progressively from a 5-bit fixed-point format to IEEE-754 double precision, to the point of now 'surpassing' general-purpose processors. Indeed, Nvidia's GTX 280 integrates an FMA unit, handles subnormal floating-point numbers in hardware and allows the rounding mode to be changed instantly, unlike current x86 processors.

The absence of legacy constraints and the specific requirements of graphics workloads have led to choices that often differ from those made for general-purpose CPUs.

We will analyze the approaches followed by the main vendors in their respective hardware implementations, and the consequences for the software development of operators not available in hardware, on an evolving platform where the worst often sits alongside the best.

Thursday, January 29, 2009 at 14h, LIP council room. Talk by Andrew Novocin (ARITH, LIRMM): A brief history of factoring polynomials.

Abstract. In this talk I will outline the algorithms for factoring polynomials in Z[x], with an eye on arriving at my Ph.D. result: a new algorithm that is both the fastest in theory and the fastest in practice.

Friday, January 16, 2009 at 13h30, LIP council room. Talk by Jean-Yves L'Excellent (Graal, LIP, ENS Lyon): Some numerical issues in sparse direct solvers.
in sparse direct solvers.

Abstract. Sparse direct solvers, thanks to their robustness, are methods
of choice to solve large sparse systems of linear equations of the form
A x = b, where A is a large sparse matrix. One of the
main requirements from applications consists in solving larger and
larger problems efficiently (memory usage, performance) but also
accurately. This objective in turn necessitates addressing certain
numerical issues through a careful design and implementation of
numerical functionalities.

In this talk, I will first introduce sparse direct solvers, then focus
on a few numerical issues that we have been confronted with in the MUMPS
project: error analysis, numerical pivoting, iterative refinement,
maximum weighted matching, preselection of pivots, scaling, null pivots,
rank detection, etc. If time permits, I will also discuss how
parallelism is handled in MUMPS. The software can be obtained from
http://graal.ens-lyon.fr/MUMPS and http://mumps.enseeiht.fr.

Friday, December 12, 2008 at 10h15, LIP council room. Talk by
Marcelo Kaihara (École Polytechnique Fédérale de Lausanne): Algorithms
for Modular Arithmetic in Public-key Cryptology.

Abstract. Processing public-key cryptographic systems requires a huge
amount of computation, and there is therefore great demand for dedicated
hardware to speed up the computations. Modular multiplication and
division are the basic operations used intensively in cryptology and
number-theoretic applications. In this talk, we will see fast modular
multiplication and division algorithms over the integers for public-key
cryptology. The talk focuses mainly on algorithms where computations are
performed on huge integers using simple operations such as redundant
additions/subtractions and shifts; in particular, we will see the mixed
radix-4/2 hardware algorithm for modular multiplication/division and the
bipartite modular multiplication method (BMM method). At the end of the
talk, we will see some contributions to RNS Montgomery multiplication, a
different approach where large integers are decomposed into smaller
integers and processed using more sophisticated operations.
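For readers unfamiliar with Montgomery multiplication, which the last part of the talk builds on, the classical (non-RNS) version can be sketched in a few lines of Python. This is purely illustrative: names are ours, and hardware implementations operate digit by digit rather than on full-width integers as done here.

```python
# Minimal sketch of classical (non-RNS) Montgomery multiplication.

def montgomery_mul(a, b, n, r_bits):
    """Return a*b*R^{-1} mod n, where R = 2^r_bits, R > n, and n is odd."""
    r = 1 << r_bits
    n_prime = (-pow(n, -1, r)) % r     # n*n' == -1 (mod R)
    t = a * b
    m = (t * n_prime) % r              # makes t + m*n divisible by R
    u = (t + m * n) >> r_bits          # exact division by R
    return u - n if u >= n else u

# Usage: move operands into Montgomery form (x*R mod n), multiply, move back.
n, r_bits = 97, 8                      # odd modulus, R = 256 > n
R = 1 << r_bits
a, b = 42, 55
a_bar, b_bar = (a * R) % n, (b * R) % n
c_bar = montgomery_mul(a_bar, b_bar, n, r_bits)   # equals a*b*R mod n
c = montgomery_mul(c_bar, 1, n, r_bits)           # leave Montgomery form
assert c == (a * b) % n
```

The point of the method is that the only divisions are by the power of two R, i.e. shifts, which is what makes it attractive in hardware.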

Friday, December 5, 2008 at 10h15, LIP council room. Talk by
Marc Mezzarobba (Algorithms, INRIA Paris-Rocquencourt): Holonomic
sequences and functions: numerical evaluation and automatic computation
of bounds.

Abstract. A family of algorithms due (in their general form) to David
and Gregory Chudnovsky makes it possible to numerically evaluate to high
precision – say, a thousand or ten thousand decimal digits – the
analytic functions that are solutions of linear differential equations
with polynomial coefficients. The starting point is an efficient
algorithm for "unrolling" certain recurrent sequences. I will present
these algorithms, most of which have become classical, along with some
remarks on their complexity and implementation.

I will also describe a procedure for computing bounds on sequences
satisfying linear recurrences with polynomial coefficients. In the
context of the above algorithms, it allows fine control of the number of
Taylor-series terms to take into account in order to guarantee a given
accuracy during numerical analytic continuation. This is work in
progress with my Ph.D. advisor Bruno Salvy.

Friday, November 28, 2008 at 10h15, amphitheater B (3rd floor of the
main ENS building). Talk by Xiaoye Sherry Li (Computational
Research Division, Lawrence Berkeley National Laboratory): High
Precision Software and Application in Numerical Linear Algebra.

Abstract. We have developed a number of portable high-precision
floating-point arithmetic software packages, including double-double
precision, quad-double precision, and arbitrary precision. We will give
an overview of the data structures and the implementation techniques
used in these packages. As an application, we will present the
algorithms, error bounds, and numerical results of extra-precise
iterative refinement for linear systems and linear least squares
problems. The algorithms require only limited use of extra precision and
add only O(n²) work to the O(n³) cost of LU, or O(mn) work to the
O(mn²) cost of QR. Our algorithms reduce the forward normwise and
componentwise errors to O(ε) unless the system is too ill-conditioned.
The extra precision is facilitated in a portable way by the new
extended-precision BLAS standard, and the algorithms will be included in
a future release of (Sca)LAPACK.

Joint work with David Bailey, Jim Demmel, Yozo Hida, Jason Riedy,
Brandon Thompson, and Meghana Vishvanath.
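The idea behind extra-precise iterative refinement can be illustrated on a tiny system. In this sketch, Python's exact `fractions` module stands in for the extended-precision BLAS when computing the residual; everything else (names, the 2×2 Cramer solve) is our own illustrative choice, not the (Sca)LAPACK code.

```python
from fractions import Fraction

# Solve a 2x2 system in double precision, then refine the solution by
# computing the residual r = b - A*x in higher (here: exact) precision.

A = [[Fraction(1, 3), Fraction(2, 3)],
     [Fraction(1, 7), Fraction(4, 7)]]
b = [Fraction(1), Fraction(2)]        # exact solution is x = (-8, 11/2)

def solve2x2_float(A, b):
    """Naive double-precision Cramer solve (the 'working precision' solver)."""
    a, c = float(A[0][0]), float(A[0][1])
    d, e = float(A[1][0]), float(A[1][1])
    det = a * e - c * d
    return [(float(b[0]) * e - c * float(b[1])) / det,
            (a * float(b[1]) - float(b[0]) * d) / det]

x = solve2x2_float(A, b)
for _ in range(3):                    # a few refinement steps
    # residual in extra precision (exact rationals stand in for it here)
    r = [b[i] - (A[i][0] * Fraction(x[0]) + A[i][1] * Fraction(x[1]))
         for i in range(2)]
    dx = solve2x2_float(A, r)         # correction solved in working precision
    x = [x[0] + dx[0], x[1] + dx[1]]
```

Only the residual needs the extra precision, which is why the added cost stays at O(n²) per step for an O(n³) factorization.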

Wednesday, November 26, 2008 at 14h, room 116 (first floor of the main
ENS Lyon building). Talk by Francisco Rodríguez-Henríquez
(Centro de investigación y de Estudios Avanzados del I.P.N): Inverse
Frobenius and Frobenius Operator Computation over GF(p^m).

Abstract. We present some formulations and associated complexity costs
for the computation of the inverse Frobenius and Frobenius operators of
elements A of GF(p^m). We give explicit formulae when the finite field
has been constructed using trinomials and tetranomials, in
characteristic two and three respectively.

This is a joint work with Omran Ahmadi.

Friday, November 21, 2008 at 10h15, LIP council room. Talk by
Guillaume Revy (Arénaire): A new binary floating-point division
algorithm and its implementation in software.

Abstract. We propose a new approach for software implementation of binary
floating-point division. This approach is based on the fast and accurate
evaluation of a particular bivariate polynomial. First, we will
introduce some sufficient conditions on approximation and rounding
errors to ensure correct rounding. Then we will explain how to generate
in an automatic way an efficient evaluation program for this particular
polynomial. We will also show how the resulting program has been
automatically validated using Sollya and Gappa.

Further, we will demonstrate the practical interest of our approach with
an insight into our implementation for the binary32 format (single
precision) and into some experiments done with the ST200 compiler from
STMicroelectronics.

Friday, November 7, 2008 at 10h15, LIP council room. Talk by
Jean-Michel Muller (Arénaire): On the computation of
correctly-rounded sums.

Abstract. The computation of sums appears in many domains of numerical
analysis. We show that, among the set of algorithms with no
comparisons performing only floating-point operations, the 2Sum
algorithm introduced by Knuth is optimal, both in terms of number of
operations and depth of the dependency graph. We also show that, under
reasonable conditions, it is impossible to always obtain the correctly
rounded-to-nearest sum of n ≥ 3 floating-point numbers with an
algorithm without tests performing only round-to-nearest
additions/subtractions. Boldo and Melquiond have proposed an algorithm
to compute the rounded-to-nearest sum of three operands, introducing a
new rounding mode unavailable in current hardware, rounding to odd; but
their simulation of rounding to odd requires tests. We show that
rounding to odd can be realized using only floating-point
additions/subtractions performed in the standard rounding modes and a
multiplication by the constant 0.5, thus allowing the rounded-to-nearest
sum of three floating-point numbers to be determined without tests.
Starting from the algorithm due to Boldo and Melquiond, we also show
that the sum of three floating-point values rounded according to any of
the standard directed rounding modes can be determined using only
additions/subtractions, provided that the operands are of the same sign.
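The 2Sum algorithm mentioned above computes, with six round-to-nearest operations and no tests, both the floating-point sum s = fl(a + b) and its exact rounding error t, so that s + t = a + b exactly. A direct transcription in Python (whose floats are IEEE-754 binary64 with round-to-nearest):

```python
def two_sum(a, b):
    """Knuth's 2Sum: return (s, t) with s = fl(a+b) and s + t = a + b exactly."""
    s = a + b
    a_approx = s - b          # the part of s that came from a
    b_approx = s - a_approx   # the part of s that came from b
    da = a - a_approx         # error on a's contribution
    db = b - b_approx         # error on b's contribution
    return s, da + db

s, t = two_sum(1.0, 2.0**-60)
# s is 1.0 (the rounded sum); t recovers the lost low-order part 2**-60
```

The optimality result in the abstract says that no branch-free floating-point algorithm can do this with fewer operations or a shorter dependency chain.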

Friday, October 17, 2008: Ph.D. defense of Christoph Lauter
(Arénaire) at 14h in the thesis defense room: Correct rounding of
mathematical functions – univariate and bivariate functions,
certification and automation.

Abstract. This thesis widens the search space accessible in practice for
correctly rounded floating-point implementations of mathematical
functions. It first moves from the correct rounding of univariate
functions such as log to families of univariate functions x^n, and then
to the bivariate function x^y. A novel approach is proposed for
detecting rounding-boundary cases of x^y using partial information on
the worst cases of this function. This innovation yields a substantial
speedup.

This thesis then proposes automatic approaches for certifying a
correctly rounded function implementation. For the certification of
approximation error bounds, a new algorithm for computing certified
infinity norms is presented and put into practice. The rounding errors
in a mathematical function implementation are then analyzed using
techniques developed for the formal proof tool Gappa.

Finally, algorithms are developed that allow the automation of function
implementation. A first realization in the Sollya tool makes it possible
to generate and certify, without any human intervention, code for
evaluating a mathematical function. Using such an automatic tool, large
search spaces can be explored in order to optimize an implementation. In
the future, an integration of these techniques into a compiler can be
envisaged.

Friday, October 17, 2008 at 10h15, amphitheater B. Talk by Marius
Cornea (Intel, Portland): Intel(R) MKL Overview.

Abstract. The Intel(R) Math Kernel Library is a collection of highly
optimized and parallelized mathematical functions for scientific,
engineering, and financial applications. It covers several function
domains, mostly in areas where performance is critical and users want to
exploit Intel Architecture resources as much as possible. Linear
algebra, Fourier transforms, vector math functions, and vector
statistical functions are just some of the areas covered. The
presentation will cover functionality and performance aspects of the
library, as well as possible future directions of development.

Friday, October 10, 2008 at 10h15, LIP council room. Talk by
Sylvain Chevillard (Arénaire): Efficient rational approximation for
hardware.

Abstract. To implement a mathematical function in hardware, one usually
goes through a polynomial approximation of the function; it is this
polynomial approximation that is subsequently evaluated. Polynomials are
preferred because they are well-understood objects that can be evaluated
very efficiently. In contrast, approximation by rational fractions is
little used: evaluating a fraction requires a division, an operation
generally too expensive for a rational approximation to be worthwhile.
In the 1970s, M. Ercegovac proposed a method for evaluating certain
rational fractions without division. However, the rational fraction that
minimizes the approximation error to a given function has no reason to
satisfy the constraints imposed by Ercegovac's method, so the method has
seen little practical use.

In this talk, we will revisit the method in question and present an
algorithm for finding very good rational approximations satisfying these
particular constraints.

This is joint work with N. Brisebarre, M. Ercegovac
(UCLA), J.-M. Muller and S. Torres.

Friday, September 26, 2008 at 10h15, LIP council room. Talk by
Laurent Fousse (CASYS, LJK, Univ. J. Fourier, Grenoble): Simultaneous
modular reduction and Kronecker substitution for small finite
fields.

Abstract. I will present algorithms for efficiently performing modular
polynomial multiplications or dot products within a single machine word.
Polynomials are represented by packing several coefficients into a
single integer (by Kronecker substitution), which is then manipulated
with machine arithmetic (integer or floating-point). Under certain
bounds on the degree and the size of the coefficients, which I will make
explicit, it is possible to use machine arithmetic directly and to
reduce the number of conversions. I will also explain how to quickly
extract the coefficient values. These techniques lead to substantial
constant-factor gains for polynomial multiplication, linear algebra over
prime fields, and arithmetic over finite fields of small degree.

This is joint work with Jean-Guillaume Dumas and
Bruno Salvy.
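The Kronecker substitution underlying the talk can be sketched in a few lines: pack each polynomial's coefficients into one big integer by evaluating at x = 2^k, perform a single integer multiplication, and read the product's coefficients back from k-bit slices. The slice width k below is an illustrative choice (it must be large enough that the product's coefficients do not overlap), and modular reduction is omitted for simplicity.

```python
def pack(coeffs, k):
    """Evaluate the polynomial at x = 2**k, packing coefficients into one int."""
    return sum(c << (k * i) for i, c in enumerate(coeffs))

def unpack(n, k, count):
    """Read `count` k-bit slices back out as coefficients."""
    mask = (1 << k) - 1
    return [(n >> (k * i)) & mask for i in range(count)]

# (1 + 2x + 3x^2) * (4 + 5x) = 4 + 13x + 22x^2 + 15x^3
a, b = [1, 2, 3], [4, 5]
k = 16                       # wide enough: product coefficients fit in 16 bits
prod = pack(a, k) * pack(b, k)
coeffs = unpack(prod, k, len(a) + len(b) - 1)
```

One machine (or big-integer) multiplication thus replaces the whole coefficient-by-coefficient schoolbook product, which is the source of the constant-factor gains mentioned above.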

# 2007-2008

Friday, July 11, 2008 at 10h15, LIP council room. Talk by
Hong Diep Nguyen (Arénaire): Solving and certifying the solution of a
linear system.

Abstract. I will present an approach to solve a linear system and
simultaneously certify the computed solution. By certifying, we mean
computing a guaranteed enclosure of the error. To do so, we switch from
floating-point arithmetic to interval arithmetic and solve the linear
system satisfied by the residual. This gives us a guaranteed bound on
the error with respect to the exact result.

We have adapted an iterative refinement method to the computation of the
error bound. The combination of these two basic components, namely the
floating-point solution of a linear system and the iterative refinement
of the error bound using interval arithmetic, produces a more accurate
solution together with an error bound. This error bound furthermore
allows us to estimate the number of correct digits of the approximate
solution.

Our approach is implemented using the MPFR and MPFI libraries. These two
libraries allow us to adapt the precision used at each step. This also
allowed us to study the effect of the precision used for the residual
computation on the quality of the computed result. Experimental results
illustrate the gain, in the quality of both the solution and the error
bound, brought by the precision used in the computations.

Monday, June 30, 2008 at 15h, LIP council room. Talk by Francisco
José Jaime Rodríguez (Universidad de Málaga, Departamento de
Arquitectura de Computadores): SIMD instructions & range
reduction architectures.

Abstract. This talk is divided into two distinct parts.

On the one hand, we will see some new SIMD instructions devised to work
in conjunction with existing multimedia extensions such as MMX or
3DNow!, among others. These new instructions are aided by a specialized
look-up table in order to improve computation performance for multimedia
applications.

On the other hand, we will also see an example of adapting an ASIC
architecture for a range reduction computation into another architecture
intended to be implemented on an FPGA. We will give an overview of the
specific range reduction algorithm we are going to implement, and show
how the ASIC architecture can be turned into an efficient FPGA
implementation by: (a) using hardware components better suited to an
FPGA, and (b) doing a low-level design and placing those components at
the best locations for a particular FPGA.

Friday, June 27, 2008 at 10h15, LIP council room. Talk by Ned
Nedialkov (McMaster University, Canada): An overview on interval
methods for ordinary differential equations.

Abstract. An interval method for initial value problems (IVPs) in ordinary
differential equations (ODEs) has two important advantages over
approximate ODE methods: when it computes a solution to an ODE problem,
it (1) proves that a unique solution exists and (2) produces rigorous
bounds that are guaranteed to contain it. Such bounds can be used to
ensure that this solution satisfies a condition in a safety-critical
calculation, to help prove a theoretical result, or simply to verify the
accuracy and reliability of the results produced by an approximate ODE
solver.

We overview interval methods and software for IVP ODEs, discuss
applications of these methods, and present the author’s interval ODE
solver VNODE-LP.

Computing rigorous bounds in IVPs for differential-algebraic equations
(DAEs) is a substantially more challenging task than in ODEs. A
promising approach is Pryce’s structural analysis combined with Taylor
series methods. We outline this approach and discuss recent
developments.

Friday, June 13, 2008 at 10h15, room B1 (4th floor). Talk by
Frédéric Messine (ENSEEIHT-IRIT): Global Optimization based on
Interval Arithmetic and Affine Arithmetic.

Abstract. Interval analysis is a tool introduced by R.E. Moore in 1966
to control the numerical errors generated by floating-point computations
on computers. Since the 1980s, this tool has been widely used within
branch-and-bound methods to solve continuous global optimization
problems, with or without constraints. In this talk, I will present
various techniques derived from interval arithmetic for computing bounds
of a function over a hypercube; in particular, affine arithmetic and its
extensions will be presented in detail. I will show how these techniques
can be used effectively to solve unconstrained problems. For constrained
problems, I will present the principles of constraint propagation coming
from continuous CSPs (Constraint Satisfaction Problems) and show their
impact on a simple industrial example. Very recently, we have used
affine arithmetic on constrained problems to automatically generate
relaxed linear programs whose solution by CPLEX gives very accurate
bounds on the primal problem under consideration. I will conclude by
presenting the first numerical results of this new linear reformulation
technique.
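As a toy illustration of the interval bounding the talk starts from (not of affine arithmetic itself), here is a naive interval evaluation in Python: replacing each arithmetic operation by its interval counterpart yields a guaranteed, if often pessimistic, enclosure of a function's range over a box. The directed rounding a certified tool would need is deliberately ignored in this sketch.

```python
# Naive interval arithmetic: each value is a pair (lo, hi).
# Outward rounding is omitted, so this is a sketch, not a certified tool.

def iadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def imul(x, y):
    products = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(products), max(products))

def isq(x):
    """Square of an interval: tighter than imul(x, x) when x straddles 0."""
    lo, hi = x
    if lo <= 0.0 <= hi:
        return (0.0, max(lo*lo, hi*hi))
    return (min(lo*lo, hi*hi), max(lo*lo, hi*hi))

# Enclose f(t) = t^2 - 2t over t in [0, 1]; the true range is [-1, 0].
t = (0.0, 1.0)
enclosure = iadd(isq(t), imul((-2.0, -2.0), t))
# enclosure = (-2.0, 1.0): it contains the true range but is wider
```

The overestimation visible here (the "dependency problem") is exactly what affine arithmetic and its extensions, discussed in the talk, are designed to reduce.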

Friday, June 6, 2008 at 10h15, LIP council room. Talk by Ivan
Morel: From one LLL-reduced basis to another.

Abstract. Lattice reduction has important applications in various areas
of both mathematics and computer science: computational algebra,
cryptology, algorithmic number theory, communications, computer
arithmetic, etc. The LLL algorithm reduces an arbitrary basis of a
lattice to a 'good' basis in polynomial time. However, the quality of
the obtained basis depends directly on an auxiliary parameter: delta
(the factor 3/4 of the original algorithm). As presented by LaMacchia in
his thesis, an interesting reduction technique consists in reducing the
basis with a small delta (so as to obtain a reduction as fast as
possible), then increasing the value of delta to obtain the maximal
quality reachable by LLL. I will thus present an algorithm that improves
the quality of an LLL-reduced basis by increasing the value of delta,
whose complexity, with the same overall exponent as that of the LLL
algorithm, is essentially independent of the bit size of the
coordinates.

Thursday, May 15, 2008 at 14h30, LIP council room. Talk by Siegfried
M. Rump (Institute for Reliable Computing, Hamburg University of
Technology): Computer-assisted proofs and self-validating methods.

Abstract. We will discuss methods by which the computer can aid in
mathematical proofs or, more generally, by which undoubtedly true
results can be obtained with the aid of digital computers. In particular
we will discuss methods from computer algebra and so-called
self-validating methods. The latter can be accessed via INTLAB, the
Matlab toolbox for reliable computing. If time permits, some
demonstrations will be given.

Wednesday, April 30, 2008 at 10h15, LIP council room. Talk by
Nicolas Louvet: Compensated algorithms for polynomial
evaluation.

Abstract. How can we improve the accuracy of a result computed in
floating-point arithmetic while maintaining good practical performance?
Compensating the rounding errors committed by a floating-point algorithm
is one of the approaches to this recurring question. In this talk, I
will review the various results we have obtained on the evaluation of
polynomials by compensated algorithms. I will also present a study of
the instruction-level parallelism present in the compensated Horner
scheme, which explains its good practical performance.
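The compensated Horner scheme discussed above can be sketched as follows: each Horner step is performed with error-free transformations (2Sum and Dekker's TwoProd with Veltkamp splitting), and the collected rounding errors are themselves accumulated by a Horner recurrence and added back at the end. This Python transcription assumes binary64 floats and is one standard formulation, not necessarily the exact variant from the talk.

```python
def two_sum(a, b):
    """Error-free sum: returns (s, e) with s + e = a + b exactly."""
    s = a + b
    bp = s - a
    e = (a - (s - bp)) + (b - bp)
    return s, e

def split(a):
    """Veltkamp splitting of a binary64 number into two 26-bit halves."""
    c = 134217729.0 * a        # 2**27 + 1
    high = c - (c - a)
    return high, a - high

def two_prod(a, b):
    """Error-free product: returns (p, e) with p + e = a * b exactly."""
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

def comp_horner(coeffs, x):
    """Compensated Horner scheme; coeffs are [a_n, ..., a_1, a_0]."""
    r = coeffs[0]
    c = 0.0                        # running compensation term
    for a in coeffs[1:]:
        p, pi = two_prod(r, x)     # r*x with its exact error pi
        r, sigma = two_sum(p, a)   # + a_i with its exact error sigma
        c = c * x + (pi + sigma)   # Horner recurrence on the errors
    return r + c
```

The error terms pi and sigma form independent instruction streams alongside the main recurrence, which is the instruction-level parallelism the abstract refers to.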

Friday, April 25, 2008 at 10h15, LIP council room. Talk by
Guillaume Melquiond (Mathematical Components project,
INRIA-Microsoft Research Laboratory): Floating-point arithmetic as a
formal proof tool.

Abstract. In some formal systems, conversion and reflection mechanisms
allow the "fast" evaluation of functions and the use of their values
within proofs. These mechanisms can then advantageously replace the
traditional deductive approach by substituting computations for theorem
applications.

In order to adapt this approach to proofs manipulating real numbers, a
floating-point arithmetic library has been developed for the Coq proof
assistant. It provides the basic arithmetic operators and a few
elementary functions, all in multiple precision and in an arbitrary
radix. The computations are effective and carried out within Coq's
formalism.

An interval arithmetic library has been built on top of it. In
conjunction with differentiation methods, it offers the user Coq tactics
for automatically proving inequalities on real-valued expressions.

Friday, April 18, 2008 at 10h15, LIP council room. Talk by
Nicolas Meloni (Groupe de Recherche en Informatique et Mathématiques,
Université de Toulon): Differential addition chains and point
multiplication on elliptic curves.

Abstract. Since their introduction in the mid-1980s, elliptic curves
have become one of the main tools of modern cryptography. The main
operation (in terms of computation time) of any protocol using elliptic
curves is scalar point multiplication: the computation of k*P = P+…+P,
where k is an integer and P a point on the curve. Many methods have been
proposed to perform this operation as efficiently as possible. They
generally rely on the so-called "double-and-add" algorithm, which
performs series of doublings interleaved with additions, according to
the binary representation of the scalar k.

In this talk we will leave the beaten track and look at point
multiplication methods based on differential addition chains. These have
the particularity of consisting mainly of additions (instead of
doublings), leading to a larger number of computation steps, compensated
by the fact that, under certain conditions, an addition can be made more
efficient than a doubling. We will see that this approach appears very
promising at first sight, but that some problems (notably concerning the
length of the chains) remain to be solved before a concrete use can be
envisaged.

Friday, April 11, 2008 at 10h15, LIP council room. Talk by
David Defour (Laboratoire ELIAUS, DALI team, Université de Perpignan):
Thread management in the G80 and the R600, and polynomial evaluation
in the filtering unit.

Abstract. Graphics processors (GPUs) are composed, within what is called
the graphics pipeline, of several specialized units (interpolator,
filtering unit, …). Exploiting the graphics pipeline involves managing
several thousand concurrent threads intended to take advantage of the
data parallelism naturally present in graphics applications. We will
present how Nvidia's G80 and AMD's R600 architectures manage these
threads in hardware, and the associated problems. Finally, we will see
how to repurpose a unit specialized in texture filtering to evaluate a
degree-3 polynomial in hardware.

Friday, April 4, 2008 at 10h15, LIP council room. Talk by Peter
Kornerup (Dept. of Math. & Comp. Science, SDU/Odense University): Correcting
the Normalization Shift of Redundant Binary Representations.

Abstract. An important problem in the realization of floating point
subtraction is the identification of the position of the first non-zero
digit in a radix represented number, since the significand usually is to
be represented left normalized in the part of the word(s) allocated for
representing its value. There are well-known log-time algorithms for
determining this position for numbers in non-redundant representations,
which may also be applied to suitably (linear-time) transformed
redundant representations. However, due to the redundancy in the latter
case the position thus determined may need a correction by one. When
determination of the shift amount is to be performed in parallel with
conversion to non-redundant representation (the subtraction), it must be
performed on the redundant representation. This is also the case when
the significand is to be retained in a redundant representation until
the final rounding. This talk presents an improved algorithm for
determining the need for a correction of the normalization shift amount,
which can be run in parallel with the algorithm finding the
“approximate” position.

Friday, March 28, 2008 at 10h15, LIP council room. Talk by
Nathalie Revol: Automatic adaptation of the computing precision.

Abstract. When a given computation does not deliver the result with the
desired accuracy, one can perform the computation (either restart it or
continue it) with a higher precision. A question that then arises quite
naturally is how to increase the computing precision so as to minimize
the overhead in computation time. The baseline for the comparison is the
time of the computation performed assuming the optimal required
precision is used.

Kreinovich and Rump have shown that when the computations are restarted
from scratch, the minimal overhead is a factor of 4. In this talk, we
will study the case where the computation can continue using results
obtained at a lower precision. We will first show that an overhead below
4 can be obtained in such a case: for instance, the overhead is 2 for
Newton's algorithm. We will then present an asymptotically optimal
strategy for adapting the computing precision in this case where
previous results can be reused, whose overhead tends to 1 as the optimal
precision (unknown a priori) tends to infinity.
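The Newton example above can be sketched with Python's `decimal` module: since Newton's iteration roughly doubles the number of correct digits per step, the working precision can be doubled at each iteration instead of running every step at full precision, so that the early, inaccurate iterations are also the cheap ones. The precision schedule below is an illustrative choice, not the strategy analyzed in the talk.

```python
from decimal import Decimal, getcontext

def sqrt_newton(a, target_digits):
    """sqrt(a) by Newton's iteration, doubling the working precision each
    step so that early (inaccurate) iterations run cheaply at low precision."""
    prec = 4
    getcontext().prec = prec
    x = Decimal(1)                         # crude initial guess
    tol = Decimal(10) ** -target_digits
    while True:
        prec = min(2 * prec, target_digits + 4)
        getcontext().prec = prec
        x_new = (x + Decimal(a) / x) / 2   # Newton step for x^2 = a
        if prec == target_digits + 4 and abs(x_new - x) < tol:
            return x_new
        x = x_new
```

Because each iteration costs roughly in proportion to its precision, and the precisions form a geometric series dominated by the final one, the total cost stays within a small constant factor of a single full-precision pass, which is the kind of overhead accounting the talk makes precise.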

Wednesday (note: not a
Friday),
March 26, 2008 at 10h15,
LIP council room. Talk by Thomas Plantard (University of
Wollongong, Australia): CVP-based cryptography in the infinity
norm.

Abstract. At Crypto 1997, Goldreich, Goldwasser and Halevi (GGH)
proposed a lattice analogue of the McEliece public-key cryptosystem,
whose security is related to the hardness of approximating the closest
vector problem (CVP) in a lattice. Furthermore, they also described how
to use the same principle as their encryption scheme to provide a
signature scheme. In practice, this cryptosystem uses the Euclidean
norm, which has been used in many algorithms based on lattice theory.
Nonetheless, many drawbacks have been studied, and these could lead to
cryptanalysis of the scheme. We present a novel method for reducing a
vector under the max-norm and propose a digital signature scheme based
on it. Our scheme takes advantage of the max-norm to increase the
resistance of the GGH scheme and to decrease the signature length.

Friday, March 14, 2008 at 10h15, LIP council room. Talk by
Vincent Lefèvre: Approximating a function f by polynomials for the
approximate computation of the successive values f(0), f(1), f(2)…

Abstract. I will present how to efficiently approximate a function f by
polynomials in order to compute the successive approximate values f(0),
f(1), f(2)…, for a very large number of arguments (for instance, 2^40).
For the computation to be fast, the degree of the polynomials must be
small (in practice even equal to 1 for the target application, which is
the search for worst cases for the correct rounding of the function).
But for the total time spent computing the approximations not to be too
large, the approximation intervals must be large. I will show how to
satisfy these two a priori contradictory constraints. I will also show
how to represent the approximate coefficients, describe the arithmetic
used, and give examples obtained in practice.
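One classical way to make successive polynomial evaluations cheap, sketched here only as background (the talk's own scheme involves low-degree approximations and a specific arithmetic), is the forward-difference table: once the differences of a degree-d polynomial are initialized, each next value costs only d additions.

```python
def successive_values(poly_at, degree, count):
    """Yield p(0), p(1), ..., p(count-1) using forward differences:
    after initialization, each new value needs only `degree` additions."""
    # Initialize the difference table from p(0), ..., p(degree).
    table = [poly_at(i) for i in range(degree + 1)]
    for level in range(degree):
        for i in range(degree, level, -1):
            table[i] -= table[i - 1]
    # Now table = [p(0), Δp(0), Δ²p(0), ...]; the last entry is constant.
    for _ in range(count):
        yield table[0]
        for i in range(degree):
            table[i] += table[i + 1]

# p(x) = x^2 + 3x + 1
vals = list(successive_values(lambda x: x*x + 3*x + 1, 2, 5))
# vals == [1, 5, 11, 19, 29]
```

For degree-1 approximations, as in the application mentioned above, this reduces to a single addition per evaluated point.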

Friday, March 7, 2008 at 10h15, room 117 (note: not the council
room). Talk by Florent de Dinechin: Around multiplication by a
constant, and a presentation of the FloPoCo project.

Abstract. Our work on FPLibrary showed the limits of the VHDL language
for describing complex hardware operators, in particular elementary
functions. For a few years now, we have been writing VHDL generators.
FloPoCo (for Floating-Point Cores) is an attempt to bring back into a
single software framework all the generator prototypes we had lying
around: HOTBM, the exp and log generators based on iterative argument
reduction, etc. Along the way, new operators will be added. I will take
the example of multiplication by a constant, whether integer,
floating-point or even real.
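Multiplication by an integer constant is classically turned into a sequence of shifts and additions/subtractions; a standard starting point is the canonical signed-digit (CSD) recoding of the constant, sketched below. Hardware generators such as FloPoCo go well beyond this simple recoding (e.g. by sharing subexpressions between terms), so this is illustrative background only.

```python
def csd(n):
    """Canonical signed-digit recoding of n >= 0: a list of digits in
    {-1, 0, 1}, least significant first, with no two adjacent nonzeros."""
    digits = []
    while n != 0:
        if n % 2 == 0:
            digits.append(0)
            n //= 2
        else:
            d = 2 - (n % 4)        # 1 if n ≡ 1 (mod 4), else -1
            digits.append(d)
            n = (n - d) // 2
    return digits

def mul_by_constant(x, c):
    """Multiply x by the constant c using only shifts and add/subtract."""
    acc = 0
    for shift, d in enumerate(csd(c)):
        if d == 1:
            acc += x << shift
        elif d == -1:
            acc -= x << shift
    return acc

# 15 = 16 - 1, so its CSD form is [-1, 0, 0, 0, 1]: two terms instead of four.
assert mul_by_constant(7, 15) == 105
```

CSD guarantees the minimal number of nonzero digits for a given constant, i.e. the minimal number of adders in the naive shift-add architecture.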

Friday, February 29, 2008 at 10h15, LIP council room. Talk by
Olivier Bouissou (LIX, CEA Saclay): Using guaranteed integration for
the verification of hybrid systems.

Abstract. The ever-growing presence of computer programs (i.e. discrete
systems) in embedded systems (i.e. systems communicating with a
continuous environment) has made the study and analysis of hybrid
systems necessary. In this talk, we will explain how classical static
analysis of programs can be extended to hybrid systems by taking into
account their interactions with the continuous environment.

To analyze the continuous part, one must compute guaranteed bounds on
the solutions of differential equations. The theory of guaranteed
integration provides the necessary tools. We will present the general
principles underlying this theory, then describe in detail the GRKLib
method, a new algorithm based on the 4th-order Runge-Kutta method. We
will focus in particular on the techniques used to prove the correctness
of the method in the presence of floating-point numbers. Finally, we
will compare GRKLib with VNODE, the most widely used guaranteed
integration tool.

Friday, February 15, 2008 at 10h15, LIP council room. Talk by
Claude-Pierre Jeannerod: Parallel algorithms for the accurate
evaluation of some algebraic functions.

Abstract. In this talk, I will present a relatively uniform approach to
the correctly rounded evaluation of basic algebraic functions:
inversion, division, k-th roots and their inverses. This approach, which
relies on the evaluation of particular bivariate polynomials, exposes
more parallelism than the more classical approaches based on iterations
or univariate polynomials. In practice, we will see that it can lead to
polynomial evaluation algorithms that are both efficient and accurate
enough to guarantee the correct rounding of the functions in question.

Friday, February 8, 2008 at 10h15, LIP council room. Talk by
Nicolas Julien (Marelle, INRIA Sophia): Certified exact real
arithmetic.

Abstract. Co-induction in Coq provides a comfortable framework for
describing certified algorithms for exact real arithmetic. The library
we describe draws on previous experiments using infinite sequences of
redundant digits in radix 2, and generalizes them to arbitrary integer
radices. The goal is to be able to use the fast operations on native
integers and thus to provide efficient computations on real numbers
inside the Coq system.

We will describe the representation used as well as some algorithms, in
par