The calendar of these seminars can be subscribed to by clicking here.

Next seminars

The seminar is over for 2018-2019. We will resume in September.

Théo Mary

October 3, 2019 at 10h15, room M7-315 (3rd floor, Monod)


Miruna Rosca (AriC)

July 4, 2019 at 10h15, room M7-315 (3rd floor, Monod)

On the Middle-Product Learning With Errors Problem and its applications in cryptography

In this talk, we introduce MP-LWE, a new variant of the Learning With Errors problem (LWE) that makes use of the middle product of polynomials modulo an integer q. We exhibit a reduction from the Polynomial-LWE problem (PLWE), parametrized by a polynomial f, to MP-LWE, which works for a large family of polynomials, and we analyze the applications of MP-LWE in cryptography.
This is joint work with A. Sakzad, D. Stehlé and R. Steinfeld.
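The middle product at the heart of MP-LWE keeps only a central slice of the coefficients of an ordinary polynomial product. A minimal sketch, under one common indexing convention (the function names are ours, for illustration only; in MP-LWE samples the second operand would be the secret and noise would be added):

```python
def poly_mul(a, b, q):
    """Full product of two polynomials mod q (coefficient lists, low degree first)."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % q
    return res

def middle_product(a, b, d, k, q):
    """Keep only coefficients d .. d+k-1 of a*b mod q: the 'middle' slice."""
    full = poly_mul(a, b, q)
    return [full[i] if i < len(full) else 0 for i in range(d, d + k)]
```
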

François Morain (LIX, École polytechnique)

June 13, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Continued fractions with fast algorithms

Continued fractions are a very important tool for the approximation of real numbers, with numerous applications in algorithmic number theory as well as in cryptanalysis. One important application is Cornacchia’s algorithm, which elegantly solves the problem of representing primes in the form \(p = x^2 + d y^2\) with \(d > 0\). This talk will present the use of fast integer gcd algorithms to obtain a fast version of Cornacchia’s algorithm.
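As a concrete illustration of the algorithm being accelerated, here is a plain-Python sketch of Cornacchia’s method (our own code; the modular square root is brute force for simplicity, and the talk’s contribution is precisely to replace the Euclidean loop below with fast gcd techniques):

```python
import math

def sqrt_mod(a, p):
    """Square root of a mod prime p, by brute force (illustration only)."""
    a %= p
    for t in range(p):
        if t * t % p == a:
            return t
    return None

def cornacchia(d, p):
    """Solve x^2 + d*y^2 = p for a prime p and 0 < d < p, if a solution exists."""
    t = sqrt_mod(-d % p, p)
    if t is None:
        return None
    if t < p // 2:              # choose the square root lying in (p/2, p)
        t = p - t
    r0, r1 = p, t
    bound = math.isqrt(p)
    while r1 > bound:           # truncated Euclidean algorithm
        r0, r1 = r1, r0 % r1
    x = r1
    s2, rem = divmod(p - x * x, d)
    if rem:
        return None
    y = math.isqrt(s2)
    return (x, y) if y * y == s2 else None
```

For instance, `cornacchia(1, 13)` recovers \(13 = 3^2 + 2^2\).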

Vincent Neiger (XLIM, Université de Limoges)

May 2, 2019 at 10h15, room M7-315 (3rd floor, Monod)

On the complexity of modular composition of generic polynomials

This talk is about algorithms for modular composition of univariate polynomials, and for computing minimal polynomials. For two univariate polynomials \(a\) and \(g\) over a commutative field, modular composition asks to compute \(h(a) \bmod g\) for some given \(h\), while the minimal polynomial problem is to compute \(h\) of minimal degree such that \( h(a) = 0 \bmod g\). For generic \(g\) and \(a\), we propose algorithms whose complexity bound improves upon previous algorithms and in particular upon Brent and Kung’s approach (1978); the new complexity bound is subquadratic in the degree of \(g\) and \(a\) even when using cubic-time matrix multiplication. Our improvement comes from the fast computation of specific bases of bivariate ideals, and from efficient operations with these bases thanks to fast univariate polynomial matrix algorithms. We will also report on software development and comment on implementation results for the main building blocks in our composition algorithm.
Contains joint work with Seung Gyu Hyun, Bruno Salvy, Eric Schost, Gilles Villard.
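For reference, the naive baseline for modular composition is Horner’s rule, costing deg(h) full multiplications modulo g; Brent and Kung’s method and the algorithms of the talk improve substantially on this. A minimal sketch of the baseline (our own illustrative code, assuming a monic modulus):

```python
def poly_mul(a, b):
    """Product of two polynomials (coefficient lists, low degree first)."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] += ai * bj
    return res

def poly_mod(a, g):
    """Remainder of a modulo a *monic* polynomial g."""
    a = a[:]
    dg = len(g) - 1
    while len(a) - 1 >= dg:
        if a[-1] == 0:
            a.pop()
            continue
        c, shift = a[-1], len(a) - 1 - dg
        for i, gi in enumerate(g):
            a[shift + i] -= c * gi
        a.pop()
    return a or [0]

def modular_composition(h, a, g):
    """h(a) mod g by Horner's rule: the quadratic baseline improved on by
    Brent-Kung-type algorithms."""
    res = [h[-1]]
    for c in reversed(h[:-1]):
        res = poly_mod(poly_mul(res, a), g)
        res[0] += c
    return poly_mod(res, g)
```

For example, composing \(h = x^2\) with \(a = x+1\) modulo \(g = x^2+1\) gives \((x+1)^2 \bmod (x^2+1) = 2x\).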

Bruno Grenet (ECO, LIRMM)

April 11, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Polynomial multiplication without extra memory

The problem of polynomial multiplication has been extensively studied since the 1960s, and various algorithms have been proposed to achieve the best possible time complexities.
More recently, some of these algorithms have been studied from the point of view of their space complexity, and modified so as to use no extra space beyond the inputs and outputs, while keeping the same asymptotic time complexity.
In this work, we extend these results in two directions. First, we ask whether any polynomial multiplication algorithm generically admits an “in-place” variant, that is, one using no extra space. Second, we consider two important variants of this problem that produce only part of the result, the so-called short and middle products, and we ask whether these operations can also be performed in place.
To answer both questions (essentially) in the affirmative, we propose a series of reductions whose starting point is any multiplication algorithm with linear space complexity. For the full and short products, these reductions yield in-place variants of the algorithms with the same asymptotic time complexity. For the middle product, the reduction incurs an extra logarithmic factor in the time complexity when the latter is quasi-linear.
Joint work with Pascal Giorgi and Daniel Roche.
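For readers unfamiliar with the terminology, the short product is the low half of the full product, i.e. the product truncated modulo \(x^n\). A naive out-of-place sketch of the operation (ours, for illustration; the talk is about computing such products *in place*):

```python
def short_product(a, b):
    """Short product of two length-n polynomials: the n low-order coefficients
    of a*b, i.e. a*b mod x^n (coefficient lists, low degree first)."""
    n = len(a)
    return [sum(a[j] * b[i - j] for j in range(i + 1)) for i in range(n)]
```
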

Damien Stehlé (AriC)

April 4, 2019 at 10h15, room M7-315 (3rd floor, Monod)

A survey on security foundations of fast lattice-based cryptography

The Learning With Errors problem (LWE) captures the asymptotic hardness of some standard lattice problems, and enables the design of cryptographic schemes. However, these LWE-based schemes are relatively inefficient.
To address this issue, algebraic variants of LWE have been introduced, such as Polynomial-LWE, Ring-LWE, Module-LWE and MiddleProduct-LWE, whose definitions involve polynomial rings and number fields.
In this talk, I will survey the state of the art on these problems.

Jean-Michel Muller (AriC)

March 7, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Error analysis of some operations involved in the Fast Fourier Transform

We are interested in obtaining error bounds for the classical FFT algorithm in floating-point arithmetic, for the 2-norm as well as for the infinity norm. For that purpose we also give some results on the relative error of the complex multiplication by a root of unity, and on the largest value that the real or imaginary part of one term of the FFT of a vector x can take, assuming that all terms of x have real and imaginary parts less than some value b.
This is a joint work with N. Brisebarre, M. Joldes, A.-M. Nanes and J. Picot.

Assia Mahboubi (Gallinette, INRIA Rennes – Bretagne Atlantique, LS2N)

February 14, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Formally Verified Approximations of Definite Integrals

In this talk, we discuss the problem of computing values of one-dimensional definite integrals with the highest possible guarantee of correctness. Finding an elementary form for an antiderivative is often not an option, and numerical integration is a common tool when it comes to making sense of a definite integral. Some numerical integration methods can even be made rigorous: not only do they compute an approximation of the integral value, but they also bound its inaccuracy. These rigorous algorithms are implemented in software like INTLAB, VNODE-LP, Arb, etc. But the highest possible guarantee of correctness on such approximations, even those obtained by rigorous means, would in fact be provided by a formal proof, machine-checked using a proof assistant. Proof assistants are pieces of software for representing mathematical definitions, statements, algorithms and proofs in a digital form, suitable for computer processing. In particular, they can be used to devise formal-proof-producing implementations of programs. But numerical integration is still missing from the toolbox when performing formal proofs. We thus describe and discuss a method for automatically computing and proving bounds on some definite integrals, implemented inside the Coq proof assistant.
This is a joint work with Guillaume Melquiond and Thomas Sibut-Pinote.

Ida Tucker (AriC)

February 7, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Practical fully secure unrestricted inner product functional encryption modulo a prime p

Functional encryption (FE) is an advanced cryptographic primitive which allows, for a single encrypted message, fine-grained control over how much information on the encrypted data each receiver can recover. To this end, many functional secret keys are derived from a master secret key. Each functional secret key allows recovering, from a ciphertext encrypted under the associated public key, a specific function of the underlying plaintext.
However, constructions for general FE are far from practical, or rely on non-standard and ill-understood cryptographic assumptions.
In this talk I will focus on the construction of efficient FE schemes for linear functions (i.e. the inner product functionality), and on the framework in which our constructions hold. Such schemes yield many practical applications, and our constructions are the first FE schemes for inner products modulo a prime that are both efficient and able to recover the result whatever its size. I will also describe an instantiation of the framework using class groups of imaginary quadratic fields.
This is a joint work with Guilhem Castagnos and Fabien Laguillaumie.

Éric Goubault (LIX, École Polytechnique)

January 24, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Finding Positive Invariants of Polynomial Dynamical Systems – some experiments

Synthesizing positive invariants of non-linear ODEs, switched systems or even hybrid systems is a hard problem with many applications, from control to verification. In this talk, I will present two “exercices de style” for dealing with it, revisiting the classical Lyapunov function approach. The first is based on algebraic properties of polynomial differential systems (Darboux polynomials, when they exist), for finding polynomial, rational or even some log extensions of rational functions whose level sets or sub-level sets describe positive invariants of these systems, or provide interesting “changes of basis” for describing their solutions. The second is based on topological properties (mostly the Wazewski property) which ensure the existence, in some region of the state space, of a non-empty maximal invariant set. The interest is that there is then, in general, no need to find complicated functions precisely describing the invariant set itself; instead, we use simple template shapes in which a possibly very complicated invariant set lies. The topological criterion can be ensured by suitable SoS relaxations, for polynomial differential systems, which can be implemented using LMI solvers.

Alain Passelègue (AriC)

January 17, 2019 at 10h15, room M7-315 (3rd floor, Monod)

New candidate pseudorandom functions and their applications

In this talk, I will present new and simple candidate pseudorandom functions (PRFs) introduced in a recent work. In this work, we depart from the traditional approaches for building PRFs used in provable security or in applied cryptography by exploring a new space of plausible PRF candidates. Our guiding principle is to maximize simplicity while optimizing complexity measures that are relevant to advanced cryptographic applications. Our primary focus is on weak PRFs computable by very simple circuits (depth-2 ACC circuits).
The advantage of our approach is twofold. On the theoretical side, the simplicity of our candidates enables us to draw many natural connections between their hardness and questions in complexity theory or learning theory. On the applied side, the piecewise-linear structure of our candidates lends itself nicely to applications in secure multiparty computation (MPC). In particular, we construct protocols for distributed PRF evaluation that achieve better round complexity and/or communication complexity compared to protocols obtained by combining standard MPC protocols with practical PRFs (including MPC-friendly ones).
Finally, we introduce a new primitive we call an encoded-input PRF, which can be viewed as an interpolation between weak PRFs and standard (strong) PRFs. As we demonstrate, an encoded-input PRF can often be used as a drop-in replacement for a strong PRF, combining the efficiency benefits of weak PRFs and the security benefits of strong PRFs. We give a candidate EI-PRF based on our main weak PRF candidate.

Joint work with Dan Boneh, Yuval Ishai, Amit Sahai, and David J. Wu, published at TCC 2018

Chee Yap (New York University)

January 9, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Subdivision Path Planning in Robotics: Theory and Practice

Motion planning is a fundamental problem in robotics. We propose to design path planners based on three foundations:
(1) The notion of “resolution-exact” planners. Conceptually, it avoids the zero problem of exact computation.
(2) The use of “soft predicates” for achieving such algorithms in the subdivision approach.
(3) The “feature-based technique” for constructing such soft predicates.
We formulate an algorithmic framework called “Soft Subdivision Search” (SSS) that incorporates these ideas. There are many parallels between our framework and the well-known Sampling or Probabilistic Roadmap framework. Both frameworks lead to algorithms that are
* practical
* easy to implement
* flexible and extensible
* with adaptive and local complexity
In contrast to sampling and previous resolution approaches, SSS confers strong theoretical guarantees, including halting.

In a series of papers we demonstrated the power of these ideas by producing planners for planar robots with 2, 3 and 4 degrees of freedom (DOF) that outperform or match state-of-the-art sampling-based planners. Most recently, we produced a planner for two spatial robots (rod and ring) with 5 DOFs. Non-heuristic planners for such robots have been considered a challenge for the subdivision approach. We outline a general axiomatic theory underlying these results, including subdivision in non-Euclidean configuration spaces.

Joint work with Y.J. Chiang, C.H. Hsu, C. Wang, Z. Luo, B. Zhou, J.P. Ryan.

Elena Kirshanova (AriC)

December 13, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Practical sieving algorithms for the Shortest Vector Problem

In this talk I present recent results on sieving algorithms for the Shortest Vector Problem. First, I explain why this problem is important and how sieving algorithms work. Then, I present recent advances in memory-efficient versions of sieving algorithms, and explain locality-sensitive techniques for these types of algorithms. This part of the talk is based on joint works with Gottfried Herold and Thijs Laarhoven. Finally, I present recent advances in practical aspects of sieving algorithms for SVP. I describe the technical challenges that arise when one tries to make sieving algorithms practical, and how some of them can be overcome. This part of the talk is ongoing work with Martin R. Albrecht, Leo Ducas, Gottfried Herold, Eamonn W. Postlethwaite and Marc Stevens.
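In dimension 2 the Shortest Vector Problem can be solved exactly by Lagrange-Gauss reduction; sieving targets the much harder high-dimensional case, where such exact reduction is unavailable. A toy illustration of the dimension-2 case (our own code, not from the talk):

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a 2D lattice basis (u, v): the returned first
    vector is a shortest nonzero vector of the lattice spanned by u and v."""
    norm = lambda w: w[0] * w[0] + w[1] * w[1]
    if norm(u) > norm(v):
        u, v = v, u
    while True:
        # subtract the nearest-integer multiple of the shorter vector
        m = round((u[0] * v[0] + u[1] * v[1]) / norm(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm(v) >= norm(u):
            return u, v
        u, v = v, u
```
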

Nicolas Brunie (Kalray)

December 6, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Overview of arithmetic at Kalray: metalibm and the rest

Kalray’s version of Metalibm, “lugdunum”, has recently been open sourced. It is an interesting tool for developing elementary functions. In this presentation we will present the tool and show how it can be used to explore the design space of a few elementary functions. Then we will present in more detail how Metalibm is used at Kalray to develop both the next-generation hardware and the mathematical libraries, through the examples of CRC reduction and OpenCL-C (kernel code) elementary functions. Finally, we will survey arithmetic at Kalray outside Metalibm through a description of the next-generation processor and what is envisioned for the future.

Sylvie Putot (LIX, École Polytechnique)

November 29, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Forward Inner-Approximated Reachability of Non-Linear Continuous Systems

We propose an approach for computing inner-approximations of reachable sets of dynamical systems defined by non-linear, uncertain, ordinary differential equations. This is a notoriously difficult problem, much more intricate than outer-approximations, for which there exist well known solutions, mostly based on Taylor models.  Our solution builds on rather inexpensive set-based methods, namely a generalized mean-value theorem combined with Taylor models outer-approximations of the flow and its Jacobian with respect to the uncertain inputs and parameters. The combination of such forward inner and outer Taylor-model based approximations can be used as a basis for the verification and falsification of properties of cyber-physical systems.

Radu Titiu (AriC and BitDefender)

November 22, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Adaptively secure PRFs from LWE

In distributed pseudorandom functions (DPRFs), a PRF secret key SK is secret shared among N servers so that each server can locally compute a partial evaluation of the PRF on some input X. A combiner that collects t partial evaluations can then reconstruct the evaluation F(SK, X) of the PRF under the initial secret key. So far, all non-interactive constructions in the standard model are based on lattice assumptions. One caveat is that they are only known to be secure in the static corruption setting, where the adversary chooses the servers to corrupt at the very beginning of the game, before any evaluation query. In this work, we construct the first fully non-interactive adaptively secure DPRF in the standard model. Our construction is proved secure under the LWE assumption against adversaries that may adaptively decide which servers they want to corrupt. We also extend our construction in order to achieve robustness against malicious adversaries.

This is joint work with Benoit Libert and Damien Stehlé.

Martin Kumm (Uni. Kassel, Germany)

November 8, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Exact Computation of Monotonic Functions with Large Input Word Sizes using Look-Up Tables

The exact evaluation of arbitrary functions by using look-up tables (LUTs) is typically limited to small input word sizes. This is due to the fact that the storage requirements grow exponentially with the input word size \(N\) and linearly with the output word size \(M\), i.e., \(O(2^N M)\). However, many applications require the computation of elementary functions with a large precision of the input argument but a lower precision of the result. One example is direct digital frequency synthesis (DDFS), with typically \(N=32..48\) bit and \(M=8..12\) bit. Another example is tone mapping for high-dynamic range (HDR) imaging, with typically \(N=16..19\) bit and \(M=8\) bit. In this talk, alternative architectures for the evaluation of monotonic functions using LUTs are discussed whose memory requirements scale linearly with the input word size and exponentially with the output word size, i.e., \(O(2^M N)\). This is achieved by using \(M\) additional comparators or adders. First experimental results from FPGA synthesis show that this also translates into resource reductions for those applications where \(N\) is just larger than \(M\).
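The idea behind such architectures can be sketched in software as follows: for a monotonically increasing quantized function, store for each of the \(2^M\) output codes the smallest input reaching it, and evaluate by comparing the input against the stored thresholds. (A sketch of ours under that assumption; real hardware would use parallel comparators or adders rather than a loop.)

```python
def build_threshold_table(f, n_bits, m_bits):
    """For a monotonically increasing quantized f: [0, 2^N) -> [0, 2^M), store
    for each output code m the smallest input x with f(x) >= m.
    This takes 2^M entries of N bits instead of 2^N entries of M bits."""
    table, x = [], 0
    for m in range(1 << m_bits):
        while x < (1 << n_bits) and f(x) < m:
            x += 1
        table.append(x)
    return table

def evaluate(table, x):
    """Recover f(x) as (number of thresholds <= x) - 1: one comparison per
    table entry, all of which hardware can perform in parallel."""
    return sum(1 for t in table if t <= x) - 1
```
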

Silviu Filip (Inria)

October 25, 2018 at 10h15, room M7-315 (3rd floor, Monod)

A High Throughput Polynomial and Rational Function Approximations Evaluator

We present an automatic method for the evaluation of functions via polynomial or rational approximations, and its hardware implementation on FPGAs. These approximations are evaluated using Ercegovac’s iterative E-method adapted for FPGA implementation. The polynomial and rational function coefficients are optimized so that they satisfy the constraints of the E-method. We present several examples of practical interest; in each case a resource-efficient approximation is proposed and comparisons are made with alternative approaches.

Florent Bréhard (AriC)

October 18, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Rigorous Numerics for Function Space Problems and Applications to Aerospace

A wide range of numerical routines exist for solving function space problems (like ODEs, PDEs, optimization, etc.). But in most cases, one lacks information about the reliability of the results, e.g., how many of the returned digits are correct. While most applications focus on efficiency, some safety-critical tasks, as well as computer assisted mathematics, need rigorous mathematical statements about the computed result such as automatic tight error bounds.

A relevant practical example occurs in the spacecraft rendezvous problem, which consists in determining the optimal command law for a spacecraft equipped with thrusters to be transferred from its original orbit to a target orbit within a given time interval. Computing rigorous trajectories is of high interest to guarantee a posteriori the correctness of the numerical command law returned by the optimization algorithm used to solve this problem.

In this talk we discuss a rigorous framework called Chebyshev models to provide validated enclosures of real-valued functions defined over a compact interval. After having presented the basic arithmetic operations on them, we focus on an algorithm that computes validated solutions of linear ordinary differential equations, specifically, approximate truncated Chebyshev series together with a rigorous uniform error bound. The method relies on an a posteriori validation based on a Newton-like fixed-point operator, which also exploits the almost-banded structure of the problem in an efficient way. We provide an open-source C implementation.

Finally, certified enclosures of spacecraft trajectories arising in the rendezvous problem will be computed using the tools introduced during the talk.


Nicolas Louvet (AriC)

July 12, 2018 at 10h15, room M7-315 (3rd floor, Monod)

A compensated dot product for vectors of floating-point expansions

Some computations require more precision than the 53 bits provided by the IEEE 754 double-precision arithmetic (binary64) widely available in hardware. When an arbitrarily large precision is required, multi-precision floating-point libraries such as MPFR are particularly useful. However, MPFR is implemented using only integer arithmetic, which is not as heavily optimized as floating-point arithmetic on modern processors. If only a few hundred bits are needed, faster alternatives using only floating-point arithmetic exist, such as the well-known qd library developed by Bailey and his co-authors. This library provides both double-double and quad-double arithmetics, which roughly simulate twice (106 bits) and four times (212 bits) the IEEE 754 double precision, respectively. Double-double and quad-double numbers are a special case of floating-point expansions, which represent a number as an unevaluated sum of floating-point numbers. Another alternative is the use of error-free transformations and compensation techniques to simulate an increased working precision, and this is the approach we take in this work.

In this talk, we focus on the computation of dot products between vectors of floating point expansions. We present alternatives to the algorithms obtained using directly double-double and quad-double arithmetics from the qd library: our aim is to trade off speed against a moderate loss of accuracy. Of course, we always keep the loss of accuracy under control by providing forward error bounds for our algorithms. The algorithm we present provides a speedup of 2 compared to the qd library, while maintaining roughly the same accuracy. This speedup is obtained using only basic programming techniques, and even higher speedups can be obtained by using vector instruction sets (AVX2 or AVX512 in particular) available on modern computers.
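The error-free transformations mentioned above (TwoSum, and TwoProd via Veltkamp/Dekker splitting) combine into the compensated dot product of Ogita, Rump and Oishi, in the spirit of the approach used in this work. A sketch in binary64 (our own illustrative code, not the talk's expansion algorithms):

```python
def two_sum(a, b):
    """Error-free transformation: a + b = s + e exactly (Knuth's TwoSum)."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def split(a):
    """Veltkamp's splitting of a binary64 number into two 26-bit halves."""
    c = 134217729.0 * a            # multiplier is 2**27 + 1
    hi = c - (c - a)
    return hi, a - hi

def two_prod(a, b):
    """Error-free transformation: a * b = p + e exactly (Dekker's TwoProd)."""
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

def dot2(x, y):
    """Compensated dot product (Ogita, Rump & Oishi): about as accurate as a
    classical dot product computed in twice the working precision."""
    p, s = two_prod(x[0], y[0])
    for xi, yi in zip(x[1:], y[1:]):
        h, r = two_prod(xi, yi)
        p, q = two_sum(p, h)
        s += q + r
    return p + s
```

On the ill-conditioned example below, the naive dot product returns 0.0 while the compensated one recovers the exact value 1.0.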

Nicolas Brisebarre (AriC)

June 14, 2018 at 10h15, room M7-315 (3rd floor, Monod)

New results on the table maker’s dilemma for transcendental functions

On a computer, real numbers are usually represented by a finite set of numbers called floating-point numbers. When one performs an operation on these numbers, such as an evaluation by a function, one returns a floating-point number, hopefully close to the mathematical result of the operation. Ideally, the returned result should be the exact rounding of this mathematical value. If we’re only allowed a unique and fast evaluation (a constraint often met in practice), one knows how to guarantee such a quality of results for arithmetic operations like \(+, -, \times, /\) and square root but, as of today, it is still an issue when it comes to evaluating an elementary function such as cos, exp or cube root, for instance. This problem, called the Table Maker’s Dilemma, is actually a diophantine approximation problem. It was tackled, over the last fifteen years, by V. Lefèvre, J.-M. Muller, D. Stehlé, A. Tisserand and P. Zimmermann (LIP, ÉNS Lyon and LORIA, Nancy), using tools from algorithmic number theory. Their work made it possible to partially solve this question, but it remains an open problem. In this talk, I will present an ongoing joint work with Guillaume Hanrot (ÉNS Lyon, LIP, AriC) that improves on part of the existing results.

Alice Pellet-Mary (AriC)

June 7, 2018 at 10h30, room M7-315 (3rd floor, Monod)

Approx-SVP in Ideal Lattices with Pre-processing

Finding a short nonzero vector in a Euclidean lattice is a well-studied problem which has proven useful to construct many cryptographic primitives. The current best asymptotic algorithm to find a relatively short vector in an arbitrary lattice is the BKZ algorithm, which recovers a vector at most \(2^{n^{\alpha}}\) times larger than the shortest nonzero vector in time \(2^{n^{1-\alpha}}\), for any \(\alpha\) between 0 and 1.

In order to gain efficiency, it is sometimes interesting to use structured lattices instead of general lattices. An example of such structured lattices is given by principal ideal lattices. One may then wonder whether, on the security front, it is easier to find short vectors in a structured lattice or not. Until 2016, there was no known algorithm (either classical or quantum) that would perform better on principal ideal lattices than the BKZ algorithm. In 2016, Cramer et al. proposed a quantum algorithm that finds a \(2^{\sqrt n}\) approximation of the shortest nonzero vector in polynomial time. However, the BKZ algorithm remained the best algorithm in the classical setting, and for approximation factors smaller than \(2^{\sqrt n}\) in the quantum setting.

In this talk, I will present an algorithm that generalizes the one of Cramer et al. and improves upon the BKZ algorithm for principal ideal lattices, both quantumly and classically. This algorithm is heuristic and non-uniform (equivalently, it needs an exponential pre-processing time).

Laurent Grémy (AriC)

May 24, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Assessing the security of discrete-logarithm-based cryptography

The security of most public-key cryptography relies on the difficulty of mathematical problems, among which the discrete logarithm problem in finite fields. The number field sieve (NFS) is the most efficient known algorithm for this problem over a large range of finite fields. Since its introduction in the 1990s, NFS has evolved, both theoretically and practically. These improvements force cryptographers to reassess the key sizes of cryptosystems.

In this talk, we will first introduce the key notions of index-calculus algorithms, the family of algorithms to which NFS belongs. Then, we will describe recent progress related to NFS. Finally, we will conclude by giving an idea of how the security of pairing-based cryptography is assessed, based on the works of Menezes-Sarkar-Singh and Barbulescu-Duquesne.

Dan Roche (US Naval Academy and LJK, Grenoble)

May 17, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Matrix multiplication and inverse with error correction

In some situations the result of a linear algebra computation may be known to be almost correct – that is, correct except in a few positions of the output matrix. One example is in distributed computing, where some nodes in a cluster may return incorrect results; another is in integer computations where only a few entries may be large enough to require an additional prime’s worth of Chinese remaindering. We present new algorithms to use the almost-correct result of a matrix product or inverse to recover the completely-correct result, whose running time is softly-linear in the size of the matrix when the number of errors is sufficiently small. Our algorithms build on the prior work by Gasieniec et al (Algorithmica 2017) for error-correcting matrix multiplication, and we are now able to effectively handle sparsity in the inputs and outputs, as well as incorporate fast matrix multiplication so that the cost scales well even with a large number of errors. The key underlying tools come from sparse polynomial interpolation and sparse low-rank linear algebra.
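A toy version of the underlying idea: a single random matrix-vector check (a Freivalds-style residual) reveals which rows of an almost-correct product contain errors, and only those rows need recomputation. This dense, quadratic sketch is ours; the talk's algorithms instead use sparse polynomial interpolation to reach softly-linear cost.

```python
import random

def correct_product(A, B, C):
    """Given integer matrices A, B and an almost-correct C ~ A*B, locate the
    erroneous rows of C via one random vector check, then recompute only them."""
    n = len(A)
    v = [random.randrange(1, 1 << 30) for _ in range(n)]
    Bv = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
    for i in range(n):
        lhs = sum(A[i][k] * Bv[k] for k in range(n))   # (A*B*v)[i]
        rhs = sum(C[i][j] * v[j] for j in range(n))    # (C*v)[i]
        if lhs != rhs:  # row i contains at least one error (w.h.p.)
            C[i] = [sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
    return C
```
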

Eva Darulova (MPI, Saarbrücken)

May 3, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Daisy – a framework for sound accuracy analysis and optimization of numerical programs

Floating-point or fixed-point computations are an integral part of many embedded and scientific computing applications, as are the roundoff errors they introduce. They expose an interesting tradeoff between efficiency and accuracy: the more precision we choose, the closer the results will be to the ideal real arithmetic, but the more costly the computation becomes. Unfortunately, the unintuitive and complex nature of finite-precision arithmetic makes manual optimization infeasible, so that automated tool support is indispensable.

This talk presents an overview of Daisy, a framework for sound accuracy analysis and optimization of finite-precision programs. We will provide a high-level view of its main features: roundoff error analysis as well as rewriting and mixed-precision optimization.

AriC members

April 26, 2018 at 10h45, room M7-315 (3rd floor, Monod)

Back from conferences

Short talks (10’ + 5’ for questions) when people are back from a conference, on talks they attended and found interesting. We will have, in alphabetical order:

  • Elena Kirshanova on «A practical cryptanalysis of WalnutDSA» by Daniel Hart, Do Hoon Kim, Giacomo Micheli, Guillermo Pascual Perez, Christophe Petit, Yuxuan Quek; presented at PKC’18;
  • Fabrice Mouhartem on «Post-Quantum Security of Fiat-Shamir» by Dominique Unruh; presented at Asiacrypt 2017;
  • Bruno Salvy on «Arctic curves in path models from the tangent method» by Philippe Di Francesco and Matthew F. Lapa; presented at a Workshop on “Computer Algebra in Combinatorics”, Vienna 2017;
  • Wen Weiqiang on «Fully Homomorphic Encryption from the Finite Field Isomorphism Problem» by Yarkın Doröz, Jeffrey Hoffstein, Jill Pipher, Joseph H. Silverman, Berk Sunar, William Whyte, Zhenfei Zhang; presented at PKC’18.

Bruno Salvy (AriC)

March 29, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Algorithmic tools for the asymptotics of linear recurrences

The asymptotic behaviour of a solution to a linear recurrence with polynomial coefficients can be decomposed into a not-so-difficult part that depends on the recurrence but not on the initial conditions and a more difficult part depending on the initial conditions. For the first one, several algorithms exist. In general, the second part can be dealt with numerically and in a certified way rather efficiently. Diagonals of rational functions form an important class of sequences solutions of such recurrences for which more is possible. This class generalizes that of sequences of coefficients of algebraic power series, which it contains. It enjoys very special arithmetic and analytic properties. Its elements appear frequently in combinatorics, notably as multiple binomial sums, as well as in number theory, e.g., Apéry’s sequences in his proof of the irrationality of \(\zeta(3)\). The asymptotic analysis of diagonals, in simple but frequent cases, can be performed automatically using the technology of Analytic Combinatorics in Several Variables of Pemantle and Wilson, together with modern tools of computer algebra. The talk will give an overview of the context and of recent work in this area.
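As a small illustration of the part that depends only on the recurrence, the Apéry numbers from the proof of the irrationality of \(\zeta(3)\) satisfy a second-order recurrence with polynomial coefficients, and their consecutive ratios approach the growth rate \((1+\sqrt 2)^4 \approx 33.97\) dictated by the recurrence alone (our own illustrative computation):

```python
def apery(nmax):
    """Apery numbers for zeta(3), from the recurrence
    n^3 a(n) = (34n^3 - 51n^2 + 27n - 5) a(n-1) - (n-1)^3 a(n-2),
    with a(0) = 1, a(1) = 5; computed exactly as integers."""
    a = [1, 5]
    for n in range(2, nmax + 1):
        num = (34 * n**3 - 51 * n**2 + 27 * n - 5) * a[-1] - (n - 1)**3 * a[-2]
        q, r = divmod(num, n**3)
        assert r == 0  # the recurrence happens to preserve integrality
        a.append(q)
    return a
```

The ratio \(a(n)/a(n-1)\) converges to \((1+\sqrt 2)^4\), with a \(O(1/n)\) correction coming from the subexponential factor \(n^{-3/2}\) in the asymptotics.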

Fredrik Johansson (LFANT, Bordeaux)

March 22, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Numerical integration in arbitrary-precision ball arithmetic

We discuss arbitrary-precision numerical integration with rigorous error bounds in the framework of ball arithmetic. A new implementation has recently been added to the Arb library. Rapid convergence is ensured for piecewise complex analytic integrals by use of the Petras algorithm, which combines adaptive bisection with degree-adaptive Gaussian quadrature where error bounds are determined via complex magnitudes without evaluating derivatives. The code is general, easy to use, and efficient, often outperforming existing non-rigorous software. We discuss the pros and cons of this approach and consider some applications to special functions and computational number theory.

Claude-Pierre Jeannerod (AriC)

March 8, 2018 at 13h30, room 115, Monod site

Algorithms for Matrices with Displacement Structure

The notion of displacement rank provides a uniform way to deal with various classical matrix structures (such as Toeplitz, Hankel, Vandermonde, Cauchy) and allows for various generalizations. In this talk, we will review some recent progress in the design of asymptotically fast algorithms for multiplying and inverting such matrices. In particular, for \(n\) by \(n\) matrices of displacement rank \(\alpha\), we will see that the arithmetic cost of such operations can be reduced from about \(\alpha^2 n\) (up to log factors) down to \(\alpha^{\omega-1} n\), where \(\omega < 2.373\) is the exponent of (dense, unstructured) matrix multiplication.

(Based on joint work with A. Bostan, C. Mouilleron, and E. Schost.)
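
As a concrete illustration of the displacement structure mentioned above (not part of the talk), the following NumPy sketch checks that a Toeplitz matrix has Stein-type displacement rank at most 2:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Random n x n Toeplitz matrix: T[i, j] = t[i - j].
t = rng.standard_normal(2 * n - 1)
T = np.array([[t[i - j + n - 1] for j in range(n)] for i in range(n)])

# Down-shift matrix Z (ones on the first subdiagonal).
Z = np.diag(np.ones(n - 1), k=-1)

# Stein-type displacement D = T - Z T Z^T: for Toeplitz T, the shifted
# product reproduces T except in the first row and column, so rank(D) <= 2.
D = T - Z @ T @ Z.T
print(np.linalg.matrix_rank(D))  # at most 2
```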

Warwick Tucker (Uppsala University and AriC, LIP)

March 1st, 2018 at 10h15, room 116, Monod site

Validated numerics – part III

In this talk, we will continue our exposition of validated numerics, and focus on solving systems of nonlinear equations. We will state and prove the properties of the set-valued Newton operator, and show a nice variant of it (the Krawczyk operator). We will also talk about weaker methods based on the implicit function theorem that work in harder situations such as when a bifurcation of the solutions takes place. Again – the talk will be based upon concrete examples, and some small computer demo will illustrate the presented methods.
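
As a toy illustration of the set-valued Newton operator discussed in the talk, here is a one-dimensional interval Newton iteration for \(f(x)=x^2-2\) (a sketch only: a real validated implementation would use directed rounding to keep the enclosure rigorous):

```python
# A 1-D interval Newton step for f(x) = x^2 - 2 on X = [lo, hi]:
# N(X) = m - f(m) / f'(X) with m = mid(X).  If N(X) lies inside X, the
# interval Newton theorem guarantees a unique zero of f in X.

def newton_step(lo, hi):
    m = (lo + hi) / 2.0
    fm = m * m - 2.0
    dlo, dhi = 2.0 * lo, 2.0 * hi        # f'(X) = [2*lo, 2*hi], here lo > 0
    q_lo, q_hi = sorted((fm / dlo, fm / dhi))
    n_lo, n_hi = m - q_hi, m - q_lo      # N(X) = m - fm / f'(X)
    return max(lo, n_lo), min(hi, n_hi)  # intersect N(X) with X

X = (1.0, 2.0)
for _ in range(6):
    X = newton_step(*X)
print(X)  # a tight enclosure of sqrt(2) = 1.41421356...
```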

Gilles Villard (AriC)

February 22nd, 2018 at 10h15, room 116, Monod site

Computation of the resultant of two generic bivariate polynomials over a field

An algorithm is presented for computing the resultant of two generic bivariate polynomials over a field \(\mathbb{K}\).

For such \(p\) and \(q\) in \(\mathbb{K}[x,y]\) of degree \(d\) in \(x\) and \(n\) in \(y\), the algorithm computes the resultant with respect to \(y\) using \((n^{2-1/\omega} d)^{1+o(1)}\) arithmetic operations in \(\mathbb{K}\), where two \(n\times n\) matrices are multiplied using \(O(n^{\omega})\) operations. Previous algorithms from the early 1970s required time \((n^{2} d)^{1+o(1)}\).

We also discuss some extensions of the approach to the computation of characteristic polynomials of generic structured matrices and in univariate quotient algebras.
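
For readers unfamiliar with bivariate resultants, a small SymPy example of the object being computed (illustrative only; this is not the talk's algorithm):

```python
import sympy as sp

x, y = sp.symbols('x y')
p = y**2 - x           # degree 2 in y
q = y**2 + y - x**2    # degree 2 in y

# Res_y(p, q) eliminates y: it is a polynomial in x that vanishes exactly
# at those x for which p and q share a common root in y.
r = sp.expand(sp.resultant(p, q, y))
print(r)  # x**4 - 2*x**3 + x**2 - x
```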

Nick Trefethen (AriC and University of Oxford)

February 15th, 2018 at 10h15, room 116, Monod site

Minimax approximation of functions

My interest in minimax approximation started with my undergraduate thesis with Garrett Birkhoff at Harvard in 1977 and continues strong with the exciting new algorithm developed with Filip, Nakatsukasa, and Beckermann. This talk will collect observations about many aspects of this problem, both real and complex, polynomial and rational.

Koen de Boer (CWI, Amsterdam)

February 8th, 2018 at 10h15, room 116, Monod site

Attacks on the AJPS Mersenne-based Cryptosystem

Aggarwal, Joux, Prakash and Santha recently introduced a new potentially quantum-safe public-key cryptosystem, and suggested that a brute-force attack is essentially optimal against it. They consider but then dismiss both Meet-in-the-Middle attacks and LLL-based attacks. Very soon after their paper appeared, Beunardeau et al. proposed a practical LLL-based technique that seemed to significantly reduce the security of the AJPS system.

We do two things. First, we show that a Meet-in-the-Middle attack can also be made to work against the AJPS system, using locality-sensitive hashing to overcome the difficulty that Aggarwal et al. saw for such attacks. We also present a quantum version of this attack. Second, we give a more precise analysis of the attack of Beunardeau et al., confirming and refining their results.

Jean-Michel Muller (AriC team)

February 1st, 2018 at 10h15, room 116, Monod site

On the relative error of computing complex square roots in floating-point arithmetic

(joint work with Claude-Pierre Jeannerod)

We study the accuracy of a classical approach to computing complex square roots in floating-point arithmetic.
Our analyses are done in binary floating-point arithmetic in precision \(p\), and we assume that the (real) arithmetic operations \(+\), \(-\), \(\times\), \(\div\), \(\sqrt{ }\) are rounded to nearest, so the unit roundoff is \(u = 2^{-p}\).
We show that in the absence of underflow and overflow, the componentwise and normwise relative errors of this approach are at most \(\frac{7}{2}u\) and \(\frac{\sqrt{37}}{2}u\), respectively, and this without having to neglect terms of higher order in \(u\). We then provide some input examples showing that these bounds are reasonably sharp for the three basic binary interchange formats (binary32, binary64, and binary128) of the IEEE 754 standard for floating-point arithmetic. Finally, we consider another, slightly more accurate algorithm.
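
A standard formulation of the classical approach under study (a sketch for illustration; the talk's contribution is the rigorous error analysis, not the algorithm itself):

```python
import math

def complex_sqrt(zr, zi):
    # Classical approach for z != 0: compute w = sqrt((|z| + |Re z|) / 2),
    # then obtain the other component from Im z / (2w).
    w = math.sqrt((math.hypot(zr, zi) + abs(zr)) / 2.0)
    if zr >= 0.0:
        return w, zi / (2.0 * w)
    return abs(zi) / (2.0 * w), math.copysign(w, zi)

print(complex_sqrt(3.0, 4.0))   # (2.0, 1.0), since (2 + i)^2 = 3 + 4i
print(complex_sqrt(-1.0, 0.0))  # (0.0, 1.0), i.e. sqrt(-1) = i
```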

Tanguy Rivoal (Institut Fourier, CNRS and Université Grenoble Alpes)

December 21, 2017 at 10h15, room 116, Monod site

Exceptional algebraic values of E-functions

E-functions are power series with algebraic Taylor coefficients at the origin (satisfying certain growth conditions) and solutions of linear differential equations with polynomial coefficients. Siegel defined them in 1929 to generalize the diophantine properties of the exponential function, which takes transcendental values at any non-zero algebraic point. The situation is more complicated in general, as an E-function may sometimes return an algebraic value when evaluated at a non-zero algebraic point. In this talk, I will first explain the classical Diophantine results for E-functions. I will then present an algorithm which, given an E-function \(F(z)\) as input, outputs the finite list of algebraic numbers \(A\) such that \(F(A)\) is algebraic.

This is joint work with Boris Adamczewski (CNRS and Université Lyon 1).

Alexandre Wallet (AriC, LIP)

December 14, 2017 at 10h15, room 116, Monod site

On variants of Ring-LWE and Polynomial-LWE

Learning With Errors is a popular security assumption for post-quantum cryptographic schemes: informally, an integer solution of a “noisy” linear system must be found. It is based on hard problems on Euclidean lattices, and it allows for a lot of functionalities in cryptography. Structured variants based on algebraic number theory have been introduced to improve time and memory efficiency. However, the addition of new mathematical objects such as number fields/rings has led to several possible security assumptions, and the link between them is not always clear.

I will present recent results toward the clarification of the situation. For a large family of number fields, we showed that all versions of the problem are equivalent (up to polynomial increases in the noise, and up to non-uniformity of some of the underlying reductions). Our reductions rely on mathematical tools, some of them new in the Ring-based LWE context, such as perturbation techniques for polynomials.

Based on a joint work with Miruna Rosca and Damien Stehlé.

LIP day on “Calcul”

November 30, 2017

See the web page dedicated to this day.

Warwick Tucker (Uppsala University and AriC, LIP)

November 23, 2017 at 10h15, room 116, Monod site

Validated numerics – part II

In this talk, we will cover the fundamentals of validated numerics, namely the inclusion principle and its consequences for mathematical proofs. In particular, we will talk about the set-valued Newton operator, and how it comes into play in many computer-assisted proofs. Again – the talk will be based upon concrete examples, and some small computer demo will illustrate the presented methods.

Nick Trefethen (AriC and University of Oxford)

Thursday November 16, 2017 at 10h15, room 116, Monod site

Numerical Computation with Rational Functions

I will begin with a review of rational functions in numerical computation, including a review of twelve applications where they are useful and of two famous problems of rational approximation theory. Then we find ourselves with an “inverse Yogiism”. Although it is mathematically correct that a rational function is a quotient \(r = p/q\) of two polynomials, representations of this form are numerically unstable in precisely the cases where rational functions should be most powerful. A better alternative is a barycentric representation \(r = n/d\) in which \(n\) and \(d\) are partial fraction sums, and this leads to the exciting new “AAA algorithm”, which we will explain and demonstrate. This is joint work with Silviu Filip, Yuji Nakatsukasa, and Olivier Sète.
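
The barycentric form can be sketched in a few lines of NumPy. Below, the weights are the classical polynomial-interpolation weights, so the quotient reproduces a polynomial interpolant; the AAA algorithm instead chooses support points and weights adaptively to fit a rational function (illustrative sketch only):

```python
import numpy as np

# Barycentric form r(x) = (sum_j w_j f_j / (x - z_j)) /
#                         (sum_j w_j / (x - z_j)).
def baryeval(x, z, f, w):
    d = x[None, :] - z[:, None]          # d[j, i] = x_i - z_j
    num = (w[:, None] * f[:, None] / d).sum(axis=0)
    den = (w[:, None] / d).sum(axis=0)
    return num / den

z = np.array([-1.0, 0.0, 1.0])           # support points
f = z ** 2                               # samples of f(x) = x^2
# Polynomial-interpolation weights w_j = 1 / prod_{k != j} (z_j - z_k):
w = np.array([1.0 / np.prod([zj - zk for zk in z if zk != zj])
              for zj in z])
vals = baryeval(np.array([0.5, 2.0]), z, f, w)
print(vals)  # recovers x^2 up to rounding: approximately [0.25, 4.0]
```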

Paola Boito, Elena Kirshanova, Benoît Libert, and Nathalie Revol (AriC, LIP)

Thursday November 9, 2017 at 10h15, room 116, Monod site

Back from Conferences

Short talks (10’ + 5’ for questions) when people are back from a conference, on talks they attended and found interesting. We will have, in alphabetical order:

  • Paola Boito on «Solving large scale Lyapunov and Sylvester equations with quasiseparable coefficients» by Stefano Massei at Structured Matrices in Numerical Linear Algebra. Abstract: Let us consider a Sylvester matrix equation \(AX+XB=C\) where the coefficients \(A\), \(B\) and \(C\) are matrices with low-rank off-diagonal blocks. What can be said about the rank structure of the solution \(X\)? It turns out that, under suitable hypotheses, \(X\) inherits a rank structure as well. The main idea of the proof relies on a recent result by Beckermann and Townsend on displacement structured matrices.
  • Elena Kirshanova on «Grover Meets Simon – Quantumly Attacking the FX-construction» by Gregor Leander and Alexander May, to be presented at AsiaCrypt’17;
  • Benoît Libert on «Asymptotically Compact Adaptively Secure Lattice IBEs and Verifiable Random Functions via Generalized Partitioning Techniques» by Shota Yamada at IACR-CRYPTO-2017;
  • Nathalie Revol on «Utah Floating-Point Toolset» by Zvonimir Rakamaric at Dagstuhl Seminar 17352 – Analysis and Synthesis of Floating-point Programs.

Rencontres “Arithmétique de l’Informatique Mathématique”

October 24-26, 2017

Monthly Lattice Meeting

October 19-20, 2017 in room 115, Monod site

Nick Trefethen (University of Oxford and AriC, LIP)

Thursday October 19, 2017 at 10h15, room 116, Monod site

Chebyshev Series

Chebyshev series are the basic tool for working with functions on a real interval — the equivalent of Fourier series but without the assumption of periodicity. They are also the basis of the software system Chebfun. We review the properties of Chebyshev series, using Chebfun to illustrate them, and then discuss the surprisingly challenging problem of “chopping a Chebyshev series” to obtain a representation to a prescribed precision, which is an analogue for functions of the floating-point rounding of real numbers.
We close with a few words about Faber’s theorem of 1914 and “inverse Yogiisms”.

Warwick Tucker (Uppsala University and AriC, LIP)

Thursday October 12, 2017 at 10h15, room 116, Monod site

Validated numerics – a brief overview

In this talk, I will give a short introduction to the field of validated numerics. We will take a top-down approach, and begin by studying some natural mathematical problems that arise in the study of dynamical systems. Such problems invariably lead to some numerical challenges, and we will see how they can be addressed in the framework of validated numerics. Everything will be explained from “first principles”, so no prior knowledge of dynamical systems is needed.

Vincent Lefèvre, Jean-Michel Muller, Bruno Salvy and Damien Stehlé (AriC, LIP)

Thursday September 28, 2017 at 10h15, room 116, Monod site

Back from Conferences

Short talks (10’ + 5’ for questions) when people are back from a conference, on talks they attended and found interesting. We will have, in alphabetical order:

  • Vincent Lefèvre on « Multiple-precision floating-point arithmetic on SIMD processors » by van der Hoeven at Arith;
  • Jean-Michel Muller on « The rise of multi precision computations » by Higham at Arith;
  • Bruno Salvy on « A universal ordinary differential equation » by Bournez & Pouly at FoCM;
  • Damien Stehlé on « Gaussian Sampling over the Integers: Efficient, Generic, Constant-Time » by Micciancio & Walter at Crypto.


Thomas Prest (Thales)

Thursday May 11, 2017 at 9h45 (please note the unusual time), room 116, Monod site

Easier floating-point on a Rényi day

This talk is part of the Monthly Lattice Meetings.

Paola Boito (XLIM, U. Limoges and AriC)

Thursday April 20, 2017 at 10h15, room 116, Monod site

Efficient Solution of Parameter Dependent Quasiseparable Systems
and Computation of Meromorphic Matrix Functions

In this work we focus on the solution of shifted quasiseparable systems and of more general parameter dependent matrix equations with quasiseparable representations. We propose an efficient algorithm exploiting the invariance of the quasiseparable structure under diagonal shifting and inversion. This algorithm is applied to compute various functions of matrices. Numerical experiments show the effectiveness of the approach.
This is joint work with Luca Gemignani (Pisa) and Yuli Eidelman (Tel Aviv).

Alin Bostan (Inria Saclay)

Thursday April 13, 2017 at 10h15, room 116, Monod site

Proving algorithmically the transcendence of D-finite power series

Given a sequence represented by a linear recurrence with polynomial coefficients and sufficiently many initial terms, a natural question is whether the transcendence of its generating function can be decided algorithmically. The question is non-trivial even for sequences satisfying a first-order recurrence. An algorithm due to Michael Singer is sufficient, in principle, to answer the general case. However, this algorithm suffers from too high a complexity to be effective in practice. We will present a recent method that we have used to treat a non-trivial combinatorial example. It reduces the question of transcendence to a (structured) linear algebra problem.

Laurent Grémy (Loria)

Monday April 10, 2017 at 15h00, room 115, Monod site

Computing discrete logarithms: sieving for the number field sieve

Factoring large integers and computing discrete logarithms over finite fields are two difficult problems, efficiently solved by a family of algorithms, the index calculus. The security of some cryptosystems relies on this difficulty.

Since 1982 and the use of the quadratic sieve by Pomerance to factor large integers, it is well known that an important step of these algorithms, called relation collection, can be done quickly using sieving algorithms, basically like the sieve of Eratosthenes to find primes. Among the index calculus algorithms, the number field sieve (NFS) reaches the best complexity. It requires a two-dimensional sieving. In 2005, Franke and Kleinjung described the most efficient algorithm to sieve over two dimensions.

The diversity of finite fields implies that different variants of the NFS, such as those described by Joux–Lercier–Smart–Vercauteren in 2006 or by Kim–Barbulescu in 2016, need to sieve in higher dimensions, basically from 3 to 6. In this talk, we will describe three sieving algorithms in dimension three, some computational results of their implementations and finally an algorithm to sieve in any small dimension.

Lattice meeting

Wednesday April 5 and Thursday April 6, 2017

See the Lattice meetings page.

Jean-Michel Muller (AriC)

Thursday March 23, 2017 at 10h15, room 116, Monod site

Tight and rigorous error bounds for basic building blocks of double-word arithmetic

(Joint work with Mioara Joldes and Valentina Popescu)

We analyze several classical basic building blocks of double-word arithmetic (frequently called “double-double arithmetic” in the literature): the addition of a double-word number and a floating-point number, the addition of two double-word numbers, the multiplication of a double-word number by a floating-point number, the multiplication of two double-word numbers, the division of a double-word number by a floating-point number, and the division of two double-word numbers. For multiplication and division we get better relative error bounds than the ones previously published. For addition of two double-word numbers, we show that the previously published bound was incorrect, and we provide a new relative error bound. We introduce new algorithms for division. We also give examples that illustrate the tightness of our bounds.
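
For background (not the paper's full set of algorithms), the standard error-free transformation underlying all double-word building blocks, together with a sketch of double-word plus float addition reconstructed here for illustration:

```python
from fractions import Fraction

def two_sum(a, b):
    # Knuth's TwoSum: s = fl(a + b), and e is the rounding error, so
    # that a + b = s + e holds exactly over the reals.
    s = a + b
    ap = s - b
    bp = s - ap
    e = (a - ap) + (b - bp)
    return s, e

# A double-word number is an unevaluated sum hi + lo with |lo| small
# relative to hi.  A cheap scheme for double-word + float addition,
# sketched in the spirit of the algorithms analyzed in the talk:
def dw_plus_fp(hi, lo, y):
    s, e = two_sum(hi, y)
    v = lo + e          # the only step that can commit a rounding error
    z = s + v           # Fast2Sum renormalization of (s, v):
    w = (s - z) + v     # recovers the error of s + v exactly
    return z, w

s, e = two_sum(1.0e16, 1.2345)
# The pair (s, e) represents the sum exactly:
assert Fraction(1.0e16) + Fraction(1.2345) == Fraction(s) + Fraction(e)
print(s, e)
```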

George Labahn (U. Waterloo, Canada)

Friday February 3, 2017 at 10h15, room 116, Monod site

Sigma Bases, Kernel Bases and their use in fast computation for polynomial matrices

In this talk we discuss sigma bases and kernel bases of polynomial matrices and illustrate their role in fast algorithms for arithmetic with polynomial matrices. In particular we focus on the use of kernel bases for triangulation, computing determinants and Hermite normal form computation.

Joint work with Vincent Neiger (Lyon and TU Denmark) and Wei Zhou (Waterloo)

Marc Mezzarobba (CNRS, UPMC)

Thursday January 26, 2017 at 10h15, room 116, Monod site

Rigorous Multiple-Precision Evaluation of D-Finite Functions in SageMath

We present an implementation, on top of the SageMath computer algebra system, of a collection of tools for computing numerical values of D-finite functions. Recall that a complex analytic function is D-finite when it satisfies an ordinary differential equation whose coefficients are polynomial in the independent variable. D-finite functions can be viewed as a class of special functions analogous to those of algebraic or hypergeometric functions, but more general. They come up in areas such as analytic combinatorics and mathematical physics, and lend themselves well to symbolic manipulation by computer algebra systems.

The main task our package performs is the “numerical analytic continuation” of D-finite functions. This process results in a numerical approximation of the transition matrix that maps “initial values” of an ODE somewhere on the complex plane to “initial values” elsewhere that define the same solution. Numerical analytic continuation can then be used to compute things like values or polynomial approximations of D-finite functions anywhere on their Riemann surfaces, and monodromy matrices of differential operators. The code supports the important limit case where the (generalized) initial values are provided at regular singular points of the ODE, making it possible in particular to compute connection constants between regular singularities. It is rigorous in the sense that it returns interval results that are guaranteed to enclose the exact mathematical result.

The new package can be considered a successor of NumGfun, a Maple package by the same author with broadly similar features.

Brice Minaud (RHUL)

Thursday January 19, 2017 at 10h15, room 116, Monod site

Cryptanalysis of the CLT15 Multilinear Map over the Integers

Multilinear maps are a powerful cryptographic primitive: a wide array of cryptographic schemes can be built from multilinear maps, including some for which there is no other known construction, such as non-interactive multipartite key exchange, or general program obfuscation. However to this day there is no provable construction of multilinear maps. Moreover both of the original constructions (proposed in 2013 by Garg, Gentry, and Halevi, and by Coron, Lepoint, and Tibouchi) have been broken for their most direct application, namely non-interactive multipartite key exchange. This has led Coron, Lepoint and Tibouchi to introduce an improved variant of their original construction at Crypto 2015. We propose a polynomial-time attack on this new construction, which fully breaks the scheme. The presentation will of course begin with a general introduction to multilinear maps and the CLT construction, which aims to be accessible to all; and will conclude with a brief overview of the current state of the topic.

Anastasia Volkova (UPMC Paris 6, LIP6)

Thursday January 5, 2017 at 10h15, room 116, Monod site

Towards reliable implementation of digital filters

Linear digital filters are frequently used in digital signal processing. We are interested in the implementation of such algorithms in Fixed-Point Arithmetic. However, various issues arise at each step of an implementation. We propose an automated filter-to-code generator which encompasses the filter implementation flow and provides a consistent error analysis of the algorithms involved.
In this talk we give an overview of the filter-to-code generator and focus on determining the Fixed-Point formats for an implementation that is optimal with respect to a given criterion. We consider Linear Time Invariant filters in state-space representation. The computational errors in the intermediate steps of the filter evaluation, as well as their accumulation over time, are fully taken into account. Our approach is rigorous in that the output Fixed-Point formats are shown to be free of overflows, and in that we rely on proofs rather than on (non-exhaustive) filter simulation.

Crypto and Lattices Days

Thursday December 15, 2016 at 10h15, room 116, Monod site


Cf. the Web page of the Crypto and Lattices Days

Crypto and Lattices Days

Thursday November 24, 2016 at 10h15, room 116, Monod site


Cf. the Web page of the Crypto and Lattices Days

Paola Boito (U. Limoges and LIP, AriC team)

Thursday October 6, 2016 at 10h15, room 116, Monod site

Quasiseparable structure and polynomial rootfinding

Computing the roots of a univariate polynomial is a classical task in computer algebra and numerical analysis and it is often solved via a companion matrix approach. But can we exploit the matrix structures that arise in this problem to devise fast eigensolvers and rootfinding algorithms, without losing accuracy?

Motivated by this problem, we

  • review some useful rank structures of Frobenius and Chebyshev companion matrices and pencils,
  • introduce the quasiseparable representation for such matrices and pencils,
  • show how to adapt suitable versions of the Francis/Kublanovskaya QR algorithm to the quasiseparable representation, so that the arithmetic complexity of each iteration is linear in the size of the matrix,
  • comment on numerical results, applications, and perspectives for further work.

This is joint work with Luca Gemignani (University of Pisa) and Yuli Eidelman (University of Tel Aviv).
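
To illustrate the companion-matrix approach in its plainest form (a dense sketch only; the point of the talk is that exploiting the quasiseparable structure reduces the cost of each QR iteration from quadratic to linear in the matrix size):

```python
import numpy as np

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3), coefficients in
# descending degree order (monic).
coeffs = [1.0, -6.0, 11.0, -6.0]

# Frobenius companion matrix of p: its eigenvalues are exactly p's roots.
n = len(coeffs) - 1
M = np.zeros((n, n))
M[1:, :n - 1] = np.eye(n - 1)            # ones on the subdiagonal
M[:, -1] = [-a for a in coeffs[:0:-1]]   # last column: -a_0, -a_1, -a_2

# A dense eigensolver ignores the structure; np.roots works the same way.
roots = np.sort(np.linalg.eigvals(M).real)
print(roots)  # approximately [1. 2. 3.]
```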

Serge Torres (LIP, AriC team)

Thursday September 22, 2016 at 2:30 PM, in lecture theater B, on the 3rd floor of the GN1 building, Monod Campus.

PhD defense: Tools for the Design of Reliable and Efficient Function Evaluation Libraries

Can you tell me more about that?

The design of function evaluation libraries is a complex task that requires great care and dedication, especially when one wants to satisfy high standards of reliability and performance. In practice, it cannot be performed correctly, as a routine operation, without tools that not only help the designer find their way through a large and complex solution space but also guarantee that their solutions are correct and (possibly) optimal.

In the present state of the art, one has to think in terms of a “toolbox” from which to smartly mix and match the tools that best fit one's goals, rather than expect an all-purpose automatic device.

What is it that you have done?

The work presented here is dedicated to the design and implementation of such specialized tools in two realms:

  • the radical consolidation of Ziv’s rounding test, which has been used more or less empirically for ages in the implementation of function approximation;
  • the development of an implementation of the SLZ algorithm in order to “solve” (in a specific sense that will be made clear) the Table Maker’s Dilemma; the suitability of this implementation is also assessed for the development of correctly rounded approximations of functions with quad-precision floating-point (aka IEEE 754 binary128 format) arguments and images.

About the defense, see the report on the intranet of ENS de Lyon.

Rémi Imbach (Inria Nancy – Grand Est, VEGAS Team)

Thursday September 15, 2016 at 10h15, room 116 (site Monod M1)

Certified numerical tools for computing the topology of projected curves

We are interested in computing the topology of the projection of an algebraic or analytic space curve in the plane. Such a projection is not a smooth curve and has singularities. State-of-the-art approaches to computing the topology of algebraic plane curves use symbolic computation, but in the case of a projection they suffer from the size of the implicit representation of the curve as a resultant polynomial.
Using numerical tools to compute the projected curve or its singularities is a challenging problem, since the projected curve is not smooth and state-of-the-art characterizations of the singularities use over-determined systems of equations. We will first propose a new characterization of the singularities of the projection of an algebraic curve using a square system of polynomials; its solutions are regular and it can be solved numerically.
However, its equations are coefficients of resultant polynomials and are still very large. The arithmetic precision required to carry out numerical computations with such polynomials makes classical solvers either incomplete or dramatically time-consuming. We will propose a multi-precision solver, based on interval subdivision and specially designed to handle large polynomials, to compute the solutions of this system. We will then consider the more general case of projections of analytic space curves, and propose a geometric approach to describing their singularities. This results in a square system of equations with only regular solutions, which does not involve resultant theory and can be solved with our certified numerical solver. Finally, we will present a new approach to computing the topology of the projected curve, i.e., finding a graph that has the same topology. We use a certified numerical path tracker to enclose the space curve in a sequence of boxes, which both simplifies the search for singularities and allows the computation of the smooth branches linking them.

Archives of the seminar

For seminars in previous years, see former AriC seminars.

AriC project – Arithmetic and Computing