Seminar

The calendar of these seminars is available for subscription.

Next seminars

Guillaume Melquiond (Toccata, INRIA Paris Saclay)

June 17, 2021 at 10h15, online seminar

Plotting in a formally verified way

An invaluable feature of computer algebra systems is their ability to plot the graph of functions. Unfortunately, when one is trying to design a library of mathematical functions, this feature often falls short, producing incorrect and potentially misleading plots, due to accuracy issues inherent to this use case. This work investigates what it means for a plot to be correct and how to formally verify this property. The Coq proof assistant has thus been turned into a tool for plotting function graphs that are guaranteed to be correct, by using reliable polynomial approximations. This feature is provided as part of the CoqInterval library.
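The idea of a plot made of guaranteed enclosures can be illustrated with plain interval arithmetic. This is only a sketch: CoqInterval relies on formally verified polynomial approximations, and a genuinely reliable implementation would also need directed rounding at every operation; the `Interval` class and `enclose` function below are illustrative names, not part of CoqInterval.

```python
class Interval:
    """Closed interval [lo, hi] with the two operations we need.
    NB: outward (directed) rounding is deliberately omitted in this sketch."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

def enclose(poly, a, b, n=64):
    """Split [a, b] into n subintervals and evaluate the polynomial
    (coefficients in poly, constant term first) by interval Horner;
    each returned box (xlo, xhi, ylo, yhi) encloses the graph above it."""
    w = (b - a) / n
    boxes = []
    for i in range(n):
        x = Interval(a + i * w, a + (i + 1) * w)
        y = Interval(poly[-1], poly[-1])
        for c in reversed(poly[:-1]):
            y = y * x + Interval(c, c)
        boxes.append((x.lo, x.hi, y.lo, y.hi))
    return boxes

# Boxes enclosing the graph of x^2 - 1 on [0, 2]: every true point of
# the graph lies inside some box, so the plot cannot be misleading.
boxes = enclose([-1.0, 0.0, 1.0], 0.0, 2.0)
```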

Shweta Agrawal (C.S.E. department, I.I.T. Madras, India)

June 24, 2021 at 10h15, online seminar

Deniable Fully Homomorphic Encryption from LWE

We define and construct Deniable Fully Homomorphic Encryption based on the Learning With Errors (LWE) polynomial hardness assumption. Deniable FHE allows encrypted data stored in the cloud to be processed securely without decryption while maintaining deniability of the encrypted data; it also enables the prevention of vote-buying in electronic voting schemes where encrypted votes can be tallied without decryption.
Our constructions achieve compactness independently of the level of deniability: both the size of the public key and the size of the ciphertexts are bounded by a fixed polynomial, independent of the faking probability achieved by the scheme. This is in contrast to all previous constructions of deniable encryption schemes (even without requiring homomorphisms) which are based on polynomial hardness assumptions, originating with the seminal work of Canetti, Dwork, Naor and Ostrovsky (CRYPTO 1997), in which the ciphertext size grows with the inverse of the faking probability. Canetti et al. argued that this dependence “seems inherent”, but our constructions illustrate that this is not the case. The running time of our encryption algorithm depends on the inverse of the faking probability, thus the scheme falls short of achieving simultaneously compactness, negligible deniability and polynomial encryption time. Yet, we believe that achieving compactness is a fundamental step on the way to achieving all properties simultaneously, as has been the historical journey for other primitives such as functional encryption.
Interestingly, we note that our constructions support large message spaces, whereas previous constructions were bit-by-bit, and can be run in an online-offline model of encryption, where the bulk of computation is independent of the message and may be performed in an offline pre-processing phase. The running time of the online phase is independent of the faking probability, whereas the offline encryption run-time grows with the inverse of the faking probability. At the heart of our constructions is a new way to use bootstrapping to obliviously generate FHE ciphertexts so that it supports faking under coercion.
Joint work with Saleet Mossel and Shafi Goldwasser.

Sheehan Olver (Department of Mathematics, Imperial College London, United Kingdom)

July 1, 2021 at 10h15, online seminar

Richard Peng (Georgia Institute of Technology)

July 8, 2021 at 14h00, online seminar

2020-2021

Joel Dahne (Uppsala University, Sweden)

May 20, 2021 at 10h15, online seminar

A computer assisted counterexample to Payne’s nodal line conjecture with few holes

Payne conjectured in 1967 that the nodal line of the second Dirichlet eigenfunction must touch the boundary of the domain. In their 1997 breakthrough paper, Hoffmann-Ostenhof, Hoffmann-Ostenhof and Nadirashvili proved this to be false by constructing a counterexample in the plane with an unspecified, but large, number of holes, and raised the question of the minimum number of holes a counterexample can have. In this talk I will present a computer-assisted counterexample with 6 holes.
This is joint work with Javier Gómez-Serrano and Kimberly Hou.

Martin Albrecht (Information Security Group at Royal Holloway, University of London)

May 6, 2021 at 10h15, online seminar

On Bounded Distance Decoding with Predicate: Breaking the “Lattice Barrier” for the Hidden Number Problem

Lattice-based algorithms in cryptanalysis often search for a target vector satisfying integer linear constraints as a shortest or closest vector in some lattice. In this talk, I will discuss how these formulations may discard non-linear information from the underlying application that can be used to distinguish the target vector even when it is far from being uniquely close or short.
I will then discuss how we can formalise lattice problems augmented with a predicate distinguishing a target vector and give algorithms for solving instances of these problems. I will also describe the application of our techniques to lattice-based approaches for solving the Hidden Number Problem, a popular technique for recovering secret DSA or ECDSA keys in side-channel attacks, and discuss how our algorithms succeed in recovering the signing key for instances that were previously believed to be unsolvable using lattice approaches.
This talk is based on joint work with Nadia Heninger available at https://github.com/malb/bdd-predicate.

Benjamin Wesolowski (CNRS, LFANT, Institut de mathématiques de Bordeaux, Université de Bordeaux)

March 11, 2021 at 10h15, online seminar

SQISign: Compact Post-Quantum Signature from Quaternions and Isogenies

We will present the signature scheme SQISign (for Short Quaternion and Isogeny Signature), which exploits isogeny graphs of supersingular elliptic curves. Its main asset: the signature and public key sizes combined are an order of magnitude smaller than those of all other post-quantum signature schemes. On the other hand, its efficient implementation and security analysis present new research challenges.

Xavier Caruso (CNRS, LFANT, Institut de mathématiques de Bordeaux, Université de Bordeaux)

February 25, 2021 at 10h15, online seminar

Perspectives on the p-curvature

This talk will be devoted to a fundamental invariant of linear differential equations in positive characteristic: the p-curvature. I will present several complementary points of view on this object which, taken together, will give a sense of the complexity of this notion and of the various roles it is called to play in understanding the arithmetic of differential equations, in positive characteristic as well as in characteristic zero. I will also dwell on algorithmic aspects, explaining how certain descriptions of the p-curvature naturally lead to efficient methods for computing it.

Marc Mezzarobba (CNRS, Péquan, LIP6)

January 21, 2021 at 10h15, online seminar

Error analysis of linear recurrences using generating series

When the terms of a linear recurrent sequence (with variable coefficients) are computed iteratively in approximate arithmetic, it is well known that rounding errors tend to “compensate” rather than simply accumulate. To obtain a realistic bound on the error affecting the n-th term, one absolutely must take this phenomenon into account, and hence study precisely how a rounding error propagates through the subsequent iterations of the recurrence. In the classical notation of real analysis, this quickly leads to rather inextricable manipulations of nested sums.
Combinatorialists faced with this kind of difficulty know well that encoding a sequence by its generating series often makes the computations more digestible, while giving access to powerful analytic tools. In this talk, I will show through a few examples how this same language of generating series makes it possible to carry out the error analysis of numerical algorithms based on various kinds of linear recurrences.

Mohab Safey El Din (LIP6, Sorbonne Université)

January 14, 2021 at 10h15, online seminar

Polynomial system solving, kinematic singularities and eight connected components

We investigate the kinematic singularity analysis of families of industrial robots. We show how this boils down to difficult algorithmic problems, such as answering connectivity queries in semi-algebraic sets and solving positive-dimensional polynomial systems with parameters. We describe an algorithm, combining techniques of real algebraic geometry with computer algebra, for solving these problems. Finally, we illustrate how it has been used on the “Universal Robots” series to “prove” that the complement of their kinematic singularities is the union of eight connected components.
This is joint work with Jose Capco and Josef Schicho.

Stephen Melczer (Department of Combinatorics and Optimization, University of Waterloo, Canada)

December 17, 2020 at 14h30, online seminar

Analytic Combinatorics, Rigorous Numerics, and Uniqueness of Biomembranes

Since the invention of the compound microscope in the early seventeenth century, scientists have marvelled over red blood cells and their surprising shape. An influential model of Canham predicts that the shapes of blood cells and similar biomembranes come from a variational problem minimizing the “bending energy” of these surfaces. Because observed (healthy) cells have the same shape in humans, it is natural to ask whether the model admits a unique solution. Here, we prove solution uniqueness for the genus one Canham problem. The proof builds on a result of Yu and Chen that reduces solution uniqueness to proving non-negativity of a sequence defined by an explicit linear recurrence relation with polynomial coefficients. We combine rigorous numeric analytic continuation of D-finite functions with classic bounds from singularity analysis to derive an effective index beyond which the asymptotic behaviour of the sequence, which is positive, dominates. Positivity of the finite number of remaining terms can then be checked computationally.
This is joint work with Marc Mezzarobba.
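The final verification step described above, checking positivity of finitely many terms of a linear recurrence with polynomial coefficients, can be sketched in exact rational arithmetic. The recurrence used below is a made-up toy example, not the actual Canham recurrence from the talk.

```python
from fractions import Fraction

def first_terms_positive(rec, initial, N):
    """Check positivity of the first N terms of a sequence satisfying
    u(n+k) = c_0(n) u(n) + ... + c_{k-1}(n) u(n+k-1), where rec(n) returns
    the tuple (c_0(n), ..., c_{k-1}(n)).  Exact rational arithmetic is used
    so that no rounding error can corrupt the verdict."""
    u = [Fraction(v) for v in initial]
    while len(u) < N:
        c = rec(len(u) - len(initial))
        u.append(sum(ci * ui for ci, ui in zip(c, u[-len(c):])))
    return all(t > 0 for t in u)

# Toy recurrence (NOT the Canham one):
# u(n+2) = ((n+1) u(n) + (n+2) u(n+1)) / (n+3),  u(0) = u(1) = 1.
ok = first_terms_positive(
    lambda n: (Fraction(n + 1, n + 3), Fraction(n + 2, n + 3)), [1, 1], 200)
```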

Alin Bostan (SpecFun, INRIA Saclay Île-de-France)

December 10, 2020 at 10h15, online seminar

A Simple and Fast Algorithm for Computing the N-th Term of a Linearly Recurrent Sequence

We present a simple and fast algorithm for computing the N-th term of a given linearly recurrent sequence. Our new algorithm uses O(M(d) log N) arithmetic operations, where d is the order of the recurrence, and M(d) denotes the number of arithmetic operations for computing the product of two polynomials of degree d. The state-of-the-art algorithm, due to Fiduccia (1985), has the same arithmetic complexity up to a constant factor. Our algorithm is simpler, faster and obtained by a totally different method. We also discuss several algorithmic applications, notably to polynomial modular exponentiation and powering of matrices. Joint work with Ryuhei Mori (Tokyo Institute of Technology, Japan).
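The kernel of the algorithm can be sketched as follows (a naive-multiplication sketch, assuming the published description of the Bostan–Mori scheme; a serious implementation would of course use fast polynomial multiplication to reach the stated M(d) log N cost). The N-th term of a linearly recurrent sequence is the N-th coefficient of a rational power series P(x)/Q(x), and multiplying numerator and denominator by Q(-x) makes the new denominator even, so only the coefficients matching the parity of N survive and N is halved.

```python
from fractions import Fraction

def polmul(a, b):
    # Schoolbook product of coefficient lists (constant term first);
    # a real implementation would use FFT-based multiplication here.
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

def nth_coeff(P, Q, N):
    """N-th coefficient of the power series P(x)/Q(x), with Q[0] != 0,
    by repeated halving of N: O(log N) polynomial products."""
    while N > 0:
        Qm = [(-1) ** i * q for i, q in enumerate(Q)]   # Q(-x)
        P = polmul(P, Qm)[N % 2::2]                     # keep N's parity
        Q = polmul(Q, Qm)[::2]                          # Q(x)Q(-x) is even
        N //= 2
    return Fraction(P[0], Q[0])

# Fibonacci: sum F_n x^n = x / (1 - x - x^2), and F_10 = 55.
assert nth_coeff([0, 1], [1, -1, -1], 10) == 55
```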

Léo Ducas (CWI, Amsterdam, Netherlands)

November 26, 2020 at 10h15, online seminar

An Algorithmic Reduction Theory for Binary Codes: LLL and more

Lattice reduction is the task of finding a basis of short and somewhat orthogonal vectors of a given lattice. In 1982, Lenstra, Lenstra and Lovász proposed a polynomial-time algorithm for this task, with an application to factoring rational polynomials. Since then, the LLL algorithm has found countless applications in algorithmic number theory and in cryptanalysis.
There are many analogies to be drawn between Euclidean lattices and linear codes over finite fields. In this work, we propose to extend the range of these analogies by considering the task of reducing the basis of a binary code. In fact, all it takes is to choose the adequate notion mimicking Euclidean orthogonality (namely orthopodality), after which, all the required notions, arguments, and algorithms unfold before us, in quasi-perfect analogy with lattices.
This is joint work with Thomas Debris-Alazard and Wessel van Woerden.

Jean-Michel Muller (AriC, LIP, ENS Lyon)

November 12, 2020 at 10h15, online seminar

Quick and dirty: immediately obtaining a rough approximation to \( \sin(x), \log(x), \arctan(y/x), \sqrt{x}\)…

We review some of the classical methods used for very quickly obtaining low-precision approximations to the elementary functions: shift-and-add algorithms, table-based methods, and “bit-manipulation” techniques.
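As an example of the “bit-manipulation” techniques mentioned, a rough base-2 logarithm can be read directly off the IEEE-754 binary32 encoding: the exponent field gives the integer part, and the mantissa provides a free piecewise-linear interpolation between consecutive powers of two. This is a classic trick presented as a sketch, not a method from the talk itself.

```python
import math
import struct

def fast_log2(x):
    """Very rough log2 for x > 0: reinterpret the IEEE-754 binary32
    encoding of x as an unsigned integer, rescale the mantissa bits, and
    subtract the exponent bias (127)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits / 2.0 ** 23 - 127.0

# Exact at powers of two; worst-case error around 0.086 in between.
err = max(abs(fast_log2(x) - math.log2(x))
          for x in (1.5, 3.7, 100.0, 2.0 ** 20))
```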

Gleb Pogudin (MAX, LIX, École Polytechnique)

November 6, 2020 at 10h15, online seminar

Global parameter identifiability of ODE models

Many real-world processes and phenomena are modeled using systems of parametric ODEs. One might be interested in knowing the values of some select parameters due to their importance. Usually one tries to determine (uniquely identify) them by collecting input and output data. However, due to the structure of the model, it can be impossible to determine the parameters from input and output data. When this happens, the parameters are said to be “not globally identifiable”. In the process of model design, it is crucial to know whether the parameters of interest in a potential model are globally identifiable.
I will present several recent algorithms for analyzing parameter identifiability based on computational algebra.

2019-2020

Pierre Lairez (SpecFun, INRIA Saclay – Île-de-France)

June 25, 2020 at 10h15, online seminar

Symbolic-numeric computation of volume integrals

How can one compute the volume of the region where a multivariate polynomial takes positive values? Above all, how can one do better than Monte-Carlo methods, which proceed by sampling? I will present three methods.
The first, due to Henrion, Lasserre and Savorgnan, in a variant due to Jasour, Hofmann and Williams, uses a formulation in terms of moments to reduce the volume computation to a sequence of convex optimization problems.
The second, introduced by Mezzarobba, Safey El Din and myself, uses symbolic integration to reduce the volume computation to the numerical solution of a linear differential equation.
Finally, the third, still work in progress, by Berthomieu, Mezzarobba, Safey El Din and myself, combines the two previous ones.
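For comparison, the Monte-Carlo baseline that these methods aim to beat is easy to sketch; its error decreases only like 1/sqrt(samples), which is exactly the weakness motivating the talk. Function names below are illustrative.

```python
import random

def mc_volume(poly, dim, samples=100_000, seed=0):
    """Estimate the volume of {x in [-1,1]^dim : poly(x) > 0} by uniform
    sampling: fraction of hits times the volume of the bounding box."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(samples)
        if poly([rng.uniform(-1.0, 1.0) for _ in range(dim)]) > 0)
    return 2.0 ** dim * hits / samples

# The unit disk {x^2 + y^2 < 1} has area pi, recovered here to ~2 digits.
est = mc_volume(lambda x: 1.0 - x[0] ** 2 - x[1] ** 2, 2)
```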

Tristan Vaccon (XLIM, Université de Limoges)

June 4, 2020 at 10h15, online seminar

\( p \)-adic precision, examples and applications

\(p\)-adic numbers can usually only be handled with finite precision, which raises the problems of determining the smallest precision needed for a computation and of tracking the loss of precision per operation.
With X. Caruso and D. Roe, we have provided a method to handle precision over \(p\)-adics that relies on differentials and first-order approximation. It provides results that are (essentially) optimal and do not depend on the choice of algorithm.
We will present various illustrations of this technique: computation of determinants, characteristic polynomials, \(p\)-adic differential equations, etc.
We will also present a Sagemath implementation that automatically computes the optimal precision for a given computation.

Pascal Koiran (MC2, LIP) and Bruno Salvy (AriC, LIP)

March 12, 2020 at 10h15, Amphi B (3rd floor, Monod)

Bruno Salvy: Absolute root separation

The absolute separation of a polynomial is the minimum nonzero difference between the absolute values of its roots. In the case of polynomials with integer coefficients, it can be bounded from below in terms of the degree and the height (the maximum absolute value of the coefficients) of the polynomial. We improve the known bounds for this problem and related ones. Then we report on extensive experiments in low degrees, suggesting that the current bounds are still very pessimistic.
This is joint work with Yann Bugeaud, Andrej Dujella, Wenjie Fang and Tomislav Pejkovic.   
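For small experiments of the kind mentioned, the absolute separation can be estimated numerically with NumPy. This is indicative only: floating-point root-finding cannot certify anything, whereas the bounds in the talk are proved symbolically.

```python
import numpy as np

def abs_separation(coeffs):
    """Smallest nonzero gap between the absolute values of the roots of a
    polynomial given by its coefficients in decreasing degree.  Gaps below
    1e-9 are treated as numerically zero."""
    r = sorted(abs(z) for z in np.roots(coeffs))
    gaps = [b - a for a, b in zip(r, r[1:]) if b - a > 1e-9]
    return min(gaps) if gaps else 0.0

# (x - 1)(x - 2)(x + 4) = x^3 + x^2 - 10x + 8: the absolute values of the
# roots are 1, 2, 4, so the absolute separation is 1.
sep = abs_separation([1, 1, -10, 8])
```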

Pascal Koiran: Root separation for trinomials

The talk will be based on https://arxiv.org/abs/1709.03294. We give a separation bound for the complex roots of a trinomial \(f\) with integer coefficients. The logarithm of the inverse of our separation bound is polynomial in the size of the sparse encoding of \(f\); in particular, it is polynomial in \(\log(\deg f)\). It is known that no such bound is possible for 4-nomials (polynomials with 4 monomials). For trinomials, the classical results (which are based on the degree of \(f\) rather than the number of monomials) give separation bounds that are exponentially worse.
As an algorithmic application, we show that the number of real roots of a trinomial \(f\) can be computed in time polynomial in the size of the sparse encoding of \(f\). The same problem is open for 4-nomials.

Robin Larrieu (LMV, Université de Versailles)

February 13, 2020 at 10h15, room M7-315 (3rd floor, Monod)

Fast polynomial reduction for generic bivariate ideals

Let \( A, B \) in \(K[X,Y]\) be two bivariate polynomials over an effective field \(K\), and let \(G\) be the reduced Gröbner basis of the ideal \(I := ⟨A,B⟩\) generated by \(A\) and \(B\), with respect to some weighted-degree lexicographic order. Under some genericity assumptions on \(A, B,\) we will see how to reduce a polynomial with respect to \(G\) with quasi-optimal complexity, despite the fact that \(G\) is much larger than the intrinsic complexity of the problem. For instance, if \(A, B\) have total degree \(n\), that is \(O(n^2)\) coefficients, then \(G\) has \(O(n^3)\) coefficients but reduction can be done in time \( O(n^2)\).
We will consider two situations where these new ideas apply, leading to different algorithms:   

  • First, there is a class called “vanilla Gröbner bases” for which there is a so-called terse representation that, once precomputed, allows one to reduce any polynomial \( P \) in time \( O(n^2)\). In this setting, assuming suitable precomputation, multiplication and change of basis can therefore be done in time \( O(n^2)\) in the quotient algebra \(K[X,Y] / ⟨A,B⟩\).
  • Then, we assume that \( A \) and \( B \) are given in total degree and we consider the usual degree lexicographic order. Although the bases are not vanilla in this case, they admit a so-called concise representation with similar properties. Actually, the precomputation can also be done efficiently in this particular setting: from the input \(A, B\), one can compute a Gröbner basis in concise representation in time \( O(n^2)\). As a consequence, multiplication in \( K[X,Y] / ⟨A,B⟩\) can be done in time \( O(n^2) \) including the cost of precomputation.

Laurence Rideau (STAMP, INRIA Sophia Antipolis – Méditerranée)

January 30, 2020 at 10h15, room M7-315 (3rd floor, Monod)

Formalisation in Coq of the correctness of double-word arithmetic algorithms and their error bounds

This talk presents the formalisation in Coq of the article “Tight and rigorous error bounds for basic building blocks of double-word arithmetic” by M. Joldes, J.M. Muller and V. Popescu.
We show how this formalisation made it possible to highlight some errors and some inaccuracies in the proofs of the paper.
I will focus in particular on the dangers of the “wlog” (without loss of generality) steps, which are used extensively in this type of proof.
We will also discuss the advantages and disadvantages of such a formalisation, and how this work has improved confidence in the results of the article, despite the errors detected, and has also led to improvements in the Flocq library (on which it relies heavily).

Miruna Rosca (AriC, LIP)

December 5, 2019 at 10h15, room M7-315 (3rd floor, Monod)

MPSign: A Signature from Small-Secret Middle-Product Learning with Errors

We describe a digital signature scheme MPSign, whose security relies on the conjectured hardness of the Polynomial Learning With Errors problem (PLWE) for at least one defining polynomial within an exponential-size family (as a function of the security parameter). The proposed signature scheme follows the Fiat-Shamir framework and can be viewed as the Learning With Errors counterpart of the signature scheme described by Lyubashevsky at Asiacrypt 2016, whose security relies on the conjectured hardness of the Polynomial Short Integer Solution (PSIS) problem for at least one defining polynomial within an exponential-size family. As opposed to the latter, MPSign enjoys a security proof from PLWE that is tight in the quantum-access random oracle model. The main ingredient is a reduction from PLWE for an arbitrary defining polynomial among exponentially many, to a variant of the MiddleProduct Learning with Errors problem (MPLWE) that allows for secrets that are small compared to the working modulus. We present concrete parameters for MPSign using such small secrets, and show that they lead to significant savings in signature length over Lyubashevsky’s Asiacrypt 2016 scheme (which uses larger secrets) at typical security levels. As an additional small contribution, and in contrast to MPSign (or MPLWE), we present an efficient key-recovery attack against Lyubashevsky’s scheme (or the inhomogeneous PSIS problem), when it is used with sufficiently small secrets, showing the necessity of a lower bound on secret size for the security of that scheme.
This is joint work with Shi Bai, Dipayan Das, Ryo Hiromasa, Amin Sakzad, Damien Stehlé, Ron Steinfeld and Zhenfei Zhang.

Vincent Lefèvre (AriC, LIP)

November 21, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Accurate Complex Multiplication in Floating-Point Arithmetic

We deal with accurate complex multiplication in binary floating-point arithmetic, with an emphasis on the case where one of the operands is a “double-word” number. We provide an algorithm that returns a complex product with normwise relative error bound close to the best possible one, i.e., the rounding unit u. We also discuss variants of this algorithm.
This is joint work with Jean-Michel Muller.
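The normwise relative error in question can be measured experimentally by comparing the plain binary64 product with an exact rational computation. This sketch measures only the naive complex product, not the double-word algorithms of the talk; the classical bound used in the comment, sqrt(5) u for the naive product, is a known result, not one from the talk.

```python
import math
import random
from fractions import Fraction

def naive_cmul(a, b):
    # Plain binary64 complex product: four multiplications, two additions.
    return complex(a.real * b.real - a.imag * b.imag,
                   a.real * b.imag + a.imag * b.real)

def normwise_relerr(a, b):
    """Normwise relative error of naive_cmul(a, b), the exact product
    being computed in rational arithmetic."""
    ar, ai, br, bi = map(Fraction, (a.real, a.imag, b.real, b.imag))
    zr, zi = ar * br - ai * bi, ar * bi + ai * br
    z = naive_cmul(a, b)
    num = math.hypot(float(Fraction(z.real) - zr),
                     float(Fraction(z.imag) - zi))
    return num / math.hypot(float(zr), float(zi))

# The naive product satisfies the classical bound sqrt(5) * u, u = 2^-53;
# the algorithms of the talk get close to u itself.
rng = random.Random(1)
worst = max(
    normwise_relerr(complex(rng.random(), rng.random()),
                    complex(rng.random(), rng.random()))
    for _ in range(1000))
```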

Fabien Laguillaumie (AriC, LIP)

November 14, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Threshold variant of the digital signature algorithm standard

(EC)DSA is a widely adopted digital signature standard. Unfortunately, efficient distributed variants of this primitive are notoriously hard to achieve, and known solutions often require expensive zero-knowledge proofs to deal with malicious adversaries. For the two-party case, Lindell recently managed to get an efficient solution which, to achieve simulation-based security, relies on an interactive, non-standard assumption on Paillier’s cryptosystem.
In this talk, I will give some recent results to improve Lindell’s solution in terms of security and efficiency, and discuss some possible extension to a full threshold variant.
This is joint work with Guilhem Castagnos, Dario Catalano, Federico Savasta and Ida Tucker.

Mioara Joldes (CNRS LAAS, Toulouse)

November 7, 2019 at 10h15, room M7-315 (3rd floor, Monod)

An optimization viewpoint for machine-efficient polynomial approximations

Machine implementation of mathematical functions often relies on polynomial approximations. The particularity is that rounding errors occur both when representing the polynomial coefficients on a finite number of bits, and when evaluating it in finite precision. Hence, for finding the best polynomial (for a given fixed degree, norm and interval), one has to take into account both types of errors: approximation and evaluation. By considering a linearized evaluation error model, we formulate a semi-infinite linear optimization problem, whose solution can be obtained by an iterative exchange algorithm. This can be seen as an extension of the Remez algorithm. A discussion of the obtained results and further developments concludes this talk. This is joint work with F. Bréhard and D. Arzelier.

Geoffroy Couteau (CNRS and Paris 7)

October 24, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Efficient Pseudorandom Correlation Generators: Silent OT Extension and More

Secure multiparty computation (MPC) often relies on sources of correlated randomness for better efficiency and simplicity. This is particularly useful for MPC with no honest majority, where input-independent correlated randomness enables a lightweight “non-cryptographic” online phase once the inputs are known. However, since the amount of correlated randomness typically scales with the circuit size of the function being computed, securely generating correlated randomness forms an efficiency bottleneck, involving a large amount of communication and storage. A natural tool for addressing the above limitations is a pseudorandom correlation generator (PCG).
A PCG allows two or more parties to securely generate long sources of useful correlated randomness via a local expansion of correlated short seeds and no interaction. PCGs enable MPC with silent preprocessing, where a small amount of interaction used for securely sampling the seeds is followed by silent local generation of correlated pseudorandomness.
A concretely efficient PCG for Vector-OLE correlations was recently obtained by Boyle et al. (CCS 2018) based on variants of the learning parity with noise (LPN) assumption over large fields. In this work, we initiate a systematic study of PCGs and present concretely efficient constructions for several types of useful MPC correlations. We obtain the following main contributions:
– PCG foundations. We give a general security definition for PCGs. Our definition suffices for any MPC protocol satisfying a stronger security requirement that is met by existing protocols. We prove that a stronger security requirement is indeed necessary, and justify our PCG definition by ruling out a stronger and more natural definition.
– Silent OT extension. We present the first concretely efficient PCG for oblivious transfer correlations. Its security is based on a variant of the binary LPN assumption and any correlation-robust hash function. We expect it to provide a faster alternative to the IKNP OT extension protocol (Crypto ’03) when communication is the bottleneck. We present several applications, including protocols for non-interactive zero-knowledge with bounded-reusable preprocessing from binary LPN, and concretely efficient related-key oblivious pseudorandom functions.
– PCGs for simple 2-party correlations. We obtain PCGs for several other types of useful 2-party correlations, including (authenticated) one-time truth-tables and Beaver triples. While the latter PCGs are slower than our PCG for OT, they are still practically feasible. These PCGs are based on a host of assumptions and techniques, including specialized homomorphic secret sharing schemes and pseudorandom generators tailored to their structure.
– Multiparty correlations. We obtain PCGs for multiparty correlations that can be used to make the circuit-dependent communication of MPC protocols scale linearly (instead of quadratically) with the number of parties.

Benjamin Wesolowski (CWI Amsterdam)

October 10, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Discrete logarithms in quasi-polynomial time in finite fields of small characteristic

We prove that the discrete logarithm problem can be solved in quasi-polynomial expected time in the multiplicative group of finite fields of fixed characteristic. In 1987, Pomerance proved that this problem can be solved in expected subexponential time \(L(1/2)\). The following 30 years saw a number of heuristic improvements, but no provable results. The quasi-polynomial complexity has been conjectured to be reachable since 2013, when a first heuristic algorithm was proposed by Barbulescu, Gaudry, Joux, and Thomé. We prove this conjecture, and more generally that this problem can be solved in the field of cardinality \( p^n \) in expected time \((pn)^{2 \log_2 (n)+O(1)}\).

Théo Mary (University of Manchester, U.K.)

October 3, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Sharper and smaller error bounds for low precision scientific computing

With the rise of large scale, low precision computations, numerical algorithms having a backward error bound of the form \( nu \), for a problem size \( n \) and a machine precision \( u \), are no longer satisfactory. Indeed, with half precision arithmetic, such algorithms cannot guarantee even a single correct digit for problems larger than a few thousands. This has created a need for error bounds that are both sharper and smaller. In this talk, we will discuss recent advances towards this double goal. We will present new error analyses to obtain probabilistic bounds that are sharper on average, and new algorithms that achieve much smaller bounds without sacrificing high performance.
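The gap between worst-case \( nu \) bounds and typical behaviour is easy to observe: for recursive summation, the measured error on random data usually grows like \(\sqrt{n}\,u\) rather than \( nu \). The quick experiment below illustrates the phenomenon; it is not taken from the analyses of the talk.

```python
import random
from fractions import Fraction

def sum_error_in_u(n, seed=0):
    """Relative forward error of left-to-right binary64 summation of n
    random terms, expressed in units of u = 2^-53.  The worst-case bound
    is roughly n*u; on random data one typically observes O(sqrt(n))*u.
    The reference sum is computed exactly with rationals."""
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    s = 0.0
    for x in xs:
        s += x
    exact = sum(Fraction(x) for x in xs)
    return float(abs(Fraction(s) - exact) / exact / Fraction(2) ** -53)

n = 1 << 16
err = sum_error_in_u(n)   # typically a few hundred, far below n = 65536
```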

2018-2019

Miruna Rosca (AriC)

July 4, 2019 at 10h15, room M7-315 (3rd floor, Monod)

On the Middle-Product Learning With Errors Problem and its applications in cryptography

In this talk, we introduce MP-LWE, a new variant of the Learning With Errors problem (LWE) that makes use of the middle product of polynomials modulo an integer q. We exhibit a reduction from the Polynomial-LWE problem (PLWE), parametrized by a polynomial f, to MP-LWE, which works for a large family of polynomials, and we analyze the applications of MP-LWE in cryptography.
This is joint work with A. Sakzad, D. Stehlé and R. Steinfeld.

François Morain (LIX, École polytechnique)

June 13, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Continued fractions with fast algorithms

Continued fractions are a very important tool for the approximation of real numbers, with many applications in algorithmic number theory as well as in cryptanalysis. One of the important applications is Cornacchia’s algorithm, which elegantly solves the problem of representing primes in the form \(p = x^2 + d\,y^2\) with \(d > 0\). This talk will present the use of fast integer gcd algorithms to obtain a fast version of Cornacchia’s algorithm.
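Cornacchia's algorithm itself is short: a modular square root of -d followed by a truncated Euclidean algorithm. The naive sketch below finds the square root by brute force (a real implementation would use Tonelli–Shanks, and a fast gcd as in the talk, where the Euclidean part is precisely what is accelerated).

```python
import math

def cornacchia(p, d):
    """Solve p = x^2 + d*y^2 for an odd prime p and 0 < d < p: take a
    square root t of -d mod p with p/2 < t < p, run the Euclidean
    algorithm on (p, t) until the remainder drops below sqrt(p), then
    check that the cofactor is a perfect square."""
    t = next((t for t in range(1, p) if t * t % p == (-d) % p), None)
    if t is None:
        return None                    # -d is not a square mod p
    if t <= p // 2:
        t = p - t
    a, b = p, t
    limit = math.isqrt(p)
    while b > limit:
        a, b = b, a % b
    c, rem = divmod(p - b * b, d)
    if rem:
        return None
    y = math.isqrt(c)
    return (b, y) if y * y == c else None

# 97 = 7^2 + 3 * 4^2
solution = cornacchia(97, 3)
```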

Vincent Neiger (XLIM, Université de Limoges)

May 2, 2019 at 10h15, room M7-315 (3rd floor, Monod)

On the complexity of modular composition of generic polynomials

This talk is about algorithms for modular composition of univariate polynomials, and for computing minimal polynomials. For two univariate polynomials \(a\) and \(g\) over a commutative field, modular composition asks to compute \(h(a) \bmod g\) for some given \(h\), while the minimal polynomial problem is to compute \(h\) of minimal degree such that \( h(a) = 0 \bmod g\). For generic \(g\) and \(a\), we propose algorithms whose complexity bound improves upon previous algorithms and in particular upon Brent and Kung’s approach (1978); the new complexity bound is subquadratic in the degree of \(g\) and \(a\) even when using cubic-time matrix multiplication. Our improvement comes from the fast computation of specific bases of bivariate ideals, and from efficient operations with these bases thanks to fast univariate polynomial matrix algorithms. We will also report on software development and comment on implementation results for the main building blocks in our composition algorithm.
Contains joint work with Seung Gyu Hyun, Bruno Salvy, Eric Schost, Gilles Villard.
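As a baseline for the problem statement, modular composition can always be done with deg(h) modular multiplications by Horner's rule; Brent and Kung's method and the algorithms of the talk improve on this. A sketch over a small prime field (the field GF(101) and all function names are arbitrary choices for illustration):

```python
P = 101  # an arbitrary small prime; all coefficients live in GF(P)

def pmul(a, b):
    # Schoolbook product of coefficient lists (constant term first) mod P.
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % P
    return r

def pmod(a, g):
    # Remainder of a modulo a monic polynomial g, by long division.
    a = a[:]
    while len(a) >= len(g):
        c = a[-1]
        for i, gi in enumerate(g):
            a[len(a) - len(g) + i] = (a[len(a) - len(g) + i] - c * gi) % P
        a.pop()
    return a

def modcomp(h, a, g):
    # h(a) mod g by Horner's rule: deg(h) modular products.
    r = [h[-1] % P]
    for c in reversed(h[:-1]):
        r = pmod(pmul(r, a), g)
        r[0] = (r[0] + c) % P
    return r

# x^3 composed with a = x, modulo g = x^2 + 1: x^3 = -x mod g.
assert modcomp([0, 0, 0, 1], [0, 1], [1, 0, 1]) == [0, P - 1]
```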

Bruno Grenet (ECO, LIRMM)

April 11, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Polynomial multiplication without extra memory

The problem of polynomial multiplication has been studied extensively since the 1960s, and various algorithms have been proposed to achieve the best possible time complexities.
More recently, some of these algorithms have been studied from the point of view of their space complexity, and modified so as to use no extra space beyond the inputs and outputs, while keeping the same asymptotic time complexity.
In this work, we extend these results in two directions. On the one hand, we ask whether any polynomial multiplication algorithm generically admits an "in-place" variant, that is, one using no extra space. On the other hand, we consider two important variants of the problem that produce only part of the result, the so-called short and middle products, and ask whether these operations can also be performed in place.
To answer both questions (essentially) in the affirmative, we propose a series of reductions whose starting point is any multiplication algorithm with linear space complexity. For the full and short products, these reductions yield in-place variants of the algorithms with the same asymptotic time complexity. For the middle product, the reduction incurs an extra logarithmic factor in the time complexity when the latter is quasi-linear.
This is joint work with Pascal Giorgi and Daniel Roche.
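To fix notation, the three products mentioned above can be defined by naive quadratic loops (plain Python, integer coefficients in increasing degree). These are reference definitions only, with no attempt at the in-place behavior studied in the talk.

```python
# Naive reference implementations of the full, short and middle products.

def full_product(f, g):
    """All len(f) + len(g) - 1 coefficients of f * g."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def short_product(f, g, n):
    """Low part of f * g: coefficients of x^0 .. x^(n-1), i.e. f*g mod x^n."""
    return [sum(f[i] * g[k - i]
                for i in range(max(0, k - len(g) + 1), min(k + 1, len(f))))
            for k in range(n)]

def middle_product(f, g):
    """For len(f) = n and len(g) = 2n - 1: coefficients n-1 .. 2n-2 of f * g,
    the slice where every coefficient of f meets every coefficient of g."""
    n = len(f)
    return full_product(f, g)[n - 1:2 * n - 1]
```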

Damien Stehlé (AriC)

April 4, 2019 at 10h15, room M7-315 (3rd floor, Monod)

A survey on security foundations of fast lattice-based cryptography

The Learning With Errors problem (LWE) captures the asymptotic hardness of some standard lattice problems, and enables the design of cryptographic schemes. However, these LWE-based schemes are relatively inefficient.
To address this issue, algebraic variants of LWE have been introduced, such as Polynomial-LWE, Ring-LWE, Module-LWE and MiddleProduct-LWE, whose definitions involve polynomial rings and number fields.
In this talk, I will survey the state of the art on these problems.

Jean-Michel Muller (AriC)

March 7, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Error analysis of some operations involved in the Fast Fourier Transform

We are interested in obtaining error bounds for the classical FFT algorithm in floating-point arithmetic, for the 2-norm as well as for the infinity norm. For that purpose we also give some results on the relative error of complex multiplication by a root of unity, and on the largest value that the real or imaginary part of one term of the FFT of a vector x can take, assuming that all terms of x have real and imaginary parts less than some value b.
This is a joint work with N. Brisebarre, M. Joldes, A.-M. Nanes and J. Picot.
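A simple way to observe such rounding errors empirically (not the rigorous bounds of the talk) is to run a textbook radix-2 FFT in double precision and measure the round-trip residual x - IFFT(FFT(x))/n in both norms. The self-contained sketch below uses only the standard library; the input bound b plays the role of the value b in the abstract.

```python
import cmath
import math
import random

def fft(x, inverse=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2], inverse)
    odd = fft(x[1::2], inverse)
    sign = 1 if inverse else -1
    out = [0j] * n
    for k in range(n // 2):
        # Root of unity, itself computed (and rounded) in floating point.
        w = cmath.exp(sign * 2j * math.pi * k / n)
        t = w * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# Random input with real and imaginary parts bounded by b.
random.seed(0)
n, b = 1024, 1.0
x = [complex(random.uniform(-b, b), random.uniform(-b, b)) for _ in range(n)]

# Round trip: x -> FFT -> inverse FFT / n should recover x up to rounding.
y = [v / n for v in fft(fft(x), inverse=True)]
err = [abs(a - c) for a, c in zip(x, y)]
err_inf = max(err)                              # infinity norm of the error
err_2 = math.sqrt(sum(e * e for e in err))      # 2-norm of the error
```

For n = 1024 in IEEE double precision, both norms come out many orders of magnitude below the input bound, consistent with the O(u log n) behavior one expects.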

Assia Mahboubi (Gallinette, INRIA Rennes – Bretagne Atlantique, LS2N)

February 14, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Formally Verified Approximations of Definite Integrals

In this talk, we discuss the problem of computing values of one-dimensional definite integrals, with the highest possible guarantee of correctness. Finding an elementary form for an antiderivative is often not an option, and numerical integration is a common tool when it comes to making sense of a definite integral. Some numerical integration methods can even be made rigorous: not only do they compute an approximation of the integral value but they also bound its inaccuracy. These rigorous algorithms are implemented in software like INTLAB, VNODE-LP, Arb, etc. But the highest possible guarantee of correctness on such approximations, even those obtained by rigorous means, would in fact be provided by a formal proof, machine-checked using a proof assistant. Proof assistants are pieces of software for representing mathematical definitions, statements, algorithms and proofs in a digital form, suitable for computer processing. In particular, they can be used to devise formal-proof-producing implementations of programs. But numerical integration is still missing from the toolbox when performing formal proofs. We thus describe and discuss a method for automatically computing and proving bounds on some definite integrals, implemented inside the Coq proof assistant.
This is a joint work with Guillaume Melquiond and Thomas Sibut-Pinote.

Ida Tucker (AriC)

February 7, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Practical fully secure unrestricted inner product functional encryption modulo a prime p

Functional encryption (FE) is an advanced cryptographic primitive which allows one to finely control, for a single encrypted message, how much information on the encrypted data each receiver can recover. To this end, many functional secret keys are derived from a master secret key. Each functional secret key allows one to recover, from a ciphertext encrypted under the associated public key, a specific function of the underlying plaintext.
However, constructions for general FE are far from practical, or rely on non-standard and ill-understood cryptographic assumptions.
In this talk I will focus on the construction of efficient FE schemes for linear functions (i.e. the inner-product functionality), and on the framework in which our constructions hold. Such schemes yield many practical applications, and our constructions are the first FE schemes for inner products modulo a prime that are both efficient and able to recover the result whatever its size. I will also describe an instantiation of the framework using class groups of imaginary quadratic fields.
This is a joint work with Guilhem Castagnos and Fabien Laguillaumie.

Éric Goubault (LIX, École Polytechnique)

January 24, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Finding Positive Invariants of Polynomial Dynamical Systems – some experiments

Synthesizing positive invariants of non-linear ODEs, switched systems, or even hybrid systems is a hard problem that has many applications, from control to verification. In this talk, I will present two "exercices de style" for dealing with it, revisiting the classical Lyapunov function approach. The first one is based on algebraic properties of polynomial differential systems (Darboux polynomials, when they exist), and finds polynomial, rational, or even some log extensions of rational functions whose level sets or sub-level sets describe positive invariants of these systems, or provide interesting "changes of basis" for describing their solutions. The second one is based on topological properties (the Wazewski property, mostly) which ensure the existence, in some region of the state space, of a non-empty maximal invariant set. The interest is that there is then, in general, no need to find complicated functions precisely describing the invariant set itself; instead we use simple template shapes in which a possibly very complicated invariant set lies. The topological criterion can be ensured by suitable SoS relaxations, for polynomial differential systems, which can be implemented using LMI solvers.

Alain Passelègue (AriC)

January 17, 2019 at 10h15, room M7-315 (3rd floor, Monod)

New candidate pseudorandom functions and their applications

In this talk, I will present new and simple candidate pseudorandom functions (PRFs) introduced in a recent work, in which we depart from the traditional approaches for building PRFs used in provable security or in applied cryptography by exploring a new space of plausible PRF candidates. Our guiding principle is to maximize simplicity while optimizing complexity measures that are relevant to advanced cryptographic applications. Our primary focus is on weak PRFs computable by very simple circuits (depth-2 ACC circuits).
The advantage of our approach is twofold. On the theoretical side, the simplicity of our candidates enables us to draw many natural connections between their hardness and questions in complexity theory or learning theory. On the applied side, the piecewise-linear structure of our candidates lends itself nicely to applications in secure multiparty computation (MPC). In particular, we construct protocols for distributed PRF evaluation that achieve better round complexity and/or communication complexity compared to protocols obtained by combining standard MPC protocols with practical PRFs (including MPC-friendly ones).
Finally, we introduce a new primitive we call an encoded-input PRF (EI-PRF), which can be viewed as an interpolation between weak PRFs and standard (strong) PRFs. As we demonstrate, an encoded-input PRF can often be used as a drop-in replacement for a strong PRF, combining the efficiency benefits of weak PRFs and the security benefits of strong PRFs. We give a candidate EI-PRF based on our main weak PRF candidate.

Joint work with Dan Boneh, Yuval Ishai, Amit Sahai, and David J. Wu, published at TCC 2018.

Chee Yap (New York University)

January 9, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Subdivision Path Planning in Robotics: Theory and Practice

Motion planning is a fundamental problem in robotics. We propose to design path planners based on three foundations:
(1) The notion of “resolution-exact” planners. Conceptually, it avoids the zero problem of exact computation.
(2) The use of “soft predicates” for achieving such algorithms in the subdivision approach.
(3) The “feature-based technique” for constructing such soft predicates.
We formulate an algorithmic framework called “Soft Subdivision Search” (SSS) that incorporates these ideas. There are many parallels between our framework and the well-known Sampling or Probabilistic Roadmap framework. Both frameworks lead to algorithms that are
* practical
* easy to implement
* flexible and extensible
* with adaptive and local complexity
In contrast to sampling and previous resolution approaches, SSS confers strong theoretical guarantees, including halting.

In a series of papers we demonstrated the power of these ideas by producing planners for planar robots with 2, 3 and 4 degrees of freedom (DOF) that outperform or match state-of-the-art sampling-based planners. Most recently, we produced a planner for two spatial robots (rod and ring) with 5 DOFs. Non-heuristic planners for such robots have been considered a challenge for the subdivision approach. We outline a general axiomatic theory underlying these results, including subdivision in non-Euclidean configuration spaces.

Joint work with Y.J. Chiang, C.H. Hsu, C. Wang, Z. Luo, B. Zhou, J.P. Ryan.

Elena Kirshanova (AriC)

December 13, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Practical sieving algorithms for the Shortest Vector Problem

In this talk I present recent results on sieving algorithms for the Shortest Vector Problem (SVP). First, I explain why this problem is important and how sieving algorithms work. Then, I present recent advances in memory-efficient versions of sieving algorithms, and explain the locality-sensitive techniques used in these algorithms. This part of the talk is based on joint work with Gottfried Herold and Thijs Laarhoven. Finally, I present recent advances in practical aspects of sieving algorithms for SVP. I describe the technical challenges that arise when one tries to make sieving algorithms practical, and how one can overcome some of them. This part of the talk is ongoing work with Martin R. Albrecht, Leo Ducas, Gottfried Herold, Eamonn W. Postlethwaite, and Marc Stevens.

Nicolas Brunie (Kalray)

December 6, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Overview of arithmetic at Kalray: metalibm and the rest

Kalray’s version of Metalibm, “lugdunum”, has recently been open-sourced. It is an interesting tool for developing elementary functions. In this presentation we will present the tool and show how it can be used to explore the design space of a few elementary functions. Then we will present in more detail how Metalibm is used at Kalray to develop both the next-generation hardware and the mathematical libraries, through the example of CRC reduction and OpenCL-C (kernel code) elementary functions. Finally, we will survey arithmetic at Kalray outside Metalibm, with a description of the next-generation processor and what is envisioned for the future.

Sylvie Putot (LIX, École Polytechnique)

November 29, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Forward Inner-Approximated Reachability of Non-Linear Continuous Systems

We propose an approach for computing inner-approximations of reachable sets of dynamical systems defined by non-linear, uncertain, ordinary differential equations. This is a notoriously difficult problem, much more intricate than outer-approximations, for which there exist well-known solutions, mostly based on Taylor models. Our solution builds on rather inexpensive set-based methods, namely a generalized mean-value theorem combined with Taylor-model outer-approximations of the flow and its Jacobian with respect to the uncertain inputs and parameters. The combination of such forward inner and outer Taylor-model based approximations can be used as a basis for the verification and falsification of properties of cyber-physical systems.

Radu Titiu (AriC and BitDefender)

November 22, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Adaptively secure PRFs from LWE

In distributed pseudorandom functions (DPRFs), a PRF secret key SK is secret-shared among N servers so that each server can locally compute a partial evaluation of the PRF on some input X. A combiner that collects t partial evaluations can then reconstruct the evaluation F(SK, X) of the PRF under the initial secret key. So far, all non-interactive constructions in the standard model are based on lattice assumptions. One caveat is that they are only known to be secure in the static corruption setting, where the adversary chooses the servers to corrupt at the very beginning of the game, before any evaluation query. In this work, we construct the first fully non-interactive adaptively secure DPRF in the standard model. Our construction is proved secure under the LWE assumption against adversaries that may adaptively decide which servers they want to corrupt. We also extend our construction in order to achieve robustness against malicious adversaries.

This is joint work with Benoit Libert and Damien Stehlé.
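The secret-sharing layer of such a DPRF can be illustrated with plain t-out-of-N Shamir sharing and Lagrange reconstruction at 0. The toy sketch below models only this layer (shares held by servers, a combiner collecting t of them); it does not model the PRF evaluation itself or any of the LWE machinery, and the prime field is an arbitrary choice for the example.

```python
import random

# Toy t-out-of-N Shamir sharing over a prime field. Each "server" i holds
# the point (i, P(i)) of a random degree-(t-1) polynomial P with P(0) = SK;
# any t shares reconstruct SK by Lagrange interpolation at 0.
P = 2**61 - 1  # a Mersenne prime, used as the field modulus

def share(secret, t, n, rng):
    """Split `secret` into n shares, any t of which suffice to reconstruct."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Combiner: Lagrange interpolation at x = 0 from t (or more) shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        lam = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                # multiply by xj / (xj - xi) mod P
                lam = lam * xj % P * pow(xj - xi, -1, P) % P
        secret = (secret + yi * lam) % P
    return secret
```

In an actual DPRF the servers never reveal their shares; instead each applies the PRF-like map to its share, and it is the partial evaluations that get combined, which requires a key-homomorphic structure this sketch does not provide.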

Martin Kumm (Uni. Kassel, Germany)

November 8, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Exact Computation of Monotonic Functions with Large Input Word Sizes using Look-Up Tables

The exact evaluation of arbitrary functions by using look-up tables (LUTs) is typically limited to small input word sizes. This is due to the fact that the storage requirements grow exponentially with the input word size \(N\) and linearly with the output word size \(M\), i.e., \(O(2^N M)\). However, many applications require the computation of elementary functions with a large precision of the input argument but a lower precision of the result. One example is direct digital frequency synthesis (DDFS), with typically \(N=32..48\) bit and \(M=8..12\) bit. Another example is tone mapping for high-dynamic-range (HDR) imaging, with typically \(N=16..19\) bit and \(M=8\) bit. In this talk, alternative architectures for the evaluation of monotonic functions using LUTs are discussed, whose memory requirements scale linearly with the input word size and exponentially with the output word size, i.e., \(O(2^M N)\). This is achieved by using \(M\) additional comparators or adders. First experimental results from FPGA synthesis show that this also translates into resource reductions for those applications where \(N\) is just larger than \(M\).
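The output-indexed idea can be modeled in software: instead of a table of \(2^N\) output values, store for each of the \(2^M\) output codes the smallest input that reaches it, and evaluate by counting how many thresholds lie below the input. The Python sketch below uses a hypothetical monotone example function (a quantized square root), not a function from the talk; the bisection stands in for the hardware comparators.

```python
from bisect import bisect_right
import math

# Toy model of an output-indexed LUT for a monotone function:
# memory is 2**M thresholds of N bits each, i.e. O(2**M * N),
# instead of 2**N outputs of M bits each, i.e. O(2**N * M).
N, M = 16, 8  # input and output word sizes

def f(x):
    """Monotone reference function: quantized square root (example only)."""
    return int(math.sqrt(x / (2**N - 1)) * (2**M - 1))

# thresholds[m] = min { x : f(x) >= m }, built once offline.
thresholds = []
next_m = 0
for x in range(2**N):
    v = f(x)
    while next_m <= v:
        thresholds.append(x)
        next_m += 1

def lut_eval(x):
    """Recover f(x) as (number of thresholds <= x) - 1, via comparisons."""
    return bisect_right(thresholds, x) - 1
```

In hardware, the binary search over the \(2^M\) thresholds corresponds to the \(M\) extra comparison stages mentioned in the abstract.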

Silviu Filip (CAIRN, Inria Rennes Bretagne Atlantique)

October 25, 2018 at 10h15, room M7-315 (3rd floor, Monod)

A High Throughput Polynomial and Rational Function Approximations Evaluator

We present an automatic method for the evaluation of functions via polynomial or rational approximations, and its hardware implementation on FPGAs. These approximations are evaluated using Ercegovac’s iterative E-method, adapted for FPGA implementation. The polynomial and rational function coefficients are optimized so that they satisfy the constraints of the E-method. We present several examples of practical interest; in each case a resource-efficient approximation is proposed and comparisons are made with alternative approaches.

Florent Bréhard (AriC)

October 18, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Rigorous Numerics for Function Space Problems and Applications to Aerospace

A wide range of numerical routines exist for solving function space problems (ODEs, PDEs, optimization, etc.). But in most cases, one lacks information about the reliability of the results, e.g., how many of the returned digits are correct. While most applications focus on efficiency, some safety-critical tasks, as well as computer-assisted mathematics, need rigorous mathematical statements about the computed result, such as automatic tight error bounds.

A relevant practical example occurs in the spacecraft rendezvous problem, which consists in determining the optimal command law for a spacecraft equipped with thrusters to be transferred from its original orbit to a target orbit within a given time interval. Computing rigorous trajectories is of high interest to guarantee a posteriori the correctness of the numerical command law returned by the optimization algorithm used to solve this problem.

In this talk we discuss a rigorous framework, called Chebyshev models, to provide validated enclosures of real-valued functions defined over a compact interval. After presenting the basic arithmetic operations on these models, we focus on an algorithm that computes validated solutions of linear ordinary differential equations, specifically, approximate truncated Chebyshev series together with a rigorous uniform error bound. The method relies on an a posteriori validation based on a Newton-like fixed-point operator, which also exploits the almost-banded structure of the problem in an efficient way. We provide an open-source C implementation (https://gforge.inria.fr/projects/tchebyapprox/).
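As a non-rigorous illustration of the first ingredient, the sketch below computes a truncated Chebyshev series for a function on [-1, 1] (interpolation at Chebyshev nodes, evaluation by the Clenshaw recurrence) and estimates its uniform error on a grid. A genuine Chebyshev model would replace this grid estimate by a validated bound obtained from the fixed-point argument described in the abstract.

```python
import math

def cheb_coeffs(f, n):
    """Coefficients c_0..c_{n-1} of the degree-(n-1) Chebyshev interpolant
    of f on [-1, 1], from values at the Chebyshev nodes of the first kind."""
    nodes = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    fv = [f(x) for x in nodes]
    c = []
    for k in range(n):
        s = sum(fv[j] * math.cos(math.pi * k * (j + 0.5) / n) for j in range(n))
        c.append((2.0 / n) * s)
    c[0] /= 2.0  # so the interpolant is simply sum_k c_k T_k(x)
    return c

def cheb_eval(c, x):
    """Evaluate sum_k c_k T_k(x) by the Clenshaw recurrence."""
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):
        b1, b2 = ck + 2.0 * x * b1 - b2, b1
    return c[0] + x * b1 - b2

# Example: degree-15 approximation of exp on [-1, 1], with a crude
# (non-validated) uniform error estimate on a fine grid.
c = cheb_coeffs(math.exp, 16)
grid = [-1.0 + 2.0 * i / 1000 for i in range(1001)]
bound = max(abs(cheb_eval(c, x) - math.exp(x)) for x in grid)
```

The rapid decay of the coefficients of smooth functions is what makes truncated Chebyshev series such compact certificates once a rigorous remainder bound is attached.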

Finally, certified enclosures of spacecraft trajectories arising in the rendezvous problem will be computed using the tools introduced during the talk.

Archives of the seminar

For seminars in previous years, see former AriC seminars.


AriC project – Arithmetic and Computing