Seminar

You can subscribe to the calendar of these seminars by clicking here.

Next seminars

Maxime Breden (CMAP, École Polytechnique)

July, 4 2024 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Computer-assisted proofs for nonlinear equations: how to turn a numerical simulation into a theorem

The goal of a posteriori validation methods is to get a quantitative and rigorous description of some specific solutions of nonlinear equations, often ODEs or PDEs, based on numerical simulations. The general strategy consists in combining a priori and a posteriori error estimates, interval arithmetic, and a fixed point theorem applied to a quasi-Newton operator. Starting from a numerically computed approximate solution, one can then prove the existence of a true solution in a small and explicit neighborhood of the numerical approximation.
I will first present the main ideas behind these techniques on a simple example, then describe a rather general framework in which they can be applied, and finally showcase how these techniques can be adapted in a straightforward and efficient way to produce rigorous continuation results, allowing one to prove the existence of branches of solutions and to rule out bifurcations.
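
As a minimal sketch of the fixed-point step described above, the following snippet runs one interval Newton iteration for the toy equation \(x^2 - 2 = 0\); the equation, neighborhood and function names are all illustrative, and plain floats stand in for the outward-rounded interval arithmetic a genuine proof would require.

```python
# Hedged sketch: interval Newton test N(X) inside X, for f(x) = x^2 - 2.
# A real computer-assisted proof would use rigorous (outward-rounded)
# interval arithmetic, e.g. MPFI or Arb; plain floats only mimic it here.

def interval_newton(lo, hi):
    """N(X) = m - f(m) / f'(X) on X = [lo, hi], with f(x) = x^2 - 2."""
    m = 0.5 * (lo + hi)
    fm = m * m - 2.0                         # f at the midpoint
    q = [fm / (2.0 * lo), fm / (2.0 * hi)]   # f'(X) = 2X, which avoids 0
    return m - max(q), m - min(q)

xbar = 1.41421356                  # the "numerical simulation": an approximate root
lo, hi = xbar - 1e-6, xbar + 1e-6  # small, explicit neighborhood around it
nlo, nhi = interval_newton(lo, hi)
# If N(X) is strictly contained in X, the interval Newton theorem gives the
# existence (and uniqueness) of a true zero of f inside X.
print("N(X) =", (nlo, nhi), "strictly inside X:", lo < nlo and nhi < hi)
```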

2023-2024

Orégane Desrentes (Kalray and INSA Lyon)

June, 27 2024 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Hardware implementation of linear algebra operators for small floating point formats

In this talk I will describe some state-of-the-art hardware operators for floating-point linear algebra, as well as discuss some of the difficulties encountered when designing such operators. New standards for small 8-bit floating-point formats are emerging, including “block floating point” formats, creating a need for new operators. The particularly small size of these formats enables us to choose architectures that are unsuitable for classic floating-point formats.
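
To make the sizes involved concrete, here is a hedged model of decoding an 8-bit float with an E4M3-style layout (1 sign bit, 4 exponent bits, 3 mantissa bits, bias 7); the function name is ours, and the handling of NaNs and saturation, which differs between the competing proposals, is deliberately ignored.

```python
# Hedged model of an E4M3-style 8-bit float: 1 sign, 4 exponent, 3 mantissa
# bits, exponent bias 7. Special values (NaN, saturation) are not modeled.

def decode_e4m3(byte):
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF
    man = byte & 0x7
    if exp == 0:                                   # subnormal: no implicit bit
        return sign * (man / 8.0) * 2.0 ** (1 - 7)
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 7)

# 0x38 = 0_0111_000: exponent field 7, mantissa 0, hence the value 1.0
print(decode_e4m3(0x38), decode_e4m3(0xB8))        # 1.0 -1.0
```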

Cyril Cohen (Inria)

June, 13 2024 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Measure Theory using Hierarchy Builder in the Rocq proof assistant

In this talk I will present the Hierarchy Builder Domain Specific Language for Coq, and illustrate its use in building measure theory and the Lebesgue measure in a concise way.

Mantas Mikaitis (University of Leeds)

April, 11 2024 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Monotonicity of Multi-Term Floating-Point Adders

In the literature on algorithms for performing multi-term addition in floating-point arithmetic, it is often shown that a hardware unit with a single normalization and rounding step improves precision, area, latency, and power consumption, compared with the use of standard add or fused multiply-add units. However, non-monotonicity can appear when computing sums with a subclass of multi-term addition units, a phenomenon currently unexplored in the literature. We demonstrate that common techniques for performing multi-term addition of more than four terms, without normalization of intermediate quantities, can result in non-monotonicity: increasing one of the addends can decrease the computed sum. We prove, for various ways of building multi-term adders, whether monotonicity is preserved or not.

Pascal Koiran (MC2, LIP, ÉNS Lyon)

April, 4 2024 at 10h15, Amphi C (GN1, Monod)

On the uniqueness and computation of commuting extensions

A tuple \( (Z_1,…,Z_p) \)  of matrices of size \(r\) is said to be a commuting extension of a tuple \( (A_1,…,A_p) \) of matrices of size \( n < r \) if the \( Z_i \) pairwise commute and each \( A_i \) sits in the upper left corner of a block decomposition of \( Z_i\).
This notion was discovered and rediscovered in several contexts, including algebraic complexity theory (in Strassen’s work on tensor rank), numerical analysis (for the construction of cubature formulas) and quantum mechanics (for the study of computational methods and of the so-called “quantum Zeno dynamics”).
We present three types of results:
(i) Theorems on the uniqueness of commuting extensions for three matrices or more.
(ii) Algorithms for the computation of commuting extensions of minimal size. These algorithms work under the same assumptions as our uniqueness theorems. They are applicable up to \( r=4n/3\), and are apparently the first provably efficient algorithms for this problem applicable beyond \( r=n+1 \).
(iii) A genericity theorem showing that our algorithms and uniqueness theorems can be applied to a wide range of input matrices.
The talk will be based on https://arxiv.org/pdf/2401.01302.pdf
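
As a small numerical illustration of the definition (not of the algorithms of the talk), the sketch below builds an ad hoc commuting pair \( (Z_1, Z_2) \) of size \(r = 3\) and checks that its upper-left \(2 \times 2\) blocks \( (A_1, A_2) \), which need not commute themselves, are extended by it.

```python
# Hedged illustration of the definition of a commuting extension: Z1 and Z2
# commute because they are simultaneously diagonalizable, and (Z1, Z2)
# extends the pair of upper-left 2x2 blocks (A1, A2). Matrices are ad hoc.
import numpy as np

P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
Pinv = np.linalg.inv(P)
Z1 = P @ np.diag([1.0, 2.0, 3.0]) @ Pinv
Z2 = P @ np.diag([1.0, 4.0, 9.0]) @ Pinv

A1, A2 = Z1[:2, :2], Z2[:2, :2]          # n = 2 corners of the r = 3 matrices
print(np.allclose(Z1 @ Z2, Z2 @ Z1))     # True: the Z_i pairwise commute
print(np.allclose(A1 @ A2, A2 @ A1))     # False: the A_i need not commute
```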

Guillaume Melquiond (Toccata, INRIA Saclay)

March, 28 2024 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

End-to-End Formal Verification of a Fast and Accurate Floating-Point Approximation

Designing an efficient yet accurate floating-point approximation of a mathematical function is an intricate and error-prone process. This warrants the use of formal methods, especially formal proof, to achieve some degree of confidence in the implementation. Unfortunately, the lack of automation or its poor interplay with the more manual parts of the proof makes it way too costly in practice. This talk revisits the issue by proposing a methodology and some dedicated automation, and applies them to the use case of a faithful binary64 approximation of exponential. The peculiarity of this use case is that the target of the formal verification is not a simple modeling of an external code, it is an actual floating-point function defined in the logic of the Coq proof assistant, which is thus usable inside proofs once its correctness has been fully verified. This function presents all the attributes of a state-of-the-art implementation: bit-level manipulations, large tables of constants, obscure floating-point transformations, exceptional values, etc. This function has been integrated into the proof strategies of the CoqInterval library, bringing a 20x speedup with respect to the previous implementation.

Silviu-Ioan Filip (Taran, IRISA, INRIA Rennes – Bretagne Atlantique)

March, 14 2024 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Mixed Precision Approaches for DNN Compression and Acceleration

Deep Neural Network (DNN) models offer state-of-the-art results in a plethora of application areas, with their societal and economic impact looking poised to only grow in the years to come. Some of the major challenges in adopting DNN models in practice are related to energy efficiency, performance and cost. An extreme example is that of large language models, which require significant resources to train and deploy.
One effective way of decreasing the pressure is at the arithmetic and data representation levels, with small number formats and precisions offering significant benefits, but at the risk of lost accuracy and useless outputs. In this talk I will give an overview of some of the main approaches and challenges in applying mixed-precision compute and data compression to DNN training and inference workloads, highlighting recent and ongoing work I am a part of.
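
As a minimal illustration of the small-formats idea, the sketch below simulates bfloat16-style truncation of float32 weights in NumPy; the function name is ours, and real mixed-precision pipelines involve much more (scaling, rounding modes, per-tensor formats) than this.

```python
# Hedged simulation of bfloat16-style truncation: keeping the top 16 bits of
# a float32 keeps its sign, all 8 exponent bits and 7 mantissa bits.
import numpy as np

def to_bfloat16_trunc(x):
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000).astype(np.float32)   # stand-in for DNN weights
wq = to_bfloat16_trunc(w)
# Truncating 16 mantissa bits costs at most a relative error of about 2^-7
print("max relative error:", float(np.max(np.abs(w - wq) / np.abs(w))))
```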

Alaa Ibrahim (AriC, LIP, ENS Lyon)

February, 15 2024 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Positivity Certificates for Linear Recurrences

We consider linear recurrences with polynomial coefficients of Poincaré type and with a unique simple dominant eigenvalue. We give an algorithm that proves or disproves positivity of solutions provided the initial conditions satisfy a precisely defined genericity condition. For positive sequences, the algorithm produces a certificate of positivity that is a data-structure for a proof by induction. This induction works by showing that an explicitly computed cone is contracted by the iteration of the recurrence. This is joint work with Bruno Salvy.
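
To convey the flavor of such a certificate on a toy case (this is not the algorithm of the talk), one can take the Motzkin numbers, which satisfy \( (n+4) M_{n+2} = (2n+5) M_{n+1} + 3(n+1) M_n \): the "cone" is the ratio interval \([3/2, 4]\), and the induction step checks that the recurrence maps it into itself.

```python
# Hedged toy positivity certificate for the Motzkin numbers: we verify by
# induction that r_n = M_{n+1}/M_n stays in the cone C = [3/2, 4] for n >= 1.
# Since M_1 = 1 > 0 and all subsequent ratios are positive, M_n > 0 follows.
from fractions import Fraction

LO, HI = Fraction(3, 2), Fraction(4)

def ratio_image(n, r):
    """r_{n+1} as a function of r_n = r: ((2n+5) + 3(n+1)/r) / (n+4)."""
    return (Fraction(2 * n + 5) + Fraction(3 * (n + 1)) / r) / (n + 4)

# The map above is decreasing in r, so checking the endpoint images suffices.
# The two checks reduce to the linear inequalities 4n+7 <= 4(n+4) and
# 11n+23 >= 6(n+4), valid for all n >= 1; we also spot-check a range of n.
for n in range(1, 1000):
    assert LO <= ratio_image(n, HI) and ratio_image(n, LO) <= HI

assert LO <= Fraction(2, 1) <= HI      # base case: r_1 = M_2/M_1 = 2
print("certificate checked: M_n > 0 for all n")
```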

Carlos Beltrán (Universidad de Cantabria, Spain)

December, 7 2023 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Do we have a sound theory of stability for floating point algorithms?

There are some apparently simple questions that still do not have a globally accepted answer. Surprisingly, one of them is “what is a stable algorithm”? As a result of the lack of formalism in some usual answers to this and similar questions, practitioners frequently resort to mantras such as the so-called “infamous rule of thumb” of Numerical Analysis. In this talk I will try to convince you that there is essentially a unique reasonable answer to the question above and that this answer actually solves a lot of very basic but also very important doubts. All in all, I will present a theory of stability of algorithms which is at the same time simple enough to be used and formal enough to be correct. This is joint work with Vanni Noferini and Nick Vannieuwenhoven.

Rémi Prébet (LIP6, Sorbonne Université)

November, 23 2023 at 10h15, online seminar and salle du conseil du LIP (3rd floor, Monod)

Connectivity in real algebraic sets: algorithms and applications

Computational real algebraic geometry is a field at the interface of mathematics and computer science, that deals with algorithmic problems on real solution sets (which are basic semi-algebraic sets) to systems of polynomial constraints with real coefficients.
I will first show how classical subroutines of this field, such as computing one point in each connected component or deciding connectivity queries, can be used to solve challenging problems in robotics, such as cuspidality decisions. Then, I will present new algorithmic improvements to the problem of answering connectivity queries in real algebraic sets using the notion of roadmaps, which makes it possible to reduce such problems to connectivity queries over real algebraic curves. Throughout the talk, we will pay close attention to the complexity of the algorithms under consideration.
This talk gathers joint works with D. Chablat, N. Islam, A. Poteaux, M. Safey El Din, D. Salunkhe, É. Schost and P. Wenger.

2022-2023

Michael Overton (Courant Institute, New York University)

June, 28 2023 at 16h15, Amphi B (3rd floor, Monod)

Crouzeix’s Conjecture

Crouzeix’s conjecture is among the most intriguing developments in matrix theory in recent years. Made in 2004 by Michel Crouzeix, it postulates that, for any polynomial \(p\) and any matrix \(A\), \(\|p(A)\| \le 2 \max\{|p(z)|: z \in W(A)\}\), where the norm is the 2-norm and \( W(A) \) is the field of values (numerical range) of \(A\), that is, the set of points attained by \( v^*Av \) for some vector \(v\) of unit length. Crouzeix proved in 2007 that the inequality above holds if \(2\) is replaced by \(11.08\), and in 2016 this was greatly improved by Palencia, replacing \( 2 \) by \( 1+\sqrt{2} \). Furthermore, it is known that the conjecture holds in a number of special cases. We use nonsmooth optimization to investigate the conjecture numerically by locally minimizing the “Crouzeix ratio”, defined as the quotient with numerator the right-hand side and denominator the left-hand side of the conjectured inequality. We use Chebfun to compute the boundary of the fields of values and BFGS for the optimization, computing the Crouzeix ratio at billions of data points. We also present local nonsmooth variational analysis of the Crouzeix ratio at conjectured global minimizers. All our results strongly support the truth of Crouzeix’s conjecture.
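
A hedged numerical sketch of the quantities involved (the talk's actual experiments use Chebfun and BFGS, not this script): boundary points of \(W(A)\) are obtained from top eigenvectors of the Hermitian part of rotated copies of \(A\), and the Crouzeix ratio is evaluated for the illustrative choice \(p(z) = z^2\).

```python
# Hedged estimate of the Crouzeix ratio for a random A and p(z) = z^2.
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
pA = A @ A                                  # p(A) for p(z) = z^2

# For each angle t, the top eigenvector of the Hermitian part of e^{it} A
# gives a supporting (boundary) point v* A v of the convex set W(A).
zs = []
for t in np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False):
    H = (np.exp(1j * t) * A + np.conj(np.exp(1j * t) * A).T) / 2.0
    v = np.linalg.eigh(H)[1][:, -1]         # eigenvector of largest eigenvalue
    zs.append(np.vdot(v, A @ v))

ratio = 2.0 * max(abs(z) ** 2 for z in zs) / np.linalg.norm(pA, 2)
print("Crouzeix ratio:", ratio)             # conjecturally always >= 1
```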

Gilles Villard (LIP, ENS Lyon)

June, 22 2023 at 10h15, online seminar and salle du conseil du LIP (3rd floor, Monod)

Elimination ideal (and bivariate resultant) in quasi-linear time over finite fields

A new algorithm is presented in arXiv:2302.08891 for computing the largest degree invariant factor of the Sylvester matrix (with respect either to \(x\) or \(y\)) associated to two polynomials \(a\) and \(b\) in \(\mathbb F_q[x,y]\) which have no non-trivial common divisors. The algorithm is randomized of the Monte Carlo type and requires \((de\log q)^{1+o(1)}\) bit operations, where \(d\) and \(e\) respectively bound the input degrees in \(x\) and in \(y\). It follows that the same complexity estimate is valid for computing: a generator of the elimination ideal \(\langle a,b \rangle \cap \mathbb F_q[x]\) (or \(\mathbb F_q[y]\)), as long as the polynomial system \(a=b=0\) has no roots at infinity; the resultant of \(a\) and \( b \) when they are sufficiently generic.

Our approach reduces the problem to the computation of a minimal polynomial in the quotient algebra \(\mathbb F_q[x,y]/\langle a,b \rangle\). By proposing a new method based on structured polynomial matrix division for computing with the elements in the quotient, we manage to improve the best known complexity bounds.

We present the main lines of the algorithm, and more particularly the underlying computations modulo the ideal generated by \(a\) and \(b\).

Marc Mezzarobba (CNRS, LIX, MAX)

June, 15 2023 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Asymptotic Expansions with Error Bounds for Solutions of Linear Recurrences

When a sequence \( (u(n)) \) of real or complex numbers satisfies a linear recurrence relation with polynomial coefficients, it is usually routine to determine its asymptotic behavior for large \( n\). Well-known algorithms and readily available implementations are often able, given a recurrence satisfied by \( (u(n)) \) and some additional information on the particular solution, to compute an explicit asymptotic expansion \[ u(n) = a_0(n) + a_1(n) + \cdots + a_{r-1}(n) + O(b(n)), \quad n \rightarrow \infty \] up to any desired order \(r\). If, however, one wishes to prove an inequality satisfied by \(u(n)\) for all \(n\), a big-Oh error term is not sufficient and an explicit error bound on the difference between the sequence and its asymptotic approximation is required. In this talk, I will present a practical algorithm for computing such bounds, under the assumption that the generating series of the sequence \( (u(n)) \) is solution of a differential equation with regular (Fuchsian) dominant singularities. I will show how it can be used to verify the positivity of a certain explicit sequence, which T. Yu and J. Chen recently proved to imply uniqueness of the surface of genus one in \( \mathbb{R}^3 \) of minimal bending energy when the isoperimetric ratio is prescribed.

This is based on joint work with Ruiwen Dong and Stephen Melczer.

Jérémy Berthomieu (PolSys, LIP6, Sorbonne Université)

May, 11 2023 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Guessing linear recurrence relations for polynomial system solving

In this talk, we focus on the link between multivariate polynomials and linear recurrence relations satisfied by a multi-indexed sequence to compute Gröbner bases.
First, we shall present msolve, a new open source C library, for solving polynomial systems using Gröbner bases. Second, we describe new algorithms and complexity estimates for computing Gröbner bases either for a total degree order or the lexicographic one. Then, we present linear-algebra-based and polynomial-division-based algorithms for guessing linear recurrences with constant or polynomial coefficients, in generic and structured situations.
This is based on joint works with Alin Bostan, Christian Eder, Jean-Charles Faugère, Andrew Ferguson, Vincent Neiger and Mohab Safey El Din.
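
For readers unfamiliar with "guessing", here is a toy univariate version (the talk deals with multi-indexed sequences, Gröbner bases and far more efficient algorithms): a constant-coefficient linear recurrence is guessed by solving a small linear system built from windows of the sequence; all names are illustrative.

```python
# Hedged toy guesser: find c with u_{n+order} = sum_j c_j u_{n+j}, if any.
import numpy as np

def guess_recurrence(terms, order):
    A = np.array([[terms[n + j] for j in range(order)]
                  for n in range(len(terms) - order)], dtype=float)
    b = np.array([terms[n + order] for n in range(len(terms) - order)],
                 dtype=float)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)    # least-squares candidate
    return c if np.allclose(A @ c, b) else None  # keep only exact fits

fib = [0, 1]
for _ in range(18):
    fib.append(fib[-1] + fib[-2])
print(guess_recurrence(fib, 2))   # ~[1. 1.]: u_{n+2} = u_{n+1} + u_n
print(guess_recurrence(fib, 1))   # None: no order-1 recurrence fits
```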

Fredrik Johansson (LFANT, INRIA Bordeaux)

March, 16 2023 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Faster computation of elementary functions

Over a decade ago, Arnold Schönhage proposed a method to compute elementary functions (exp, log, sin, arctan, etc.) efficiently in “medium precision” (up to about 1000 digits) by reducing the argument using linear combinations of pairs of logarithms of primes or Gaussian primes. We generalize this approach to an arbitrary number of primes (which in practice may be 10-20 or more), using an efficient algorithm to solve the associated Diophantine approximation problem. Although theoretically slower than the arithmetic-geometric mean (AGM) by a logarithmic factor, this is now the fastest algorithm in practice to compute elementary functions from about 1000 digits up to millions of digits, giving roughly a factor-two speedup over previous methods. We also discuss the use of optimized Machin-like formulas for simultaneous computation of several logarithms or arctangents of rational numbers, which is required for precomputations.
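
As a hedged illustration of the Machin-like-formula ingredient alone (the talk's Diophantine argument reduction over many primes goes far beyond this), the classical identity \( \pi/4 = 4\arctan(1/5) - \arctan(1/239) \) can be evaluated in fixed-point arithmetic with plain Python integers; the precision and names below are illustrative.

```python
# Hedged fixed-point Machin formula: pi/4 = 4*arctan(1/5) - arctan(1/239),
# with all values scaled by 10^prec and computed with integer operations.

def arctan_inv(k, prec):
    """arctan(1/k) scaled by 10^prec, by the Taylor series (integer ops)."""
    one = 10 ** prec
    total = term = one // k          # the n = 1 term, x = 1/k
    n, sign = 1, -1
    while term:
        n += 2
        term = one // k ** n         # (1/k)^n, truncated
        total += sign * (term // n)
        sign = -sign
    return total

prec = 110                            # working precision with guard digits
pi = 4 * (4 * arctan_inv(5, prec) - arctan_inv(239, prec))
print(pi // 10 ** 10)                 # ~ pi * 10^100 (last digits indicative)
```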

Alexandre Goyer (MATHEXP, Inria Saclay)

January, 26 2023 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Seminumeric Approach to Dealing with Differential Operators

In analogy with the usual Galois group of a polynomial, one can define the differential Galois group of a linear ordinary differential equation [1]. This group encodes all the algebraic and/or differential relations between the solutions of the equation. There is a theoretical algorithm for computing it [2], but it seems too complex to be worth implementing. However, the numerical evaluation of the solutions provides an efficient way to numerically approximate some elements of this group. In 2007, van der Hoeven [3] proposed to use this for the exact computation of the differential Galois group, and he gave a seminumeric algorithm for the factorization of differential operators. Here the term “seminumeric” means that approximate numeric computations are involved although the output is exact.
First, I will explain how we revisited the seminumeric factorization algorithm [4]. Second, if time allows, I will show some preliminary observations that should lead to seminumeric algorithms for deciding the algebraicity of the solutions. The second part will be illustrated by some worksheets using my code for the factorization of differential operators. Based on joint work and discussions with Moulay Barkatou, Frédéric Chyzak, Thomas Cluzeau, Marc Mezzarobba, Sergey Yurkevich and Jacques-Arthur Weil.

References:
[1] Introduction to the Galois Theory of Linear Differential Equations, M. F. Singer, 2007
[2] Computing the Galois Group of a Linear Differential Equation, E. Hrushovski, 2002
[3] Around the Numeric-Symbolic Computation of Differential Galois Groups, J. van der Hoeven, 2007
[4] Symbolic-Numeric Factorization of Linear Differential Operators, F. Chyzak, A. Goyer, M. Mezzarobba, 2022

Alhussein Fawzi (Google DeepMind, London, UK)

January, 19 2023 at 10h15, online seminar

Discovering faster matrix multiplication algorithms with deep reinforcement learning

Improving the efficiency of algorithms for fundamental computational tasks such as matrix multiplication can have widespread impact, as it affects the overall speed of a large number of computations. The automatic discovery of algorithms using machine learning offers the prospect of reaching beyond human intuition and outperforming the current best human-designed algorithms. In this talk I’ll present AlphaTensor, our reinforcement learning agent based on AlphaZero for discovering efficient and provably correct algorithms for the multiplication of arbitrary matrices. AlphaTensor discovered algorithms that outperform the state-of-the-art complexity for many matrix sizes. Particularly relevant is the case of 4 × 4 matrices in a finite field, where AlphaTensor’s algorithm improves on Strassen’s two-level algorithm for the first time since its discovery 50 years ago. I’ll present our problem formulation as a single-player game, the key ingredients that enable tackling such difficult mathematical problems using reinforcement learning, and the flexibility of the AlphaTensor framework.
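
The underlying tensor formulation is easy to state in code: a rank-\(R\) decomposition of the matrix multiplication tensor is exactly a multiplication algorithm using \(R\) scalar products. As a hedged, classical example (AlphaTensor searches much larger spaces), the snippet below checks Strassen's rank-7 decomposition for \(2 \times 2\) matrices.

```python
# Strassen's rank-7 decomposition of the 2x2 matrix multiplication tensor:
# m_i = (U[i] . vec A) * (V[i] . vec B), and vec C = W^T m, with matrices
# flattened row-wise as [x11, x12, x21, x22].
import numpy as np

U = np.array([[1,0,0,1],[0,0,1,1],[1,0,0,0],[0,0,0,1],
              [1,1,0,0],[-1,0,1,0],[0,1,0,-1]])
V = np.array([[1,0,0,1],[1,0,0,0],[0,1,0,-1],[-1,0,1,0],
              [0,0,0,1],[1,1,0,0],[0,0,1,1]])
W = np.array([[1,0,0,1],[0,0,1,-1],[0,1,0,1],[1,0,1,0],
              [-1,1,0,0],[0,0,0,1],[1,0,0,0]])

rng = np.random.default_rng(2)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
m = (U @ A.ravel()) * (V @ B.ravel())   # the 7 scalar multiplications
C = (W.T @ m).reshape(2, 2)
print(np.allclose(C, A @ B))            # True: the decomposition is correct
```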

Armelle Perret du Cray (ECO, LIRMM, Université de Montpellier)

December, 8 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Sparse polynomial interpolation over the integers

This talk presents a new Monte Carlo randomized algorithm for reconstructing a sparse polynomial \(f(x)\) with integer coefficients, given a way to evaluate \( f(a)\mod m\) for any pair of integers \((a,m)\). This algorithm has quasi-optimal complexity, meaning that the bit size of the evaluations plus the cost of the reconstruction operations is quasi-linear in the bit size of the polynomial \(f\). The best previous algorithms had a cubic dependence on the bit size of the exponents. As an application, this interpolation algorithm can be adapted to compute arithmetic operations, notably the exact division (with zero remainder) of two polynomials.
This is based on joint work with Pascal Giorgi, Bruno Grenet and Daniel S. Roche.

Damien Stehlé (AriC, LIP, ENS Lyon)

November, 24 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Fiat-Shamir lattice signatures: Dilithium and perspectives

There exist two main approaches for designing efficient digital signatures that are secure under intractability assumptions on Euclidean lattices: hash-and-sign, inspired from RSA signatures, and Fiat-Shamir, inspired from Schnorr signatures. Both led to schemes recently selected by NIST in its post-quantum cryptography standardization process: Falcon and Dilithium.
In this talk, I will give an overview of Fiat-Shamir lattice signatures and their security, focusing on Dilithium. If time permits, I will also describe ongoing work towards improving their efficiency.

Warwick Tucker (AriC, LIP, ENS Lyon and Monash Univ., Australia)

October, 27 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Lower bounds on the Hausdorff dimensions of Julia sets

We present an algorithm for a rigorous computation of lower bounds on the Hausdorff dimensions of Julia sets for a wide class of holomorphic maps. We apply this algorithm to obtain lower bounds on the Hausdorff dimension of the Julia sets of some infinitely renormalizable real quadratic polynomials, including the Feigenbaum polynomial \( p_{Feig}(z) = z^2 + c_{Feig}\). In addition, we construct a piecewise constant function on \([-2, 2]\) that provides rigorous lower bounds for the Hausdorff dimension of the Julia sets of all quadratic polynomials \( p_c(z) = z^2 + c\) with \( c \in [-2, 2]\). Finally, we verify the conjecture of Ludwik Jaksztas and Michel Zinsmeister that the Hausdorff dimension of the Julia set of a quadratic polynomial \( p_c(z) = z^2 + c\) is a \(C^1\)-smooth function of the real parameter \(c\) on the interval \( (c_{Feig}, -3/4)\). This is joint work with Artem Dudko and Igors Gorbovickis.

Thomas Espitau (NTT Secure Platform Laboratories, Tokyo)

October, 14 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

On hash-and-sign lattice based signatures scheme: GPV, Falcon and beyond.

In this talk, we recall the construction of hash-and-sign signature schemes (GPV-based) over structured lattices, à la NTRU. We then propose and study several directions of optimization, simplification and tuning to enhance this baseline design. In particular, we will study the impact of small moduli and distorted sampling, and also see several tailorings of the key generation and ad-hoc sampling based on root lattices.

Christoph Koutschan (Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, Austria)

October, 13 2022 at 10h15, online seminar

Guessing with Little Data

Reconstructing a hypothetical recurrence equation from the first terms of an infinite sequence is a well-known technique in experimental mathematics, also referred to as “guessing”. We combine the classical linear-algebra approach to guessing with lattice reduction, which in many instances allows us to find the desired recurrence using fewer input terms. We have successfully applied our method to sequences from the OEIS and have identified several examples, for which it would have been very difficult to obtain the same result with the traditional approach. This is joint work with Manuel Kauers.

Klara Nosan (IRIF, Université Paris Cité)

September, 22 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

The Membership Problem for Hypergeometric Sequences with Rational Parameters

We investigate the Membership Problem for hypergeometric sequences: given a hypergeometric sequence \( \langle u_n \rangle_{n = 0}^\infty \) of rational numbers and a target \( t \in \mathbb{Q}\), decide whether \(t\) occurs in the sequence. We show decidability of this problem under the assumption that in the defining recurrence \(p(n) u_{n+1} = q(n) u_{n}\), the roots of the polynomials \(p(x)\) and \(q(x)\) are all rational numbers. Our proof relies on bounds on the density of primes in arithmetic progressions. We also observe a relationship between the decidability of the Membership problem (and variants) and the Rohrlich-Lang conjecture in transcendence theory.
This work is in collaboration with Amaury Pouly, Mahsa Shirmohammadi and James Worrell. The full version of the paper is available at https://arxiv.org/abs/2202.07416.

2021-2022

Julien Devevey (AriC, LIP, ENS Lyon)

June, 23 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

On Rejection Sampling in Lyubashevsky’s Signature Scheme

Lyubashevsky’s signatures are based on the Fiat-Shamir with aborts paradigm, whose central ingredient is the use of rejection sampling to transform secret-dependent signature samples into samples from (or close to) a secret-independent target distribution. Several choices for the underlying distributions and for the rejection sampling strategy can be considered. In this work, we study Lyubashevsky’s signatures through the lens of rejection sampling, and aim to minimize signature size given signing runtime requirements. Several of our results concern rejection sampling itself and could have other applications. We prove lower bounds for compactness of signatures given signing run-time requirements, and for expected runtime of perfect rejection sampling strategies. We also propose a Rényi-divergence-based analysis of Lyubashevsky’s signatures which allows for larger deviations from the target distribution, and show hyperball uniforms to be a good choice of distributions: they asymptotically reach our compactness lower bounds and offer interesting features for practical deployment. Finally, we propose a different rejection sampling strategy which circumvents the expected runtime lower bound and provides a worst-case runtime guarantee.
This is joint work with Omar Fawzi, Alain Passelègue and Damien Stehlé.
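
A much-simplified, one-dimensional caricature of the rejection sampling step (real schemes use distributions over lattices, and the constants below are arbitrary): a sample shifted by a secret is accepted with a probability chosen so that, up to a negligible tail, accepted samples follow a secret-independent law.

```python
# Hedged 1-D sketch of rejection sampling "with aborts": the output law of
# accepted z should not depend on the secret shift c.
import math, random

SIGMA, M = 10.0, 3.0                     # source width, rejection constant

def rho(z):                              # unnormalized centered Gaussian
    return math.exp(-z * z / (2.0 * SIGMA ** 2))

def sign_attempt(c):
    z = random.gauss(c, SIGMA)           # secret-dependent sample
    # Accept with probability target(z) / (M * source(z)); the inequality
    # target <= M * source fails only in a tail of negligible mass, which
    # is exactly the slack accounted for in the schemes' analyses.
    if random.random() < rho(z) / (M * rho(z - c)):
        return z
    return None                          # abort and retry, as in the paradigm

out = [z for _ in range(100_000) if (z := sign_attempt(1.5)) is not None]
print(len(out) / 100_000)                # acceptance rate ~ 1/M
print(sum(out) / len(out))               # mean ~ 0, independent of the secret
```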

Alain Passelègue (INRIA, AriC, LIP, ENS Lyon)

May, 19 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

New Constructions of Statistical NIZKs: Dual-Mode DV-NIZKs and More

Non-interactive zero-knowledge proofs (NIZKs) are important primitives in cryptography. A major challenge since the early works on NIZKs has been to construct NIZKs with a statistical zero-knowledge guarantee against unbounded verifiers. In the common reference string (CRS) model, such “statistical NIZK arguments” are currently known from k-Lin in a pairing-group and from LWE. In the (reusable) designated-verifier model (DV-NIZK), where a trusted setup algorithm generates a reusable verification key for checking proofs, we also have a construction from DCR. If we relax our requirements to computational zero-knowledge, we additionally have NIZKs from factoring and CDH in a pairing group in the CRS model, and from nearly all assumptions that imply public-key encryption (e.g., CDH, LPN, LWE) in the designated-verifier model. Thus, there still remains a gap in our understanding of statistical NIZKs in both the CRS and the designated-verifier models.
In this work, we develop new techniques for constructing statistical NIZK arguments. First, we construct statistical DV-NIZK arguments from the k-Lin assumption in pairing-free groups, the QR assumption, and the DCR assumption. These are the first constructions in pairing-free groups and from QR that satisfy statistical zero-knowledge. All of our constructions are secure even if the verification key is chosen maliciously (i.e., they are “malicious-designated-verifier” NIZKs), and moreover, they satisfy a “dual-mode” property where the CRS can be sampled from two computationally indistinguishable distributions: one distribution yields statistical DV-NIZK arguments while the other yields computational DV-NIZK proofs. We then show how to adapt our k-Lin construction in a pairing group to obtain new publicly-verifiable statistical NIZK arguments from pairings with a qualitatively weaker assumption than existing constructions of pairing-based statistical NIZKs.
Our constructions follow the classic paradigm of Feige, Lapidot, and Shamir (FLS). While the FLS framework has traditionally been used to construct computational (DV)-NIZK proofs, we show that the same framework can be leveraged to construct dual-mode (DV)-NIZKs.
Joint work with Benoît Libert, Hoeteck Wee, and David J. Wu.

Hippolyte Signargout (LIP, ENS Lyon and LJK, Université Grenoble Alpes)

May, 12 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Computing the resultant of bivariate polynomials with structured high-order lifting

Operations on Toeplitz-like matrices can be reduced to polynomial operations. Toeplitz-like matrices with polynomial coefficients, underlying problems on bivariate polynomials, thus show a double structure which is still little studied. Methods based on the “baby steps, giant steps” paradigm were recently used to speed up the computation of the characteristic polynomial of Toeplitz-like matrices with constant coefficients. We adapt this approach to the computation of the resultant of bivariate polynomials by using high-order lifting for the giant steps.
This is joint work with Clément Pernet (LJK) and Gilles Villard (LIP).

Le Quoc Tung (Ockham, LIP, ENS Lyon)

May, 5 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Sparse Matrix Factorization with Fixed Support

The problem of approximating a dense matrix by a product of sparse factors is fundamental in many computer science tasks. It can be decomposed into two subproblems: finding the positions of the nonzero coefficients in the sparse factors (or equivalently, finding the support of the factors) and determining their values. While the first problem is usually seen as more challenging due to its combinatorial nature, this talk will focus on the second, referred to as Fixed Support sparse Matrix Factorization (FSMF). We present several interesting properties of (FSMF), notably from two points of view: 1) in terms of complexity: (FSMF) is NP-hard, which shows that even if one knows the supports of the factors, sparse matrix factorization remains challenging; 2) in terms of optimization: despite the NP-hardness, there exists a family of instances of (FSMF) that are polynomially tractable, and their underlying geometry allows optimization techniques to converge to the optimal solutions.
This is joint work with Elisa Riccietti and Rémi Gribonval.
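
A hedged sketch of the (FSMF) problem itself, attacked here with plain alternating least squares on a random planted instance (the supports, sizes and iteration count are arbitrary); the talk's complexity and tractability results concern the problem, not this particular heuristic, which may stall in general.

```python
# Hedged (FSMF) sketch: given fixed supports I (for X) and J (for Y),
# minimize ||A - X Y||_F by alternating least squares over the supports.
import numpy as np

rng = np.random.default_rng(3)
m = n = r = 4
I = rng.random((m, r)) < 0.6                 # fixed support of X
J = rng.random((r, n)) < 0.6                 # fixed support of Y
A = (rng.standard_normal((m, r)) * I) @ (rng.standard_normal((r, n)) * J)

X = rng.standard_normal((m, r)) * I
Y = rng.standard_normal((r, n)) * J
for _ in range(200):
    for i in range(m):                       # update row i of X on its support
        s = I[i]
        if s.any():
            X[i, s] = np.linalg.lstsq(Y[s].T, A[i], rcond=None)[0]
    for j in range(n):                       # update column j of Y likewise
        s = J[:, j]
        if s.any():
            Y[s, j] = np.linalg.lstsq(X[:, s], A[:, j], rcond=None)[0]
print("residual:", np.linalg.norm(A - X @ Y))   # often near 0 on such instances
```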

Balthazar Bauer (IRIF, Paris)

April, 7 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Transferable E-cash: An analysis in the algebraic group model

As the digitization of global communications accelerates, there is a growing awareness of privacy issues among large segments of society. In particular, the confidentiality of commercial transactions is under debate. In the 1980s and 1990s, the cryptographer Chaum developed, and then deployed with his company Digicash, an “anonymous” digital payment system whose guarantees were based, among other things, on mathematical proofs inspired by the then-nascent paradigm of “provable security”. Although Digicash went bankrupt after a few years, the academic community has continued to study anonymous digital payment systems, seeking to develop new functionalities, including transferability.
Our main goal was to revisit security models concerning the anonymity of payments, and then to propose a theoretically deployable payment system, also based on the paradigm of provable security. During the presentation, I will talk about security models guaranteeing anonymity, as well as more fundamental questions concerning a family of cryptographic assumptions commonly used in provable security.

Théo Mary (CNRS LIP6, Sorbonne Université)

March, 24 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Blocked summation, GPU tensor cores, probabilistic analysis, and how it all ties together with multiword arithmetic

Standard worst-case error bounds in linear algebra computations with \(n \times n\) matrices and a floating-point arithmetic with unit roundoff \(u\) are proportional to \(nu\). With ever increasing dimensions \(n\) and the larger unit roundoffs of low precision arithmetics, should we be worried about the accuracy of computations? I will give three reasons why computations are in practice more accurate than the worst-case bound suggests: blocked algorithms, computing units with high precision accumulators such as GPU tensor cores, and statistical effects in the accumulation of rounding errors. Interestingly, all three factors tie together in recently proposed multiword matrix multiplication algorithms. Implemented on GPU tensor cores, these algorithms turn out to be much less accurate than expected: we explain the issue thanks to probabilistic analysis, and we cure it thanks to blocked summation!
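
A quick NumPy experiment illustrating the first of the three effects, blocked summation (data, sizes and block length are arbitrary, and the float64 sum serves as reference):

```python
# Hedged demo: same data, same float32 format, but summing in blocks of 256
# typically gives a noticeably smaller error than one long recursive sum.
import numpy as np

rng = np.random.default_rng(4)
x = rng.random(2 ** 17).astype(np.float32)
exact = float(np.sum(x.astype(np.float64)))   # float64 reference

s = np.float32(0.0)                           # recursive (sequential) sum
for v in x:
    s = np.float32(s + v)

b = np.float32(0.0)                           # blocked sum, block size 256
for blk in x.reshape(-1, 256):
    t = np.float32(0.0)
    for v in blk:
        t = np.float32(t + v)
    b = np.float32(b + t)

print("sequential relative error:", abs(float(s) - exact) / exact)
print("blocked    relative error:", abs(float(b) - exact) / exact)
```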

Clément Pernet (LJK, Université Grenoble Alpes)

March, 10 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Deterministic computation of the characteristic polynomial in the time of matrix multiplication

This talk describes joint work with Vincent Neiger on an algorithm which computes the characteristic polynomial of a matrix over a field within the same asymptotic complexity, up to constant factors, as the multiplication of two square matrices. Previously, this was only achieved by resorting to genericity assumptions or randomization techniques, while the best known complexity bound with a general deterministic algorithm was obtained by Keller-Gehrig in 1985 and involves logarithmic factors. The new algorithm computes more generally the determinant of a univariate polynomial matrix in reduced form.

Philippe Dumas (SpecFun, INRIA Saclay)

February, 10 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Hypergeometric solutions for linear Mahler equations

Mahler’s linear equations use the substitution operator \( y(x)\mapsto y(x^b) \) with \( b \geq 2 \) an integer in the same way that linear recurrences use the shift \( y(x)\mapsto y(x+1)\).
These equations are a tool for the proof of transcendence of certain formal power series appearing as ordinary generating functions of sequences related to the divide-and-conquer strategy or produced by automata.
We are looking for solutions of these equations that are analogues of hypergeometric sequences in the context of recurrences in the shift. In other words, we look for solutions of a given equation which are themselves solutions of equations of order 1, \(y(x) = u(x) y(x^b)\) with \(u(x)\) an a priori unknown ramified rational function. For this purpose we develop two approaches. The first one mimics the Gosper-Petkovšek method for finding hypergeometric solutions of a linear recurrence. The second one uses Hermite-Padé approximants and experimentally turns out to be more efficient than the first one.

Magali Bardet (LITIS, Université de Rouen)

February, 3 2022 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

Algebraic decoding of \( \mathbb{F}_{q^m} \)-linear codes in the rank metric

Rank-metric code-based cryptography relies on the hardness of decoding a random linear code in the rank metric. This fundamental problem is called the MinRank problem, and is ubiquitous in rank-metric (or even Hamming-metric) code-based cryptography as well as in multivariate cryptography. For the structured instances arising in the former, security relies on a more specific problem, namely the Rank Syndrome Decoding problem. There is also a generalization called the Rank Support Learning problem, where the attacker has access to several syndromes corresponding to errors with the same support. These problems have various applications in code-based and multivariate cryptography (KEM and signature schemes), and a precise understanding of the complexity of solving them can help designers to create secure parameters.
In this talk, I will present the three problems and their relations to cryptographic schemes, their algebraic modeling and the recent improvements in the understanding of the complexity of solving those systems using algebraic techniques like Gröbner bases computations.
This gathers joint works with P. Briaud, M. Bros, D. Cabarcas, P. Gaborit, V. Neiger, R. Perlner, O. Ruatta, D. Smith-Tone, J.-P. Tillich.

Éric Goubault (LIX, École Polytechnique)

January, 27 2022 at 10h15, online seminar

Verification of ReLU neural networks with tropical polyhedra

In this presentation, we are going to discuss the problem of range analysis for feedforward neural networks, which is a basic primitive for applications such as robustness of neural networks, compliance to specifications and reachability analysis of neural-network feedback systems. Our approach focuses on ReLU (rectified linear unit) feedforward neural nets that present specific difficulties: approaches that exploit derivatives do not apply in general, the number of patterns of neuron activations can be quite large even for small networks, and convex approximations are generally too coarse. In this presentation, we will employ set-based methods and abstract interpretation that have been very successful in coping with similar difficulties in classical program verification. We will present an approach that abstracts ReLU feedforward neural networks using tropical polyhedra. We will show that tropical polyhedra can efficiently abstract the ReLU activation function, while being able to control the loss of precision due to linear computations. We show how the connection between ReLU networks and tropical rational functions can provide approaches for range analysis of ReLU neural networks.
This is joint work with Sylvie Putot.
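
For context, the simplest abstraction used for such range analysis is plain interval propagation, sketched below; the tropical polyhedra of the talk are strictly more precise, since intervals drop all correlations between neurons (on this tiny hand-made network the exact output range is \([0, 2]\), while intervals only give \([0, 4]\)).

```python
# Hedged baseline for range analysis: interval (box) propagation through
# affine + ReLU layers; ReLU is applied after every layer for simplicity.
import numpy as np

def relu_net_interval(lo, hi, layers):
    for W, b in layers:
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)    # split signs of W
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    return lo, hi

W1, b1 = np.array([[1.0, -1.0], [1.0, 1.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
lo, hi = relu_net_interval(np.array([-1.0, -1.0]), np.array([1.0, 1.0]),
                           [(W1, b1), (W2, b2)])
print("output enclosure:", lo, hi)   # sound ([0, 4]) but coarse (exact: [0, 2])
```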

Stef Graillat (PEQUAN, LIP6, Sorbonne Université)

December, 16 2021 at 10h15, online seminar

On a compensated Ehrlich-Aberth method for the accurate computation of all polynomial roots

In this talk, we will use the complex compensated Horner’s method to derive a compensated Ehrlich-Aberth method for the accurate computation of all roots of a polynomial. In particular, under suitable conditions, we will show that the limiting accuracy for the compensated Ehrlich-Aberth iterations is as accurate as if computed in twice the working precision and then rounded into the working precision. Moreover, we will present a running error bound for the complex compensated Horner’s method and use it to form robust stopping criteria for the compensated Ehrlich-Aberth iterations. Finally, extensive numerical experiments will illustrate that the backward and forward errors of the root approximations computed via the compensated Ehrlich-Aberth method are similar to those obtained with a quadruple precision implementation of the Ehrlich-Aberth method, with a significant speed-up in terms of computation time.
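
As a hedged sketch of the key ingredient in the real case only (the talk treats complex arithmetic, running error bounds and the Ehrlich-Aberth iteration itself), here is compensated Horner evaluation built on the TwoSum and Veltkamp/Dekker TwoProduct error-free transformations:

```python
# Hedged sketch: compensated Horner evaluation (real double precision).

def two_sum(a, b):
    s = a + b
    t = s - a
    return s, (a - (s - t)) + (b - t)        # s + e = a + b exactly

def two_prod(a, b):
    p = a * b
    def split(x):                            # Veltkamp splitting
        c = 134217729.0 * x                  # 2^27 + 1
        h = c - (c - x)
        return h, x - h
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e                              # p + e = a * b exactly

def comp_horner(coeffs, x):
    """Evaluate sum coeffs[i] x^i as if in twice the working precision."""
    s, c = coeffs[-1], 0.0
    for a in reversed(coeffs[:-1]):
        p, pi = two_prod(s, x)
        s, sigma = two_sum(p, a)
        c = c * x + (pi + sigma)             # accumulate exact error terms
    return s + c

coeffs = [-1.0, 5.0, -10.0, 10.0, -5.0, 1.0]  # (x-1)^5, ill-conditioned near 1
x = 1.000001
plain = 0.0
for a in reversed(coeffs):
    plain = plain * x + a                    # standard Horner, for comparison
print(plain, comp_horner(coeffs, x), (x - 1.0) ** 5)
```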

Huu Phuoc Le (PolSys, LIP6, Sorbonne Université)

December, 9 2021 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

One block quantifier elimination over the reals: algorithms, complexity and applications

Quantifier elimination over the reals is one of the central algorithmic problems in effective real algebraic geometry. Given a quantified semi-algebraic formula, it consists in computing a logically equivalent quantifier-free formula.
In this talk, I will describe a new efficient algorithm for one-block quantifier elimination under some mild assumptions. It computes semi-algebraic formulas which describe a dense subset of the interior of the projection of a given real algebraic set on a subset of coordinates. Interestingly, the output formulas are encoded through a determinantal representation which makes them easy to evaluate.
Under some genericity assumptions, we use the theory of Gröbner bases to prove that our algorithm runs in time \(O(8^tD^{4nt})\) where \(D\) bounds the degree of the input polynomials, \(n\) is the number of quantified variables and \(t\) the number of parameters. Note that, by contrast with the classical Cylindrical Algebraic Decomposition algorithm (which is doubly exponential in \(n\) and \(t\)), our algorithm runs in singly exponential time.
We report on practical experiments and show that our implementation outperforms the state-of-the-art software.
Our algorithm relies on properties of Gröbner bases, specialization properties of Hermite quadratic forms (a tool for real root counting) and new algorithms for solving zero-dimensional parametric systems.
Next, we show how to apply our contributions to compute the totally real hyperplane sections coming from the theory of real algebraic curves.
This talk gathers joint works with Dimitri Manevich, Daniel Plaumann and Mohab Safey El Din.

Jan Verschelde (Dpt of Mathematics, Statistics and Computer, University of Illinois at Chicago, U.S.A.)

November, 18 2021 at 15h00, online seminar

Least Squares on GPUs in Multiple Double Precision

Graphics Processing Units (GPUs) are well suited to offset the cost overhead caused by multiple double precision. Code generated by the CAMPARY software is applied to accelerate the solving of linear systems in the least squares sense in double double, quad double, and octo double precision. Thanks to the high Compute to Global Memory Access (CGMA) ratios of multiple double arithmetic, teraflop performance is already attained running the double double Householder QR on 1,024-by-1,024 matrices, on the NVIDIA P100 and the V100 GPUs. In doubling the precision from double double to quad double and from quad double to octo double, the observed cost overhead factors are lower than the factors predicted by the arithmetical operation counts.

Pierre Briaud (Inria Paris/Sorbonne Université)

October, 14 2021 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

A polynomial time key-recovery attack on the Sidon cryptosystem

The Sidon cryptosystem is a new multivariate encryption scheme based on the theory of Sidon spaces, which was presented at PKC 2021. As is usual for this kind of scheme, its security relies on the hardness of solving particular instances of the MQ problem and of the MinRank problem. A nice feature of the scheme is that it enjoys a homomorphic property due to the bilinearity of its public polynomials. Unfortunately, we will show that the Sidon cryptosystem can be broken by a polynomial time key-recovery attack. This attack relies on the existence of solutions to the underlying MinRank instance which lie in a subfield and which are inherent to the structure of the secret Sidon space. We prove that such solutions can be found in polynomial time. Our attack consists in recovering an equivalent key for the cryptosystem by exploiting these particular solutions, and this task can be performed very efficiently.
This is joint work with Jean-Pierre Tillich and Javier Verbel.

Michael Plum (Fakultät für Mathematik, Karlsruher Institut für Technologie, Karlsruhe, Deutschland)

October, 7 2021 at 10h15, online seminar

Computer-assisted existence and multiplicity proofs for semilinear elliptic boundary value problems

Many boundary value problems for semilinear elliptic partial differential equations allow very stable numerical computations of approximate solutions, but are still lacking analytical existence proofs. In this lecture, we propose a method which exploits the knowledge of a “good” numerical approximate solution, in order to provide a rigorous proof of existence of an exact solution close to the approximate one. This goal is achieved by a fixed-point argument which takes all numerical errors into account, and thus gives a mathematical proof which is not “worse” than any purely analytical one. A crucial part of the proof consists of the computation of eigenvalue bounds for the linearization of the given problem at the approximate solution. The method is used to prove existence and multiplicity statements for several specific examples, including cases where purely analytical methods had not been successful.

Alice Pellet–Mary (CNRS, LFANT, Institut de mathématiques de Bordeaux, Université de Bordeaux)

September, 9 2021 at 10h15, online seminar and room M7-315 (3rd floor, Monod)

On the hardness of the NTRU problem

The NTRU problem is an algorithmic problem over structured lattices that was introduced by Hoffstein, Pipher, and Silverman more than 20 years ago, and which has been used to construct various cryptographic primitives. However, its relation to other lattice problems is still not well understood.
In this talk, we will describe different variants of the NTRU problem, and study how they compare to each other (and to other more classical lattice problems) in terms of reductions.
More precisely, we will show that a search variant of the NTRU problem is at least as hard as the shortest vector problem (SVP) in ideal lattices; and that the decisional variant of NTRU is at least as hard as another search variant of NTRU. Unfortunately, the two search variants of NTRU that are considered in these reductions do not match, meaning that we cannot combine the reductions in order to obtain a reduction from the ideal shortest vector problem to the decisional NTRU problem.
This is a joint work with Damien Stehlé.

2020-2021

Richard Peng (Georgia Institute of Technology)

July 8, 2021 at 14h00, online seminar

Solving Sparse Linear Systems Faster than Matrix Multiplication

Can linear systems be solved faster than matrix multiplication? While there has been remarkable progress for the special cases of graph structured linear systems, in the general setting, the bit complexity of solving an \(n\)-by-\(n\) linear system \( Ax=b \) is \( n^\omega\), where \( \omega<2.372864 \) is the matrix multiplication exponent. Improving on this has been an open problem even for sparse linear systems with poly(\(n\)) condition number.
We present an algorithm that solves linear systems with sparse matrices asymptotically faster than matrix multiplication for any \(\omega>2\). This speedup holds for any input matrix \(A\) with \(o(n^{\omega -1}/\log(\kappa(A)))\) non-zeros, where \(\kappa(A)\) is the condition number of \(A\). For poly(\(n\))-conditioned matrices with \(O(n)\) nonzeros, and the current value of \( \omega \), the bit complexity of our algorithm to solve to within any \(1/\textrm{poly}(n)\) error is \(O\left(n^{2.331645}\right)\).
Our algorithm can be viewed as an efficient randomized implementation of the block Krylov method via recursive low displacement rank factorizations. It is inspired by the algorithm of [Eberly-Giesbrecht-Giorgi-Storjohann-Villard ISSAC `06 `07] for inverting matrices over finite fields. In our analysis of numerical stability, we develop matrix anti-concentration techniques to bound the smallest eigenvalue and the smallest gap in eigenvalues of semi-random matrices.
Joint work with Santosh Vempala, manuscript at https://arxiv.org/abs/2007.10254.

Sheehan Olver (Department of Mathematics, Imperial College London, United Kingdom)

July 1, 2021 at 10h15, online seminar

Infinite Linear Algebra + Interval Arithmetic

This talk is designed as a bit of a mathematical magic show: we will live demo Julia packages that allow the construction of infinite-dimensional vectors and matrices, infinite matrix factorisation, solving infinite-dimensional linear systems, and even computation of spectrum. This methodology is also applicable to solving linear ordinary and partial differential equations and singular integral equations. The “grand reveal” is that many of the techniques also work with interval arithmetic, almost for free. One can, for example, compute interval bounds on zeros of Bessel functions by simply specifying the 3-term recurrence they satisfy. Peeling back the curtains, we will describe how underlying structure in the operators makes this possible, but also remaining gaps that mean that the leap to interval arithmetic is not quite as rigorous as it appears to be, and can in certain cases fail spectacularly.

Shweta Agrawal (C.S.E. department, I.I.T. Madras, India)

June 24, 2021 at 10h15, online seminar

Deniable Fully Homomorphic Encryption from LWE

We define and construct Deniable Fully Homomorphic Encryption based on the Learning With Errors (LWE) polynomial hardness assumption. Deniable FHE enables storing encrypted data in the cloud to be processed securely without decryption, maintaining deniability of the encrypted data, as well as the prevention of vote-buying in electronic voting schemes where encrypted votes can be tallied without decryption.
Our constructions achieve compactness independently of the level of deniability: both the size of the public key and the size of the ciphertexts are bounded by a fixed polynomial, independent of the faking probability achieved by the scheme. This is in contrast to all previous constructions of deniable encryption schemes (even without requiring homomorphisms) which are based on polynomial hardness assumptions, originating with the seminal work of Canetti, Dwork, Naor and Ostrovsky (CRYPTO 1997) in which the ciphertext size grows with the inverse of the faking probability. Canetti et al. argued that this dependence “seems inherent”, but our constructions illustrate this is not the case. The running time of our encryption algorithm depends on the inverse of the faking probability, thus the scheme falls short of achieving simultaneously compactness, negligible deniability and polynomial encryption time. Yet, we believe that achieving compactness is a fundamental step on the way to achieving all properties simultaneously as has been the historical journey for other primitives such as functional encryption.

Interestingly, we note that our constructions support large message spaces, whereas previous constructions were bit by bit, and can be run in an online-offline model of encryption, where the bulk of the computation is independent of the message and may be performed in an offline pre-processing phase. The running time of the online phase is independent of the faking probability, whereas the offline encryption run-time grows with the inverse of the faking probability. At the heart of our constructions is a new way to use bootstrapping to obliviously generate FHE ciphertexts so that it supports faking under coercion.

Joint work with Saleet Mossel and Shafi Goldwasser

Guillaume Melquiond (Toccata, INRIA Paris Saclay)

June 17, 2021 at 10h15, online seminar

Plotting in a formally verified way

An invaluable feature of computer algebra systems is their ability to plot the graph of functions. Unfortunately, when one is trying to design a library of mathematical functions, this feature often falls short, producing incorrect and potentially misleading plots, due to accuracy issues inherent to this use case. This work investigates what it means for a plot to be correct and how to formally verify this property. The Coq proof assistant has thus been turned into a tool for plotting function graphs that are guaranteed to be correct, by using reliable polynomial approximations. This feature is provided as part of the CoqInterval library.

Joel Dahne (Uppsala University, Sweden)

May 20, 2021 at 10h15, online seminar

A computer assisted counterexample to Payne’s nodal line conjecture with few holes

Payne conjectured in 1967 that the nodal line of the second Dirichlet eigenfunction must touch the boundary of the domain. In their 1997 breakthrough paper, Hoffmann-Ostenhof, Hoffmann-Ostenhof and Nadirashvili proved this to be false by constructing a counterexample in the plane with an unspecified, but large, number of holes, and raised the question of the minimum number of holes a counterexample can have. In this talk I will present a computer-assisted counterexample with 6 holes.
This is joint work with Javier Gómez-Serrano and Kimberly Hou.

Martin Albrecht (Information Security Group at Royal Holloway, University of London)

May 6, 2021 at 10h15, online seminar

On Bounded Distance Decoding with Predicate: Breaking the “Lattice Barrier” for the Hidden Number Problem

Lattice-based algorithms in cryptanalysis often search for a target vector satisfying integer linear constraints as a shortest or closest vector in some lattice. In this talk, I will argue that these formulations may discard non-linear information from the underlying application that can be used to distinguish the target vector even when it is far from being uniquely close or short.
I will then discuss how we can formalise lattice problems augmented with a predicate distinguishing a target vector and give algorithms for solving instances of these problems. I will also describe the application of our techniques to lattice-based approaches for solving the Hidden Number Problem, a popular technique for recovering secret DSA or ECDSA keys in side-channel attacks, and discuss how our algorithms succeed in recovering the signing key for instances that were previously believed to be unsolvable using lattice approaches.
This talk is based on joint work with Nadia Heninger available at https://github.com/malb/bdd-predicate.

Benjamin Wesolowski (CNRS, LFANT, Institut de mathématiques de Bordeaux, Université de Bordeaux)

March 11, 2021 at 10h15, online seminar

SQISign: Compact Post-Quantum Signature from Quaternions and Isogenies

We will present the signature scheme SQISign (for Short Quaternion and Isogeny Signature), which exploits isogeny graphs of supersingular elliptic curves. Its main asset: the signature and public key sizes combined are an order of magnitude smaller than those of all other post-quantum signature schemes. On the other hand, its efficient implementation and security analysis present new research challenges.

Xavier Caruso (CNRS, LFANT, Institut de mathématiques de Bordeaux, Université de Bordeaux)

February 25, 2021 at 10h15, online seminar

Perspectives on the p-curvature

This talk will be devoted to a fundamental invariant of linear differential equations in positive characteristic: the p-curvature. I will present several complementary points of view on this object which, taken together, make it possible to grasp the complexity of this notion and the different roles it is called upon to play in understanding the arithmetic of differential equations, in positive characteristic as well as in characteristic zero. I will also dwell on algorithmic aspects, explaining how certain descriptions of the p-curvature naturally lead to efficient methods for computing it.

Marc Mezzarobba (CNRS, Péquan, LIP6)

January 21, 2021 at 10h15, online seminar

Error analysis of linear recurrences using generating series

When the terms of a linear recurrent sequence (with variable coefficients) are computed iteratively in approximate arithmetic, it is well known that the rounding errors “compensate” one another instead of simply accumulating. To obtain a realistic bound on the error affecting the n-th term, it is essential to take this phenomenon into account, and hence to study precisely how a rounding error propagates through the subsequent iterations of the recurrence. In the classical notation of real analysis, this quickly leads to fairly inextricable manipulations of nested sums.
Combinatorialists faced with this kind of difficulty know well that encoding a sequence by its generating series often makes the computations more digestible, while giving access to powerful analytic tools. In this talk, I will show through a few examples how this same language of generating series makes it possible to carry out the error analysis of numerical algorithms based on various kinds of linear recurrences.

Mohab Safey El Din (LIP6, Sorbonne Université)

January 14, 2021 at 10h15, online seminar

Polynomial system solving, kinematic singularities and eight connected components

We investigate the kinematic singularity analysis of families of industrial robots. We show how these questions boil down to difficult algorithmic problems such as answering connectivity queries in semi-algebraic sets and solving positive-dimensional polynomial systems with parameters. We describe an algorithm, combining techniques of real algebraic geometry with computer algebra, for solving these problems. Finally, we illustrate how it has been used on the “Universal Robots” series to “prove” that the complement of their kinematic singularities is the union of eight connected components.
This is joint work with Jose Capco and Josef Schicho.

Stephen Melczer (Department of Combinatorics and Optimization, University of Waterloo, Canada)

December 17, 2020 at 14h30, online seminar

Analytic Combinatorics, Rigorous Numerics, and Uniqueness of Biomembranes

Since the invention of the compound microscope in the early seventeenth century, scientists have marvelled over red blood cells and their surprising shape. An influential model of Canham predicts that the shapes of blood cells and similar biomembranes come from a variational problem minimizing the “bending energy” of these surfaces. Because observed (healthy) cells have the same shape in humans, it is natural to ask whether the model admits a unique solution. Here, we prove solution uniqueness for the genus one Canham problem. The proof builds on a result of Yu and Chen that reduces solution uniqueness to proving non-negativity of a sequence defined by an explicit linear recurrence relation with polynomial coefficients. We combine rigorous numerical analytic continuation of D-finite functions with classical bounds from singularity analysis to derive an effective index beyond which the asymptotic behaviour of the sequence, which is positive, dominates the sequence behaviour. Positivity of the finite number of remaining terms can then be checked computationally.
This is joint work with Marc Mezzarobba.

Alin Bostan (SpecFun, INRIA Saclay Île-de-France)

December 10, 2020 at 10h15, online seminar

A Simple and Fast Algorithm for Computing the N-th Term of a Linearly Recurrent Sequence

We present a simple and fast algorithm for computing the N-th term of a given linearly recurrent sequence. Our new algorithm uses O(M(d) log N) arithmetic operations, where d is the order of the recurrence, and M(d) denotes the number of arithmetic operations for computing the product of two polynomials of degree d. The state-of-the-art algorithm, due to Fiduccia (1985), has the same arithmetic complexity up to a constant factor. Our algorithm is simpler, faster and obtained by a totally different method. We also discuss several algorithmic applications, notably to polynomial modular exponentiation and powering of matrices. Joint work with Ryuhei Mori (Tokyo Institute of Technology, Japan).
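To make the setting concrete, here is a small Python sketch of the classical approach via polynomial modular exponentiation (the baseline refined by Fiduccia's algorithm and by the algorithm of the talk, not the new algorithm itself): the N-th term is read off from \(x^N\) reduced modulo the characteristic polynomial of the recurrence. The naive multiplication below costs O(d^2) per step instead of M(d).

    # u_{n+d} = rec[d-1]*u_{n+d-1} + ... + rec[0]*u_n ; init = [u_0, ..., u_{d-1}]
    def polymulmod(a, b, f):
        """Product of coefficient lists a, b (low degree first), reduced
        modulo the monic polynomial f of degree d = len(f) - 1."""
        d = len(f) - 1
        prod = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                prod[i + j] += ai * bj
        for k in range(len(prod) - 1, d - 1, -1):  # rewrite x^k using x^d mod f
            c = prod[k]
            if c:
                prod[k] = 0
                for j in range(d):
                    prod[k - d + j] -= c * f[j]
        return prod[:d]

    def nth_term(rec, init, N):
        d = len(rec)
        if N < d:
            return init[N]
        f = [-c for c in rec] + [1]         # monic characteristic polynomial
        result, base, n = [1], [0, 1], N    # base is the polynomial x
        while n:                            # square-and-multiply on x^N mod f
            if n & 1:
                result = polymulmod(result, base, f)
            base = polymulmod(base, base, f)
            n >>= 1
        return sum(r * u for r, u in zip(result, init))

    print(nth_term([1, 1], [0, 1], 10))     # Fibonacci F_10 = 55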

Léo Ducas (CWI, Amsterdam, Netherlands)

November 26, 2020 at 10h15, online seminar

An Algorithmic Reduction Theory for Binary Codes: LLL and more

Lattice reduction is the task of finding a basis of short and somewhat orthogonal vectors of a given lattice. In 1982, Lenstra, Lenstra and Lovasz proposed a polynomial-time algorithm for this task, with an application to factoring rational polynomials. Since then, the LLL algorithm has found countless applications in algorithmic number theory and in cryptanalysis.
There are many analogies to be drawn between Euclidean lattices and linear codes over finite fields. In this work, we propose to extend the range of these analogies by considering the task of reducing the basis of a binary code. In fact, all it takes is to choose the adequate notion mimicking Euclidean orthogonality (namely orthopodality), after which all the required notions, arguments, and algorithms unfold before us, in quasi-perfect analogy with lattices.
This is joint work with Thomas Debris-Alazard and Wessel van Woerden.

Jean-Michel Muller (AriC, LIP, ENS Lyon)

November 12, 2020 at 10h15, online seminar

Quick and dirty: immediately obtaining a rough approximation to \( \sin(x), \log(x), \arctan(y/x), \sqrt{x}\)…

We review some of the classical methods used for very quickly obtaining low-precision approximations to the elementary functions: shift-and-add algorithms, table-based methods, and “bit-manipulation” techniques.
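As a taste of the bit-manipulation family (our illustration, not necessarily one of the examples from the talk): for an IEEE-754 binary32 number \(x > 0\), the integer whose bits are those of \(x\) is approximately \(2^{23}(\log_2 x + 127)\), which yields a crude logarithm essentially for free.

    import struct

    def rough_log2(x):
        """Crude log2 of x > 0 read off the binary32 bit pattern;
        exact on powers of two, worst-case error about 0.09."""
        (i,) = struct.unpack('<I', struct.pack('<f', x))
        return i / 2.0**23 - 127

    print(rough_log2(8.0))    # 3.0, exact
    print(rough_log2(10.0))   # 3.25 versus log2(10) = 3.3219...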

Gleb Pogudin (MAX, LIX, École Polytechnique)

November 6, 2020 at 10h15, online seminar

Global parameter identifiability of ODE models

Many real-world processes and phenomena are modeled using systems of parametric ODEs. One might be interested in knowing the values of some select parameters due to their importance. Usually one tries to determine (uniquely identify) them by collecting input and output data. However, due to the structure of the model, it can be impossible to determine the parameters from input and output data. When this happens, the parameters are said to be “not globally identifiable”. In the process of model design, it is crucial to know whether the parameters of interest in a potential model are globally identifiable.
I will present several recent algorithms for analyzing parameter identifiability based on computational algebra.

2019-2020

Pierre Lairez (SpecFun, INRIA Saclay – Île-de-France)

June 25, 2020 at 10h15, online seminar

Symbolic-numeric computation of volume integrals

How does one compute the volume of the region where a multivariate polynomial takes positive values? Above all, how does one do better than Monte-Carlo methods, which proceed by sampling? I will present three methods.
The first, due to Henrion, Lasserre and Savorgnan, in a variant due to Jasour, Hofmann and Williams, uses a formulation in terms of moments to reduce the volume computation to a sequence of convex optimization problems.
The second, introduced by Mezzarobba, Safey El Din and myself, uses symbolic integration to reduce the volume computation to the numerical solution of a linear differential equation.
Finally, the third, still under development, by Berthomieu, Mezzarobba, Safey El Din and myself, combines the two previous ones.

Tristan Vaccon (XLIM, Université de Limoges)

June 4, 2020 at 10h15, online seminar

\( p \)-adic precision, examples and applications

\(p\)-adic numbers can usually only be handled with finite precision, which raises the problems of determining the smallest precision needed for a computation and of tracking the loss of precision per operation.
With X. Caruso and D. Roe, we have provided a method to handle precision over \(p\)-adics that relies on differentials and first-order approximation. It provides results that are (essentially) optimal and do not depend on the choice of algorithm.
We will present various illustrations of this technique: computation of determinants, characteristic polynomials, \(p\)-adic differential equations, etc.
We will also present a SageMath implementation that automatically computes the optimal precision of a given computation.

Pascal Koiran (MC2, LIP) and Bruno Salvy (AriC, LIP)

March 12, 2020 at 10h15, Amphi B (3rd floor, Monod)

Bruno Salvy: Absolute root separation

The absolute separation of a polynomial is the minimum nonzero difference between the absolute values of its roots. In the case of polynomials with integer coefficients, it can be bounded from below in terms of the degree and the height (the maximum absolute value of the coefficients) of the polynomial. We improve the known bounds for this problem and related ones. Then we report on extensive experiments in low degrees, suggesting that the current bounds are still very pessimistic.
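As a toy illustration of the quantity being bounded (our example, purely numerical and non-rigorous), one can estimate the absolute separation of an integer polynomial from approximate roots:

    import numpy as np
    from itertools import combinations

    def abs_separation(coeffs, tol=1e-9):
        """Minimum nonzero difference between the absolute values of the
        roots, computed from numpy's approximate roots (no rigour here)."""
        moduli = np.abs(np.roots(coeffs))
        return min(abs(a - b) for a, b in combinations(moduli, 2)
                   if abs(a - b) > tol)

    # (x - 1)(x + 2)(x - 4): root moduli 1, 2, 4, so separation 1
    print(abs_separation([1, -3, -6, 8]))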
This is joint work with Yann Bugeaud, Andrej Dujella, Wenjie Fang and Tomislav Pejkovic.

Pascal Koiran: Root separation for trinomials

The talk will be based on https://arxiv.org/abs/1709.03294. We give a separation bound for the complex roots of a trinomial \(f\) with integer coefficients. The logarithm of the inverse of our separation bound is polynomial in the size of the sparse encoding of \(f\); in particular, it is polynomial in \(\log(\deg f)\). It is known that no such bound is possible for 4-nomials (polynomials with 4 monomials). For trinomials, the classical results (which are based on the degree of \(f\) rather than the number of monomials) give separation bounds that are exponentially worse.
As an algorithmic application, we show that the number of real roots of a trinomial \(f\) can be computed in time polynomial in the size of the sparse encoding of \(f\). The same problem is open for 4-nomials.

Robin Larrieu (LMV, Université de Versailles)

February 13, 2020 at 10h15, room M7-315 (3rd floor, Monod)

Fast polynomial reduction for generic bivariate ideals

Let \( A, B \) in \(K[X,Y]\) be two bivariate polynomials over an effective field \(K\), and let \(G\) be the reduced Gröbner basis of the ideal \(I := ⟨A,B⟩\) generated by \(A\) and \(B\), with respect to some weighted-degree lexicographic order. Under some genericity assumptions on \(A, B,\) we will see how to reduce a polynomial with respect to \(G\) with quasi-optimal complexity, despite the fact that \(G\) is much larger than the intrinsic complexity of the problem. For instance, if \(A, B\) have total degree \(n\), that is \(O(n^2)\) coefficients, then \(G\) has \(O(n^3)\) coefficients but reduction can be done in time \( O(n^2)\).
We will consider two situations where these new ideas apply, leading to different algorithms:
  • First, there is a class called “vanilla Gröbner bases” for which there is a so-called terse representation that, once precomputed, allows one to reduce any polynomial \( P \) in time \( O(n^2)\). In this setting, assuming suitable precomputation, multiplication and change of basis can therefore be done in time \( O(n^2)\) in the quotient algebra \(K[X,Y] / ⟨A,B⟩\).
  • Then, we assume that \( A \) and \( B \) are given in total degree and we consider the usual degree lexicographic order. Although the bases are not vanilla in this case, they admit a so-called concise representation with similar properties. Actually, the precomputation can also be done efficiently in this particular setting: from the input \(A, B\), one can compute a Gröbner basis in concise representation in time \( O(n^2)\). As a consequence, multiplication in \( K[X,Y] / ⟨A,B⟩\) can be done in time \( O(n^2) \), including the cost of precomputation.
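For readers who want to experiment, here is a toy version of the operation in question (plain Gröbner reduction with sympy, without any of the fast generic-case machinery of the talk; the polynomials are hypothetical examples):

    from sympy import symbols, groebner, reduced

    x, y = symbols('x y')
    A = x**2 + y**2 - 1
    B = x*y - 2
    G = groebner([A, B], x, y, order='grevlex')
    quotients, remainder = reduced(x**5 + y**5, list(G), x, y, order='grevlex')
    print(remainder)   # normal form of x^5 + y^5 modulo <A, B>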

Laurence Rideau (STAMP, INRIA Sophia Antipolis – Méditerranée)

January 30, 2020 at 10h15, room M7-315 (3rd floor, Monod)

Formalisation in Coq of the correctness of double-word arithmetic algorithms and their error bounds

This talk presents the formalisation in Coq of the article “Tight and rigorous error bounds for basic building blocks of double-word arithmetic” by M. Joldes, J.-M. Muller and V. Popescu.
We show how this formalisation made it possible to highlight some errors and inaccuracies in the proofs of the paper.
I will focus in particular on the dangers of the “wlog” (without loss of generality) step, which is used extensively in this type of proof.
We will also discuss the advantages and disadvantages of such a formalisation, and how this work has improved confidence in the results of the article, despite the errors detected, and has also improved the Flocq library (used intensively in this work).

Miruna Rosca (AriC, LIP)

December 5, 2019 at 10h15, room M7-315 (3rd floor, Monod)

MPSign: A Signature from Small-Secret Middle-Product Learning with Errors

We describe a digital signature scheme MPSign, whose security relies on the conjectured hardness of the Polynomial Learning With Errors problem (PLWE) for at least one defining polynomial within an exponential-size family (as a function of the security parameter). The proposed signature scheme follows the Fiat-Shamir framework and can be viewed as the Learning With Errors counterpart of the signature scheme described by Lyubashevsky at Asiacrypt 2016, whose security relies on the conjectured hardness of the Polynomial Short Integer Solution (PSIS) problem for at least one defining polynomial within an exponential-size family. As opposed to the latter, MPSign enjoys a security proof from PLWE that is tight in the quantum-access random oracle model. The main ingredient is a reduction from PLWE for an arbitrary defining polynomial among exponentially many, to a variant of the MiddleProduct Learning with Errors problem (MPLWE) that allows for secrets that are small compared to the working modulus. We present concrete parameters for MPSign using such small secrets, and show that they lead to significant savings in signature length over Lyubashevsky’s Asiacrypt 2016 scheme (which uses larger secrets) at typical security levels. As an additional small contribution, and in contrast to MPSign (or MPLWE), we present an efficient key-recovery attack against Lyubashevsky’s scheme (or the inhomogeneous PSIS problem), when it is used with sufficiently small secrets, showing the necessity of a lower bound on secret size for the security of that scheme.
This is joint work with Shi Bai, Dipayan Das, Ryo Hiromasa, Amin Sakzad, Damien Stehlé, Ron Steinfeld and Zhenfei Zhang.

Vincent Lefèvre (AriC, LIP)

November 21, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Accurate Complex Multiplication in Floating-Point Arithmetic

We deal with accurate complex multiplication in binary floating-point arithmetic, with an emphasis on the case where one of the operands is a “double-word” number. We provide an algorithm that returns a complex product with normwise relative error bound close to the best possible one, i.e., the rounding unit u. We also discuss variants of this algorithm.
This is joint work with Jean-Michel Muller.
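A classical fma-based building block in this vein (Kahan's algorithm for \(ab+cd\), shown here as our own illustration rather than the double-word algorithm of the talk; math.fma requires Python 3.13, but any correctly rounded fma will do):

    from math import fma

    def kahan_ab_plus_cd(a, b, c, d):
        """a*b + c*d with roughly 2u relative error, via one fma that
        recovers the rounding error of c*d exactly."""
        w = c * d
        e = fma(c, d, -w)       # exact error of the rounded product c*d
        f = fma(a, b, w)
        return f + e

    def cmul(x, y):
        """Complex product using Kahan's algorithm for each component."""
        re = kahan_ab_plus_cd(x.real, y.real, -x.imag, y.imag)
        im = kahan_ab_plus_cd(x.real, y.imag, x.imag, y.real)
        return complex(re, im)

    print(cmul(1 + 2j, 3 + 4j))   # (-5+10j)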

Fabien Laguillaumie (AriC, LIP)

November 14, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Threshold variant of the Digital Signature Algorithm standard

(EC)DSA is a widely adopted digital signature standard. Unfortunately, efficient distributed variants of this primitive are notoriously hard to achieve, and known solutions often require expensive zero-knowledge proofs to deal with malicious adversaries. For the two-party case, Lindell recently managed to get an efficient solution which, to achieve simulation-based security, relies on an interactive, non-standard assumption on Paillier's cryptosystem.
In this talk, I will present some recent results that improve Lindell's solution in terms of security and efficiency, and discuss a possible extension to a full threshold variant.
This is joint work with Guilhem Castagnos, Dario Catalano, Federico Savasta and Ida Tucker.

Mioara Joldes (CNRS LAAS, Toulouse)

November 7, 2019 at 10h15, room M7-315 (3rd floor, Monod)

An optimization viewpoint for machine-efficient polynomial approximations

Machine implementation of mathematical functions often relies on polynomial approximations. The particularity is that rounding errors occur both when representing the polynomial coefficients on a finite number of bits and when evaluating the polynomial in finite precision. Hence, to find the best polynomial (for a given fixed degree, norm and interval), one has to take into account both types of errors: approximation and evaluation. By considering a linearized evaluation error model, we formulate a semi-infinite linear optimization problem whose solution can be obtained by an iterative exchange algorithm. This can be seen as an extension of the Remez algorithm. A discussion of the obtained results and further developments concludes this talk. This is joint work with F. Bréhard and D. Arzelier.
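A small numerical illustration of the two competing error sources (our example, using a Chebyshev interpolant rather than a true minimax polynomial): rounding the coefficients of a good approximation can cost more than the approximation error itself.

    import numpy as np

    cheb = np.polynomial.chebyshev.Chebyshev.interpolate(np.exp, 5, domain=[-1, 1])
    xs = np.linspace(-1, 1, 10001)
    approx_err = np.max(np.abs(cheb(xs) - np.exp(xs)))

    # round the coefficients to binary16 and evaluate again
    rounded = np.polynomial.chebyshev.Chebyshev(
        cheb.coef.astype(np.float16).astype(np.float64), domain=[-1, 1])
    total_err = np.max(np.abs(rounded(xs) - np.exp(xs)))
    print(approx_err, total_err)   # coefficient rounding dominates here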

Geoffroy Couteau (CNRS and Paris 7)

October 24, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Efficient Pseudorandom Correlation Generators: Silent OT Extension and More

Secure multiparty computation (MPC) often relies on sources of correlated randomness for better efficiency and simplicity. This is particularly useful for MPC with no honest majority, where input-independent correlated randomness enables a lightweight “non-cryptographic” online phase once the inputs are known. However, since the amount of correlated randomness typically scales with the circuit size of the function being computed, securely generating correlated randomness forms an efficiency bottleneck, involving a large amount of communication and storage. A natural tool for addressing the above limitations is a pseudorandom correlation generator (PCG).
A PCG allows two or more parties to securely generate long sources of useful correlated randomness via a local expansion of correlated short seeds and no interaction. PCGs enable MPC with silent preprocessing, where a small amount of interaction used for securely sampling the seeds is followed by silent local generation of correlated pseudorandomness.
A concretely efficient PCG for Vector-OLE correlations was recently obtained by Boyle et al. (CCS 2018) based on variants of the learning parity with noise (LPN) assumption over large fields. In this work, we initiate a systematic study of PCGs and present concretely efficient constructions for several types of useful MPC correlations. We obtain the following main contributions:
– PCG foundations. We give a general security definition for PCGs. Our definition suffices for any MPC protocol satisfying a stronger security requirement that is met by existing protocols. We prove that a stronger security requirement is indeed necessary, and justify our PCG definition by ruling out a stronger and more natural definition.
– Silent OT extension. We present the first concretely efficient PCG for oblivious transfer correlations. Its security is based on a variant of the binary LPN assumption and any correlation-robust hash function. We expect it to provide a faster alternative to the IKNP OT extension protocol (Crypto ’03) when communication is the bottleneck. We present several applications, including protocols for non-interactive zero-knowledge with bounded-reusable preprocessing from binary LPN, and concretely efficient related-key oblivious pseudorandom functions.
– PCGs for simple 2-party correlations. We obtain PCGs for several other types of useful 2-party correlations, including (authenticated) one-time truth-tables and Beaver triples. While the latter PCGs are slower than our PCG for OT, they are still practically feasible. These PCGs are based on a host of assumptions and techniques, including specialized homomorphic secret sharing schemes and pseudorandom generators tailored to their structure.
– Multiparty correlations. We obtain PCGs for multiparty correlations that can be used to make the circuit-dependent communication of MPC protocols scale linearly (instead of quadratically) with the number of parties.

Benjamin Wesolowski (CWI Amsterdam)

October 10, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Discrete logarithms in quasi-polynomial time in finite fields of small characteristic

We prove that the discrete logarithm problem can be solved in quasi-polynomial expected time in the multiplicative group of finite fields of fixed characteristic. In 1987, Pomerance proved that this problem can be solved in expected subexponential time \(L(1/2)\). The following 30 years saw a number of heuristic improvements, but no provable results. The quasi-polynomial complexity has been conjectured to be reachable since 2013, when a first heuristic algorithm was proposed by Barbulescu, Gaudry, Joux, and Thomé. We prove this conjecture, and more generally that this problem can be solved in fields of cardinality \( p^n \) in expected time \((pn)^{2 \log_2 (n)+O(1)}\).

Théo Mary (University of Manchester, U.K.)

October 3, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Sharper and smaller error bounds for low precision scientific computing

With the rise of large-scale, low-precision computations, numerical algorithms having a backward error bound of the form \( nu \), for a problem size \( n \) and a machine precision \( u \), are no longer satisfactory. Indeed, with half-precision arithmetic, such algorithms cannot guarantee even a single correct digit for problems larger than a few thousand. This has created a need for error bounds that are both sharper and smaller. In this talk, we will discuss recent advances towards this double goal. We will present new error analyses to obtain probabilistic bounds that are sharper on average, and new algorithms that achieve much smaller bounds without sacrificing high performance.
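A quick experiment in this spirit (our illustration): for recursive summation of n = 10^5 random binary32 numbers, the observed relative error sits near the probabilistic \(\sqrt{n}u\) scale rather than the worst-case \(nu\).

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10**5
    x = rng.random(n, dtype=np.float32)

    s = np.float32(0.0)
    for v in x:                      # recursive (left-to-right) summation
        s = np.float32(s + v)

    exact = float(np.sum(x.astype(np.float64)))
    u = 2.0**-24
    print(abs(float(s) - exact) / exact)   # observed relative error
    print(np.sqrt(n) * u, n * u)           # ~1.9e-5 versus ~6.0e-3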

2018-2019

Miruna Rosca (AriC)

July 4, 2019 at 10h15, room M7-315 (3rd floor, Monod)

On the Middle-Product Learning With Errors Problem and its applications in cryptography

In this talk, we introduce MP-LWE, a new variant of the Learning With Errors problem (LWE) that makes use of the middle product of polynomials modulo an integer q. We exhibit a reduction from the Polynomial-LWE problem (PLWE), parametrized by a polynomial f, to MP-LWE, which works for a large family of polynomials, and we analyze the applications of MP-LWE in cryptography.
This is joint work with A. Sakzad, D. Stehlé and R. Steinfeld.
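For concreteness, the middle product (in one common normalization, where \(a\) has \(d\) coefficients and one keeps \(k\) middle coefficients of the product) can be sketched as follows; this is our illustration of the definition, not code from the work:

    def middle_product(a, b, k, q):
        """Coefficients d-1, ..., d+k-2 of a*b modulo q, where d = len(a);
        coefficient lists are low degree first."""
        d = len(a)
        prod = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                prod[i + j] = (prod[i + j] + ai * bj) % q
        return prod[d - 1 : d - 1 + k]

    print(middle_product([1, 2, 3], [4, 5, 6, 7], 2, 97))   # [28, 34]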

François Morain (LIX, École polytechnique)

June 13, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Continued fractions with fast algorithms

Continued fractions are a very important tool for approximating real numbers, with numerous applications in algorithmic number theory as well as in cryptanalysis. One important application is Cornacchia's algorithm, which elegantly solves the problem of representing prime numbers in the form \(p = x^2 + d y^2\) with \(d > 0\). This talk will present how fast integer gcd algorithms can be used to obtain a fast version of Cornacchia's algorithm.
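For reference, here is a compact and deliberately naive Python sketch of Cornacchia's algorithm; the modular square root is found by brute force, and the Euclidean phase is exactly where the fast gcd algorithms of the talk matter at scale.

    from math import isqrt

    def cornacchia(d, p):
        """Return (x, y) with x^2 + d*y^2 = p for a prime p, 0 < d < p,
        or None if no solution exists."""
        t = (-d) % p
        r = next((r for r in range(1, p) if r * r % p == t), None)
        if r is None:
            return None
        if 2 * r < p:
            r = p - r               # take the square root above p/2
        a, b = p, r
        while b * b > p:            # Euclidean steps until b < sqrt(p)
            a, b = b, a % b
        q, rem = divmod(p - b * b, d)
        if rem:
            return None
        y = isqrt(q)
        return (b, y) if y * y == q else None

    print(cornacchia(2, 11))        # 11 = 3^2 + 2*1^2 -> (3, 1)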

Vincent Neiger (XLIM, Université de Limoges)

May 2, 2019 at 10h15, room M7-315 (3rd floor, Monod)

On the complexity of modular composition of generic polynomials

This talk is about algorithms for modular composition of univariate polynomials, and for computing minimal polynomials. For two univariate polynomials \(a\) and \(g\) over a commutative field, modular composition asks to compute \(h(a) \bmod g\) for some given \(h\), while the minimal polynomial problem is to compute \(h\) of minimal degree such that \( h(a) = 0 \bmod g\). For generic \(g\) and \(a\), we propose algorithms whose complexity bound improves upon previous algorithms and in particular upon Brent and Kung’s approach (1978); the new complexity bound is subquadratic in the degree of \(g\) and \(a\) even when using cubic-time matrix multiplication. Our improvement comes from the fast computation of specific bases of bivariate ideals, and from efficient operations with these bases thanks to fast univariate polynomial matrix algorithms. We will also report on software development and comment on implementation results for the main building blocks in our composition algorithm.
This talk contains joint work with Seung Gyu Hyun, Bruno Salvy, Eric Schost and Gilles Villard.
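As a baseline for what modular composition asks (our sketch, simply Horner's rule with sympy, costing deg(h) multiplications modulo g rather than anything subquadratic):

    from sympy import symbols, Poly

    x = symbols('x')

    def modular_composition(h, a, g):
        """h(a) mod g by Horner's rule."""
        res = Poly(0, x)
        for c in h.all_coeffs():    # coefficients of h, highest degree first
            res = (res * a + c).rem(g)
        return res

    g = Poly(x**4 + 3*x + 1, x)
    a = Poly(x**3 + 2*x, x)
    h = Poly(x**5 + x + 7, x)
    print(modular_composition(h, a, g))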

Bruno Grenet (ECO, LIRMM)

April 11, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Polynomial multiplications without extra memory

The problem of polynomial multiplication has been studied extensively since the 1960s, and various algorithms have been proposed to achieve the best possible time complexity.
More recently, some of these algorithms have been studied from the point of view of their space complexity, and modified so as to use no extra space beyond the inputs and outputs, while keeping the same asymptotic time complexity.
In this work, we extend these results in two directions. On the one hand, we ask whether any polynomial multiplication algorithm admits a generic “in-place” variant, that is, one using no extra space. On the other hand, we consider two important variants of the problem that produce only part of the result, the so-called short and middle products, and ask whether these operations can also be performed in place.
To answer both questions (essentially) in the affirmative, we propose a series of reductions whose starting point is any polynomial multiplication algorithm with linear space complexity. For the full and short products, these reductions yield in-place variants with the same asymptotic time complexity. For the middle product, the reduction introduces an extra logarithmic factor in the time complexity, when the latter is quasi-linear.
Joint work with Pascal Giorgi and Daniel Roche.

Damien Stehlé (AriC)

April 4, 2019 at 10h15, room M7-315 (3rd floor, Monod)

A survey on security foundations of fast lattice-based cryptography

The Learning With Errors problem (LWE) captures the asymptotic hardness of some standard lattice problems, and enables the design of cryptographic schemes. However, these LWE-based schemes are relatively inefficient.
To address this issue, algebraic variants of LWE have been introduced, such as Polynomial-LWE, Ring-LWE, Module-LWE and MiddleProduct-LWE, whose definitions involve polynomial rings and number fields.
In this talk, I will survey the state of the art on these problems.

Jean-Michel Muller (AriC)

March 7, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Error analysis of some operations involved in the Fast Fourier Transform

We are interested in obtaining error bounds for the classical FFT algorithm in floating-point arithmetic, for the 2-norm as well as for the infinity norm. For that purpose we also give some results on the relative error of the complex multiplication by a root of unity, and on the largest value that the real or imaginary part of one term of the FFT of a vector x can take, assuming that all terms of x have real and imaginary parts less than some value b.
This is joint work with N. Brisebarre, M. Joldes, A.-M. Nanes and J. Picot.
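An empirical companion to such bounds (our snippet): the 2-norm relative error of a binary32 FFT compared with a \(\log_2(n)\,u\)-style quantity, using a binary64 FFT as the reference.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 2**12
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    ref = np.fft.fft(x)                                  # binary64 reference
    low = np.fft.fft(x.astype(np.complex64)).astype(np.complex128)
    rel = np.linalg.norm(low - ref) / np.linalg.norm(ref)
    print(rel, np.log2(n) * 2.0**-24)                    # error vs log2(n)*u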

Assia Mahboubi (Gallinette, INRIA Rennes – Bretagne Atlantique, LS2N)

February 14, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Formally Verified Approximations of Definite Integrals

In this talk, we discuss the problem of computing values of one-dimensional definite integrals with the highest possible guarantee of correctness. Finding an elementary form for an antiderivative is often not an option, and numerical integration is a common tool when it comes to making sense of a definite integral. Some numerical integration methods can even be made rigorous: not only do they compute an approximation of the integral value, but they also bound its inaccuracy. These rigorous algorithms are implemented in software like INTLAB, VNODE-LP, Arb, etc. But the highest possible guarantee of correctness on such approximations, even those obtained by rigorous means, would in fact be provided by a formal proof, machine-checked using a proof assistant. Proof assistants are pieces of software for representing mathematical definitions, statements, algorithms and proofs in a digital form, suitable for computer processing. In particular, they can be used to devise formal-proof-producing implementations of programs. But numerical integration is still missing from the toolbox when performing formal proofs. We thus describe and discuss a method for automatically computing and proving bounds on some definite integrals, implemented inside the Coq proof assistant.
This is a joint work with Guillaume Melquiond and Thomas Sibut-Pinote.
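To see the flavour of a numerical method that also bounds its own inaccuracy (without the formal-proof layer the talk adds on top), here is the composite midpoint rule with its textbook error bound; a bound on \(|f''|\) is supplied by the user, and floating-point rounding is ignored.

    import math

    def midpoint_with_bound(f, a, b, n, d2_bound):
        """Approximate the integral of f on [a, b] and return the bound
        |error| <= (b - a) * h^2 / 24 * max|f''| (rounding ignored)."""
        h = (b - a) / n
        approx = h * sum(f(a + (i + 0.5) * h) for i in range(n))
        return approx, (b - a) * h * h / 24 * d2_bound

    # the integral of sin over [0, pi] is exactly 2, and |sin''| <= 1
    val, bound = midpoint_with_bound(math.sin, 0.0, math.pi, 1000, 1.0)
    print(val, bound)   # ~2.00000082, bound ~1.3e-6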

Ida Tucker (AriC)

February 7, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Practical fully secure unrestricted inner product functional encryption modulo a prime p

Functional encryption (FE) is an advanced cryptographic primitive which allows one to finely control, for a single encrypted message, how much information on the encrypted data each receiver can recover. To this end, many functional secret keys are derived from a master secret key. Each functional secret key allows, for a ciphertext encrypted under the associated public key, the recovery of a specific function of the underlying plaintext.
However, constructions for general FE are far from practical, or rely on non-standard and ill-understood cryptographic assumptions.
In this talk I will focus on the construction of efficient FE schemes for linear functions (i.e., the inner product functionality), and on the framework in which our constructions hold. Such schemes yield many practical applications, and our constructions are the first FE schemes for inner products modulo a prime that are both efficient and recover the result whatever its size. I will also describe an instantiation of the framework using class groups of imaginary quadratic fields.
This is joint work with Guilhem Castagnos and Fabien Laguillaumie.

Éric Goubault (LIX, École Polytechnique)

January 24, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Finding Positive Invariants of Polynomial Dynamical Systems – some experiments

Synthesizing positive invariants of non-linear ODEs, switched systems or even hybrid systems is a hard problem with many applications, from control to verification. In this talk, I will present two “exercices de style” for dealing with it, revisiting the classical Lyapunov function approach. The first one is based on algebraic properties of polynomial differential systems (Darboux polynomials, when they exist), for finding polynomial, rational or even some log extensions to rational functions whose level sets or sub-level sets describe positive invariants of these systems, or provide interesting “changes of basis” for describing their solutions. The second one is based on topological properties (the Wazewski property, mostly) which ensure the existence, in some region of the state space, of a non-empty maximal invariant set. The advantage is that there is then, in general, no need to find complicated functions that precisely describe the invariant set itself; instead, we use simple template shapes within which a possibly very complicated invariant set lies. The topological criterion can be ensured by suitable SoS relaxations, for polynomial differential systems, that can be implemented using LMI solvers.
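The classical starting point being revisited can be checked symbolically in a few lines (our toy example): for the linear system \(\dot x = -x + y\), \(\dot y = -x - y\), the Lyapunov function \(V = x^2 + y^2\) has derivative \(-2(x^2 + y^2) \le 0\) along trajectories, so its sub-level sets are positive invariants.

    from sympy import symbols, diff, simplify

    x, y = symbols('x y', real=True)
    fx, fy = -x + y, -x - y            # the vector field
    V = x**2 + y**2
    Vdot = simplify(diff(V, x) * fx + diff(V, y) * fy)
    print(Vdot)                        # -2*x**2 - 2*y**2, nonpositive everywhere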

Alain Passelègue (AriC)

January 17, 2019 at 10h15, room M7-315 (3rd floor, Monod)

New candidate pseudorandom functions and their applications

In this talk, I will present new and simple candidate pseudorandom functions (PRFs) introduced in a recent work. In this work, we depart from the traditional approaches for building PRFs used in provable security or in applied cryptography by exploring a new space of plausible PRF candidates. Our guiding principle is to maximize simplicity while optimizing complexity measures that are relevant to advanced cryptographic applications. Our primary focus is on weak PRFs computable by very simple circuits (depth-2 ACC circuits).
The advantage of our approach is twofold. On the theoretical side, the simplicity of our candidates enables us to draw many natural connections between their hardness and questions in complexity theory or learning theory. On the applied side, the piecewise-linear structure of our candidates lends itself nicely to applications in secure multiparty computation (MPC). In particular, we construct protocols for distributed PRF evaluation that achieve better round complexity and/or communication complexity compared to protocols obtained by combining standard MPC protocols with practical PRFs (including MPC-friendly ones).
Finally, we introduce a new primitive we call an encoded-input PRF, which can be viewed as an interpolation between weak PRFs and standard (strong) PRFs. As we demonstrate, an encoded-input PRF can often be used as a drop-in replacement for a strong PRF, combining the efficiency benefits of weak PRFs and the security benefits of strong PRFs. We give a candidate EI-PRF based on our main weak PRF candidate.
Joint work with Dan Boneh, Yuval Ishai, Amit Sahai, and David J. Wu, published at TCC 2018.

Chee Yap (New-York University)

January 9, 2019 at 10h15, room M7-315 (3rd floor, Monod)

Subdivision Path Planning in Robotics: Theory and Practice

Motion planning is a fundamental problem in robotics. We propose to design path planners based on three foundations:
(1) The notion of “resolution-exact” planners. Conceptually, it avoids the zero problem of exact computation.
(2) The use of “soft predicates” for achieving such algorithms in the subdivision approach.
(3) The “feature-based technique” for constructing such soft predicates.
We formulate an algorithmic framework called “Soft Subdivision Search” (SSS) that incorporates these ideas. There are many parallels between our framework and the well-known Sampling or Probabilistic Roadmap framework. Both frameworks lead to algorithms that are
* practical
* easy to implement
* flexible and extensible
* with adaptive and local complexity
In contrast to sampling and previous resolution approaches, SSS confers strong theoretical guarantees, including halting.

In a series of papers we demonstrated the power of these ideas, producing planners for planar robots with 2, 3 and 4 degrees of freedom (DOF) that outperform or match state-of-the-art sampling-based planners. Most recently, we produced a planner for two spatial robots (rod and ring) with 5 DOFs. Non-heuristic planners for such robots have been considered a challenge for the subdivision approach. We outline a general axiomatic theory underlying these results, including subdivision in non-Euclidean configuration spaces.
Joint work with Y.J. Chiang, C.H. Hsu, C. Wang, Z. Luo, B. Zhou, J.P. Ryan.

Elena Kirshanova (AriC)

December 13, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Practical sieving algorithms for the Shortest Vector Problem

In this talk I present recent results on sieving algorithms for the Shortest Vector Problem. First, I explain why this problem is important and how sieving algorithms work. Then, I present recent advances in memory efficient versions of sieving algorithms. I explain locality-sensitive techniques for these types of algorithms. The part of the talk is based on joint works with Gottfried Herold and Thijs Laarhoven. Finally, I present recent advances in practical aspects of sieving algorithm for SVP. I describe technical challenges that arise when one tries to make sieving algorithms practical, and how one can overcome some of them. This part of the talk is on-going work with Martin R. Albrecht, Leo Ducas, Gottfried Herold, Eamonn W. Postlethwaite, Marc Stevens.
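To convey only the basic mechanism (a toy far removed from the optimized sieves of the talk): keep a pool of lattice vectors and repeatedly replace \(v\) by \(v \pm w\) whenever that makes it shorter; the shortest survivors approximate short lattice vectors.

    import numpy as np

    rng = np.random.default_rng(2)
    B = rng.integers(-50, 50, size=(8, 8))                 # basis vectors as rows
    pool = [B.T @ rng.integers(-5, 6, size=8) for _ in range(200)]

    changed = True
    while changed:                 # each update strictly decreases an integer norm
        changed = False
        for i in range(len(pool)):
            for j in range(len(pool)):
                if i == j:
                    continue
                for s in (1, -1):
                    cand = pool[i] + s * pool[j]
                    if 0 < cand @ cand < pool[i] @ pool[i]:
                        pool[i], changed = cand, True

    print(min(int(v @ v) for v in pool if v @ v > 0))      # shortest squared norm found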

Nicolas Brunie (Kalray)

December 6, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Overview of arithmetic at Kalray: metalibm and the rest

Kalray’s version of Metalibm, “lugdunum”, has recently been open-sourced. It is an interesting tool for developing elementary functions. In this presentation we will present the tool and show how it can be used to explore the design space of a few elementary functions. Then we will present in more detail how Metalibm is used at Kalray to develop both the next-generation hardware and the mathematical libraries, through the example of CRC reduction and of OpenCL-C (kernel code) elementary functions. Finally we will survey arithmetic at Kalray outside Metalibm, through a description of the next-generation processor and what is envisioned for the future.

Sylvie Putot (LIX, École Polytechnique)

November 29, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Forward Inner-Approximated Reachability of Non-Linear Continuous Systems

We propose an approach for computing inner-approximations of reachable sets of dynamical systems defined by non-linear, uncertain, ordinary differential equations. This is a notoriously difficult problem, much more intricate than outer-approximations, for which there exist well-known solutions, mostly based on Taylor models. Our solution builds on rather inexpensive set-based methods, namely a generalized mean-value theorem combined with Taylor-model outer-approximations of the flow and of its Jacobian with respect to the uncertain inputs and parameters. The combination of such forward inner and outer Taylor-model based approximations can be used as a basis for the verification and falsification of properties of cyber-physical systems.

Radu Titiu (AriC and BitDefender)

November 22, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Adaptively secure PRFs from LWE

In distributed pseudorandom functions (DPRFs), a PRF secret key SK is secret-shared among N servers so that each server can locally compute a partial evaluation of the PRF on some input X. A combiner that collects t partial evaluations can then reconstruct the evaluation F(SK, X) of the PRF under the initial secret key. So far, all non-interactive constructions in the standard model are based on lattice assumptions. One caveat is that they are only known to be secure in the static corruption setting, where the adversary chooses the servers to corrupt at the very beginning of the game, before any evaluation query. In this work, we construct the first fully non-interactive adaptively secure DPRF in the standard model. Our construction is proved secure under the LWE assumption against adversaries that may adaptively decide which servers they want to corrupt. We also extend our construction in order to achieve robustness against malicious adversaries.
This is joint work with Benoit Libert and Damien Stehlé.
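The reconstruction mechanics can be seen on a linear toy map (our sketch, with hypothetical names; this is not a secure PRF, only the threshold skeleton): a key k is Shamir-shared, each server outputs a partial evaluation k_i * h(x) mod p, and any t partial evaluations are combined by Lagrange interpolation at 0.

    import random

    p = 2**61 - 1                      # a Mersenne prime
    N, t = 5, 3

    def share(k):
        """Shamir shares (i, P(i)) of k with threshold t."""
        coeffs = [k] + [random.randrange(p) for _ in range(t - 1)]
        return [(i, sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p)
                for i in range(1, N + 1)]

    def combine(points):
        """Lagrange interpolation at 0 of the given points modulo p."""
        total = 0
        for xi, yi in points:
            num = den = 1
            for xj, _ in points:
                if xj != xi:
                    num = num * (-xj) % p
                    den = den * (xi - xj) % p
            total = (total + yi * num * pow(den, p - 2, p)) % p
        return total

    k = random.randrange(p)
    h_x = random.randrange(p)          # stand-in for a public hash of the input X
    partials = [(i, si * h_x % p) for i, si in share(k)]
    assert combine(partials[:t]) == k * h_x % p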

Martin Kumm (Uni. Kassel, Germany)

November 8, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Exact Computation of Monotonic Functions with Large Input Word Sizes using Look-Up Tables

The exact evaluation of arbitrary functions by using look-up tables (LUTs) is typically limited to small input word sizes. This is due to the fact that the storage requirements grow exponentially with the input word size \(N\) and linearly with the output word size \(M\), i.e., \(O(2^N M)\). However, many applications require the computation of elementary functions with a large precision of the input argument but a lower precision of the result. One example is direct digital frequency synthesis (DDFS), with typically \(N=32..48\) bits and \(M=8..12\) bits. Another example is tone mapping for high-dynamic-range (HDR) imaging, with typically \(N=16..19\) bits and \(M=8\) bits. In this talk, alternative architectures for the evaluation of monotonic functions using LUTs are discussed, whose memory requirements scale linearly with the input word size and exponentially with the output word size, i.e., \(O(2^M N)\). This is achieved by using \(M\) additional comparators or adders. First experimental results from FPGA synthesis show that this also translates into resource reductions for those applications where \(M\) is just larger than \(N\).
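A software model of the idea (our sketch, with an example function chosen for illustration): for a monotonic function, store for each of the \(2^M\) output values the smallest input that reaches it, and evaluate by comparisons against these thresholds, replacing a \(2^N\)-entry table by a \(2^M\)-entry one.

    from bisect import bisect_right

    N, M = 16, 6
    top = 2**M - 1

    def f(i):                          # an example monotonic function
        return round(top * (i / (2**N - 1)) ** 0.5)

    # thresholds[m] = smallest input i with f(i) >= m (2^M entries of N bits)
    thresholds, i = [], 0
    for m in range(2**M):
        while f(i) < m:
            i += 1
        thresholds.append(i)

    def f_via_thresholds(i):           # the comparators, modelled by bisection
        return bisect_right(thresholds, i) - 1

    assert all(f_via_thresholds(i) == f(i) for i in range(2**N))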

Silviu Filip (CAIRN, Inria Rennes Bretagne Atlantique)

October 25, 2018 at 10h15, room M7-315 (3rd floor, Monod)

A High Throughput Polynomial and Rational Function Approximations Evaluator

We present an automatic method for the evaluation of functions via polynomial or rational approximations, and its hardware implementation on FPGAs. These approximations are evaluated using Ercegovac's iterative E-method, adapted for FPGA implementation. The polynomial and rational function coefficients are optimized so that they satisfy the constraints of the E-method. We present several examples of practical interest; in each case a resource-efficient approximation is proposed and comparisons are made with alternative approaches.

Florent Bréhard (AriC)

October 18, 2018 at 10h15, room M7-315 (3rd floor, Monod)

Rigorous Numerics for Function Space Problems and Applications to Aerospace

A wide range of numerical routines exist for solving function space problems (like ODEs, PDEs, optimization, etc.). But in most cases, one lacks information about the reliability of the results, e.g., how many of the returned digits are correct. While most applications focus on efficiency, some safety-critical tasks, as well as computer-assisted mathematics, need rigorous mathematical statements about the computed result, such as automatic tight error bounds.

A relevant practical example occurs in the spacecraft rendezvous problem, which consists in determining the optimal command law for a spacecraft equipped with thrusters to be transferred from its original orbit to a target orbit within a given time interval. Computing rigorous trajectories is of high interest to guarantee a posteriori the correctness of the numerical command law returned by the optimization algorithm used to solve this problem.

In this talk we discuss a rigorous framework called Chebyshev models to provide validated enclosures of real-valued functions defined over a compact interval. After having presented the basic arithmetic operations on them, we focus on an algorithm that computes validated solutions of linear ordinary differential equations, specifically, approximate truncated Chebyshev series together with a rigorous uniform error bound. The method relies on an a posteriori validation based on a Newton-like fixed-point operator, which also exploits the almost-banded structure of the problem in an efficient way. We provide an open-source C implementation (https://gforge.inria.fr/projects/tchebyapprox/).
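The non-rigorous half of such a model is easy to demo (our snippet): a truncated Chebyshev interpolant together with a sampled, merely estimated, uniform error; the point of the validation step described above is to replace that estimate by a proven bound.

    import numpy as np

    deg = 10
    cheb = np.polynomial.chebyshev.Chebyshev.interpolate(np.exp, deg, domain=[-1, 1])
    xs = np.linspace(-1, 1, 20001)
    print(np.max(np.abs(cheb(xs) - np.exp(xs))))   # ~7e-11, an estimate only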

Finally, certified enclosures of spacecraft trajectories arising in the rendezvous problem will be computed using the tools introduced during the talk.

Archives of the seminar

For seminars in previous years, see former AriC seminars.


AriC project – Arithmetic and Computing