# Next seminar

Bruno Salvy (AriC)

Date to be announced. At 10h15, room 116, Monod site

Diagonals: Combinatorics, Asymptotics and Computer Algebra

Diagonals of rational functions form an important class of power series, sitting between algebraic series and solutions of linear differential equations, with strong arithmetic and analytic properties. Their coefficient sequences arise frequently in combinatorics, notably as multiple binomial sums, and in number theory, a classical example being the sequence introduced by Apéry in his proof of the irrationality of \(\zeta(3)\). In simple but frequent cases, the asymptotic analysis of these sequences can be carried out automatically using the methods of analytic combinatorics in several variables introduced by Pemantle and Wilson, combined with modern computer algebra techniques. The talk will give an introduction to the context and to recent results in this area.
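To make the binomial-sum side concrete, here is a minimal sketch (plain Python, not part of the talk's toolchain) computing Apéry's sequence, whose \(n\)th term is \(\sum_{k=0}^{n}\binom{n}{k}^2\binom{n+k}{k}^2\):

```python
from math import comb

def apery(n):
    """n-th Apéry number: sum of binomial(n,k)^2 * binomial(n+k,k)^2.
    This is the binomial sum behind Apéry's proof of the
    irrationality of zeta(3)."""
    return sum(comb(n, k) ** 2 * comb(n + k, k) ** 2 for k in range(n + 1))
```

The first terms are 1, 5, 73, 1445; the automatic asymptotic analysis of exactly this kind of multiple binomial sum is what the Pemantle–Wilson methods address.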

# 2016-2017

Thomas Prest (Thales)

Thursday May 11, 2017 at 9h45 (please note the unusual time), room 116, Monod site

Easier floating-point on a Rényi day

This talk is part of the Monthly Lattice Meetings.

Paola Boito (XLIM, U. Limoges and AriC)

Thursday April 20, 2017 at 10h15, room 116, Monod site

Efficient Solution of Parameter Dependent Quasiseparable Systems and Computation of Meromorphic Matrix Functions

In this work we focus on the solution of shifted quasiseparable systems and of more general parameter dependent matrix equations with quasiseparable representations. We propose an efficient algorithm exploiting the invariance of the quasiseparable structure under diagonal shifting and inversion. This algorithm is applied to compute various functions of matrices. Numerical experiments show the effectiveness of the approach.

This is joint work with Luca Gemignani (Pisa) and Yuli Eidelman (Tel Aviv).

Alin Bostan (Inria Saclay)

Thursday April 13, 2017 at 10h15, room 116, Monod site

Proving algorithmically the transcendence of D-finite power series

Given a sequence represented by a linear recurrence with polynomial coefficients and sufficiently many initial terms, a natural question is whether the transcendence of its generating function can be decided algorithmically. The question is non-trivial even for sequences satisfying a recurrence of order one. An algorithm due to Michael Singer is sufficient, in principle, to answer the general case, but it suffers from too high a complexity to be effective in practice. We will present a recent method that we have used to treat a non-trivial combinatorial example. It reduces the question of transcendence to a (structured) linear algebra problem.
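As an illustration of the input data considered here (a recurrence plus initial terms), a short plain-Python sketch unrolling the first-order recurrence with polynomial coefficients satisfied by the Catalan numbers (whose generating function happens to be algebraic, i.e. not transcendental):

```python
def catalan_terms(n_terms):
    """Unroll the first-order recurrence (n+2)*C(n+1) = (4n+2)*C(n),
    C(0) = 1, satisfied by the Catalan numbers. The division below
    is exact, so integer arithmetic suffices."""
    c = [1]
    for n in range(n_terms - 1):
        c.append((4 * n + 2) * c[n] // (n + 2))
    return c
```

Deciding, from such data alone, whether the generating function is algebraic or transcendental is the question addressed in the talk.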

Laurent Grémy (Loria)

Monday April 10, 2017 at 15h00, room 115, Monod site

Computing discrete logarithms: sieving for the number field sieve

Factoring large integers and computing discrete logarithms over finite fields are two hard problems, and the most efficient known algorithms for both belong to the same family, the index calculus. The security of some cryptosystems relies on their difficulty.

Since 1982 and Pomerance's use of the quadratic sieve to factor large integers, it is well known that an important step of these algorithms, called relation collection, can be performed quickly using sieving algorithms, in the spirit of the sieve of Eratosthenes for finding primes. Among the index calculus algorithms, the number field sieve (NFS) reaches the best complexity. It requires sieving in two dimensions. In 2005, Franke and Kleinjung described the most efficient algorithm known to sieve over two dimensions.
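For reference, the sieve of Eratosthenes mentioned above can be sketched in a few lines of Python (a toy version; the sieves used in NFS implementations work over lattice regions in several dimensions, not integer intervals):

```python
def primes_up_to(n):
    """Classical sieve of Eratosthenes: cross out the multiples of
    each prime, so every index left standing is prime."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [p for p in range(2, n + 1) if is_prime[p]]
```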

The diversity of finite fields implies that different variants of the NFS, such as those described by Joux–Lercier–Smart–Vercauteren in 2006 or by Kim–Barbulescu in 2016, need to sieve in higher dimensions, typically from 3 to 6. In this talk, we will describe three sieving algorithms in dimension three, computational results obtained with their implementations, and finally an algorithm to sieve in any small dimension.

Jean-Michel Muller (AriC)

Thursday March 23, 2017 at 10h15, room 116, Monod site

Tight and rigorous error bounds for basic building blocks of double-word arithmetic

(Joint work with Mioara Joldes and Valentina Popescu)

We analyze several classical basic building blocks of double-word arithmetic (frequently called “double-double arithmetic” in the literature): the addition of a double-word number and a floating-point number, the addition of two double-word numbers, the multiplication of a double-word number by a floating-point number, the multiplication of two double-word numbers, the division of a double-word number by a floating-point number, and the division of two double-word numbers. For multiplication and division we get better relative error bounds than the ones previously published. For addition of two double-word numbers, we show that the previously published bound was incorrect, and we provide a new relative error bound. We introduce new algorithms for division. We also give examples that illustrate the tightness of our bounds.
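As background, the error-free transforms underlying such double-word algorithms can be sketched in Python (a minimal illustration, not one of the algorithms analyzed in the talk): Knuth's TwoSum returns the rounded sum together with its exact rounding error.

```python
from fractions import Fraction

def two_sum(a, b):
    """Knuth's TwoSum: returns (s, e) with s = fl(a+b) and
    a + b = s + e exactly. The pair (s, e) is a double-word
    representation of the exact sum."""
    s = a + b
    a_virtual = s - b
    b_virtual = s - a_virtual
    e = (a - a_virtual) + (b - b_virtual)
    return s, e

# Exactness can be checked in rational arithmetic:
s, e = two_sum(0.1, 0.2)
assert Fraction(s) + Fraction(e) == Fraction(0.1) + Fraction(0.2)
```

The double-word operations studied in the talk chain such transforms, and the point of the analysis is to bound the relative error of the final (s, e) pairs.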

George Labahn (U. Waterloo, Canada)

Friday February 3, 2017 at 10h15, room 116, Monod site

Sigma Bases, Kernel Bases and their use in fast computation for polynomial matrices

In this talk we discuss sigma bases and kernel bases of polynomial matrices and illustrate their role in fast algorithms for arithmetic with polynomial matrices. In particular, we focus on the use of kernel bases for triangularization, determinant computation, and Hermite normal form computation.

Joint work with Vincent Neiger (Lyon and TU Denmark) and Wei Zhou (Waterloo)

Marc Mezzarobba (CNRS, UPMC)

Thursday January 26, 2017 at 10h15, room 116, Monod site

Rigorous Multiple-Precision Evaluation of D-Finite Functions in SageMath

We present an implementation, on top of the SageMath computer algebra system, of a collection of tools for computing numerical values of D-finite functions. Recall that a complex analytic function is D-finite when it satisfies an ordinary differential equation whose coefficients are polynomial in the independent variable. D-finite functions can be viewed as a class of special functions analogous to the classes of algebraic or hypergeometric functions, but more general. They come up in areas such as analytic combinatorics and mathematical physics, and lend themselves well to symbolic manipulation by computer algebra systems.

The main task our package performs is the “numerical analytic continuation” of D-finite functions. This process results in a numerical approximation of the transition matrix that maps “initial values” of an ODE somewhere on the complex plane to “initial values” elsewhere that define the same solution. Numerical analytic continuation can then be used to compute things like values or polynomial approximations of D-finite functions anywhere on their Riemann surfaces, and monodromy matrices of differential operators. The code supports the important limit case where the (generalized) initial values are provided at regular singular points of the ODE, making it possible in particular to compute connection constants between regular singularities. It is rigorous in the sense that it returns interval results that are guaranteed to enclose the exact mathematical result.
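To give the flavour of numerical analytic continuation, here is a toy sketch (plain Python, far from the rigorous interval-based machinery of the package) for the simplest D-finite equation \(y' = y\): the ODE turns into a recurrence on Taylor coefficients, which is summed at each step.

```python
def taylor_step(y0, h, order=30):
    """One analytic-continuation step for y' = y: the coefficient
    recurrence a_{k+1} = a_k / (k+1) generates the local Taylor
    series, which is then evaluated at distance h."""
    a, total = y0, 0.0
    for k in range(order):
        total += a * h ** k
        a /= k + 1
    return total

def continue_solution(y0, x_target, steps=4):
    """Chain several steps; in the matrix view, the 1x1 transition
    matrices along the path simply compose."""
    y = y0
    for _ in range(steps):
        y = taylor_step(y, x_target / steps)
    return y
```

Starting from y(0) = 1, four steps to x = 1 recover e to high accuracy; the package does the analogous computation for arbitrary D-finite equations, with interval enclosures instead of floating-point estimates.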

The new package can be considered a successor of NumGfun, a Maple package by the same author with broadly similar features.

Brice Minaud (RHUL)

Thursday January 19, 2017 at 10h15, room 116, Monod site

Cryptanalysis of the CLT15 Multilinear Map over the Integers

Multilinear maps are a powerful cryptographic primitive: a wide array of cryptographic schemes can be built from multilinear maps, including some for which there is no other known construction, such as non-interactive multipartite key exchange, or general program obfuscation. However, to this day there is no provably secure construction of multilinear maps. Moreover, both of the original constructions (proposed in 2013 by Garg, Gentry, and Halevi, and by Coron, Lepoint, and Tibouchi) have been broken for their most direct application, namely non-interactive multipartite key exchange. This has led Coron, Lepoint and Tibouchi to introduce an improved variant of their original construction at Crypto 2015. We propose a polynomial-time attack on this new construction, which fully breaks the scheme. The presentation will of course begin with a general introduction to multilinear maps and the CLT construction, which aims to be accessible to all, and will conclude with a brief overview of the current state of the topic.

Anastasia Volkova (UPMC Paris 6, LIP6)

Thursday January 5, 2017 at 10h15, room 116, Monod site

Towards reliable implementation of digital filters

Linear digital filters are frequently used in digital signal processing. We are interested in the implementation of such algorithms in Fixed-Point Arithmetic. However, various issues arise at each step of an implementation. We propose an automated filter-to-code generator that encompasses the filter implementation flow and provides a consistent error analysis of the algorithms involved.

In this talk we give an overview of the filter-to-code generator and focus on determining Fixed-Point formats that are optimal with respect to a certain criterion. We consider Linear Time-Invariant filters in state-space representation. The computational errors in the intermediate steps of the filter evaluation, as well as their accumulation over time, are fully taken into account. Our approach is rigorous in the sense that the output Fixed-Point formats are proved to be free of overflows: we do not rely on any (non-exhaustive) filter simulation, only on proofs.
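A minimal sketch of the format-determination idea (illustrative only; the sign conventions and the actual overflow proofs in the tool are more involved): given a rigorous bound on a signal's magnitude and a word length, the most significant bit position can be chosen so that no value within the bound overflows.

```python
from math import floor, log2

def fixed_point_format(max_abs, wordlength):
    """Choose msb so that 2**msb > max_abs (hence no overflow for
    |x| <= max_abs), then deduce the lsb position and the
    quantization step of a format with the given word length."""
    msb = floor(log2(max_abs)) + 1
    lsb = msb - wordlength + 1
    return msb, lsb, 2.0 ** lsb
```

For instance, a 16-bit format holding values bounded by 1.0 gets msb = 1 and a quantization step of 2^-14; the hard part, addressed in the talk, is obtaining the magnitude bound max_abs rigorously from the filter itself.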

Crypto and Lattices Days

Thursday December 15, 2016 at 10h15, room 116, Monod site

TBA

Cf. the Web page of the Crypto and Lattices Days

Crypto and Lattices Days

Thursday November 24, 2016 at 10h15, room 116, Monod site

TBA

Cf. the Web page of the Crypto and Lattices Days

Paola Boito (U. Limoges and LIP, AriC team)

Thursday October 6, 2016 at 10h15, room 116, Monod site

Quasiseparable structure and polynomial rootfinding

Computing the roots of a univariate polynomial is a classical task in computer algebra and numerical analysis and it is often solved via a companion matrix approach. But can we exploit the matrix structures that arise in this problem to devise fast eigensolvers and rootfinding algorithms, without losing accuracy?

Motivated by this problem, we

- review some useful rank structures of Frobenius and Chebyshev companion matrices and pencils,
- introduce the quasiseparable representation for such matrices and pencils,
- show how to adapt suitable versions of the Francis/Kublanovskaya QR algorithm to the quasiseparable representation, so that the arithmetic complexity of each iteration is linear in the size of the matrix,
- comment on numerical results, applications, and perspectives for further work.
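As a reminder of the companion-matrix approach, here is a small plain-Python sketch (illustrative only, ignoring the quasiseparable structure that the talk exploits): the Frobenius companion matrix of a monic polynomial has the polynomial's roots as eigenvalues, which we check indirectly via Cayley-Hamilton in exact integer arithmetic.

```python
def companion(coeffs):
    """Frobenius companion matrix of the monic polynomial
    x^n + coeffs[n-1]*x^(n-1) + ... + coeffs[0]:
    ones on the subdiagonal, -coeffs in the last column."""
    n = len(coeffs)
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1
    for i in range(n):
        C[i][n - 1] = -coeffs[i]
    return C

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def poly_of_matrix(coeffs, C):
    """Evaluate p(C) by Horner's rule; by Cayley-Hamilton the result
    is the zero matrix when C is the companion matrix of p."""
    n = len(C)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    P = I  # leading coefficient of the monic polynomial
    for c in reversed(coeffs):
        P = mat_mul(P, C)
        P = [[P[i][j] + c * I[i][j] for j in range(n)] for i in range(n)]
    return P
```

The off-diagonal blocks of such companion matrices have rank one, which is precisely the quasiseparable structure that fast QR iterations exploit.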

This is joint work with Luca Gemignani (University of Pisa) and Yuli Eidelman (University of Tel Aviv).

Serge Torres (LIP, AriC team)

Thursday September 22, 2016 at 2:30 PM, in lecture theater B, on the 3rd floor of the GN1 building, Monod campus.

PhD defense: Tools for the Design of Reliable and Efficient Function Evaluation Libraries

Can you tell me more about that?

The design of function evaluation libraries is a complex task that requires great care and dedication, especially when one wants to satisfy high standards of reliability and performance. In practice, it cannot be carried out correctly, as a routine operation, without tools that not only help the designer find their way through a large and complex solution space but also guarantee that the chosen solutions are correct and (possibly) optimal.

In the present state of the art, one has to think in terms of a “toolbox” from which the tools that best fit one's goals must be carefully mixed and matched, rather than expect an automatic device that solves everything.

What is it that you have done?

The work presented here is dedicated to the design and implementation of such specialized tools in two realms:

- the radical consolidation of Ziv’s rounding test, which has long been used in a more or less empirical way in the implementation of function approximation;
- the development of an implementation of the SLZ algorithm in order to “solve” (in a specific sense that will be made clear) the Table Maker’s Dilemma; the suitability of this implementation is also assessed for the development of correctly rounded approximations of functions with quad-precision floating-point (aka IEEE-754 Binary128 format) arguments and images.

About the defense, see the report on the intranet of ENS de Lyon.

Rémi Imbach (Inria Nancy – Grand Est, VEGAS Team)

Thursday September 15, 2016 at 10h15, room 116 (site Monod M1)

Certified numerical tools for computing the topology of projected curves

We are interested in computing the topology of the projection of an algebraic or analytic space curve in the plane. Such a projection is not a smooth curve and has singularities. State-of-the-art approaches to computing the topology of algebraic plane curves use symbolic computation, but in the case of a projection these approaches suffer from the size of the implicit representation of the curve as a resultant polynomial.

Using numerical tools to compute the projected curve or its singularities is a challenging problem since the projected curve is not smooth, and state-of-the-art characterizations of the singularities use over-determined systems of equations. We will first propose a new characterization of the singularities of the projection of an algebraic curve using a square system of polynomials; its solutions are regular, so it can be solved numerically.

However, its equations involve coefficients of resultant polynomials and are still very large. The arithmetic precision required for numerical computations with such polynomials makes classical solvers either incomplete or dramatically time-consuming. We will propose a multi-precision solver based on interval subdivision, specially designed to handle large polynomials, to compute the solutions of this system. We will then consider the more general case of projections of analytic space curves, and propose a geometric approach to describe their singularities. It results in a square system of equations with only regular solutions, which does not involve resultant theory and can be solved with our certified numerical solver. Finally, we will present a new approach to computing the topology of the projected curve, i.e. finding a graph with the same topology. We use a certified numerical path tracker to enclose the space curve in a sequence of boxes, which both simplifies the search for singularities and allows us to compute the smooth branches linking them.

# 2015-2016

Philippe Dumas (Inria)

Thursday June 30, 2016 at 10h15, room 116 (site Monod M1)

Some solutions of linear Mahler equations

Mahler equations relate evaluations of the same function \(f\) at iterated \(b\)th powers of the variable. They arise in particular in the study of automatic sequences and in the complexity analysis of divide-and-conquer algorithms. Recently, the problem of solving Mahler equations in closed form has occurred in connection with number-theoretic questions.

A difficulty in the manipulation of Mahler equations is the exponential blow-up of degrees when applying a Mahler operator to a polynomial. We present and analyze polynomial-time algorithms for solving linear Mahler equations for series, polynomials, and rational functions.
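The degree blow-up can be seen directly on coefficient lists; a minimal Python sketch (not from the talk), where the Mahler operator \(p(x)\mapsto p(x^b)\) multiplies the degree by \(b\):

```python
def mahler(coeffs, b):
    """Apply p(x) -> p(x^b) to a polynomial given as a coefficient
    list (coeffs[i] is the coefficient of x^i): the coefficient of
    x^i moves to x^(b*i), so degree d becomes b*d."""
    out = [0] * (b * (len(coeffs) - 1) + 1)
    for i, c in enumerate(coeffs):
        out[b * i] = c
    return out
```

Iterating the operator k times turns degree d into b^k * d, which is the exponential growth that the polynomial-time algorithms of the talk must avoid.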

Svyatoslav Covanov (Université de Lorraine)

Thursday June 23, 2016 at 10h15, room 116 (site Monod M1)

Fast integer multiplication using generalized Fermat primes

For almost 35 years, Schönhage-Strassen’s algorithm has been the fastest algorithm known for multiplying integers, with a time complexity \(O(n⋅\log n⋅ \log \log n)\) for multiplying n-bit inputs. In 2007, Fürer proved that there exists \(K>1\) and an algorithm performing this operation in \(O(n⋅\log n⋅K^{(\log^∗ n)})\). Recent work by Harvey, van der Hoeven, and Lecerf showed that this complexity estimate can be improved in order to get \(K=8\), and, via a conjecture on Mersenne primes, \(K=4\).

This presentation will describe an alternative algorithm using generalized Fermat primes, for which \(K=4\) conjecturally. This algorithm appears simpler than the one relying on Mersenne primes and can be seen as a fix to an approach proposed by Fürer in 1989 using Fermat primes.
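For concreteness, generalized Fermat numbers have the shape \(b^{2^k}+1\); a small hedged helper (naive trial division, for illustration only, unrelated to the multiplication algorithm itself) finds the primes among them:

```python
def is_prime(n):
    """Naive trial division -- fine for the small examples below."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def generalized_fermat(b, k):
    """Generalized Fermat number b^(2^k) + 1."""
    return b ** (2 ** k) + 1
```

For instance 2^(2^2) + 1 = 17 and 10^2 + 1 = 101 are prime, while the classical Fermat number 2^32 + 1 is composite.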

AriC team, LIP

Thursday June 16, 2016 at 10h15, room 116 (site Monod M1)

Crypto’s results fair

Short presentations (10 min each) of results submitted to cryptography conferences.

- **Shi Bai**: *A subfield lattice attack for overstretched NTRU.* Accepted to CRYPTO. Joint work with Martin Albrecht and Léo Ducas.
- **Benoît Libert**: *Fully Secure Functional Encryption for Inner Products from Standard Assumptions.* Accepted to CRYPTO. Joint work with Shweta Agrawal and Damien Stehlé.
- **Fabrice Mouhartem**: *Lattice-Based Group Encryption.* In progress. Joint work with Benoît Libert.
- **Willy Quach**: *Circular Security and Fully Homomorphic Encryption.* M2 internship.
- **Somindu Ramanna**: *Functional Commitment Schemes: From Polynomial Commitments to Pairing-Based Accumulators from Simple Assumptions.* Accepted to ICALP. Joint work with Benoît Libert and Moti Yung.

Pierre-Jean Spaenlehauer (Inria Nancy – Grand Est)

Thursday June 9, 2016 at 10h15, room 116 (site Monod M1)

Sparse polynomial systems with many positive solutions from bipartite simplicial complexes

We consider the problem of constructing multivariate polynomial systems with real coefficients, prescribed monomial support and many non-degenerate real solutions with positive coordinates. For some families of monomial supports, we use a version of Viro’s method to construct polynomial systems all complex solutions of which are real and positive. We also use this construction to investigate special cases of the following problem: how many non-degenerate solutions in the positive orthant can a system of n equations in n variables involving at most t distinct monomials have? In particular, we study a family of fewnomial systems obtained by considering a simplicial complex contained in a regular triangulation of the cyclic polytope: using a symbolic-numeric approach to a matrix completion problem, we will present a method to construct a system of 5 equations in 5 variables, involving only 11 distinct monomials and having 38 positive solutions.

Joint work with Frédéric Bihan.

Weiqiang Wen (AriC team, LIP)

Thursday June 2, 2016 at 10h15, room 116 (site Monod M1)

Improved Reduction from the Bounded Distance Decoding Problem to the Unique Shortest Vector Problem in Lattices

We present a probabilistic polynomial-time reduction from the lattice Bounded Distance Decoding (BDD) Problem with parameter \(1/(\sqrt{2} \cdot \gamma)\) to the unique Shortest Vector Problem (uSVP) with parameter \(\gamma\) for any \(\gamma>1\) that is polynomial in the lattice dimension \(n\). It improves the BDD to uSVP reductions of [Lyubashevsky and Micciancio, CRYPTO, 2009] and [Liu, Wang, Xu and Zheng, Inf. Process. Lett., 2014], which rely on Kannan’s embedding technique. The main ingredient of the improvement is the use of Khot’s lattice sparsification [Khot, FOCS, 2003] before resorting to Kannan’s embedding, in order to boost the uSVP parameter.
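For readers unfamiliar with Kannan's embedding, here is a schematic plain-Python construction (integer lattices, embedding factor M = 1 for simplicity; the actual reduction tunes M and adds sparsification): a BDD instance, i.e. a basis plus a target close to the lattice, becomes a uSVP instance one dimension higher in which the BDD error vector yields an unusually short lattice vector.

```python
def kannan_embedding(basis, target, M=1):
    """Append a zero coordinate to each basis vector and adjoin the
    row (target, M): in the resulting (n+1)-dimensional lattice,
    (target - closest_lattice_point, M) is a short vector."""
    rows = [list(row) + [0] for row in basis]
    rows.append(list(target) + [M])
    return rows
```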

Laurent Busé (Galaad, Inria Sophia Antipolis)

Thursday May 26, 2016 at 10h15, room 116 (site Monod M1)

Matrix representations of 3D algebraic parameterizations

3D algebraic parameterizations of curves and surfaces are intensively used in geometric modeling to provide piecewise descriptions of 3D objects. For many purposes, such as visualization or intersection problems, it is important to derive implicit representations from parameterizations. In this talk, I will discuss an elimination method to compute matrix-based implicit representations that adapt dynamically to the geometric properties of a given parameterization, typically its base locus. I will also show how these matrix representations turn several geometric problems into linear algebra problems, allowing better control of their numerical treatment in the context of approximate data and/or computations.

Cindy Rubio-Gonzalez

Please note: unusual date and place!

Friday May 13, 2016, 9h00-11h00, St Germain au Mont d’Or, in the Domaine des Hautannes

Numerical precision

More information on the page of the thematic school on **numerical simulation and polyhedral code optimisations**

Peter Schwabe (Radboud University)

Thursday April 14, 2016 at 10h15, Amphi I (main building, Level 0, North side), Monod site

Post-quantum key exchange — a new hope

At IEEE Security & Privacy 2015, Bos, Costello, Naehrig, and Stebila proposed an instantiation of Peikert’s ring-learning-with-errors–based (Ring-LWE) key-exchange protocol (PQCrypto 2014), together with an implementation integrated into OpenSSL, with the stated goal of providing post-quantum security for TLS. In this talk I will present joint work with Alkim, Ducas, and Pöppelmann, in which we revisit their instantiation and stand-alone implementation. Specifically, we propose new parameters and a better suited error distribution, analyze the scheme’s hardness against attacks by quantum computers in a conservative way, introduce a new and more efficient error-reconciliation mechanism, and propose a defence against backdoors and all-for-the-price-of-one attacks. By these measures and for the same lattice dimension, we more than double the security parameter, halve the communication overhead, and speed up computation by more than a factor of 9 in a portable C implementation and by more than a factor of 24 in an optimised implementation targeting current Intel CPUs. These speedups are achieved with comprehensive protection against timing attacks.

The talk will be based on this article.

Joint work with Erdem Alkim, Léo Ducas and Thomas Pöppelmann.

More information on the Web page of the “Monthly lattice and crypto meetings”

Wednesday 6 April – Friday 8 April, 2016, amphi Descartes, ENS de Lyon

HPC Days in Lyon

see the program

Fabrice Benhamouda (ENS Paris)

Thursday March 3, 2016 at 10h15, salle des thèses site Monod

Easing Coppersmith Methods using Analytic Combinatorics: Applications to Public-Key Cryptography with Weak Pseudorandomness

*Coppersmith methods* are a family of lattice-based techniques to find small integer roots of polynomial equations. They have found numerous applications in cryptanalysis and, in recent developments, applications where the number of unknowns and the number of equations are non-constant. In these cases, the combinatorial analysis required to settle the complexity and the success condition of the method becomes very intricate. We provide a toolbox based on *analytic combinatorics* for these studies. It uses the structure of the considered polynomials to derive their generating functions and applies complex analysis techniques to get asymptotics. The toolbox is versatile and can be used for many different applications, including multivariate polynomial systems with arbitrarily many unknowns (of possibly different sizes) and simultaneous modular equations over different moduli. To demonstrate the power of this approach, we apply it to recent cryptanalytic results on number-theoretic pseudorandom generators, for which we easily derive a precise and formal analysis. We also present new theoretical applications to two problems on RSA key generation and randomness generation used in padding functions for encryption.

The talk will be based on this article.

Joint work with Céline Chevalier, Adrian Thillard, and Damien Vergnaud.

More information on the Web page of the “Monthly lattice and crypto meetings”

AriC team, LIP

Thursday February 18, 2016 at 10h15, room 116 (site Monod M1)

ISSAC’ and ARITH’ results fair

Short presentations (10 min each) of results submitted either to ARITH 23 or to ISSAC 2016.

- Claude-Pierre Jeannerod, Nicolas Louvet, Jean-Michel Muller and Antoine Plet: A library for symbolic floating-point arithmetic
- Vincent Lefèvre: Correctly Rounded Arbitrary-Precision Floating-Point Summation
- Julien Le Maire, Nicolas Brunie, Florent de Dinechin and Jean-Michel Muller: Computing floating-point logarithms with fixed-point operations
- Clément Pernet: Computing with Quasi-Separable Matrices
- Stephen Melczer and Bruno Salvy: Symbolic-Numeric Tools for Analytic Combinatorics in Several Variables
- Jean-Guillaume Dumas, Erich Kaltofen, Emmanuel Thomé and Gilles Villard: Linear time interactive certificates for the minimal polynomial and the determinant of a sparse matrix

AriC team, LIP

Thursday February 11, 2016 at 10h15, room 116 (site Monod M1)

ISSAC’ and ARITH’ results fair

Short presentations (10 min each) of results submitted either to ARITH 23 or to ISSAC 2016:

- Jean-Michel Muller, Valentina Popescu and Peter Tang: A new multiplication algorithm for extended precision using floating-point expansions
- Claude-Pierre Jeannerod, Vincent Neiger, Eric Schost and Gilles Villard: Fast computation of minimal interpolation bases in Popov form for arbitrary shifts
- Vincent Neiger: Fast computation of shifted Popov forms of polynomial matrices via systems of modular polynomial equations
- Alin Bostan, Gilles Christol, and Philippe Dumas: Fast computation of the Nth term of an algebraic series in positive characteristic
- Alin Bostan, Louis Dumont and Bruno Salvy: Efficient Algorithms for Mixed Creative Telescoping

Marie Paindavoine (AriC team, LIP and Applied Crypto Group, Orange Labs)

Thursday January 28, 2016 at 10h15, room 116 (site Monod M1)

Speeding up fully homomorphic encryption

Access to encrypted data is often “all-or-nothing”: either one has to decrypt everything, or one cannot read anything. Following Gentry’s work in 2009, fully homomorphic encryption has gained more and more attention, in particular because it allows more flexibility in how encrypted data can be accessed and processed. It is now possible to evaluate functions on encrypted data and obtain only the result, encrypted as well. This possibility offers numerous applications, especially for building a trustworthy cloud provider where the server cannot access the users’ data. It theoretically reconciles a rich user experience with the protection of sensitive data. However, the efficiency of fully homomorphic encryption remains seriously undermined by a very costly procedure: the “bootstrapping”. In this talk, we will show how to use graph problems and integer linear programming to determine the minimal number of bootstrappings necessary to correctly evaluate a circuit over encrypted data. Our method allows significant efficiency gains in the evaluation process, saving up to 70% of the bootstrapping calls. This is joint work with Bastien Vialla.

Thursday January 21, 2016 at 10h15: NO SEMINAR

Reminder: Journées du GDR IM, LIPN, Université Paris 13 (campus de Villetaneuse), 18-20 January 2016.

Lattice Meetings, 21 January 2016 at ENS Paris.

Sylvain Collange (ALF team, Inria Rennes / IRISA)

Thursday January 14, 2016 at 10h15, room 116 (site Monod M1)

General-purpose SIMT processors

We propose to generalize the SIMT execution model of GPUs to general-purpose processors, and use it to design new CPU-GPU hybrid cores. These hybrid cores will form the building blocks of heterogeneous architectures bringing together CPU-like cores and GPU-like cores that all share the same instruction set and programming model.

The SIMT execution model used on some GPUs binds together multiple threads of parallel applications and runs them in lockstep, so that multiple threads perform the same instruction at the same time on different data. This makes it possible to execute these dynamic vector instructions on energy-efficient SIMD units.

Unfortunately, current GPU architectures lack the flexibility to work with standard instruction sets like x86 or ARM. Their implementation of SIMT requires special instruction sets with control-flow reconvergence annotations, and they do not support complex control flow like exceptions, context switches and thread migration. This prevents the use of a general-purpose system software stack and breaks compatibility with CPU environments.

We will see how we can overcome all of these limitations and extend the SIMT model to conventional instruction sets by using a PC-based instruction fetch policy. In addition to enabling SIMT execution of conventional instruction sets, this solution enables key improvements that were not possible in the traditional SIMD model, such as simultaneous execution of divergent paths. It also opens the way for a whole spectrum of new architectures, hybrids of latency-oriented superscalar processors and throughput-oriented SIMT GPUs.

No AriC seminar on Thursday December 17, 2015 at 10h15.

Talk by Alin Bostan at the SIESTE seminar (Séminaire d’Informatique pour les Étudiant.e.s, Scientifiques, et Tous ceux que l’informatique intéresse) at ENS de Lyon, Tuesday December 15 at 13h30, amphi B, Monod site.

Journée *Structures Discrètes* on Thursday December 17.

Arnold Neumaier (University of Vienna, Austria, within AriC team, LIP, until the end of December 2015)

Thursday December 10, 2015 at 10h15, room 116 (site Monod M1)

New basis reduction algorithms for lattices

In this talk I first review the state of the art in basis reduction techniques for lattices (relevant among others in cryptography and in linear integer programming).

Then I describe improved analysis techniques discovered during my current sabbatical at ENS Lyon. They allow one to prove, for new variations of traditional reduction algorithms, complexity bounds and reduced-basis guarantees that are both stronger than the existing results.

Mohsin Javed (Numerical Analysis Group, University of Oxford)

**Please note the unusual date and time.** Tuesday December 8 at 14h, room 116 (site Monod M1)

Polynomial and Rational Approximations in Chebfun and Applications to Digital Filters

In this talk we will look at algorithms designed in Chebfun to compute best approximations of continuous periodic and aperiodic functions. We will also discuss Chebfun’s capabilities to compute best approximations on compact subsets of an interval and how these methods can be used to design FIR digital filters. In addition, we will also look at rational and Fourier-Padé approximations and explore how these methods may be used to design recursive IIR filters.

The talk will also discuss the closely related Carathéodory-Fejér (CF) approximation of a function. The CF approximation is a near-best approximation that can be computed numerically by solving an eigenvalue problem. For a very modest degree, the CF approximation can be within machine precision of the best polynomial or rational approximation of a given smooth function. From a numerical point of view, it is much more interesting to consider the rational rather than the polynomial CF approximation problem. In particular, the rational CF approximation can be an excellent initial guess for the true best approximation. We will explore these topics during the talk; however, this is still very much a work in progress.
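As background on near-best approximation, a self-contained Python sketch (plain stdlib, unrelated to Chebfun's actual implementation): interpolating at Chebyshev points of the first kind already gives a near-best polynomial approximation, here applied to \(e^x\) on \([-1,1]\).

```python
from math import acos, cos, exp, pi

def cheb_interp(f, n):
    """Interpolant of f at the n Chebyshev points of the first kind
    on [-1, 1], returned as a callable. The coefficients come from
    the discrete Chebyshev transform; evaluation uses the identity
    T_j(x) = cos(j * acos(x))."""
    theta = [(k + 0.5) * pi / n for k in range(n)]
    fx = [f(cos(t)) for t in theta]
    c = [2.0 / n * sum(fx[k] * cos(j * theta[k]) for k in range(n))
         for j in range(n)]
    def p(x):
        t = acos(max(-1.0, min(1.0, x)))
        return c[0] / 2 + sum(c[j] * cos(j * t) for j in range(1, n))
    return p
```

With only 12 points, the interpolant of exp is uniformly accurate to better than 1e-8 on the interval; best and CF approximations sharpen this kind of near-optimality further.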

Ludovic Perret (UPMC)

Thursday November 26, 2015 at 10h15, lecture room 116 site Monod

Algebraic Algorithms for LWE

The Learning with Errors (LWE) problem, proposed by Regev in 2005, has become an increasingly popular cryptographic primitive, due mainly to its simplicity, flexibility and convincing theoretical arguments regarding its hardness. Among the main proposed approaches to solving LWE instances — namely, lattice algorithms, combinatorial algorithms, and algebraic algorithms — the last is the one that has received the least attention in the literature, and is the focus of this talk. We present a detailed and refined complexity analysis of the original Arora-Ge algorithm, which reduces LWE to solving a system of high-degree, error-free polynomials. Moreover, we generalise their method and establish the complexity of applying Gröbner basis techniques from computational commutative algebra to solving LWE. As a result, we show that the use of Gröbner basis algorithms yields an exponential speed-up over the basic Arora-Ge algorithm. On the other hand, our results show that such techniques do not yield a subexponential algorithm for the LWE problem. We also apply our algebraic algorithm to the BinaryError-LWE problem, which was recently introduced by Micciancio and Peikert. We show that BinaryError-LWE in dimension \(n\) can be solved in subexponential time given access to a quasi-linear number of samples \(m\), under a regularity assumption. We also give precise complexity bounds for BinaryError-LWE as a function of the number of samples. The results in this work depend crucially on the assumption that the encountered systems have no special structure. We give experimental evidence that this assumption holds, and also prove it in some special cases. Therewith, we also make progress towards proving Fröberg’s long-standing conjecture from algebraic geometry.

The talk will be based on this article.

Joint work with Martin R. Albrecht, Carlos Cid and Jean-Charles Faugère.

More information on the Web page of the “Monthly lattice and crypto meetings”

Stephen Melczer (AriC team, LIP, ENS de Lyon and University of Waterloo)

Thursday November 19, 2015 at 10h15, room 116, Monod site

Analytic combinatorics in several variables and lattice path enumeration

Recent work in the study of analytic combinatorics in several variables has shown how to derive asymptotics for the coefficients of certain families of D-finite functions by representing them as diagonals of multivariate rational functions. Although a theoretical framework has been laid over the last two decades (and before), there are still obstacles which have made the ultimate dream of an effective implementation of these techniques elusive. We begin with an overview of the classical (univariate) theory of analytic combinatorics, followed by a survey of the differences and difficulties present in the multivariate extension. Finally, we look at applications of these methods to the enumeration of lattice paths in restricted regions.
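
A classical instance of this pipeline: the diagonal of the rational function \(1/(1-x-y)\) is \(\sum_n \binom{2n}{n} t^n\), and the multivariate theory predicts the first-order asymptotics \(\binom{2n}{n} \sim 4^n/\sqrt{\pi n}\). A quick numerical check (an illustration only, not the effective implementation discussed in the talk), working with logarithms to avoid overflow:

```python
import math

def central_binomial_log(n):
    """log of the diagonal coefficient C(2n, n) of 1/(1 - x - y)."""
    return math.lgamma(2 * n + 1) - 2 * math.lgamma(n + 1)

def asymptotic_log(n):
    """log of the first-order asymptotic prediction 4^n / sqrt(pi * n)."""
    return n * math.log(4) - 0.5 * math.log(math.pi * n)

for n in (10, 100, 1000):
    # difference of logs shrinks like -1/(8n)
    print(n, central_binomial_log(n) - asymptotic_log(n))
```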

This is joint work with Mark Wilson, George Labahn, Bruno Salvy, and Marni Mishna.

Ignacio Garcia-Marco (Team MC2, LIP, ENS de Lyon)

Thursday October 22, 2015 at 10h15, room 116, Monod site

Lower Bounds by Birkhoff Interpolation

In this talk we provide some lower bounds for the representation of real univariate polynomials as sums of powers of degree 1 polynomials. We present two families of polynomials of degree \( d \) such that the number of powers that are required in such a representation must be at least of order \( d \). This is clearly optimal up to a constant factor. Previous lower bounds for this problem were only of order \( \Omega(\sqrt{d}) \), and were obtained from arguments based on Wronskian determinants and “shifted derivatives”. We obtain this improvement thanks to a new lower bound method based on Birkhoff interpolation (also known as “lacunary polynomial interpolation”).

Preprint available here.

This is a joint work with Pascal Koiran.

Clément Pernet (Team AriC, LIP, on partial secondment from U. Grenoble Alpes)

Thursday October 8, 2015 at 10h15, room 116, Monod site

Computing the Rank Profile Matrix

The row (resp. column) rank profile of a matrix describes the staircase shape of its row (resp. column) echelon form.

We will first propose a recursive Gaussian elimination algorithm that can compute simultaneously the row and column rank profiles of a matrix, as well as those of all of its leading sub-matrices, in the same time as state-of-the-art Gaussian elimination algorithms. More generally, we will study the conditions under which a Gaussian elimination algorithm reveals this information. We propose the definition of a new matrix invariant, the rank profile matrix, summarizing all information on the row and column rank profiles of all the leading sub-matrices. We also explore the conditions for a Gaussian elimination algorithm to compute all or part of this invariant, through the corresponding PLUQ decomposition. As a consequence, we show that the classical iterative CUP decomposition algorithm can actually be adapted to compute the rank profile matrix. Used in a Crout variant as the base case of the recursive algorithm, it delivers a significant improvement in efficiency. Lastly, the row and column echelon forms of a matrix are usually computed via separate dedicated triangular decompositions. We show here that, from some PLUQ decompositions, it is possible to recover the row and column echelon forms of a matrix, and of any of its leading sub-matrices, by an elementary post-processing algorithm.
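
As a reference point for what these algorithms compute, the row and column rank profiles can be obtained naively by greedy rank tests in exact arithmetic; this quadratic-blow-up sketch is only a definition-chasing baseline, not the PLUQ-based method of the talk.

```python
from fractions import Fraction

def rank(rows):
    """Rank over Q by fraction-exact Gaussian elimination."""
    M = [[Fraction(x) for x in r] for r in rows]
    rk = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(rk, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        for i in range(rk + 1, len(M)):
            f = M[i][c] / M[rk][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[rk])]
        rk += 1
    return rk

def row_rank_profile(M):
    """Indices of the lexicographically smallest independent row set:
    keep row i iff it increases the rank of the rows kept so far."""
    kept, profile = [], []
    for i, r in enumerate(M):
        if rank(kept + [r]) > len(kept):
            kept.append(r)
            profile.append(i)
    return profile

def col_rank_profile(M):
    return row_rank_profile([list(c) for c in zip(*M)])

M = [[0, 0, 1],
     [1, 0, 0],
     [0, 0, 0]]
print(row_rank_profile(M))  # [0, 1]: rows 0 and 1 span the row space
print(col_rank_profile(M))  # [0, 2]: column 1 is zero
```

The example shows that the two profiles are genuinely different pieces of information; the rank profile matrix of the talk packages both (and those of all leading sub-matrices) into a single invariant.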

Joint work with Jean-Guillaume Dumas and Ziad Sultan.

Khoa Nguyen (NTU, Singapore)

Thursday October 1st, 2015 at 10h15, room 116, Monod site

A provably secure group signature scheme from code-based assumptions

We solve an open question in code-based cryptography by introducing the first provably secure group signature scheme from code-based assumptions. Specifically, the scheme satisfies the CPA-anonymity and traceability requirements in the random oracle model, assuming the hardness of the McEliece problem, the Learning Parity with Noise problem, and a variant of the Syndrome Decoding problem. Our construction produces smaller key and signature sizes than the existing post-quantum group signature schemes from lattices, as long as the cardinality of the underlying group does not exceed the population of the Netherlands (approx \( 2^{24} \) users). The feasibility of the scheme is supported by implementation results. Additionally, the techniques introduced in this work might be of independent interest: a new verifiable encryption protocol for the randomized McEliece encryption and a new approach to design formal security reductions from the Syndrome Decoding problem.

The talk will be based on this article. Joint work with Martianus Frederic Ezerman, Hyung Tae Lee, San Ling and Huaxiong Wang.

More information on the Web page of the “Monthly lattice and crypto meetings”

Fabrice Mouhartem (LIP, ENS Lyon)

Thursday September 24, 2015 at 10h15, room 116, Monod site

Lattice-based group signatures for dynamic groups

Introduced by Chaum and van Heyst in 1991, group signatures provide a way for a person to sign messages on behalf of a group they belong to while remaining anonymous inside this group. At the same time, users remain accountable for their signatures: if necessary, an opening authority is able to trace any signature back to its issuer. The concept has many applications, ranging from trusted computing to anonymous access control (e.g., to public transportation systems or to the building of a company).

In 2010, Gordon et al. described a first realization based on lattice assumptions, which was subsequently improved in several works. For the time being, all known lattice-based constructions are for static groups, where no new member can be added after the setup phase. Designing a dynamic group signature based on lattices has remained an open problem for the last four years. In this work, we propose a first dynamic group signature based on standard lattice assumptions (in the random oracle model): the Learning-with-Errors (LWE) and the Short Integer Solution (SIS) problems. To achieve this, we adapt a construction due to Ling et al. (PKC’15) by introducing a joining mechanism – namely, a protocol whereby new members can dynamically join the group – which provides security against framing attacks, a security notion specific to the dynamic case. We design this mechanism by combining the Boyen signature scheme (PKC’10) with the Böhl et al. signature (Eurocrypt’13). In particular, the latter technique makes it possible to introduce a so-called membership secret, which is known only to the introduced group member and corresponds to a public syndrome that gets certified by the group manager during the joining protocol. Our security proof requires an analysis based on the Rényi divergence (rather than the statistical distance) in order to enable a more efficient choice of parameters.

Sylvain Chevillard (APICS team, INRIA Sophia-Antipolis)

Thursday September 17, 2015 at 10h15, room 116, Monod site

Minimax principle and lower bounds in \(H^2\) rational approximation

I will present methods allowing one to derive some lower bounds in rational approximation of given degree to functions in the Hardy space \(H^2\) of the disk. The algorithms that solve the problem of best rational approximation to a function usually only find a ‘candidate’, in the sense that they find a local minimum of the criterion, with good hope that this minimum is indeed global. Providing a good lower bound to this problem is thus an interesting complement to such solvers, as it gives an interval of confidence for the true optimal value.

A first method is based on Adamjan-Arov-Krein theory and leads to a lower bound that is fairly easy to compute from a numerical point of view. This bound is also of theoretical interest, as it allowed us to derive asymptotic error rates in approximation to Blaschke products and to Cauchy integrals on geodesic arcs.

A second method is based on linearized errors and leads to more involved numerical computations that are less easily implemented. Yet, results on a few examples show that this method can compute lower bounds that are fairly sharp with respect to the true optimal error.

I will present both methods and discuss the difficulties in their practical implementation. They will be illustrated by numerical results on a few examples.

This is a joint work with Laurent Baratchart and Tao Qian.

Arnold Neumaier (U. Vienna, Austria and Team AriC, LIP, ENS de Lyon)

Thursday September 10, 2015 at 10h15, room 116, Monod site

Integer least squares and basis reduction

The solution of integer least squares problems is closely related to certain lattice problems, in particular the closest vector problem and basis reduction. Applications in parameter estimation have different characteristics from those in number theory, computer algebra, or cryptography, in that the initial basis is known only to limited accuracy, and the demands on the quality of the solution are therefore only moderate.
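
The role of basis quality is easy to see on a toy 2-D instance (an illustrative sketch, not material from the lecture): with a skewed, unreduced basis, rounding the real least-squares solution coordinate-wise can land far from the integer optimum that exhaustive search finds.

```python
import itertools

def cost(B, x, y):
    """Squared residual || B x - y ||^2 for an integer vector x."""
    r = [sum(B[i][j] * x[j] for j in range(2)) - y[i] for i in range(2)]
    return sum(v * v for v in r)

def naive_round(B, y):
    """Round the real least-squares solution of B z = y coordinate-wise
    (Cramer's rule for the 2x2 system)."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    z0 = (y[0] * B[1][1] - B[0][1] * y[1]) / det
    z1 = (B[0][0] * y[1] - y[0] * B[1][0]) / det
    return [round(z0), round(z1)]

def brute_force(B, y, R=20):
    """Exhaustive search over a box of integer candidates."""
    return min((list(x) for x in itertools.product(range(-R, R + 1), repeat=2)),
               key=lambda x: cost(B, x, y))

# a skewed basis: both vectors point in nearly the same direction
B = [[1.0, 1.0],
     [1.0, 1.01]]
y = [0.5, 0.0]
xr = naive_round(B, y)
xb = brute_force(B, y)
print(cost(B, xr, y), cost(B, xb, y))  # rounding is strictly worse here
```

On a reduced basis for the same lattice, the rounding heuristic would do much better, which is exactly why basis reduction matters for integer least squares.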

In this lecture I will give an introduction to the problem and outline methods and their properties. I will conclude with a list of lattice-related problems that I am interested in seeing solved.

# Archives of the seminar

For seminars in previous years, see former AriC seminars.