As a follow-up to Arith-22, tutorials on computer-arithmetic-related topics will be held on 25 June 2015.
Tutorials will take place in amphitheater B, on the 3rd floor of ENS Lyon. Each tutorial will be two hours long.
Registration is free but required for organizational purposes. Register here (you must first create an account to be able to register).
Solving matrix eigenvalue problems is crucial in many scientific and engineering applications. Robust solvers for problems of moderate size are well developed and widely available. These are sometimes referred to as direct solvers, and they typically compute the entire spectrum of the matrix or matrix pencil in question. Yet in many applications, especially those in which the underlying linear systems are large and sparse, only a portion of the spectrum is of interest. Recently, a number of new approaches to this problem have been developed that share a common technique based on numerical integration. In this tutorial, I aim to share with the audience this core method of constructing an approximate invariant-subspace projector by numerical integration, and its use in producing a number of useful algorithms for the eigenvalue problems mentioned above.
I will introduce the FEAST algorithm for Hermitian eigenproblems, which is based on standard subspace iteration accelerated by this integral method. Its generalization to non-Hermitian problems will also be described. I will also describe a family of integral-based solvers developed by Sakurai and collaborators.
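To make the core idea concrete, here is a minimal sketch (not the production FEAST algorithm) of a contour-integral spectral filter: the projector (1/2πi) ∮ (zI − A)⁻¹ dz over a circle enclosing the wanted eigenvalues is discretized by a trapezoidal rule, applied to a random block, and followed by Rayleigh–Ritz extraction. The matrix, contour, and quadrature order below are illustrative choices, not taken from the tutorial.

```python
import numpy as np

def contour_filter(A, Y, center, radius, n_quad=16):
    """Apply an approximate spectral projector to the block Y by
    discretizing (1/(2*pi*i)) * integral of (zI - A)^{-1} Y dz over a
    circle, using the midpoint trapezoidal rule."""
    n = A.shape[0]
    Q = np.zeros(Y.shape, dtype=complex)
    for k in range(n_quad):
        theta = 2.0 * np.pi * (k + 0.5) / n_quad
        z = center + radius * np.exp(1j * theta)
        # quadrature weight: (1/(2*pi*i)) * dz, with dz = i*(z - center)*dtheta
        w = (z - center) / n_quad
        Q += w * np.linalg.solve(z * np.eye(n) - A, Y)
    return Q

# Toy problem: diagonal matrix; the circle |z| < 1.5 encloses 0.5 and 1.0.
rng = np.random.default_rng(0)
A = np.diag([-3.0, -2.0, 0.5, 1.0, 4.0])
Y = rng.standard_normal((5, 2))
Q = contour_filter(A, Y, center=0.0, radius=1.5).real
# Rayleigh-Ritz extraction on the filtered subspace
U, _ = np.linalg.qr(Q)
ritz = np.linalg.eigvalsh(U.T @ A @ U)
print(ritz)   # close to [0.5, 1.0], the eigenvalues inside the contour
```

With only 16 quadrature nodes the rational filter already damps eigencomponents outside the contour by several orders of magnitude, which is why a handful of subspace iterations suffice in practice.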
If time permits, a number of related topics can be discussed: the approximate projector can also be used in conjunction with the Lanczos algorithm; extra-precision floating-point summation can be applied in the numerical integration process to help maintain numerical orthogonality of the computed eigenvectors; and approximate invariant-subspace projectors can also be used to obtain a quick sketch of the spectral distribution of a matrix.
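One standard way to get the extra-precision summation mentioned above is compensated (Kahan) summation, which carries the rounding error of each addition in a correction term. A small self-contained illustration (the example data are mine, not the tutorial's):

```python
def kahan_sum(xs):
    # Compensated (Kahan) summation: the correction term c recovers the
    # low-order bits lost in each addition, so the accumulated error stays
    # on the order of one ulp instead of growing with the number of terms.
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y   # low-order bits lost when computing s + y
        s = t
    return s

# Summing a million copies of 0.1: the naive left-to-right sum drifts
# away from 1e5, while the compensated sum stays within an ulp or so.
vals = [0.1] * 10**6
naive = sum(vals)
compensated = kahan_sum(vals)
print(abs(naive - 1e5), abs(compensated - 1e5))
```

In the quadrature context, the many weighted resolvent contributions play the role of `vals`, and keeping their sum accurate helps preserve the orthogonality of the computed eigenvectors.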
Mathematical libraries (so-called libms) provide floating-point approximations of elementary functions such as the exponential, the trigonometric functions, and so on. Not only should such functions execute safely, they should also return an accurate result, that is, a result close to the ideal mathematical value. There are several causes of inaccuracy: only a finite number of operations can be evaluated (no evaluation of complete power series); these operations have limited precision (floating-point arithmetic); and accuracy is also lost during argument reduction and result reconstruction.
Verifying an implementation thus means verifying that none of these sources of error can lead to an inaccurate result. Some automated tools can tackle parts of the verification, but human reasoning is still sorely needed. Indeed, floating-point code is usually so contrived that reasoning on the code alone is not sufficient to prove its correctness; one also has to know the mathematical reasons that led to choosing this code in the first place. Such correctness proofs are so error-prone that relying on a pen-and-paper approach might not offer sufficient guarantees. This advocates for doing formal proofs with a proof assistant such as Coq. But while this gives the highest level of confidence, it makes the verification process even longer and more tedious.
This tutorial will present some tools for (partly) automating this verification process, their limitations, and how a bit of user knowledge can overcome them. The tutorial will use Cody and Waite's floating-point approximation of the exponential as a running example. While this implementation is no longer state of the art, it nonetheless exhibits some of the challenges one encounters when formally proving the correctness of a mathematical library.
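The heart of the running example is Cody–Waite argument reduction: ln 2 is split into a "high" part whose trailing significand bits are zero plus a small correction, so that x − k·LN2_HI is computed exactly and the reduction error comes only from the tiny k·LN2_LO term. The sketch below uses the classic fdlibm split of ln 2; the degree-6 Taylor polynomial is an illustrative stand-in for the actual Cody–Waite coefficients.

```python
import math

def exp_reduced(x):
    # Split of ln(2) into high + low parts (constants from fdlibm):
    # LN2_HI has enough trailing zero bits that k * LN2_HI is exact
    # for moderate integers k, so r keeps nearly full accuracy.
    LN2_HI = 6.93147180369123816490e-01
    LN2_LO = 1.90821492927058770002e-10
    k = round(x / math.log(2))
    r = (x - k * LN2_HI) - k * LN2_LO   # r in roughly [-ln2/2, ln2/2]
    # Evaluate exp(r) by a short polynomial (degree-6 Taylor, illustrative;
    # a real libm would use carefully optimized minimax coefficients).
    p = sum(r**i / math.factorial(i) for i in range(7))
    return math.ldexp(p, k)             # reconstruction: exp(x) = 2^k * exp(r)
```

With |r| ≤ (ln 2)/2, the truncated Taylor series keeps the relative error near 1e-7 here; verifying bounds of exactly this kind (reduction exactness, polynomial error, reconstruction) is what the formal proof has to establish.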
Integer factorization is a famous problem with applications in many areas, especially number theory and cryptography. The flagship example is probably the security of the widespread RSA cryptosystem, which relies critically on the supposed hardness of this problem: an RSA public key includes a so-called modulus which is the product of two large prime numbers, and the best method known for computing the corresponding private key amounts to factoring this modulus.
The most efficient generic algorithm currently known for factoring large integers is the Number Field Sieve (NFS), whose running time grows roughly exponentially in the cube root of the bit-size of the integer being factored. It is, however, a highly complex algorithm that depends on various number-theoretic algorithms used as subroutines (lattice basis reduction, sieving techniques, factorization of smaller integers, structured Gaussian elimination, sparse linear algebra over GF(2), etc.).
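The sieving and linear-algebra stages ultimately serve one purpose: to manufacture a congruence of squares x² ≡ y² (mod N) with x ≢ ±y (mod N), from which gcd(x − y, N) yields a nontrivial factor. A toy illustration of that final step, with hand-picked values rather than values produced by NFS:

```python
from math import gcd

# Congruence of squares: if x^2 ≡ y^2 (mod N) with x not congruent to
# +/- y, then N divides (x - y)(x + y) without dividing either factor,
# so gcd(x - y, N) is a nontrivial factor of N.
N = 8051
x, y = 90, 7                       # 90^2 = 8100 = 8051 + 49 = N + 7^2
assert (x * x - y * y) % N == 0    # hence x^2 ≡ y^2 (mod N)
f = gcd(x - y, N)
print(f, N // f)                   # 83 and 97: 8051 = 83 * 97
```

In NFS, the relations collected by sieving are smooth values whose exponent vectors are combined, via linear algebra over GF(2), into exactly such a pair of squares.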
CADO-NFS (http://cado-nfs.gforge.inria.fr/) is a free-software implementation of NFS, mainly developed in the CARAMEL group in Nancy, with contributions from the GRACE (LIX, Saclay) and ECO (LIRMM, Montpellier) teams. It has now reached a sufficient level of maturity that people with no prior knowledge of the inner workings of NFS can use it for medium- to large-scale computations without too much trouble.
The goal of this tutorial will be to guide the audience through the various parts of the CADO-NFS software, while trying to give an intuition of the corresponding steps in the NFS algorithm. This will be illustrated by running a few example factorizations on various platforms, ranging from our laptops to clusters of a few hundred cores.