Exascale Computing

Monday, January 30, 2012

Jean-Yves Berthou (EDF) and Jean-Yves L'Excellent (LIP)

Abstract of the talk by J-Y Berthou:

Addressing the Challenge of Exaflopic Computation

Exaflopic systems, composed of millions of heterogeneous cores, will appear at the end of this decade. This technological breakthrough will engage the HPC community in defining new generations of applications and simulation platforms. The challenge is particularly severe for multi-physics, multi-scale simulation platforms, which will have to combine massively parallel software components developed independently of each other. Another difficult issue is dealing with legacy codes, which are constantly evolving and must stay at the forefront of their disciplines. This will also require new compilers, libraries, middleware, programming environments (including debuggers and performance optimizers), and languages, as well as new numerical methods, code architectures, and pre- and post-processing tools (e.g., for mesh generation or visualization).

The goal of the European Exascale Software Initiative (EESI) project is to build a European roadmap, along with a set of recommendations, to address the challenge of performing scientific computing on this new generation of computers. The talk will present the objectives, motivations, and results of EESI.

Abstract of the talk by J-Y L'Excellent:

High-performance sparse direct solvers

Direct methods for the solution of sparse systems of linear equations are extensively used in a wide range of numerical simulation applications. Such methods are based on a matrix factorization of the form A = LU, LL^T, LDL^T, or QR, followed by triangular solves. In comparison to iterative methods, they are known for their numerical robustness. However, they are also characterized by high memory consumption (especially for 3D problems) and a large amount of computation. In this talk, we present some of the challenges that need to be tackled to solve huge problems with direct methods on emerging computing platforms: (i) exploiting more parallelism, (ii) optimizing memory usage, (iii) maintaining numerical stability without significantly sacrificing performance. We present this in the context of the MUMPS solver (see http://graal.ens-lyon.fr/MUMPS or http://mumps.enseeiht.fr), a parallel sparse direct solver developed in Toulouse, Lyon-Grenoble, and Bordeaux.
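To make the factor-then-solve workflow mentioned above concrete, here is a minimal sketch using SciPy's sequential sparse LU factorization (scipy.sparse.linalg.splu) on a toy 1D Laplacian matrix. It is an illustration of the general technique only, not the MUMPS interface, and the matrix and right-hand side are invented for the example.

    # Minimal sketch of a sparse direct solve: factor A, then apply
    # triangular solves. SciPy's SuperLU wrapper stands in for a parallel
    # solver such as MUMPS; the 1D Laplacian below is a toy test matrix.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 1000
    # Tridiagonal (1D Laplacian) matrix in CSC format, as required by splu.
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    lu = spla.splu(A)   # factorization phase (analysis + numerical factorization)
    x = lu.solve(b)     # solve phase: forward and backward triangular solves

    print("residual norm:", np.linalg.norm(A @ x - b))

Once the factorization is computed, additional right-hand sides can be solved cheaply by reusing the factors, which is one practical advantage of direct methods over restarting an iterative solver.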