The advent of parallelism, both in supercomputers and in mainstream end-user computers, increases the need for high-level code optimization and improved compilers. The need for power-efficient computing has motivated the rise of various kinds of accelerators such as GPUs and, more recently, many-core processors and FPGAs in data centers. Writing efficient applications for these heterogeneous systems requires new approaches to software, compilers, and runtimes.
Parallelism based on dataflow is one way to address this issue. A dataflow application is made of several actors that perform computations and communicate with other actors through channels. It can be implemented in several ways: as software running on a parallel general-purpose architecture or on accelerators such as GPUs or many-core processors, or as a hardware implementation, possibly running on reconfigurable chips (FPGAs).
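As a toy illustration of the dataflow model described above (a minimal sketch with hypothetical actor names, not the team's actual tooling), the following Python snippet wires three actors together through FIFO channels; each actor runs independently and interacts with the others only by exchanging tokens:

```python
# Minimal dataflow sketch: three actors (producer -> doubler -> collector)
# connected by FIFO channels. Each actor runs in its own thread and
# communicates only via queue.Queue, as in the actor/channel model.
import queue
import threading

def producer(out_ch):
    # Source actor: emits a stream of integer tokens, then a sentinel.
    for i in range(5):
        out_ch.put(i)
    out_ch.put(None)

def doubler(in_ch, out_ch):
    # Transformer actor: processes each token independently.
    while True:
        token = in_ch.get()
        if token is None:
            out_ch.put(None)  # forward end-of-stream
            break
        out_ch.put(token * 2)

def collector(in_ch, results):
    # Sink actor: accumulates the output stream.
    while True:
        token = in_ch.get()
        if token is None:
            break
        results.append(token)

a, b = queue.Queue(), queue.Queue()
results = []
threads = [
    threading.Thread(target=producer, args=(a,)),
    threading.Thread(target=doubler, args=(a, b)),
    threading.Thread(target=collector, args=(b, results)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [0, 2, 4, 6, 8]
```

The same actor graph could equally be compiled to hardware, with channels mapped to on-chip FIFOs, which is what makes the dataflow model a convenient common abstraction for both software and FPGA targets.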
The overall objective of the CASH team is to take advantage of the characteristics of the target hardware (general-purpose processors, hardware accelerators, or FPGAs) to compile energy-efficient software and hardware. The long-term objective is to provide solutions that let end-user developers make the best use of the opportunities offered by these emerging platforms.
The research directions of the team are:
- Deriving dataflow programs from sequential applications, while structuring the data transfers.
- Compiling and scheduling dataflow programs, combining traditional dataflow tools with specific methods such as the polyhedral model.
- Scalable static analyses for general programs, that are efficient enough to allow global analysis of large-scale programs.
- The application of the preceding activities to High-Level Synthesis (generation of hardware from a high-level language), with additional resource constraints.
- Parallel and scalable simulation of Systems-on-Chip, which, combined with the preceding activity, will result in a complete workflow for circuit design.
For more information, please contact the team leader: Matthieu Moy.