Computer Science: program analysis, compilation.
The last decade has seen the rise of static analyses that are no longer reserved for critical embedded systems (airplanes, subways, rockets, etc.). They are now more widely deployed, as evidenced by the recent involvement of companies like Google or Facebook in the domain.
Nevertheless, proving the absence of bugs in a given piece of software (a problem known to be intrinsically hard since Turing and Cook) is not the only challenge in software development. Indeed, the ever-growing complexity of software increases the need for more trustworthy optimisations.
Solving these two problems (reliability, optimisation) requires the development of analyses that are safe (no false negative answers) and efficient (with respect to memory and time), yet precise enough (few false positive answers). Finding such compromises is at the core of my work, whether in formal verification or in the more specific domain of analyses inside compilers.
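The safety/precision compromise can be illustrated with a minimal sketch (a hypothetical example, not taken from the manuscript) using the classic interval abstract domain: the join of two intervals over-approximates the union of the concrete value sets, which is safe (no concrete value is missed) but may be imprecise (it admits spurious values, i.e. false positives).

```python
# Interval abstract domain: a set of integers is abstracted by a pair (lo, hi).
# Hypothetical illustration of the safety/precision trade-off.

def join(a, b):
    """Least upper bound of two intervals: over-approximates set union."""
    (alo, ahi), (blo, bhi) = a, b
    return (min(alo, blo), max(ahi, bhi))

def contains(itv, x):
    lo, hi = itv
    return lo <= x <= hi

# Concrete sets {1, 2} and {10, 11}, abstracted as intervals.
a = (1, 2)
b = (10, 11)
j = join(a, b)  # (1, 11)

# Safety: every concrete value is covered, so no false negatives.
assert all(contains(j, x) for x in (1, 2, 10, 11))

# Imprecision: the join also admits 5, which is a false positive.
assert contains(j, 5)
```

A more precise domain (e.g. a disjunction of intervals) would exclude 5, at a higher memory and time cost, which is precisely the kind of compromise at stake.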
The work detailed in this manuscript was carried out between 2009 and 2017. During this period, I contributed to static analyses in the two related domains of software verification and compilation. The proposed analyses were developed within the abstract interpretation framework, for numerical domains or, more recently, memory domains. They were also validated experimentally, according to precision and cost criteria, and, whenever appropriate, according to their impact on client analyses (termination, code optimisation).