The difference between theory and practice is that in theory there is none.
PhD
I'm working with Sylvain Guilley and Jean-Luc Danger on
formal software methods for the implementation security of cryptosystems.
More specifically, my goals are twofold:
first, I'd like to improve the automation of code and data protection against side-channel attacks
(by reducing as much as possible the information leakage through side channels such as time, temperature, or power consumption)
using compilation techniques to formally prove the code transformations
(ensuring that the protection did not alter the semantics of the cryptographic algorithm);
second, I'd like to develop modeling methods and tools to support the formal study of side-channel and fault injection attacks, and especially the efficiency of countermeasures.

My PhD midterm report,
slides,
and evaluation
(defended on 2013-12-04).
Jury: Karine Heydemann and Jean Goubault-Larrecq.
Publications
 High Precision Fault Attacks on the Instruction Cache of ARMv7-M Architectures
 Preprint.
Hardware and software of secure embedded systems are prone to physical attacks. In particular, fault injection attacks have revealed vulnerabilities in the data and control flow, allowing an attacker to break cryptographic or otherwise secured algorithm implementations. While many research studies have concentrated on successful attacks on the data flow, only a few target the instruction flow. In this paper, we focus on electromagnetic fault injection (EMFI) on the control flow, especially on the instruction cache. We target the very widespread (smartphones, tablets, set-top boxes, health-industry monitors and sensors, etc.) ARMv7-M architecture. We describe a practical EMFI platform and present a methodology providing a high level of control and high replayability over fault injections. Indeed, we observe that a precise fault model occurs in up to 96% of the cases. We then characterize and exhibit this practical fault model on the cache, which has not yet been considered in the literature. We comprehensively describe its effects and show how it can be used to reproduce well-known fault attacks. Finally, we describe how it can benefit attackers in mounting new powerful attacks or simplifying existing ones.
 Countermeasures Against High-Order Fault-Injection Attacks on CRT-RSA
In this paper we study the existing CRT-RSA countermeasures against fault-injection attacks. In an attempt to classify them, we come to a deep understanding of the way they work. We show that the many countermeasures (and their variations) that we study actually share a number of common features, but optimize them in different ways. We also show that there is no conceptual distinction between test-based and infective countermeasures, and how it is possible to transform one into the other (or vice versa). Furthermore, we show that faults on the code (skipping instructions) can be captured by considering only faults on the data. These intermediate results allow us to improve the state of the art in several ways: (a) we fix an existing countermeasure that was known to be broken (namely the one from Shamir); (b) we drastically optimize an existing countermeasure (namely the one from Vigilant), which we reduce to 3 tests instead of 9 in its original version, and prove that it resists not only one fault but also an arbitrary number of randomizing faults; (c) we also show how to upgrade countermeasures to resist any given number of faults: given a correct first-order countermeasure, we present a way to design a provable high-order countermeasure (for a well-defined and reasonable fault model). Finally, we pave the way for a generic approach against fault attacks for any modular arithmetic computation, and thus for the automatic insertion of countermeasures.
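The relationship between test-based and infective countermeasures mentioned in the abstract can be illustrated on a toy CRT-RSA signature (a schematic sketch with made-up parameters, not any specific countermeasure from the paper): the test-based variant checks an invariant and aborts on failure, while the infective variant folds the same invariant into the output, so that a faulty result comes out scrambled instead of triggering a detectable error.

```python
import secrets

# Toy RSA parameters, far too small for real use (illustration only).
p, q = 1009, 1013
N, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse (Python 3.8+)

def crt_sign(m, fault=False):
    """CRT-RSA signature, with an optional simulated fault on the q-half."""
    s_p = pow(m, d % (p - 1), p)
    s_q = pow(m, d % (q - 1), q)
    if fault:
        s_q ^= 1  # simulated fault injection
    i_q = pow(q, -1, p)  # Garner's recombination
    return s_q + q * ((i_q * (s_p - s_q)) % p)

def sign_test_based(m, fault=False):
    s = crt_sign(m, fault)
    if pow(s, e, N) != m:               # invariant check
        raise RuntimeError("fault detected")
    return s

def sign_infective(m, fault=False):
    s = crt_sign(m, fault)
    c = (pow(s, e, N) - m) % N          # 0 iff the signature is correct
    r = secrets.randbelow(N - 2) + 2    # fresh randomness each call
    # c == 0 leaves s untouched; a fault randomizes both CRT halves
    # (with high probability), so no exception reveals the detection.
    return (s * pow(r, c, N)) % N

m = 4242
assert sign_test_based(m) == sign_infective(m) == crt_sign(m)
```

On a correct run both variants return the plain signature; the only difference is how the invariant's failure is expressed, which is the sense in which one form can be mechanically transformed into the other.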
 Formally Proved Security of Assembly Code Against Power Analysis
 PROOFS 2014: Security Proofs for Embedded Systems
In his keynote speech at CHES 2004, Kocher advocated that side-channel attacks were an illustration that formal cryptography was not as secure as it was believed, because some assumptions (e.g., that no auxiliary information is available during the computation) were not modeled. This failure is caused by formal methods' focus on models rather than implementations. In this paper we present formal methods and tools for designing protected code and proving its security against power analysis. These formal methods avoid the discrepancy between the model and the implementation by working on the latter rather than on a high-level model. Indeed, our methods allow us (a) to automatically insert a power-balancing countermeasure directly at the assembly level, and to prove the correctness of the induced code transformation; and (b) to prove that the obtained code is balanced with regard to a reasonable leakage model, and we show how to characterize the hardware to use the resources which maximize the relevance of the model. The tools implementing our methods are then demonstrated in a case study in which we generate a provably protected PRESENT implementation for an 8-bit AVR smartcard.
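The balancing idea can be sketched in a simplified software dual-rail style (a deliberately naive illustration, not the paper's actual assembly-level transformation): each sensitive bit is stored together with its complement, so every manipulated word has the same Hamming weight, and a weight-based leakage model reveals nothing about the value.

```python
def dual_rail(x, bits=8):
    """Encode x so every encoded word has constant Hamming weight.

    Each bit b becomes the pair (b, 1-b) on two adjacent "rails":
    exactly one rail of each pair is set, so any 8-bit value
    encodes to a 16-bit word of Hamming weight exactly 8.
    """
    out = 0
    for i in range(bits):
        b = (x >> i) & 1
        out |= b << (2 * i)            # true rail
        out |= (b ^ 1) << (2 * i + 1)  # complementary rail
    return out

def hamming_weight(x):
    return bin(x).count("1")

# Plain values leak their weight; encoded values all weigh the same.
weights = {hamming_weight(dual_rail(v)) for v in range(256)}
assert weights == {8}
```

In a Hamming-weight leakage model, a trace of such encoded values is constant, which is the balancing property the paper proves formally at the assembly level.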
 Formal Analysis of CRT-RSA Vigilant's Countermeasure Against the BellCoRe Attack
In our paper at PROOFS 2013, we formally studied a few known countermeasures protecting CRT-RSA against the BellCoRe fault injection attack. However, we left Vigilant's countermeasure and its allegedly repaired version by Coron et al. as future work, because the arithmetical framework of our tool was not sufficiently powerful. In this paper we bridge this gap and then use the same methodology to formally study both versions of the countermeasure. We obtain surprising results, which we believe demonstrate the importance of formal analysis in the field of implementation security. Indeed, the original version of Vigilant's countermeasure is actually broken, but not as much as Coron et al. thought it was. As a consequence, the repaired version they proposed can be simplified. It can actually be simplified even further, as two of the nine modular verifications happen to be unnecessary. Fortunately, we could formally prove the simplified repaired version to be resistant to the BellCoRe attack, which was considered a "challenging issue" by the authors of the countermeasure themselves.
 A Formal Proof of Countermeasures Against Fault Injection Attacks on CRT-RSA
 PROOFS 2013: Security Proofs for Embedded Systems
 JCEN: Journal of Cryptographic Engineering
In this article, we describe a methodology that aims at either breaking or proving the security of CRT-RSA implementations against fault injection attacks. In the specific case study of the BellCoRe attack, our work bridges a gap between formal proofs and implementation-level attacks. We apply our results to three implementations of CRT-RSA, namely the unprotected one, that of Shamir, and that of Aumüller et al. Our findings are that many attacks are possible on both the unprotected and the Shamir implementations, while the implementation of Aumüller et al. is resistant to all single-fault attacks. It is also resistant to double-fault attacks if we consider the less powerful threat model of its authors.
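For readers unfamiliar with it, the BellCoRe attack itself fits in a few lines: a CRT-RSA signature combines two half-exponentiations modulo p and q, and if a fault corrupts exactly one half, the gcd of the modulus with the difference between a correct and a faulty signature of the same message reveals a secret prime factor. A minimal sketch with toy parameters:

```python
from math import gcd

# Toy RSA key, far too small for real use (illustration only).
p, q = 1009, 1013
N, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def crt_rsa_sign(m, fault_q=False):
    """CRT-RSA signature; optionally corrupt the mod-q half-exponentiation."""
    s_p = pow(m, d % (p - 1), p)
    s_q = pow(m, d % (q - 1), q)
    if fault_q:
        s_q ^= 1  # simulated single-bit fault on the q-half
    i_q = pow(q, -1, p)  # Garner's recombination
    return s_q + q * ((i_q * (s_p - s_q)) % p)

m = 12345
s = crt_rsa_sign(m)
s_faulty = crt_rsa_sign(m, fault_q=True)
assert pow(s, e, N) == m  # the correct signature verifies

# The fault left the mod-p half intact: s - s_faulty is divisible
# by p but not by q, so the gcd leaks the factorization of N.
assert gcd(N, s - s_faulty) == p
```

This is why a single fault on one half-exponentiation is catastrophic for unprotected CRT-RSA, and why the countermeasures studied in these papers exist.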
 Can a Program ReverseEngineer Itself?
Shape-memory alloys are metal pieces that "remember" their original cold-forged shapes and return to the pre-deformed shape after heating. In this work we construct a software analogue of shape-memory alloys: programs whose code resists obfuscation. We show how to pour arbitrary functions into protective envelopes that allow recovering the functions' exact initial code after obfuscation. We lay out the theoretical foundations of our method and provide a concrete implementation in Scheme.
 Can Code Polymorphism Limit Information Leakage?
In addition to its usual complexity assumptions, cryptography silently assumes that information can be physically protected in a single location. As one can easily imagine, real-life devices are not ideal and information may leak through different physical side channels. It is a known fact that information leakage is a function of both the executed code F and its input x.
In this work we explore the use of polymorphic code as a way of resisting side-channel attacks. We present experimental results with procedural and functional languages. In each case we rewrite the protected code F_{i} before its execution. The outcome is a genealogy of programs F_{0}, F_{1}, … such that for all inputs x and all indices i ≠ j, F_{i}(x) = F_{j}(x) while F_{i} ≠ F_{j}. This is shown to increase resistance to side-channel attacks.
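The genealogy property can be mimicked in a toy setting (a hypothetical illustration, unrelated to the paper's actual rewriting engine): each generated variant masks its intermediate values with a different constant, so the source code differs between variants while the computed function stays identical.

```python
def make_variant(mask):
    """Generate one implementation of x -> (x * 7 + 3) % 256.

    The mask changes the generated source (and hence its execution
    profile) without changing the computed function, since
    (x ^ m) ^ m == x for any mask m.
    """
    src = "\n".join([
        "def f(x):",
        f"    t = x ^ {mask}",   # mask the input...
        f"    t = t ^ {mask}",   # ...then unmask it again
        "    return (t * 7 + 3) % 256",
    ])
    env = {}
    exec(src, env)
    return env["f"], src

f0, src0 = make_variant(0x5A)
f1, src1 = make_variant(0xC3)
assert src0 != src1                              # F_i != F_j as code,
assert all(f0(x) == f1(x) for x in range(256))   # yet F_i(x) == F_j(x)
```

A real polymorphic engine would of course vary far more than one constant (instruction order, register allocation, dummy operations), but the invariant it must preserve is exactly the one asserted here.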
Selected Talks and Posters
 TRUDEVICE 2015, 2015-03-13.
Towards Generic Countermeasures Against Fault Injection Attacks.
 9th Digital Security Day, "side-channel attacks", at GDR SoC-SiP, 2014-12-04.
Power Analysis Immunity by Offsetting Leakage Intensity (slides).
 Formal Methods and Security seminar at Inria and DGA, 2014-03-14.
Formal Proofs of CRT-RSA Countermeasures Against the BellCoRe Attack (slides).
 Poster session, CHES 2013, 2013-08-23.
Formally Proved Security of Assembly Code Against Leakage (poster).
 DigiCosme working group, 2013-07-10.
Formally Proved Security of Automatically Protected Software Against Physical Attacks (slides).
 Short talk, COSADE 2013, 2013-03-08.
Software Countermeasures Against DPA Attacks (abstract, slides).
Undergraduate Research
 Mixing Continuous and Discrete Time in a Synchronous Language
March to August 2012, in the ENS/Inria Parkas team in Paris (France).
 Supervised by Marc Pouzet
 report (PDF)
A hybrid system is a system that exhibits both discrete and continuous behaviors (e.g., a programmed controller and the evolution of its environment). Zélus is a hybrid synchronous programming language, allowing the programming and modeling of hybrid systems. The modeling of hybrid systems works by alternating between continuous phases and discrete steps. During continuous phases the continuous state of the system evolves in time according to differential equations. At discrete steps the discrete state of the system may change instantaneously.
The main result of this report is a static analysis of Zélus code which detects whether there can be an infinite number of discrete steps between two continuous phases. A finite number of such steps is a necessary condition to ensure time progress during a simulation of the system.
The question of the automatic translation of hybrid automata, a formalism used to analyze and model hybrid systems, into Zélus code is also raised. Preliminary results in this direction are shown, and a roadmap for pursuing this translation is proposed.
 Implicit parallelization of code called from an external and already parallelized environment
April to August 2011, in the UvA Computer Systems Architecture group in Amsterdam (Netherlands).
 Supervised by Clemens Grelck
 report (PDF)
SAC is a purely functional array-based programming language with a strong focus on implicit parallelization. Our goal is to enable C programmers to take advantage of SAC's implicit parallelization through its C interface. The difficulty, however, is that the SAC parallelization mechanism is not designed to be called from already parallelized programs. We aim at solving this problem by redesigning the SAC C interface, but with minimal changes to SAC's efficient implicit parallelization mechanism. To this end we extend the SAC compiler to introduce the concept of XT functions and virtual runtime environments. XT functions allow the implicit parallelization to take place in an already parallel context.
 A formal approach to the development of system services in embedded systems
June to August 2010, in the Verimag Synchrone team in Grenoble (France).
 Supervised by Christophe Rippert, Karin Altisen, and Kévin Marquet
 report (PDF)
Our goal is to enable preemptive multitasking in critical embedded systems programmed in the formally defined Lustre programming language, in order to get rid of the constraints imposed by static scheduling, which is still in use to this day. We aim to do so by introducing a tiny system layer between the hardware and the software, itself written as much as possible in Lustre to allow global validation of the system using formal verification methods.