# Next session

David J. Wu (University of Virginia)

July 25, 2019 at 13h30-15h00, M.315 (3rd floor, Monod)

New Constructions of Reusable Designated-Verifier NIZKs

Non-interactive zero-knowledge arguments (NIZKs) are important cryptographic primitives that have received extensive study. Today, we have many types of constructions relying on different abstractions and on a broad range of assumptions, including factoring, pairing-based assumptions, and lattice-based assumptions. In this work, we study whether there are generic frameworks for building NIZKs.

We specifically consider the setting of designated-verifier NIZKs (DV-NIZKs), where a trusted setup generates a common reference string together with a secret key for the verifier. We seek reusable schemes where the verifier can reuse the secret key to verify many different proofs and soundness should hold against provers who can learn whether various proofs are accepted or rejected. In this talk, I will describe a generic approach for constructing reusable designated-verifier NIZKs based on Sigma protocols and any single-key attribute-based encryption (ABE) scheme satisfying a weak function-hiding property. Our framework can be instantiated from most algebraic assumptions known to imply public-key encryption, including the CDH, LWE, and LPN assumptions. Notably, our work gives the first reusable DV-NIZK construction from LPN.
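As background for the Sigma-protocol ingredient mentioned above, here is a toy Schnorr identification protocol, the textbook example of a Sigma protocol (parameters are purely illustrative and insecure; this is not the paper's construction):

```python
import random

# Toy group: g = 2 has prime order q = 11 in Z_23^* (illustrative, insecure sizes).
p, q, g = 23, 11, 2

def schnorr_round(x, seed=0):
    """One run of Schnorr's Sigma protocol proving knowledge of x with y = g^x.

    Returns the verifier's decision; completeness means it is always True."""
    rng = random.Random(seed)
    y = pow(g, x, p)              # public key
    r = rng.randrange(q)          # prover's randomness
    a = pow(g, r, p)              # commitment
    e = rng.randrange(q)          # verifier's challenge
    z = (r + e * x) % q           # response
    return pow(g, z, p) == (a * pow(y, e, p)) % p   # check g^z = a * y^e
```

Soundness and zero knowledge follow from the usual special-soundness and simulatability arguments for Schnorr; the paper's framework compiles such protocols with an ABE scheme.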

Finally, I describe an extension of our framework to obtain the stronger notion of malicious-designated-verifier NIZKs where the only trusted setup consists of a common random string, and there is a separate untrusted setup where the verifier chooses its own public/secret key needed to generate/verify proofs, respectively. Our generic framework yields new constructions of malicious DV-NIZKs from the same assumptions as above.

Joint work with Alex Lombardi, Willy Quach, Ron D. Rothblum, and Daniel Wichs

# Past sessions

Ron Steinfeld (Monash University)

June 27, 2019 at 13h30-15h00, M.315 (3rd floor, Monod)

Lattice-Based Zero-Knowledge Proofs: New Techniques for Shorter and Faster Constructions and Applications

We devise new techniques for the design and analysis of efficient lattice-based zero-knowledge proofs (ZKPs). First, we introduce one-shot proof techniques for non-linear polynomial relations of degree k ≥ 2, where the protocol achieves a negligible soundness error in a single execution, and thus performs significantly better in both computation and communication compared to prior protocols requiring multiple repetitions. Such proofs with degree k ≥ 2 have been crucial ingredients for important privacy-preserving protocols in the discrete logarithm setting. Moreover, we introduce two speedup techniques for lattice-based ZKPs: a CRT-packing technique supporting “inter-slot” operations, and “NTT-friendly” tools that permit the use of fully-splitting rings. The former technique comes at almost no cost to the proof length, and the latter one barely increases it, which can be compensated for by tweaking the rejection sampling parameters while still having faster computation overall. To illustrate the utility of our techniques, we show how to use them to build efficient relaxed proofs for important relations, namely proof of commitment to bits, one-out-of-many proof, range proof and set membership proof. Despite their relaxed nature, we further show how our proof systems can be used as building blocks for advanced cryptographic tools such as ring signatures. Our ring signature achieves a dramatic improvement in length over all the existing proposals from lattices at the same security level. The computational evaluation also shows that our construction is highly likely to outperform all the relevant works in running times.

Amin Sakzad (Monash University)

June 20, 2019 at 13h30-15h00, M.315 (3rd floor, Monod)

Public-Key Puncturable Encryption: Modular and Compact Constructions

We revisit the method of designing public-key puncturable encryption schemes and present a generic conversion by leveraging the techniques of distributed key-distribution and revocable encryption. In particular, we first introduce a refined version of identity-based revocable encryption, named key-homomorphic identity-based revocable key encapsulation mechanism with extended correctness. Then, we propose a generic construction of puncturable key encapsulation mechanism from the former by merging the idea of distributed key-distribution. Compared to the state-of-the-art, our generic construction does not suffer from non-negligible correctness error, and results in a variety of efficient schemes with distinct features. More precisely, we obtain the first scheme with very compact ciphertexts, and the first scheme with support for both unbounded size of tags per ciphertext and unbounded punctures as well as constant-time puncture operation. Moreover, we get a comparable scheme proven secure under the standard DBDH assumption, which enjoys both faster encryption and decryption than previous works based on the same assumption, especially when the number of tags associated with the ciphertext is large.

Changmin Lee (ENS Lyon)

June 13, 2019 at 13h30-15h00, M.315 (3rd floor, Monod)

Module LLL

The LLL algorithm takes as input a basis of a Euclidean lattice and, within a polynomial number of operations, outputs another basis of the same lattice consisting of rather short vectors. We provide a generalization to R-modules contained in K^n for arbitrary number fields K and dimension n, with R denoting the ring of integers of K. Concretely, we introduce an algorithm that efficiently finds short vectors in rank-n modules when given access to an oracle that finds short vectors in rank-2 modules, and an algorithm that efficiently finds short vectors in rank-2 modules given access to a Closest Vector Problem oracle for a lattice that depends only on K. The second algorithm relies on quantum computations and its analysis is heuristic.
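For intuition, the rank-2 base case over Z that this module setting generalizes is classical Lagrange-Gauss reduction; a minimal sketch (not the paper's algorithm, which works over rings of integers of number fields):

```python
def lagrange_gauss(u, v):
    """Lagrange-Gauss reduction of a rank-2 integer lattice basis (u, v).

    Returns a reduced basis whose first vector is a shortest nonzero
    lattice vector. Assumes u, v are linearly independent 2D integer vectors."""
    norm2 = lambda w: w[0] * w[0] + w[1] * w[1]
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    if norm2(u) < norm2(v):
        u, v = v, u                      # keep u the longer vector
    while True:
        m = round(dot(u, v) / norm2(v))  # nearest integer multiple of v
        r = (u[0] - m * v[0], u[1] - m * v[1])
        if norm2(r) >= norm2(v):         # no further shortening possible
            return v, r                  # v is a shortest lattice vector
        u, v = v, r
```

The rank-2 oracle in the paper plays the role of this loop, while the CVP oracle handles the number-field arithmetic.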

Kévin Carrier (Pierre & Marie Curie University)

June 6, 2019 at 13h30-15h00, M.315 (3rd floor, Monod)

Near-collision problem and Generic Decoding

Code-based cryptography relies on the difficulty of decoding a linear code. This problem has been thoroughly studied for more than 60 years and the best algorithms for solving it are exponential in the code length. The issue in this area is to lower the corresponding exponent. The latest improvements all rely on using algorithms for finding pairs of points that are close to each other in a list. This is the so-called near-collision problem. The best algorithms for finding almost all pairs at some distance $d$ have complexity of order $L^{\rho+o(1)}$ where $L$ is the size of the list and $\rho$ depends on $d$ and $L$. We give a unified and code-based treatment of this problem by showing that a common algorithm based on appropriate rate-distortion codes yields the best known exponent $\rho$. By a careful study of this algorithm we show that, in the case of a list of random elements, this complexity is of order the number of solutions to this problem when $d$ gets large enough. By using this algorithm in the Both and May approach for decoding binary or non-binary linear codes, we improve the best known decoding exponent in both settings. The complexity of the near-collision finding algorithm we are able to analyze has a subexponential overhead $L^{o(1)}$ in front of the $L^\rho$ term when $L$ is of exponential size in the code length (this is typically the relevant size for $L$ in cryptographic applications). This induces a subexponential overhead in front of the exponential term for the associated decoding algorithm. This was also the case for the Both-May-Ozerov approach for decoding binary linear codes. This significantly limits the usefulness of the improvement obtained on the decoding exponent by this kind of near-collision finding technique for cryptographic code sizes. However, this subexponential overhead is really due to the codes we use to be able to analyze the algorithm, i.e., as in the lattice setting, they are product codes.
Experimental evidence shows that polar codes might be used in this setting and seem to significantly decrease this subexponential overhead. Moreover, we show that this rate-distortion technique for solving the near-collision problem is not limited to finding close pairs: a slight modification of the technique is also able to tackle the problem of finding far-away pairs. We show how this can be used to improve upon decoding algorithms aiming at finding the furthest (or a far-away) codeword. Finally, our near-collision search method is not inherent to the Hamming space model. It can also be used in other metric spaces such as the Euclidean sphere. This model is particularly relevant in the lattice setting. For instance, our method can also be applied to the sieving techniques used for solving the Shortest Vector Problem (SVP) or the Learning With Errors (LWE) problem, which are major problems in lattice-based cryptography. But in this case, it is essentially equivalent to the technique used there.
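As a reference point for the near-collision problem discussed above, the naive quadratic-time search can be sketched as follows (a toy baseline only; the talk's rate-distortion algorithms are precisely about beating this $L^2$ behaviour):

```python
from itertools import combinations

def near_collisions(words, d):
    """Return all index pairs (i, j) with Hamming distance(words[i], words[j]) <= d.

    Naive O(L^2) baseline over a list of integers viewed as bit strings."""
    return [(i, j) for (i, x), (j, y) in combinations(enumerate(words), 2)
            if bin(x ^ y).count("1") <= d]
```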

Thomas Debris (Pierre & Marie Curie University)

May 16, 2019 at 13h30-15h00, M.315 (3rd floor, Monod)

Wave: a New Family of Trapdoor One-Way PSF Based on Codes

We present here a new family of trapdoor one-way Preimage Sampleable Functions (PSF) based on codes, the Wave-PSF family. The trapdoor function is one-way under two computational assumptions: the hardness of generic decoding for high weights and the indistinguishability of generalized $(U, U+V)$-codes. Our proof follows the GPV strategy [GPV08]. By including rejection sampling, we ensure the proper distribution for the trapdoor inverse output. The domain sampling property of our family is ensured by using and proving a variant of the left-over hash lemma. We instantiate the new Wave-PSF family with ternary generalized $(U, U+V)$-codes to design a “hash-and-sign” signature scheme which achieves *existential unforgeability under adaptive chosen message attacks* (EUF-CMA) in the random oracle model. For 128 bits of classical security, signature sizes are in the order of 13 thousand bits, the public key size in the order of 3 megabytes, and the rejection rate is limited to one rejection every 10 to 12 signatures.

Hervé Talé Kalachi (ENS Lyon)

May 9, 2019 at 13h30-15h00, 116 (1st floor, Monod)

On the security of some variants of the GPT Cryptosystem

The first use of Gabidulin codes in cryptography was proposed by Gabidulin, Paramonov and Tretjakov in 1991 and is now called the GPT cryptosystem. This scheme can be seen as an analogue of the McEliece public-key cryptosystem based on the class of Gabidulin codes. But the GPT cryptosystem has been the subject of several attacks. Several works then proposed to resist the usual attacks either by taking a special distortion matrix, or by taking a column scrambler matrix defined over an extension field.

In this presentation, we show that all these variants of the GPT cryptosystem (proposed between the years 2008 and 2014) that aim to resist Overbeck’s structural attack are actually still vulnerable.

This is a joint work with Ayoub Otmani (LITIS, University of Rouen-Normandie) and Selestin Ndjeya (Department of Mathematics, Higher Teacher Training College of Yaounde).

Ida Tucker (ENS Lyon)

April 18, 2019 at 13h30-15h00, 116 (1st floor, Monod)

Two-Party ECDSA from Hash Proof Systems and Efficient Instantiations

ECDSA is a widely adopted digital signature standard. Unfortunately, efficient distributed variants of this primitive are notoriously hard to achieve and known solutions often require expensive zero-knowledge proofs to deal with malicious adversaries. For the two-party case, Lindell (CRYPTO 2017) recently managed to get an efficient solution which, to achieve simulation-based security, relies on an interactive, non-standard assumption on Paillier’s cryptosystem.

In this talk I will show a generalization of Lindell’s solution using hash proof systems. The main advantage of our generic method is that it results in a simulation-based security proof without resorting to non-standard interactive assumptions.

Time permitting, I will also describe how to instantiate our framework using class groups of imaginary quadratic fields. Our implementations show that the practical impact of dropping such interactive assumptions is minimal. Indeed, while for 128-bit security our scheme is marginally slower than Lindell’s, for 256-bit security it turns out to be better both in key generation and signing time. Moreover, in terms of communication cost, our implementation significantly reduces both the number of rounds and the transmitted bits without exception.

Brice Minaud (ENS Paris)

April 11, 2019 at 13h30-15h00, 116 (1st floor, Monod)

Searchable Encryption, Leakage-Abuse Attacks, and Statistical Learning Theory

In this talk, I will review some recent advances in Searchable Encryption. I will focus especially on the reconstruction of encrypted databases using the leakage of searchable encryption schemes. We will see that this problem is closely related to statistical learning theory. Using this new viewpoint, I will present attacks that approximately reconstruct the contents of an entire database using only the access pattern leakage of range queries, a minimal type of leakage present in all practical constructions today.

Emmanouil Doulgerakis (TU Eindhoven)

April 4, 2019 at 13h30-15h00, 115 (1st floor, Monod)

Voronoi cells and the closest vector problem

In this talk we will first give a brief introduction to lattices and mention the two main problems we want to solve concerning lattices: the shortest vector problem (SVP) and the closest vector problem (CVP). Then we will expand on Voronoi cells and how we can use them to solve the closest vector problem. We will explain what the Voronoi cell of a lattice is, how we can “find” it, and how we can use it to solve CVP. Afterwards I will move to more technical details about our recent result [DLW19] in the area. Finally, we will mention future and related work in the framework of using Voronoi cells.
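For a concrete reference point on CVP itself, a rank-2 instance can be solved by exhaustive search over small coefficients (a toy baseline; the Voronoi-cell approach of the talk is about doing far better than blind enumeration):

```python
import itertools

def cvp_bruteforce(b1, b2, t, r=5):
    """Closest point to target t in the lattice spanned by b1 and b2,
    searching integer coefficients in [-r, r]. Toy exhaustive baseline."""
    best, best_d = None, float("inf")
    for a, b in itertools.product(range(-r, r + 1), repeat=2):
        v = (a * b1[0] + b * b2[0], a * b1[1] + b * b2[1])
        d = (v[0] - t[0]) ** 2 + (v[1] - t[1]) ** 2
        if d < best_d:
            best, best_d = v, d
    return best
```

The Voronoi cell of the lattice is exactly the set of targets t for which the answer is the origin; this is what makes it the natural object for CVP algorithms.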

Amit Deo (Royal Holloway University of London)

March 28, 2019 at 13h30-15h00, 116 (1st floor, Monod)

Cold boot attacks on R/MLWE schemes under the NTT

In this talk we will describe a cold boot attack on ring and module LWE (R/MLWE) schemes that use the number theoretic transform (NTT) when storing secret keys in RAM. Cold boot attacks have been studied in detail for widely used primitives such as AES and RSA, but relatively little attention has been given to post-quantum primitives. Motivated by the recent post-quantum standardisation efforts of NIST, we investigate cold boot attacks on practical implementations of R/MLWE schemes.

Informally, a cold boot attack entails an attacker with physical access to a victim’s machine obtaining a noisy dump of the memory. Assuming that there is secret key material stored in memory, the attacker acquires a noisy version of this secret key material where some bits have been flipped (in unknown positions). In this talk, we will be presenting one way in which an attacker can correct these bit flips for R/MLWE NTT keys using a divide and conquer approach along with lattice reduction. We will also discuss the attack complexities (obtained by running experiments using FPLLL) for R/MLWE parameters used in the New Hope and Kyber KEMs.
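The memory-corruption model described above can be simulated as an asymmetric binary channel acting on the stored key bits (a schematic sketch; the flip probabilities are hypothetical, not those of the experiments):

```python
import random

def cold_boot_noise(key_bits, p01, p10, seed=0):
    """Simulate a cold boot memory dump: each 0 flips to 1 with probability
    p01 and each 1 flips to 0 with probability p10, independently per bit."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < (p10 if b else p01)) for b in key_bits]
```

In practice p10 is typically much larger than p01 (bits decay toward ground state), which is what the key-recovery procedure must account for.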

Jan-Pieter D’Anvers (KU Leuven)

March 7, 2019 at 13h30-15h00, 116 (1st floor, Monod)

The impact of decryption failures on the security of LWE/LWR based schemes

Numerous post-quantum encryption schemes in the NIST Post-Quantum Process have a small probability of decryption failures, in which the decryptor is not able to recover the message from the ciphertext. This is for example the case for schemes based on NTRU, Learning With Errors (LWE) and Mersenne primes. Allowing these decryption failures benefits the parameter setting of these schemes, leading to smaller communication overhead and lower complexity. However, these decryption failures might leak information about the secret to an adversary. Thus, a good understanding of the impact of decryption failure attacks is of utmost importance when designing these post-quantum schemes.

In this talk we will discuss a method to increase the failure rate of LWE based schemes called failure boosting and a technique to estimate the secret from these decryption failures. We will also explore techniques to reduce the failure rate (such as error correcting codes), and possible complications of these techniques. We will have a strong focus on open problems and possible improvements to the state-of-the-art.
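The failure event itself can be caricatured with a toy noise model, where decryption fails when the accumulated noise exceeds the decoding threshold q/4 (all parameters are illustrative and unrelated to any concrete NIST candidate):

```python
import random

def toy_failure_rate(q=3329, sigma=200, trials=100_000, seed=1):
    """Estimate Pr[|e| >= q/4] for Gaussian noise e: a toy stand-in for the
    decryption-failure probability of an LWE-style scheme."""
    rng = random.Random(seed)
    fails = sum(abs(rng.gauss(0, sigma)) >= q / 4 for _ in range(trials))
    return fails / trials
```

In this picture, failure boosting corresponds to an attacker searching for ciphertexts whose effective noise width is larger than average, so that failures occur often enough to be observed.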

Léo Colisson (ENS Paris)

March 7, 2019 at 15h30-16h30, M7 315 (3rd floor, Monod)

Classical simulation of quantum channel, and applications to blind quantum computing

In the (near?) future, it is likely that a few quantum computers will be created and will be remotely available to users so that they can run any computation on them. However, lots of quantum protocols (like blind quantum computing, two-party computation…) require a quantum channel between parties… and creating a world-wide quantum network is very hard to do (and expensive). In this talk, I will discuss our method to classically simulate a quantum channel in order to remove this requirement, and we will see some possible applications of this primitive (among them classical-client (verifiable) blind quantum computing and classical-client quantum two-party computation…). The talk will be organized mostly into two independent sections: the first section will be about the actual protocol/proof/applications we propose, and the second one will describe how to build the special post-quantum cryptographic primitive required in our protocol, whose security is based on a reduction to the Learning With Errors (LWE) problem.

Pre-print: On the possibility of classical client blind quantum computing, https://arxiv.org/abs/1802.08759

Romain Gay (ENS Paris)

March 14, 2019 at 13h30-15h00, 116 (1st floor, Monod)

New Techniques for Efficient Trapdoor Functions and Applications

We develop techniques for constructing trapdoor functions (TDFs) with short image size and advanced security properties. Our approach builds on the recent framework of Garg and Hajiabadi [CRYPTO 2018]. As applications of our techniques, we obtain:

– The first construction of deterministic-encryption schemes for block-source inputs (both for the CPA and CCA cases) based on the Computational Diffie-Hellman (CDH) assumption. Moreover, by applying our efficiency-enhancing techniques, we obtain CDH-based schemes with ciphertext size linear in plaintext size.

– The first construction of lossy TDFs based on the Decisional Diffie-Hellman (DDH) assumption with image size linear in input size, while retaining the lossiness rate of [Peikert-Waters STOC 2008].

Prior to our work, all constructions of deterministic encryption based even on the stronger DDH assumption incurred a quadratic gap between the ciphertext and plaintext sizes. Moreover, all DDH-based constructions of lossy TDFs had image size quadratic in the input size.

At a high level, we break the previous quadratic barriers by introducing a novel technique for encoding input bits via hardcore output bits with the use of erasure-resilient codes. All previous schemes used group elements for encoding input bits, resulting in quadratic expansions.

Geoffroy Couteau (KIT)

February 21, 2019 at 13h30-15h00, 116 (1st floor, Monod)

Designated-verifier pseudorandom generators, and their applications

We provide a generic construction of non-interactive zero-knowledge (NIZK) schemes. Our construction is a refinement of Dwork and Naor’s (FOCS 2000) implementation of the hidden-bits model using verifiable pseudorandom generators (VPRGs). Our refinement simplifies their construction and relaxes the necessary assumptions considerably. As a result of this conceptual improvement, we obtain interesting new instantiations:

– A designated-verifier NIZK (with unbounded soundness) based on the computational Diffie-Hellman (CDH) problem. If a pairing is available, this NIZK becomes publicly verifiable. This constitutes the first fully secure CDH-based designated-verifier NIZK (and more generally, the first fully secure designated-verifier NIZK from a non-generic assumption which does not already imply publicly verifiable NIZKs), and it answers an open problem recently raised by Kim and Wu (CRYPTO 2018).

– A NIZK based on the learning with errors (LWE) assumption, and assuming a non-interactive witness-indistinguishable (NIWI) proof system for bounded distance decoding (BDD). This simplifies and improves upon a recent NIZK from LWE that assumes a NIZK for BDD (Rothblum et al., PKC 2019).

Alonso Gonzalez (AriC)

February 7, 2019 at 13h30-15h00, 117 (1st floor, Monod)

Shorter Ring Signatures from Standard Assumptions

Ring signatures, introduced by Rivest, Shamir and Tauman (ASIACRYPT 2001), allow one to sign a message on behalf of a set of users while guaranteeing authenticity and anonymity. Groth and Kohlweiss (EUROCRYPT 2015) and Libert et al. (EUROCRYPT 2016) constructed schemes with signatures of size logarithmic in the number of users. An even shorter ring signature, of size independent of the number of users, was recently proposed by Malavolta and Schröder (ASIACRYPT 2017). However, all these short signatures are obtained relying on strong and controversial assumptions. Namely, the former schemes are both proven secure in the random oracle model while the latter requires non-falsifiable assumptions.

The most efficient construction under mild assumptions remains the construction of Chandran et al. (ICALP 2007) with a signature of size Θ(√n), where n is the number of users, and security based on the Diffie-Hellman assumption in bilinear groups (the SXDH assumption in asymmetric bilinear groups).

In this work we construct an asymptotically shorter ring signature from the hardness of the Diffie-Hellman assumption in bilinear groups. Each signature comprises Θ(n^{1/3}) group elements, signing a message requires computing Θ(n^{1/3}) exponentiations, and verifying a signature requires Θ(n^{2/3}) pairing operations. To the best of our knowledge, this is the first ring signature based on bilinear groups with signatures of size o(√n) and sublinear verification complexity.

Bruno Salvy (AriC)

January 31, 2019 at 13h30-15h00, 116 (1st floor, Monod)

Gröbner bases and polynomial elimination

Gröbner bases are a classical tool for the analysis of solutions of polynomial systems.

There are several ways to look at them, depending on whether one is interested in their expressivity, their worst-case complexity, or their actual behaviour on regular problems. We will give an elementary overview of these questions, with more or less detail depending on the needs of the audience.
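As a small concrete example of the elimination property of Gröbner bases (computed here with SymPy, assumed available; the polynomial system and ordering are hypothetical illustrations):

```python
from sympy import groebner, symbols

x, y = symbols('x y')
# Lexicographic order with x > y: the Groebner basis then exposes the
# elimination ideal, i.e. it contains a polynomial in y alone.
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
univariate = [e for e in G.exprs if e.free_symbols == {y}]
```

Solving the univariate polynomial for y and back-substituting recovers all solutions of the original system, which is the elimination workflow the talk surveys.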

Chen Qian (ENS de Lyon & U. Rennes)

January 24, 2019 at 13h30-15h00, 116 (1st floor, Monod)

Lossy Algebraic Filters With Short Tags

Lossy algebraic filters (LAFs) are function families where each function is parametrized by a tag, which determines whether the function is injective or lossy. While initially introduced by Hofheinz (Eurocrypt 2013) as a technical tool to build encryption schemes with key-dependent message chosen-ciphertext (KDM-CCA) security, they also find applications in the design of robustly reusable fuzzy extractors. So far, the only known LAF family requires tags comprised of Θ(n^2) group elements for functions with input space Z_p^n, where p is the group order. In this paper, we describe a new LAF family where the tag size is only linear in n and prove it secure under simple assumptions in asymmetric bilinear groups. Our construction can be used as a drop-in replacement in all applications of the initial LAF system. In particular, it can shorten the ciphertexts of Hofheinz’s KDM-CCA-secure public-key encryption scheme by 19 group elements. It also allows substantial space improvements in a recent fuzzy extractor proposed by Wen and Liu (Asiacrypt 2018). As a second contribution, we show how to modify our scheme so as to prove it tightly secure, meaning that security reductions are not affected by a concrete security loss proportional to the number of adversarial queries.

Cong Ling (Imperial College London)

January 17, 2019 at 13h30-15h00, AmphiB (3rd floor, Monod)

Algebraic Lattice Codes for Fading Wireless Channels: From Fermat to Shannon

Algebraic number theory has emerged as a new foundation of modern coding theory, due to its connection with Euclidean lattices. In wireless communications, it is the key mathematical tool to construct powerful error-correction codes over mobile fading channels. This talk presents an overview of the constructions of codes from commutative and non-commutative rings for fading and MIMO (multi-input multi-output) channels, respectively, and introduces a novel framework to achieve the Shannon capacity of these channels. If time permits, a glimpse at the applications to error correction in lattice-based cryptography will be given.

Thomas Prest (PQ Shield)

December 13, 2018 at 13h30-15h00, B2 (4th floor, Monod)

TBA

TBA

Benjamin Wesolowski (EPFL)

December 6, 2018 at 13:30-15:00, AmphiB (3rd floor, Monod)

Horizontal isogeny graphs

An isogeny graph is a graph whose vertices represent abelian varieties, and edges represent isogenies between them. They are an important tool to study the discrete logarithm problem on these abelian varieties, and more recently they have been used to construct promising post-quantum public key cryptosystems. We study the “horizontal” structure of these graphs for ordinary abelian varieties of arbitrary dimension. We derive new bounds to obtain connected or expander graphs where all isogenies are computable, and deduce some consequences regarding the difficulty of the discrete logarithm problem.

Changmin Lee (AriC)

November 29, 2018 at 12:30-14:00, M7 315 (3rd floor, Monod)

Statistical Zeroizing Attack: Cryptanalysis of Candidates of BP Obfuscation over GGH15 Multilinear Map

We introduce a new type of cryptanalytic algorithm against obfuscations based on branching programs over the GGH15 multilinear map.

Our strategy is to reduce the security problem of indistinguishability obfuscation to the problem of distinguishing two distributions given polynomially many samples. More precisely, we perform the obfuscating process ourselves with randomly chosen secret values to obtain independent, identically distributed samples according to the distribution of evaluations of the obfuscations. We then use the variance of the samples as a new distinguisher of two functionally equivalent obfuscated programs.
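The variance-based distinguishing step can be caricatured as a two-sample variance comparison (a schematic sketch of the idea, not the paper's statistical analysis or its distributions):

```python
import random
import statistics

def variance_distinguisher(samples_a, samples_b):
    """Guess that samples_a came from the wider distribution iff its
    empirical variance exceeds that of samples_b."""
    return statistics.pvariance(samples_a) > statistics.pvariance(samples_b)

# Two functionally equivalent programs whose evaluation distributions
# differ only in spread (hypothetical stand-ins for obfuscation outputs).
rng = random.Random(0)
wide = [rng.gauss(0, 10) for _ in range(1000)]
narrow = [rng.gauss(0, 1) for _ in range(1000)]
```

With enough samples the empirical variances concentrate, so the comparison succeeds with overwhelming probability whenever the two distributions genuinely differ in variance.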

Alain Passelègue (AriC)

November 22, 2018 at 13:30-15:00, Amphi A (3rd floor, Monod)

New candidate PRFs and their applications

In this talk, I will present new and simple candidate PRFs introduced in a recent work. In this work, we depart from the traditional approaches for building PRFs used in provable security or in applied cryptography by exploring a new space of plausible PRF candidates. Our guiding principle is to maximize simplicity while optimizing complexity measures that are relevant to advanced cryptographic applications. Our primary focus is on weak PRFs computable by very simple circuits (depth-2 ACC circuits).

The advantage of our approach is twofold. On the theoretical side, the simplicity of our candidates enables us to draw many natural connections between their hardness and questions in complexity theory or learning theory. On the applied side, the piecewise-linear structure of our candidates lends itself nicely to applications in secure multiparty computation (MPC). In particular, we construct protocols for distributed PRF evaluation that achieve better round complexity and/or communication complexity compared to protocols obtained by combining standard MPC protocols with practical PRFs (including MPC-friendly ones).

Finally, we introduce a new primitive we call an encoded-input PRF, which can be viewed as an interpolation between weak PRFs and standard (strong) PRFs. As we demonstrate, an encoded-input PRF can often be used as a drop-in replacement for a strong PRF, combining the efficiency benefits of weak PRFs and the security benefits of strong PRFs. We also give a candidate EI-PRF based on our main weak PRF candidate.

Joint work with Dan Boneh, Yuval Ishai, Amit Sahai, and David J. Wu, published at TCC 2018

Markku-Juhani O. Saarinen (PQShield, Oxford UK)

November 15, 2018 at 12:30-14:00, seminar room M7 (3rd floor, Monod)

On Contemporary Cryptographic Engineering: Cryptographic Design, Engineering Challenges, Implementation Attacks

I will start with a general introduction to cryptographic design and engineering, highlighting its two biggest current challenges: lightweight cryptography and post-quantum cryptography. In the second part of the talk I will focus on a specific intersection of the two: the implementation of the Round5 post-quantum encryption algorithm on lightweight IoT targets.

The NIST Post-Quantum Cryptography (PQC) Project is an ongoing effort to identify and standardize new public-key cryptographic algorithms based on hard problems that are not easily solvable by quantum computers. However, post-quantum algorithms are as vulnerable to implementation attacks as classical asymmetric cryptographic standards such as those based on the RSA and elliptic-curve discrete logarithm problems. I will describe some (unpublished) implementation attacks against NIST PQC candidates and give pointers to security analysis tools and implementation best practices.

Round5 is a Public Key Encryption and Key Encapsulation Mechanism (KEM) based on General Learning With Rounding (GLWR), a lattice problem. Round5 is a merger of two lattice-based first-round NIST PQC proposals: Round2 from Philips and HILA5 designed by me. Round5 incorporates GLWR with forward error correction. The Round5 team has put a lot of effort into algorithmic optimization. We argue that the current ring-based development version of Round5 offers not only the shortest key and ciphertext sizes among lattice-based candidates, but also leading performance and implementation-size characteristics across a wide spectrum of implementation targets.
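The rounding idea behind (G)LWR can be sketched as follows (toy parameters, not Round5's actual parameter sets):

```python
import random

def lwr_sample(s, q=1024, p=256, seed=0):
    """One Learning-With-Rounding sample (a, b): LWE's explicit noise term is
    replaced by deterministically rounding <a, s> from Z_q down to Z_p."""
    rng = random.Random(seed)
    a = [rng.randrange(q) for _ in range(len(s))]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = round(p * inner / q) % p
    return a, b
```

Because the "noise" is deterministic rounding rather than fresh randomness, LWR-based schemes need no noise sampling at encryption time, one of the efficiency arguments made for Round5.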

Jiaxin Pan (Karlsruher Institut für Technologie)

November 7, 2018 at 9:00am-10:30 am, seminar room M7 (3rd floor, Monod)

More Efficient (Almost) Tightly Secure Structure-Preserving Signatures

We provide a structure-preserving signature (SPS) scheme with an (almost) tight security reduction to a standard assumption. Compared to the state-of-the-art tightly secure SPS scheme of Abe et al. (CRYPTO 2017), our scheme has smaller signatures and public keys (of about 56%, resp. 40%, of the size of signatures and public keys in Abe et al.’s scheme), and a lower security loss (of O(log Q) instead of O(λ), where λ is the security parameter and Q = poly(λ) is the number of adversarial signature queries).

While our scheme is still less compact than structure-preserving signature schemes without a tight security reduction, it significantly lowers the price to pay for a tight security reduction. In fact, when accounting for a non-tight security reduction with larger key (i.e., group) sizes, the computational efficiency of our scheme becomes at least comparable to that of non-tightly secure SPS schemes. Technically, we combine and refine recent existing works on tightly secure encryption and SPS schemes. Our technical novelties include a modular treatment (that develops an SPS scheme out of a basic message authentication code) and a refined hybrid argument that enables a lower security loss of O(log Q) (instead of O(λ)).

This is a joint work with Romain Gay, Dennis Hofheinz, and Lisa Kohl.

Gregor Seiler (ETH, Zurich)

October 25, 2018 at 10:00am-11:30 am, seminar room M7 (3rd floor, Monod)

Lattice-Based Group Signatures and Zero-Knowledge Proofs of Automorphism Stability.

In this talk I am going to present our recent CCS paper proposing a practical lattice-based group signature scheme and a new zero-knowledge proof system. The outputs of our group signature scheme are more than an order of magnitude smaller than the previously most efficient schemes in the literature. This is made possible by a new zero-knowledge proof system for proving that a committed value belongs to a particular set of small size. The sets for which our proofs are applicable are those consisting of elements that remain stable under Galois automorphisms of the underlying cyclotomic number field of our lattice-based protocol.

Shweta Agrawal (IIT Madras)

October 19, 2018 at 10:00am-11:30 am, seminar room 116 (1st floor, Monod)

Ad Hoc Multi Input Functional Encryption.

Consider sources that supply sensitive data to an aggregator. Standard encryption only hides the data from eavesdroppers, but using specialized encryption one can hope to hide the data (to the extent possible) from the aggregator itself. For flexibility and security, we envision schemes that allow sources to supply encrypted data, such that at any point a dynamically-chosen subset of sources can allow an agreed-upon joint function of their data to be computed by the aggregator. A primitive called multi-input functional encryption (MIFE), due to Goldwasser et al. (EUROCRYPT 2014), comes close, but has two main limitations:

– It requires trust in a third party, who is able to decrypt all the data.

– It requires function arity to be fixed at setup time and to be equal to the number of parties.

To drop these limitations, we introduce a new notion of ad hoc MIFE. In our setting, each source generates its own public key and issues individual, function-specific secret keys to an aggregator. For successful decryption, an aggregator must obtain a separate key from each source whose ciphertext is being computed upon.

We show that ad hoc MIFE for any functionality is implied by standard MIFE for that functionality and a special type of two-round secure multiparty computation (MPC) protocol. For general functions, the desired MPC protocol may itself be constructed using standard MIFE, which implies that standard MIFE can be bootstrapped to ad hoc MIFE for free. We also show a more efficient construction of ad hoc MIFE for the inner-product functionality using single-input FE for inner products and a special type of two-round MPC protocol for inner products, both of which are realizable from the standard Learning with Errors (LWE) assumption.

Gong Junqing (ENS Lyon)

September 27, 2018 at 1:30pm-3 pm, room 117 seminar room (1st floor, Monod)

Improved Inner-product Encryption with Adaptive Security and Full Attribute-hiding

In this work, we propose two IPE schemes achieving both adaptive security and full attribute-hiding in the prime-order bilinear group. These improve upon the only existing result satisfying both features, by Okamoto and Takashima [Eurocrypt ’12], in terms of efficiency:

– Our first IPE scheme is based on the standard k-Lin assumption and has a shorter master public key and shorter secret keys than Okamoto and Takashima’s IPE under the DLIN (= 2-Lin) assumption.

– Our second IPE scheme is adapted from the first one; it additionally enjoys shorter ciphertexts, under the XDLIN assumption (as in Okamoto and Takashima’s IPE).
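For intuition, the plaintext semantics of the (attribute-hiding, zero-test) inner-product functionality these schemes realize can be modelled in a few lines; this sketch contains no cryptography and only shows what an honest decryption computes:

```python
# Toy model of the inner-product *functionality* only (no cryptography):
# a key for vector y recovers the payload of a ciphertext for attribute x
# exactly when <x, y> = 0; full attribute-hiding means x itself stays hidden
# in the real scheme.

def inner_product(x, y):
    return sum(a * b for a, b in zip(x, y))

def decrypt_functionality(x, msg, y):
    return msg if inner_product(x, y) == 0 else None

assert decrypt_functionality([1, 2, -1], "m", [1, 1, 3]) == "m"   # 1 + 2 - 3 = 0
assert decrypt_functionality([1, 2, -1], "m", [1, 0, 0]) is None  # 1 != 0
```

The cryptographic challenge, addressed by the schemes in the talk, is realizing exactly this behaviour while keeping both the attribute vector x and the message hidden from unauthorized key holders.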

Technically, instead of starting from composite-order IPE and applying existing transformation, we start from an IPE scheme in a very restricted setting but already in the prime-order group, and then gradually upgrade it to our full-fledged IPE scheme. This method allows us to integrate Chen et al.’s framework [Eurocrypt ’15] with recent new techniques [TCC ’17, Eurocrypt ’18] in an optimized way.

Based on joint work with Jie Chen and Hoeteck Wee.

Elena Kirshanova (ENS Lyon)

September 20, 2018 at 1:30pm-3 pm, room M7 seminar room (3rd floor, Monod)

The General Sieve Kernel and New Records in Lattice Reduction

In this talk I present ongoing work on the implementation of lattice sieving algorithms. Joint work with Martin R. Albrecht, Leo Ducas, Gottfried Herold, Eamonn W. Postlethwaite, and Marc Stevens.

Monosij Maitra (IIT Madras)

September 13, 2018 at 1:30pm-3 pm, room M7 seminar room (3rd floor, Monod)

Functional Encryption and Indistinguishability Obfuscation for Turing Machines from Minimal Assumptions

Functional Encryption (FE), a generalisation of public-key encryption (PKE), issues secret keys corresponding to functions f and ciphertexts corresponding to messages m from the domain M of f. Given such a function key and an FE ciphertext CT(m), the decryptor learns f(m) and nothing else. Program obfuscation aims to transform a program into an unintelligible one while preserving its functionality. Indistinguishability obfuscation (iO), a weaker version of this notion, requires that the obfuscations of any two functionally equivalent circuits of the same size be computationally indistinguishable.
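The FE syntax just described can be sketched as four algorithms. The placeholder "scheme" below has no security whatsoever (the ciphertext carries m in the clear); it only illustrates the interface Setup/KeyGen/Enc/Dec and the correctness requirement Dec(sk_f, Enc(m)) = f(m):

```python
# Sketch of the FE interface for intuition only; entirely insecure placeholder.

def setup():
    return "msk"                      # master secret key (placeholder)

def keygen(msk, f):
    return ("sk", f)                  # a real scheme embeds f cryptographically

def enc(m):
    return {"payload": m}             # a real ciphertext hides everything but f(m)

def dec(sk_f, ct):
    _, f = sk_f
    return f(ct["payload"])           # correctness: Dec(sk_f, Enc(m)) = f(m)

msk = setup()
sk_parity = keygen(msk, lambda m: m % 2)  # key for the parity function
assert dec(sk_parity, enc(41)) == 1
```

In a real FE scheme the security requirement is that a decryptor holding sk_f learns f(m) and nothing else about m, which is precisely what makes constructing such schemes hard.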

FE for circuits with certain special properties has been shown to imply iO for circuits, although iO is widely believed to inherently require sub-exponential hardness. Thus, replacing applications of iO by cryptographic primitives which are not believed to contain inherent sub-exponential hardness has garnered a lot of interest recently. On the other hand, modelling functions as Turing machines instead of circuits provides direct advantages such as supporting unbounded inputs, fixed description size, and input-specific runtimes. In this context we show the following feasibility results:

1. We construct iO for Turing machines with bounded inputs from the same assumptions as required in the circuit model, namely, sub-exponentially secure FE for circuits. The previous best constructions require sub-exponentially secure iO for circuits.

2. We provide a new construction of single-input FE for Turing machines with unbounded-length inputs and optimal parameters from polynomially secure, compact FE for circuits. The previous best known construction [AS16] relies on iO for circuits.

3. We provide a new construction of Multi-input FE for Turing machines. Our construction supports a fixed number of encryptors (say k), who may each encrypt a string x_i of unbounded length. We rely on sub-exponentially secure FE for circuits, while the only previous construction [BGJS15] relies on a strong knowledge type assumption, namely, public coin differing inputs obfuscation (pc-diO).

Our results require a mild assumption of decomposability on the underlying circuit FE schemes. Roughly speaking, a decomposable FE scheme allows a long string to be encrypted bit-by-bit, using shared randomness across bits. We also show how to build such a decomposable FE scheme for circuits, given any generic circuit FE scheme and a decomposable randomized encoding scheme. To the best of our knowledge, this property is already satisfied by almost all existing circuit FE schemes. Our techniques are new and from first principles, and avoid the sophisticated iO-specific machinery used in all relevant prior work.
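The decomposability property can be illustrated structurally: the ciphertext of a long string is a list of per-bit ciphertexts that all share the same randomness r. The per-bit "encryption" below is a hash-based placeholder, not a real FE ciphertext; only the shape of the decomposition matters here:

```python
# Toy illustration of decomposability (not a real FE scheme): encrypt a long
# string bit-by-bit, reusing the same randomness r across all bit positions,
# so CT(m) is the concatenation of per-bit ciphertext components.

import hashlib

def enc_bit(key: bytes, bit: int, r: bytes, i: int) -> bytes:
    # Placeholder one-bit "encryption": a hash-derived pad keyed by position i.
    pad = hashlib.sha256(key + r + i.to_bytes(4, "big")).digest()[0] & 1
    return bytes([bit ^ pad])

def enc_decomposable(key: bytes, m: str, r: bytes) -> list[bytes]:
    return [enc_bit(key, int(b), r, i) for i, b in enumerate(m)]

ct = enc_decomposable(b"k", "1011", b"shared-randomness")
assert len(ct) == 4  # one ciphertext component per bit of the message
```

Sharing r across positions is what lets the components later be recombined as a single logical ciphertext, which is how the paper handles unbounded-length Turing-machine inputs.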

Joint work with Shweta Agrawal.