# Tuesday December 10, 2024

# Thursday December 5, 2024

Proof systems such as SNARKs can prove that the verification of a signature (or any other evaluation of an algorithm) succeeded. To ease the proof arithmetic, the scalar field of the group should become the base field in which the statement is expressed. In practice, this amounts to finding prime-order elliptic curves over a given base field, known as embedded curves.

Based on the CM method for primality testing (ECPP) published by Atkin and Morain in 1993, we present two algorithms: one to generate embedded elliptic curves of SNARK-friendly curves with a variable discriminant $D$, and another to generate families (parameterized by polynomials) with a fixed discriminant $D$. When $D \equiv 3 \pmod 4$, it is possible to obtain a prime-order curve, and to form a cycle. We first apply our technique to generate more embedded curves like Bandersnatch with BLS12-381, and we propose a plain twist-secure cycle above BLS12-381 with $D = 6673027$. We also discuss the scarcity of Bandersnatch-like CM curves, and show that with our algorithm, finding them is only a question of core-hours. Second, we obtain families of prime-order embedded curves of discriminant $D = 3$ for BLS and KSS18 curves. Our method also obtains families of embedded curves above KSS16 and can work for any KSS family. Our work generalizes the work on Bandersnatch (Masson, Sanso, and Zhang, and Sanso and El Housni).
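As a toy illustration of the CM data involved (not the talk's algorithm, which targets cryptographic sizes), one can brute-force the point count of a small curve, read off the Frobenius trace $t$, and extract the discriminant from $4p - t^2 = DV^2$:

```python
# Toy illustration: for a small curve E/F_p, recover the Frobenius trace t
# by brute-force point counting, then read off the CM discriminant D from
# 4p - t^2 = D*V^2 with D squarefree.
def count_points(p, a, b):
    """Number of points on y^2 = x^3 + a*x + b over F_p (incl. infinity)."""
    squares = {(y * y) % p for y in range(p)}
    n = 1  # point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        if rhs == 0:
            n += 1
        elif rhs in squares:
            n += 2
    return n

def cm_discriminant(p, t):
    """Write 4p - t^2 = D*V^2 with D squarefree; return (D, V)."""
    m = 4 * p - t * t
    v, d = 1, 2
    while d * d <= m:
        while m % (d * d) == 0:
            m //= d * d
            v *= d
        d += 1
    return m, v

p = 13
n = count_points(p, 0, 1)   # E: y^2 = x^3 + 1 over F_13
t = p + 1 - n               # Hasse: #E(F_p) = p + 1 - t
D, V = cm_discriminant(p, t)
print(n, t, D, V)           # -> 12 2 3 4
```

For $E: y^2 = x^3 + 1$ over $\mathbb{F}_{13}$ this gives twelve points, trace $2$, and $4p - t^2 = 48 = 3 \cdot 4^2$, i.e. discriminant $D = 3$, the case the abstract singles out.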

# Tuesday November 26, 2024

For an elliptic curve $E$, Elkies’ improvement to Schoof’s algorithm involves computing the trace of Frobenius modulo primes $\ell$ for which $E$ has a rational $\ell$-isogeny. In practice, this gives a substantial speedup, but heuristics beyond GRH are required to prove that it yields an asymptotic speedup. Computing the trace of an arbitrary endomorphism can be done with a generalization of Schoof’s algorithm. When $E$ is supersingular, computing the trace of an endomorphism is an important subroutine in various algorithms for computing the endomorphism ring of $E$. Moreover, in the supersingular case, when $E$ is defined over the finite field of $p^{2}$ elements, $E$ always has rational $\ell$-isogenies; we leverage this in a generalization of the SEA algorithm for supersingular endomorphisms, yielding an unconditional asymptotic speedup over the generalization of Schoof’s algorithm. We gain further speedups in practice: for example, we can compute the trace modulo $p$ (the characteristic) by computing the action on an invariant differential of $E$, and, since $E$ is supersingular, its number of points is known, which lets us compute the trace modulo primes dividing $p\pm 1$. This is joint work with Lorenz Panny, Jana Sotakova, and Michael Wills.
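A small sanity check of the supersingular facts the talk exploits (illustrative only): over $\mathbb{F}_p$ with $p \equiv 3 \pmod 4$, the curve $y^2 = x^3 + x$ is supersingular, so its Frobenius trace is $0$ and it has exactly $p + 1$ points.

```python
# For p = 3 (mod 4), E: y^2 = x^3 + x is supersingular over F_p, hence has
# trace 0 and #E(F_p) = p + 1; over F_{p^2} its point count is then (p -/+ 1)^2,
# which is the kind of known cardinality the talk uses to get the trace
# modulo primes dividing p +/- 1. Brute-force check for a few small primes:
def trace_of_frobenius(p):
    """t = p + 1 - #E(F_p) for E: y^2 = x^3 + x, by exhaustive counting."""
    squares = {(y * y) % p for y in range(p)}
    n = 1  # point at infinity
    for x in range(p):
        rhs = (x * x * x + x) % p
        if rhs == 0:
            n += 1
        elif rhs in squares:
            n += 2
    return p + 1 - n

for p in (7, 11, 19, 23):          # all primes = 3 (mod 4)
    assert trace_of_frobenius(p) == 0
```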

# Thursday September 26, 2024

We analyze the classical time and space complexity of solving multidimensional Discrete Logarithm Problems (DLPs) and group action inversions in very high dimensions. To achieve this, we adapt the Gaudry-Schost algorithm, ensuring correctness in higher-dimensional settings. Although our adaptation increases the computational cost, we believe we maintain constant memory usage for a fixed dimension. We provide a detailed complexity analysis of our new algorithm and compare it to the original. A proof-of-concept demonstrates the algorithm’s behavior across various scenarios, with a focus on its application to group action-based cryptosystems.
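For contrast with the talk's random-walk approach, here is a hypothetical two-dimensional baby-step giant-step solver in a toy group $(\mathbb{Z}/101\mathbb{Z})^*$ (all names and parameters are illustrative): its $O(B)$ lookup table is exactly the memory cost that Gaudry–Schost-type walks trade away for (near-)constant memory at a comparable running time.

```python
# 2-dimensional DLP toy: given g1, g2 and h = g1^x1 * g2^x2 in (Z/p)^* with
# (x1, x2) in a known box, recover (x1, x2) by meet-in-the-middle. The table
# of baby steps is the memory bottleneck that Gaudry-Schost avoids.
def mdlp_bsgs(p, g1, g2, h, bound):
    baby = {pow(g1, x1, p): x1 for x1 in range(bound + 1)}  # baby steps in x1
    g2_inv = pow(g2, -1, p)                                 # needs Python >= 3.8
    cur = h
    for x2 in range(bound + 1):                             # giant steps in x2
        if cur in baby:
            return baby[cur], x2
        cur = (cur * g2_inv) % p
    return None

p = 101
h = (pow(2, 5, p) * pow(3, 7, p)) % p   # secret exponents (5, 7)
print(mdlp_bsgs(p, 2, 3, h, 10))        # -> (5, 7)
```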

# Thursday May 30, 2024

The impossible differential (ID) attack is one of the most important cryptanalytic techniques for block ciphers. Finding an ID attack has two phases: searching for the distinguisher, and building a key recovery upon it. Previous works focused only on automated distinguisher discovery, leaving key recovery as a manual post-processing task, which may lead to a suboptimal final complexity. At EUROCRYPT 2023, Hadipour et al. introduced a unified constraint programming (CP) approach for finding optimal complete ID attacks on strongly aligned ciphers. While this approach was extended to weakly aligned designs like PRESENT at ToSC 2024, its application to ARX and AndRX ciphers remained future work. Moreover, this method only exploited ID distinguishers with direct contradictions at the junction of two deterministic transitions. In contrast, some ID distinguishers, particularly for ARX and AndRX designs, may not be detectable by checking only for the existence of direct contradictions.

This paper fills these gaps by extending Hadipour et al.’s method to handle indirect contradictions and adapting it for ARX and AndRX designs. We also present a similar method for identifying zero-correlation (ZC) distinguishers. Moreover, we extend our new model for finding ID distinguishers to a unified optimization problem that includes both the distinguisher and the key recovery for AndRX designs. Our method improves ID attacks and introduces new distinguishers for several ciphers, such as SIMON, SPECK, Simeck, ChaCha, Chaskey, LEA, and SipHash. For example, we achieve a one-round improvement in the ID attacks against SIMON-128-128 and SIMON-128-192. These results significantly contribute to our understanding of the effectiveness of automated tools in the cryptanalysis of different design paradigms.

# Monday May 13, 2024

In 1994, Langford and Hellman introduced differential-linear (DL) cryptanalysis with the idea of decomposing the block cipher $E$ into two parts, $E_U$ and $E_L$, such that $E_U$ exhibits a high-probability differential trail while $E_L$ has a high-correlation linear trail. Combining these trails forms a distinguisher for $E$, assuming independence between $E_U$ and $E_L$. The dependency between the two parts of DL distinguishers remained unaddressed until EUROCRYPT 2019, where Bar-On et al. introduced the DLCT framework, resolving the issue up to one S-box layer. However, extending the DLCT framework to formalize the dependency between the two parts over multiple rounds remained an open problem. Meanwhile, the boomerang attack, another combined attack, has been developed further, and systematic methods exist to handle its dependency across multiple rounds.
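The DLCT mentioned above is a small combinatorial object: its entry for input difference $\Delta$ and output mask $\lambda$ is the autocorrelation $\sum_x (-1)^{\lambda \cdot (S(x) \oplus S(x \oplus \Delta))}$ of the S-box. A minimal computation, here for the 4-bit PRESENT S-box (any S-box would do; the choice is just for illustration):

```python
# Compute the DLCT (autocorrelation table) of a 4-bit S-box. This is the
# object Bar-On et al. use to handle the dependency between the differential
# and linear parts across one S-box layer.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT

def parity(x):
    return bin(x).count("1") & 1

def dlct(sbox, n=4):
    size = 1 << n
    table = [[0] * size for _ in range(size)]
    for diff in range(size):
        for mask in range(size):
            acc = 0
            for x in range(size):
                out = sbox[x] ^ sbox[x ^ diff]
                acc += -1 if parity(mask & out) else 1
            table[diff][mask] = acc
    return table

T = dlct(SBOX)
print(T[0][0])   # trivial entry (zero difference): 16
```

The trivial row $\Delta = 0$ is always the maximal value $2^n$, and every entry is a multiple of 4 since inputs pair up as $x \leftrightarrow x \oplus \Delta$.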

This talk first reviews the basics of boomerang and differential-linear analyses. Next, we uncover some links between boomerang and DL analysis and show that most of the techniques developed in boomerang analysis can be projected into DL analysis to solve the open problem left by EUROCRYPT 2019. Then, we introduce a highly versatile and easy-to-use automatic tool for exploring DL distinguishers inspired by automatic tools for boomerang distinguishers. This tool considers the dependency between differential and linear trails across multiple rounds. Lastly, we present the application of our tool to several symmetric-key primitives, such as Ascon, AES, SERPENT, TWINE, CLEFIA, WARP, and Simeck.

# Thursday April 4, 2024

# Monday March 11, 2024

# Thursday February 15, 2024

The Number Field Sieve (NFS) is the state-of-the-art algorithm for integer factoring, and sieving is a crucial and very time-consuming step of the NFS. Its goal is to find pairs of integers that are smooth on both sides, rational and algebraic; such pairs, together with their prime decompositions, are called relations. Sieving tries to find as many relations as possible in as little time as possible.

In this talk, we examine different sieving strategies to speed up this specific step of the NFS. Based on the relations collected during the RSA-250 factorization and its parameters, we study different strategies to better understand this step. Many strategies have been defined since the discovery of the NFS, and we provide here an experimental evaluation.
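The mechanics of sieving can be shown in miniature on the simpler quadratic-sieve polynomial $f(a) = a^2 - N$ (the NFS sieves pairs on a rational and an algebraic side, but the principle — crossing out prime divisors along arithmetic progressions and keeping candidates whose cofactor is 1 — is the same):

```python
# Toy line sieve on f(a) = a^2 - N: for each small prime p, the roots of
# f mod p give arithmetic progressions of positions where p divides f(a);
# we divide out full prime powers there, and smooth values are exactly those
# reduced to 1.
def smooth_sieve(N, lo, hi, primes):
    vals = {a: a * a - N for a in range(lo, hi + 1)}
    for p in primes:
        roots = [r for r in range(p) if (r * r - N) % p == 0]
        for r in roots:
            a = lo + (r - lo) % p            # first a >= lo with a = r (mod p)
            while a <= hi:
                while vals[a] % p == 0:
                    vals[a] //= p            # divide out the full power of p
                a += p
    return [a for a in range(lo, hi + 1) if vals[a] == 1]

# N = 1649: a classic small teaching example for sieving-based factoring.
relations = smooth_sieve(1649, 41, 60, [2, 3, 5, 7])
print(relations)   # -> [41, 43, 47, 57, 58]
```

Combining relations then yields congruences of squares: here $f(41) \cdot f(43) = 32 \cdot 200 = 80^2$.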

# Tuesday February 13, 2024

Some of the most promising primitives for quantum-resistant cryptographic schemes rely on Euclidean lattices. For efficiency reasons, most schemes use lattices that enjoy an additional algebraic structure. Their security can be linked to the problem of recovering short elements in ideals of a number field (Ideal-SVP) or in modules over the ring of integers of a number field (Module-SVP). Consequently, Ideal-SVP has received sustained attention in the past few years.

In this talk, I will first give some background on Euclidean lattices, lattice-based cryptography, and the different algebraic attacks on Ideal-SVP. Then I will talk about the work we did with Olivier Bernard, Thuong Huy Nguyen and Adeline Roux-Langlois on the possibility of solving Ideal-SVP over cyclotomic fields of large dimension (published at Asiacrypt 2022).

# Thursday January 25, 2024

The (Riemann) theta functions are analytic functions of several complex variables that are fundamental in the study of elliptic curves and abelian varieties. In algorithmic applications, one must be able to evaluate these theta functions at a given point to high precision. In my talk, I will present a new evaluation algorithm for theta functions (joint with Noam D. Elkies) based on the duplication formula. This algorithm is uniformly quasi-linear in the required precision, works in every dimension, and its implementation in FLINT 3.1 is very efficient in practice.
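For orientation, here is the naive approach the talk improves on: summing the defining series directly. This needs a number of terms growing with the square root of the precision, whereas the duplication-formula algorithm is quasi-linear; the check below uses the classical special value $\theta_3(0, i) = \pi^{1/4}/\Gamma(3/4)$.

```python
# Naive evaluation of theta_3(z, tau) = sum_n exp(pi*i*n^2*tau + 2*pi*i*n*z)
# for Im(tau) > 0, by direct summation (illustration only; the talk's
# algorithm reaches the same values quasi-linearly in the precision).
import cmath, math

def theta3(z, tau, terms=20):
    s = 1 + 0j
    for n in range(1, terms + 1):
        q_n = cmath.exp(1j * math.pi * n * n * tau)   # exp(pi*i*n^2*tau)
        s += q_n * (cmath.exp(2j * math.pi * n * z)
                    + cmath.exp(-2j * math.pi * n * z))
    return s

val = theta3(0, 1j).real
print(abs(val - math.pi ** 0.25 / math.gamma(0.75)))  # close to machine epsilon
```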

# Thursday December 7, 2023

Recently, new symmetric primitives have been proposed for advanced protocols such as multi-party computation, in combination with fully homomorphic encryption, or in various zero-knowledge proof systems. These protocols have put forward the need to minimize the number of multiplications performed by the primitive in large finite fields. Classical symmetric algorithms are then inappropriate in this context, and the new cryptographic protocols have to be combined with symmetric primitives with particular properties. These new designs are called “Arithmetization-Oriented” (AO).

In this talk, we will first present a new vision for designing such primitives, exploiting a previously unknown link with CCZ-equivalence. Moreover, while the number of AO primitives is increasing significantly, only a few cryptanalysis works have been proposed. Therefore, we will also propose a security analysis of the MiMC block cipher, one of the first primitives proposed in this context, giving a detailed understanding of the evolution of the algebraic degree of this cipher.
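A toy MiMC-style cipher makes the degree growth concrete (hedged: real MiMC works over a large prime or binary field with many more rounds, and these round constants are arbitrary). Each round maps $x \mapsto (x + k + c_i)^3$, so after $r$ rounds the ciphertext is a polynomial of degree at most $3^r$ in the plaintext; tracking how this algebraic degree actually evolves is the heart of the analysis.

```python
# Toy MiMC-style cipher over F_p. Choosing p = 2 (mod 3) makes x -> x^3 a
# bijection on F_p, so decryption raises to the inverse exponent of 3 mod p-1.
P = 101                      # p = 2 (mod 3)
CONSTS = [0, 17, 42, 58, 73] # arbitrary round constants (first one 0)
S = pow(3, -1, P - 1)        # cube-root exponent (needs Python >= 3.8)

def encrypt(x, k):
    for c in CONSTS:
        x = pow((x + k + c) % P, 3, P)
    return (x + k) % P       # final key addition

def decrypt(y, k):
    y = (y - k) % P
    for c in reversed(CONSTS):
        y = (pow(y, S, P) - k - c) % P
    return y

assert all(decrypt(encrypt(m, 29), 29) == m for m in range(P))
```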

# Thursday November 23, 2023

In 2010, Freeman, Scott, and Teske published a well-known taxonomy compiling the best known families of pairing-friendly elliptic curves. Since then, the research effort mostly shifted from the generation of pairing-friendly curves to the improvement of algorithms and the assessment of security parameters to resist the latest attacks on the discrete logarithm problem. Consequently, very few new families were discovered. However, the need for pairing-friendly curves of prime order in new applications such as SNARKs has reignited interest in the generation of pairing-friendly curves, with the hope of finding families similar to the one discovered by Barreto and Naehrig.

Building on the work of Kachisa, Schaefer, and Scott, we show that some elements of extensions of a cyclotomic field have a higher probability of generating a family of pairing-friendly curves. We present a general framework which embraces the KSS families and many of the other families in the taxonomy paper, and provide an open-source SageMath implementation of our technique. We finally introduce a new family with embedding degree $k=20$ which we estimate to provide a faster Miller loop compared to KSS16 and KSS18 at the 192-bit security level.
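The Barreto–Naehrig family mentioned above illustrates what "a family parameterized by polynomials" means: the standard BN polynomials satisfy identities guaranteeing a prime-order curve ($r(x) = p(x) + 1 - t(x)$) with CM discriminant $D = 3$ ($4p = t^2 + 3y^2$), which can be checked directly:

```python
# The standard Barreto-Naehrig parameterization and its defining identities.
def p(x): return 36*x**4 + 36*x**3 + 24*x**2 + 6*x + 1   # field characteristic
def r(x): return 36*x**4 + 36*x**3 + 18*x**2 + 6*x + 1   # curve order
def t(x): return 6*x**2 + 1                              # Frobenius trace
def y(x): return 6*x**2 + 4*x + 1                        # CM equation cofactor

for x in range(-50, 51):
    assert r(x) == p(x) + 1 - t(x)          # curve order is exactly r(x)
    assert 4 * p(x) == t(x)**2 + 3 * y(x)**2  # CM equation with D = 3
```

A seed $x$ then yields a concrete curve whenever $p(x)$ and $r(x)$ are both prime.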

# Thursday September 21, 2023

We discuss several recent applications of isogenies in dimension $>1$:

- breaking the SIKE post-quantum key exchange protocol
- new algorithmic tools for elliptic curves (endomorphism rings, canonical lifts, point counting…)
- new cryptosystems (post-quantum encryption, signature…)

If time permits, we will also discuss how efficient we can hope these higher-dimensional isogenies to be.

# Thursday June 22, 2023

We introduce a new lattice basis reduction algorithm with approximation guarantees analogous to the LLL algorithm and practical performance that far exceeds the current state of the art. We achieve these results by iteratively applying precision management techniques within a recursive algorithm structure and show the stability of this approach. We analyze the asymptotic behavior of our algorithm, and show that the heuristic running time is $O(n^{\omega}(C+n)^{1+\varepsilon})$ for lattices of dimension $n$, $\omega\in (2,3]$ bounding the cost of size reduction, matrix multiplication, and QR factorization, and $C$ bounding the log of the condition number of the input basis $B$. This yields a running time of $O\left(n^\omega (p + n)^{1 + \varepsilon}\right)$ for precision $p = O(\log |B|_{max})$ in common applications. Our algorithm is fully practical, and we have published our implementation. We experimentally validate our heuristic, give extensive benchmarks against numerous classes of cryptographic lattices, and show that our algorithm significantly outperforms existing implementations.
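For reference, here is a textbook LLL in exact rational arithmetic (a deliberately naive baseline, not the talk's algorithm): it recomputes Gram–Schmidt from scratch at every step, which is simple and numerically exact but hopelessly slow beyond small dimensions — precisely the cost that floating-point, precision-managed variants like the one above attack.

```python
# Textbook LLL (delta = 3/4) with exact Fraction arithmetic.
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    n = len(B)
    Bs = [[Fraction(x) for x in b] for b in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            Bs[i] = [Bs[i][k] - mu[i][j] * Bs[j][k] for k in range(len(Bs[i]))]
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    B = [list(b) for b in B]
    n, k = len(B), 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size-reduce b_k against b_j
            _, mu = gram_schmidt(B)
            q = round(mu[k][j])
            if q:
                B[k] = [bk - q * bj for bk, bj in zip(B[k], B[j])]
        Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k-1] ** 2) * dot(Bs[k-1], Bs[k-1]):
            k += 1                            # Lovasz condition holds
        else:
            B[k], B[k - 1] = B[k - 1], B[k]   # swap and step back
            k = max(k - 1, 1)
    return B

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
# -> [[0, 1, 0], [1, 0, 1], [-1, 0, 2]]
```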

# Thursday May 11, 2023

Simon and Simeck are two lightweight block ciphers with a simple round function using only word rotations and a bit-wise AND operation. Previous work has shown a strong clustering effect for differential and linear cryptanalysis, due to the existence of many trails with the same inputs and outputs.

In this talk, I will explore this clustering effect by exhibiting a class of high-probability differential and linear trails where the active bits stay in a fixed window of $w$ bits. Instead of enumerating a set of good trails contributing to a differential or a linear approximation, we compute the probability distribution over this space, including all trails in the class.

This results in stronger distinguishers than previously proposed, and we describe key recovery attacks against Simon and Simeck improving the previous results by up to 7 rounds. In particular, we obtain an attack against 42-round Simeck64, leaving only two rounds of security margin, and an attack against 45-round Simon96/144, reducing the security margin from 16 rounds to 9 rounds.

# Thursday March 30, 2023

The Regular Syndrome Decoding (RSD) problem, a variant of the Syndrome Decoding problem with a particular error distribution, was introduced almost 20 years ago by Augot et al. In this problem, the error vector is divided into equally sized blocks, each containing a single noisy coordinate. The last five years have seen renewed interest in this assumption due to its use in MPC and ZK applications. Generally referred to as “LPN with regular noise” in this context, the assumption enables better efficiency compared to plain LPN. However, no previous cryptanalysis has shown how to exploit the special structure of this problem in an attack.

We present the first algebraic attack on RSD. Based on a careful theoretical analysis of the underlying polynomial system, we propose concrete attacks that are able to take advantage of the regular noise distribution. In particular, we can identify several examples of concrete parameters where our techniques outperform other algorithms.

# Thursday January 26, 2023

In this talk, I will present some recent work on Learning With Errors (LWE), a very important problem for the security of post-quantum cryptography. Specifically, I focus on so-called dual attacks. First, I will explain how those attacks work and why they fundamentally reduce to a problem about the Discrete Fourier Transform (DFT). Then, I will explain how we obtain a quantum speedup of those attacks by leveraging tools such as quantum amplitude estimation. Finally, I will explain how we can obtain classical speedups by applying ideas from coding theory to lattices.

# Friday January 13, 2023

Modular polynomials are an important ingredient of the Schoof-Elkies-Atkin algorithm that computes the cardinality of elliptic curves over finite fields in large characteristic. They serve to identify and build isogenous curves. These polynomials have very large coefficients, and various “small” families have been proposed, including Atkin’s canonical and star polynomials. These families come with algebraic formulas for the auxiliary data needed to make the isogenies between curves explicit. In this talk, we review past proposals and sketch the algorithms for computing modular polynomials. In particular, we develop the methods specifically applied to the polynomials introduced by Charlap, Coley and Robbins, and discuss their merits.
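Even at level 2, the classical modular polynomial already shows the coefficient blow-up that motivates the "small" families: $\Phi_2(X, Y)$ vanishes exactly when $X$ and $Y$ are $j$-invariants of 2-isogenous curves.

```python
# The classical level-2 modular polynomial Phi_2(X, Y).
def phi2(x, y):
    return (x**3 + y**3 - x**2 * y**2
            + 1488 * (x**2 * y + x * y**2)
            - 162000 * (x**2 + y**2)
            + 40773375 * x * y
            + 8748000000 * (x + y)
            - 157464000000000)

# Phi_2(0, Y) = (Y - 54000)^3: the curve with j = 0 is 2-isogenous to j = 54000.
assert phi2(0, 54000) == 0
# Phi_2 is symmetric, reflecting the dual isogeny.
assert all(phi2(a, b) == phi2(b, a) for a in range(-3, 4) for b in range(-3, 4))
```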

# Thursday December 15, 2022

# Monday November 21, 2022

Computing isogenies between abelian varieties has numerous applications in cryptography and number theory. Such computations can prove very expensive as soon as one moves away from the elliptic curve case, i.e. from dimension 2 onwards. In this talk, I will show that approaching this problem by means of the analytic theory of theta functions still gives rise to practical algorithms. I will present a certified quasi-linear time algorithm to compute theta functions in any dimension, based on ideas due to Dupont and Labrande–Thomé. I will then describe its implementation and some previously inaccessible applications.

# Thursday November 10, 2022

In order to maintain a similar security level in a post-quantum setting, many symmetric primitives would have to double their key and state sizes. So far, no generic way of doing this is known that would provide convincing quantum security guarantees. In this talk, we propose a new generic construction that doubles the key and state size of a block cipher. For this, we modified the ECB-Mix-ECB (EME) construction, since we were able to mount a new type of superposition attack on EME, and we provide several classical and quantum security arguments and analyses for our new construction, QuEME. We propose a concrete instantiation of this construction with variants of AES-128.

# Thursday October 27, 2022

In symmetric cryptography, finding a differential of high probability for a given cipher and computing its exact probability is usually a hard problem, requiring a large amount of time and memory. In this talk, we will see what a differential attack is, how to search for differentials using a MILP modeling technique, and how to obtain an approximation of their probabilities.
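The basic object such MILP models encode is the difference distribution table (DDT): entry $(\Delta_{in}, \Delta_{out})$ counts inputs $x$ with $S(x) \oplus S(x \oplus \Delta_{in}) = \Delta_{out}$, and each S-box transition then contributes probability $\mathrm{count}/2^n$ to a trail. A minimal computation for the 4-bit PRESENT S-box:

```python
# Difference distribution table of a 4-bit S-box.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT

def ddt(sbox, n=4):
    size = 1 << n
    table = [[0] * size for _ in range(size)]
    for d_in in range(size):
        for x in range(size):
            table[d_in][sbox[x] ^ sbox[x ^ d_in]] += 1
    return table

T = ddt(SBOX)
# Trivial transition has count 16; the best nontrivial transition has count 4,
# i.e. probability 4/16 = 2^-2 (the PRESENT S-box is differentially 4-uniform).
print(T[0][0], max(max(row) for row in T[1:]))  # -> 16 4
```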

# Thursday October 20, 2022

This talk focuses on the shortest vector problem over so-called ideal lattices. Whereas it is believed to be hard in the worst case, prior works have shown that there are several specific cases where the problem becomes easy to solve. We continue this line of work and provide a simple condition under which an ideal lattice defines an easy instance of the shortest vector problem. Namely, we show that the more automorphisms stabilize the ideal, the easier it is to find a short vector in it. This observation was already made for prime ideals in Galois fields, and we extend it to any ideal (whose prime factors are not ramified) of any number field, generalizing the works of Pan et al. (Eurocrypt'21) and Porter et al. (ArXiv'21). We then provide a cryptographic application of this result by showing that particular instances of the partial Vandermonde knapsack problem, also known as the partial Fourier recovery problem, can be solved classically in polynomial time. As a proof of concept, we implemented our attack and managed to solve those particular instances for concrete parameter settings proposed in the literature. For random instances, we can halve the lattice dimension with non-negligible probability. We conclude the presentation by clarifying the impact of our results on lattice-based cryptography. This is a joint work with Erell Gachon and Alice Pellet-Mary.

# Friday June 24, 2022

Elliptic curves have become key ingredients for instantiating zero-knowledge proofs and more generally proof systems. Recently, there have been many tailored constructions of these curves that aim at efficiently implementing different kinds of proof systems. This survey provides a comprehensive view on existing work and revisits the contributions in terms of efficiency and security. We present an overview at three stages of the process: curves to instantiate a SNARK, curves to instantiate a recursive SNARK, and also curves to express an elliptic-curve related statement. We provide new constructions of curves for SNARKs and generalize the state-of-the-art constructions for recursive SNARKs. We also exhaustively document the existing work and open-source implementations.

Joint work with Diego F. Aranha, Aarhus University, and Youssef El Housni, Consensys, GRACE, Ecole Polytechnique.

Preprint at hal-03667798 and ePrint 2022/586.

Python implementation at gitlab.

# Wednesday June 8, 2022

Floating-point numbers are only approximations of real values, and they are not always the best possible approximations. The IEEE 754 standard defines correct rounding as the application of the selected rounding function to the infinitely precise value; several functions (addition, subtraction, etc.) are thus required to return the correctly rounded result. But other functions, like pow, are not required to return the correctly rounded result, which leads to inconsistencies between different mathematical libraries, different versions of the same library, or even different processors running the same executable. To resolve this issue and obtain reproducible computations, we continue and improve on the work of Ziv and of C. Lauter and V. Lefèvre to design a correctly rounded pow function for double-precision inputs in the context of the CORE-MATH project.
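Ziv's strategy can be sketched in a few lines (a hypothetical simplification, not CORE-MATH's implementation: real libraries bound the evaluation error rigorously and treat hard-to-round cases specially): evaluate in extended precision, and if the surrounding error interval does not round to a single double, retry with more precision.

```python
# Sketch of Ziv's onion-peeling strategy for a correctly rounded pow, using
# Python's decimal module as the extended-precision engine. The error margin
# 10^(2-prec) is a crude over-estimate of the ln/exp rounding errors.
from decimal import Decimal, getcontext
import math

def cr_pow(x, y):
    prec = 30
    while True:
        getcontext().prec = prec
        v = (Decimal(x).ln() * Decimal(y)).exp()        # x^y = exp(y * ln x)
        eps = Decimal(10) ** (2 - prec)
        lo = float(v * (1 - eps))                        # float() rounds correctly
        hi = float(v * (1 + eps))
        if lo == hi:          # both ends of the interval round to the same double
            return lo
        prec *= 2             # Ziv's step: retry with more precision

print(cr_pow(2.0, 0.5) == math.sqrt(2.0))  # -> True
```

Since $2^{0.5} = \sqrt{2}$ exactly as real numbers and `math.sqrt` is correctly rounded, the two results must agree bit for bit.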

# Tuesday May 3, 2022

In cryptography, one can often define two versions of a hard problem: a computational, or search, version, which asks to compute something (e.g. a discrete logarithm, a prime factor), and a decisional version, which asks to distinguish between some specific probability distribution and the uniform one. Usually, the search problem can be shown to be computationally hard, but the decision version can be more difficult to handle. Informally, the hardness of the decision version means that the distribution is pseudorandom.

In code-based cryptography, one natural search version is the decoding problem, which has been shown to be equivalent to the decision version in the case of purely random codes. However, it is a long-standing open problem to find a search-to-decision reduction for structured versions of the decoding problem (e.g. the decoding of quasi-cyclic codes). In the lattice-based setting, though, such reductions have been carried out using number fields for structured variants of the Learning With Errors problem: Polynomial-LWE, Ring-LWE, Module-LWE and so on.

In this talk I will present a function field version of the LWE problem that we call the Function Field Decoding Problem. This new framework leads to another point of view on structured codes, strengthening the connection between lattice-based and code-based cryptography. In particular, it yields the first search-to-decision reduction for structured codes.

Following the historical constructions in lattice-based cryptography, this framework can be instantiated with the function field analogue of cyclotomic number fields, namely Carlitz extensions, leading to search-to-decision reductions for various versions of Ring-LPN, which have applications to secure multi-party computation and to an authentication protocol.

This is a joint work with Alain Couvreur and Thomas Debris-Alazard (ePrint 2022/269).

# Tuesday April 5, 2022

We present new side-channel attacks on SIKE, the isogeny-based candidate in the NIST PQC competition. Previous works had shown that SIKE is vulnerable to differential power analysis and pointed to coordinate randomization as an effective countermeasure. We show that coordinate randomization alone is not sufficient, as SIKE is vulnerable to a class of attacks similar to refined power analysis in elliptic curve cryptography, named zero-value attacks. We describe and confirm in the lab two such attacks leading to full key recovery, and analyze their countermeasures.

# Friday February 25, 2022

Polynomial systems are ubiquitous in computational mathematics and have several applications in science and engineering. In this talk, I will present some linear-algebra-based strategies to compute with them. First, I will focus on solving, symbolically and numerically, structured systems such as sparse and multi-homogeneous ones. For that, I will merge classical tools, like Gröbner bases and resultants, with toric geometry. Then, I will present some applications of these techniques in different domains, such as differential equations and topological data analysis.

# Thursday December 16, 2021

We present a new data structure to approximate accurately and efficiently a polynomial f of degree d given as a list of coefficients. Its properties allow us to improve the state-of-the-art bounds on the bit complexity for the problems of root isolation and approximate multipoint evaluation. This data structure also leads to a new geometric criterion to detect ill-conditioned polynomials, implying notably that the standard condition number of the zeros of a polynomial is at least exponential in the number of roots of modulus less than 1/2 or greater than 2.

# Wednesday November 24, 2021

The security of modern cryptosystems relies on computational assumptions, which may be challenged by the advent of large-scale quantum computing devices. While Shor’s algorithm is known to break today’s most popular public-key schemes, secret-key cryptosystems are generally expected to retain half of their pre-quantum bits of security, corresponding to a (generic) quadratic time speedup.

In this talk, we will focus on key-recovery attacks against block ciphers. These attacks are often categorized into two scenarios, depending on the type of black-box access allowed to the adversary: either a classical query access, or a “quantum” query access where the black box is part of the adversary’s quantum algorithm. In the first part of the presentation, we will see that most quantum attacks known so far actually comply with the rule of halving security levels, though in the “quantum” query model, some classically secure designs have been shown to be broken due to their strong algebraic structure (Kuwakado & Morii, ISIT 2010).

In the second part, we will see how these vulnerabilities can lead to better attacks in the classical query model. In particular, we will present an extension of the offline-Simon algorithm (Bonnetain et al., ASIACRYPT 2019) that, against some specific block cipher constructions, reaches a more than quadratic speedup in key recovery. This is a joint work with Xavier Bonnetain and Ferdinand Sibleyras ([ePrint 2021/1348](https://eprint.iacr.org/2021/1348)).

# Tuesday June 29, 2021

# Friday June 25, 2021

# Monday June 14, 2021

In this talk, we will review the state of the art of algorithms for solving the Euclidean lattice problems underlying cryptography. In more detail, this talk contains two parts. In the first part, we will focus on lattice problems such as the approximate Shortest Vector Problem (approx-SVP) and on lattice reduction algorithms, which remain the best known solving algorithms so far. In particular, I will present an improved enumeration-based lattice reduction algorithm, which is shown to be (potentially) relevant to cryptanalysis. In the second part, we will instead consider a quantum problem that is computationally equivalent to approx-SVP. By directly solving a quantum problem, we may hope to make more powerful use of quantum computation. However, the best known algorithms for solving approx-SVP via this quantum problem are not yet better than lattice reduction.

# Tuesday April 13, 2021

In this talk, we will be interested in the following problem: given an ideal $I$ of a number field, one wants to find an element $x\in I$ such that the relative ideal $(x)I^{-1}$ is smooth/prime/near-prime/… This algorithmic problem arises naturally in multiple number-theoretic algorithms, for instance in an algorithm computing the power residue symbol (de Boer-Pagano 2017) or in algorithms computing class groups and unit groups of number fields (Buchmann 1988 / Biasse-Fieker 2014). The usual way to solve this problem is *heuristic*, and consists in sampling random elements $x$ in $I$ until a good $x$ is found (the heuristic part comes in when estimating the probability that a random $x$ is good).

In this talk, I will present an algorithm that finds a good $x$ in a *provable* way. The idea is still to sample random $x$’s until a good one is found, but our sampling procedure is slightly different from what was done before, and this enables us to obtain a provable lower bound on the probability of obtaining a good $x$. This allows us to remove one heuristic (among others) from the algorithms mentioned above.
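The shape of the sampling loop has a familiar integer analogue (illustration only: the talk's contribution is a provable lower bound on the acceptance probability, which naive trial division says nothing about): draw random candidates and accept when a derived value is $B$-smooth.

```python
# Rejection-sampling analogue over the integers: sample x until it is
# B-smooth, with smoothness tested by trial division. The membership test
# stands in for "the relative ideal (x)I^{-1} is smooth".
import random

def is_smooth(n, bound):
    """True if every prime factor of n is <= bound (trial division)."""
    n = abs(n)
    d = 2
    while d <= bound and n > 1:
        while n % d == 0:
            n //= d
        d += 1
    return n == 1

def sample_smooth(modulus, bound, rng):
    while True:
        x = rng.randrange(1, modulus)
        if is_smooth(x, bound):
            return x

rng = random.Random(0)
x = sample_smooth(10**6, 100, rng)
assert is_smooth(x, 100)
```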

This is a joint work with Koen de Boer, Léo Ducas and Benjamin Wesolowski.

# Tuesday March 30, 2021

# Friday March 12, 2021

# Tuesday February 16, 2021

The recent improvements of the tower number field sieve (TNFS) attacks on the DLP in finite field extensions of composite degree have caused serious complications in pairing-based cryptography. In particular, the secure and efficient examples of pairings we had in the pre-TNFS period are no longer safe for implementation, and hence key sizes should be updated. The purpose of this talk is to present ways of constructing efficient elliptic curve pairings that are resistant against the improved TNFS attacks, and to address the main challenges and difficulties that occur as a result of these attacks. In addition, I will present a theoretical study of the best elliptic curve options available today, with respect to security and efficiency, and highlight certain pairing-related topics for future work.

# Tuesday February 9, 2021

A zero-knowledge proof is a method by which one can prove knowledge of general non-deterministic polynomial (NP) statements. SNARKs are in addition non-interactive, short, and cheap to verify. They use bilinear pairings over elliptic curves to achieve succinctness. This property makes them suitable for recursive proof composition, that is, proofs attesting to the validity of other proofs. Recursive proof composition has been empirically demonstrated for pairing-based SNARKs via tailored constructions of expensive pairing-friendly elliptic curves. This presentation is about these tailored constructions. This is a joint work with Aurore Guillevic from the CARAMBA team (Nancy).

# Monday January 18, 2021

We introduce a new signature scheme, SQISign (for Short Quaternion and Isogeny Signature), based on isogeny graphs of supersingular elliptic curves. The signature scheme is derived from a new one-round, high-soundness, interactive identification protocol. Targeting the post-quantum NIST-1 level of security, our implementation results in signatures of 204 bytes, secret keys of 16 bytes and public keys of 64 bytes. In particular, the signature and public key sizes combined are an order of magnitude smaller than those of all other post-quantum signature schemes. On a modern workstation, our implementation in C takes 0.6s for key generation, 2.5s for signing, and 50ms for verification.

While the soundness of the identification protocol follows from classical assumptions, the zero-knowledge property relies on the second main contribution of this paper. We introduce a new algorithm to find an isogeny path connecting two given supersingular elliptic curves of known endomorphism rings. A previous algorithm to solve this problem, due to Kohel, Lauter, Petit and Tignol, systematically reveals paths from the input curves to a “special” curve. This leakage would break the zero-knowledge property of the protocol. Our algorithm does not directly reveal such a path, and subject to a new computational assumption, we prove that the resulting identification protocol is zero-knowledge.

# Thursday February 13, 2020

The famous RSA public-key cryptosystem is one of the most studied topics in cryptology research. At Eurocrypt 1996, Coppersmith proposed a new method to find small roots of polynomial equations using lattices. Since then, many cryptanalyses of RSA and its variants have been proposed using this technique. In this talk, we will discuss Coppersmith’s attack and some of its applications to RSA.

# Thursday February 6, 2020

Given two $\ell$-isogenous elliptic curves over a field $k$ of large characteristic, an algorithm by Elkies uses modular polynomials to compute this isogeny explicitly. It is an essential primitive in the well-known SEA point counting algorithm.

In this talk, we generalize Elkies’s algorithm to compute isogenies between Jacobians of genus 2 curves: we compute either $\ell$-isogenies or, in the RM case, cyclic $\beta$-isogenies. The algorithm uses modular equations of Siegel or Hilbert type, respectively. As in genus 1, there are two main steps:

- Use derivatives of modular equations to compute the action of the isogeny on differential forms;
- Solve a differential system to find the isogeny.

This algorithm has implications for point counting in genus 2: Elkies’s method is now available, and could improve on the Schoof-Pila approach.

# Monday January 27, 2020

When designing block ciphers, we need to decide which specific components to use, such as which S-box, which linear layer, etc. These decisions are made by taking into account the security of the resulting block cipher, but also the underlying cost in terms of performance. In this talk, I will present two different works aiming at finding components that are optimal w.r.t. a given criterion, while also trying to keep a very cheap implementation cost. I will show how the use of automatic tools (either already existing or specifically designed) can help us make (near-)optimal decisions even though the initial search space is very large.

The first part of my talk will focus on a block cipher construction called Generalized Feistel Networks, which is a very efficient construction in general. The idea of this construction is to split the plaintext into an even number of blocks, apply a Feistel transformation in parallel to those blocks, and then permute them. In a recent paper accepted at FSE 2020, we showed how to choose this permutation to be optimal w.r.t. a certain criterion (called the “diffusion round”), which also allowed us to solve a 10-year-old problem.

The second part of my talk will be based on a paper accepted at SAC 2018, which aims at replacing the key schedule of AES by a more efficient one (namely a permutation). AES is the current symmetric encryption standard, and its key schedule (i.e. the algorithm transforming the master key into several round keys) is quite intricate and costly. We thus studied what would happen if we replaced this key schedule by a permutation, and in particular we show that we can get better security arguments even though we have a much simpler key schedule.

# Friday January 17, 2020

For enabling post-quantum cryptanalytic experiments on a meaningful scale, there is a strong need for low-memory algorithms. We show that the combination of techniques from representations, multiple-collision finding, and the Schroeppel-Shamir algorithm leads to improved low-memory algorithms. For random subset sum instances $(a_1, \ldots, a_n, t)$ defined modulo $2^n$, our algorithms improve over the Dissection technique for small memory $M< 2^{0.02n}$ and in the mid-memory regime $2^{0.13n} < M < 2^{0.2n}$. An application of our technique to LPN of dimension $k$ and constant error $p$ yields significant time complexity improvements over the Dissection-BKW algorithm from Crypto 2018 for all memory parameters $M < 2^{0.35 k/\log k}$. This is joint work with Andre Esser and Alexander May, published at IMACC 2019.

# Monday January 13, 2020

Group signatures, introduced by Chaum and van Heyst in 1991, enable members of a group to sign on behalf of the group, thanks to a certificate delivered by a group manager. The point is that the signature is anonymous: it cannot be traced back to its issuer, except for a specific entity, the opening authority, which can “open” any valid group signature.

Bellare, Micciancio and Warinschi provided the first security model where they defined the requirements for a group signature: anonymity, traceability and non-frameability.

Combining seemingly contradictory properties such as anonymity and traceability can be done through the Sign-Encrypt-Prove (SEP) framework: after signing a message, a group member encrypts both the signature and the certificate then proves in a non-interactive zero-knowledge fashion that everything is well formed (the opening authority still has a way to rescind the anonymity).

In this talk, we will construct a group signature that uses the nice interaction between Pointcheval–Sanders signatures and the equivalence-class signatures of Fuchsbauer, Hanser and Slamanig. Group members then remain anonymous by simply randomizing their signatures and certificates: there is no longer any need for encryption and proofs.

# Thursday December 12, 2019

Quantum cryptanalysis studies how to break cryptographic primitives using a quantum computer. Multiple quantum algorithms for the hidden shift and hidden period problems have been proposed, which are vastly more efficient than their classical counterparts. In this talk, I will present some of them, with their applications to the cryptanalysis of multiple symmetric constructions and some isogeny-based key exchanges.

In more detail, I will present Simon’s algorithm and its use to attack multiple symmetric constructions using quantum queries, the offline approach which allows applying it even when restricted to classical queries, and some abelian hidden shift algorithms and their applications against some symmetric constructions and the key exchange CSIDH.

# Thursday July 4, 2019

# Tuesday June 4, 2019

Exact Real Arithmetic (ERA) describes implementations of Computable Analysis (TTE), which can be considered a foundation of reliable computation on continuous data. TTE is a framework which allows one to express and reason about real-number computations on digital computers or Turing machines in a rigorous manner.

In this talk we report on recent applications of ERA to checking satisfiability of non-linear constraints over the real numbers in the setting of SMT. In particular, we present a CDCL-style calculus and a prototypical implementation for this concrete problem. The procedure is based on symbolic methods to resolve linear conflicts and on exact real computations to construct linearisations of non-linear constraints local to conflicts. This approach allows the treatment of a large class of computable non-linear real functions, such as polynomials, rational or trigonometric functions, and beyond. The implementation has been evaluated on several non-linear SMT-Lib examples and the results have been compared with state-of-the-art SMT solvers.

# Friday May 24, 2019

Since their first introduction as a cryptanalytic tool, bilinear pairings have become a very useful building block for cryptography. Current real-world applications of pairings include remote attestation, privacy-preserving blockchains and identity-based cryptography. However, recent advances in discrete log computation have reduced the efficiency of the most popular pairing constructions and previous candidates for standardization. This talk discusses ongoing work regarding the efficient implementation of new sets of parameters for bilinear groups, to adjust security against these recent attacks, and techniques for their realization in software.

# Thursday May 23, 2019

Let $A$, $B$ in $K[X,Y]$ be two bivariate polynomials over an effective field $K$, and let $G$ be the reduced Gröbner basis of the ideal $I = \langle A,B\rangle$ generated by $A$ and $B$, with respect to some weighted-degree lexicographic order. Under some genericity assumptions on $A$, $B$, we will see how to reduce a polynomial with respect to $G$ with quasi-optimal complexity, despite the fact that $G$ is much larger than the intrinsic complexity of the problem. For instance, if $A$, $B$ have total degree $n$, that is $O(n^{2})$ coefficients, then $G$ has $O(n^{3})$ coefficients, but reduction can be done in time $O(n^{2})$.

We will consider two situations where these new ideas apply, leading to different algorithms:

- First, there is a class called “vanilla Gröbner bases” for which there is a so-called terse representation that, once precomputed, allows one to reduce any polynomial $P$ in time $O(n^2)$. In this setting, assuming suitable precomputation, multiplication and change of basis can therefore be done in time $O(n^2)$ in the quotient algebra $K[X,Y] / \langle A,B\rangle$.
- Then, we assume that $A$ and $B$ are given in total degree and we consider the usual degree lexicographic order. Although the bases are not vanilla in this case, they admit a so-called concise representation with similar properties. Actually, the precomputation can also be done efficiently in this particular setting: from the input $A$, $B$, one can compute a Gröbner basis in concise representation in time $O(n^{2})$. As a consequence, multiplication in $K[X,Y] / \langle A,B\rangle$ can be done in time $O(n^2)$ including the cost of precomputation.
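Reduction modulo a Gröbner basis, the core operation discussed above, can be experimented with in a few lines of sympy; the polynomials $A$, $B$ below are arbitrary illustrative choices, not examples from the talk.

```python
from sympy import symbols, groebner, reduced, expand

x, y = symbols('x y')
A = x**3 + 2*x*y - 1          # illustrative bivariate inputs
B = y**3 + x*y + 3
G = list(groebner([A, B], x, y, order='grevlex'))

P = x**5 * y**2 + x + 1
# reduced() returns cofactors q and the normal form r with P = sum(q_i*g_i) + r
q, r = reduced(P, G, x, y, order='grevlex')
assert expand(P - r - sum(qi * gi for qi, gi in zip(q, G))) == 0
```

The normal form `r` is the canonical representative of `P` in the quotient algebra, which is exactly the object the fast reduction algorithms of the talk compute.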

# Tuesday May 14, 2019

In this talk we consider CSIDH, a new post-quantum key-exchange algorithm based on the hardness of constructing an unknown isogeny between two given elliptic curves. We will describe some algorithmic improvements, and also some important relations between the underlying “hard” problems in both the quantum and classical worlds.

# Tuesday April 2, 2019

Gröbner bases are one of the most powerful tools in algorithmic non-linear algebra. Their computation is an intrinsically hard problem, with complexity at least single exponential in the number of variables. However, in most cases, the polynomial systems coming from applications have some kind of structure. For example, several problems in computer-aided design, robotics, vision, biology, kinematics, cryptography, and optimization involve sparse systems where the input polynomials have few non-zero terms.

An approach to exploiting sparsity is to embed the systems in a semigroup algebra and to compute Gröbner bases over this algebra. Prior to our work, the algorithms following this approach benefited from the sparsity only in the case where all the polynomials have the same sparsity structure, that is, the same Newton polytope. In this talk, I will present the first algorithm that overcomes this restriction. Under regularity assumptions, it performs no redundant computations. Further, I will use it to compute Gröbner bases in the standard algebra and solve sparse polynomial systems over the torus. The complexity of the algorithm depends on the Newton polytopes and is similar to the complexity of the solving techniques involving the sparse resultant. Hence, this algorithm closes a 25-year gap between the strategies to solve systems using resultants and Gröbner bases. Additionally, for particular families of sparse systems, I will use the multigraded Castelnuovo-Mumford regularity to improve the complexity bounds.

This is joint work with Jean-Charles Faugère and Elias Tsigaridas.

# Thursday February 14, 2019

MDS matrices are essential in symmetric-key cryptography since they provide optimal diffusion in block ciphers. Many MDS matrices are known, but a problem remains which gathers a lot of attention: finding lightweight MDS matrices.

Several approaches exist, namely reducing the cost of already-known MDS matrices (to improve existing ciphers) and finding new MDS matrices lighter than the known ones (to make new ciphers).

We focus on the second case and, contrary to the usual approach, we will not look for matrices whose coefficients are lightweight to implement. Rather than this local optimization, we prefer a global optimization of the whole matrix, which allows the reuse of intermediate values.

We propose an algorithm to search for lightweight formal MDS matrices over a polynomial ring, by enumerating circuits until we reach an MDS matrix. This approach allows us to get much better results than previous works. We also adapt this algorithm to look for almost-MDS matrices, which offer a trade-off between cost and security.
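As a concrete reference point, the defining property of an MDS matrix (every square submatrix is nonsingular) can be checked directly by brute force. The toy sketch below works over a prime field GF(p) rather than the binary extension fields used in actual ciphers, and the example matrices are illustrative only.

```python
from itertools import combinations

def det_mod(A, p):
    # determinant mod prime p by Laplace expansion (fine for tiny matrices)
    n = len(A)
    if n == 1:
        return A[0][0] % p
    d = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        d += (-1) ** j * A[0][j] * det_mod(minor, p)
    return d % p

def is_mds(M, p):
    # M is MDS iff every square submatrix is nonsingular over GF(p)
    rows, cols = len(M), len(M[0])
    for k in range(1, min(rows, cols) + 1):
        for R in combinations(range(rows), k):
            for C in combinations(range(cols), k):
                sub = [[M[i][j] for j in C] for i in R]
                if det_mod(sub, p) == 0:
                    return False
    return True

assert is_mds([[1, 1], [1, 2]], 7)        # all entries and the 2x2 det nonzero
assert not is_mds([[1, 1], [1, 1]], 7)    # singular 2x2 submatrix
```

This exhaustive check is exponential in the matrix dimension, which is one reason dedicated search strategies such as the one in the talk are needed.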

# Thursday January 31, 2019

Almost two decades ago, Goldreich proposed a one-way function construction where each output bit depends on a constant number of input bits. This construction was later extended to pseudorandom generators (PRGs).

In the polynomial regime, where the seed is of size $n$ and the output of size $n^s$ for $s > 1$, the only known candidate, commonly known as Goldreich’s PRG, proceeds by applying a simple $d$-ary predicate (a Boolean function of $d$ variables) to public random size-$d$ subsets of the bits of the seed.
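A toy instantiation of this construction is easy to write down; the 5-ary predicate below (XOR of three bits combined with an AND of two more) is an illustrative choice of predicate, and fresh random subsets stand in for the fixed public subsets.

```python
import random

def goldreich_prg(seed, m, rng):
    """Toy Goldreich-style PRG: each output bit applies a fixed 5-ary
    predicate to a public random 5-subset of the seed bits.
    The predicate x1 ^ x2 ^ x3 ^ (x4 & x5) is an illustrative choice."""
    n = len(seed)
    subsets = [rng.sample(range(n), 5) for _ in range(m)]  # public randomness
    out = []
    for S in subsets:
        b = [seed[i] for i in S]
        out.append(b[0] ^ b[1] ^ b[2] ^ (b[3] & b[4]))
    return out

rng = random.Random(2024)
stream = goldreich_prg([1, 0, 1, 1, 0, 1, 0, 1], 16, rng)
assert len(stream) == 16 and set(stream) <= {0, 1}
```

In the real construction the seed is long, the subsets are fixed once and published, and the choice of predicate is exactly what the security criteria discussed below constrain.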

The security of this PRG has been widely investigated, and it was proven secure against a large class of attacks. However, by applying symmetric cryptanalysis techniques such as guess-and-determine and Gröbner bases, we were able to find sub-exponential-time algorithms that recover the seed of this PRG. Moreover, as the predicate can be viewed as a Boolean function, we also derive new security criteria bearing on the predicate used in this PRG. Finally, the complexity of our algorithms is measured in a non-asymptotic way, leading to a better understanding of the security of this pseudorandom generator.

The results of this work were published at ASIACRYPT 2018, and a full version of the paper is available online.

Joint work with Geoffroy Couteau, Aurélien Dupin, Pierrick Méaux and Mélissa Rossi.

# Monday January 28, 2019

S-boxes are components used by many symmetric cryptographic algorithms, most prominently the AES. They are small non-linear functions whose properties are crucial in ensuring the security of the whole cipher against some attacks.

In this talk, I will present my work on the decomposition of S-boxes. Intuitively, a decomposition is a “simple” algorithm evaluating the S-box. The knowledge of a decomposition can allow a significantly improved implementation but, on the other hand, it can also reveal cryptographic flaws as particular decompositions can allow attacks. In fact, we could imagine that a malevolent designer would purposefully hide such a “backdooring” decomposition in their S-box. As an illustration, I will present some results on the S-box used by the last two Russian standards in symmetric cryptography. It turns out to exhibit very strong algebraic properties that warrant further investigation into the security of these algorithms.

Then, I will present a more surprising application of these techniques to a more theoretical problem. We were able to find a decomposition of the only known solution of a long-standing open problem in the design of S-boxes, namely the big APN problem (which deals with constructing S-boxes with optimal resistance against differential attacks). This allowed us to find the first generalizations of this function, though we could not use them to solve the big APN problem. On the bright side, the type of decomposition we found in this permutation has a deeper meaning than we first expected, as it turns out to play a key role in “CCZ-equivalence” (a form of equivalence between S-boxes).

I will conclude with several open problems raised by this line of research.

# Thursday December 13, 2018

The elliptic curve method (ECM) is a factorization algorithm widely used in cryptography. It was proposed in 1985 by Lenstra and improved a couple of months later by Montgomery using well-chosen families of curves.

The algorithm succeeds in factoring if the elliptic curves it uses have desirable smoothness properties. These smoothness properties can be quantified using the mean valuation of a prime $\ell$ in the cardinality of an elliptic curve $E$ modulo random primes.
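To make the notion concrete, here is a naive computation of this mean valuation for $\ell = 2$ on a toy curve over a few small primes. The curve $y^2 = x^3 + x + 1$ is an arbitrary illustrative choice (nonsingular modulo the primes used), and the exhaustive point count is only feasible for tiny $p$.

```python
def curve_order(a, b, p):
    # naive point count on E: y^2 = x^3 + a*x + b over F_p (O(p), tiny p only)
    sq = {}
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    n = 1  # point at infinity
    for x in range(p):
        n += sq.get((x * x * x + a * x + b) % p, 0)
    return n

def valuation(l, n):
    v = 0
    while n % l == 0:
        n //= l
        v += 1
    return v

# mean valuation of l = 2 in #E(F_p) over a few small primes
primes = [5, 7, 11, 13, 17, 19, 23, 29]
vals = [valuation(2, curve_order(1, 1, p)) for p in primes]
mean_v2 = sum(vals) / len(vals)
```

ECM-friendly families are, roughly, curves engineered so that this mean valuation is as large as possible for small primes $\ell$.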

In this joint work with Razvan Barbulescu, we present a brief survey of previous works in this regard and then present two approaches to find these ECM-friendly elliptic curves. Combined with recent works of [1] and [2], we obtain a complete list of 1540 such families.

# Thursday November 29, 2018

The Elliptic Curve Method (ECM) is the asymptotically fastest known method for finding medium-size prime factors of large integers. It is also a core ingredient of the Number Field Sieve (NFS) for integer factorization and its variants for computing discrete logarithms. In NFS and its variants, ECM is extensively used in the cofactorization step (a subroutine of the sieving phase), which consists of factoring into primes billions of composite integers of around a hundred bits.

In this talk, we propose a novel implementation of ECM in the context of NFS that requires fewer modular multiplications than any other publicly available implementation. The main ingredients are: Dixon and Lenstra’s idea of combining primes, fast point tripling, optimal double-base decompositions and Lucas chains, and a good mix of Edwards and Montgomery representations.

# Friday September 7, 2018

Counting points on algebraic curves has drawn a lot of attention due to its many applications from number theory and arithmetic geometry to cryptography and coding theory. In this thesis, we focus on counting points on hyperelliptic curves over finite fields of large characteristic $p$. In this setting, the most suitable algorithms are currently those of Schoof and Pila, because their complexities are polynomial in $\log p$. However, their dependency in the genus $g$ of the curve is exponential, and this is already painful even in genus 3. Our contributions mainly consist of establishing new complexity bounds with a smaller dependency in $g$ of the exponent of $\log p$. For hyperelliptic curves, previous work showed that it was quasi-quadratic, and we reduced it to a linear dependency. Restricting to more special families of hyperelliptic curves with explicit real multiplication (RM), we obtained a constant bound for this exponent. In genus 3, we proposed an algorithm based on those of Schoof and Gaudry-Harley-Schost whose complexity is prohibitive in general, but turns out to be reasonable when the input curves have explicit RM. In this more favorable case, we were able to count points on a hyperelliptic curve defined over a 64-bit prime field.

# Monday June 11, 2018

There have been recent advances in solving the finite extension field discrete logarithm problem as it arises in the context of pairing-friendly elliptic curves. This has led to the abandonment of approaches based on supersingular curves of small characteristic, and to the reconsideration of the field sizes required for implementation based on non-supersingular curves of large characteristic. This has resulted in a revision of recommendations for suitable curves, particularly at higher levels of security. Indeed, for AES-256 levels of security the BLS48 curves have been suggested, and demonstrated to be superior to other candidates. These curves have an embedding degree of 48. The well-known taxonomy of Freeman, Scott and Teske only considered curves with embedding degrees up to 50. Given some uncertainty around the constants that apply to the best discrete logarithm algorithm, it would seem prudent to push a little beyond 50. In this note we announce the discovery of a new family of pairing-friendly elliptic curves which includes a new construction for a curve with an embedding degree of 54.

Joint work with Michael Scott (Miracl).
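For reference, the embedding degree of a curve with a subgroup of prime order $r$ over $\mathbb{F}_p$ is simply the multiplicative order of $p$ modulo $r$, which a short helper can compute (the parameters below are toy values, not the BLS48 ones):

```python
def embedding_degree(p, r):
    # smallest k >= 1 with r | p^k - 1, i.e. the order of p modulo r
    # (assumes gcd(p, r) == 1, otherwise the loop never terminates)
    k, t = 1, p % r
    while t != 1:
        t = (t * p) % r
        k += 1
    return k

assert embedding_degree(19, 7) == 6   # 19 ≡ 5 is a generator mod 7
assert embedding_degree(2, 7) == 3    # 2^3 = 8 ≡ 1 (mod 7)
```

The embedding degree controls into which extension field $\mathbb{F}_{p^k}$ the pairing maps, hence the trade-off between pairing efficiency and discrete log security that the talk revisits.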

# Tuesday June 5, 2018

Since 1960 and the result of Karatsuba, we know that the complexity of multiplication (of integers or polynomials) is sub-quadratic: given a ring $R$, the product in $R[X]$ of polynomials $a_0 + a_1 X$ and $b_0 + b_1 X$, for any $a_0$, $a_1$, $b_0$ and $b_1$ in $R$, can be computed with three and not four multiplications over $R$: $(a_0 + a_1 X) (b_0 + b_1 X) = m_0 + (m_2 - m_0 - m_1)X + m_1 X^2$, with the three multiplications $m_0 = a_0b_0$, $m_1 = a_1b_1$, and $m_2 = (a_0 + a_1)(b_0 + b_1)$. In the same manner, Strassen’s algorithm allows one to multiply two $2n\times 2n$ matrices with only seven products of $n\times n$ matrices.
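The three-multiplication trick above translates directly into code (for degree-1 polynomials with, say, integer coefficients):

```python
def karatsuba_linear(a0, a1, b0, b1):
    # (a0 + a1*X) * (b0 + b1*X) with three multiplications instead of four
    m0 = a0 * b0
    m1 = a1 * b1
    m2 = (a0 + a1) * (b0 + b1)
    return (m0, m2 - m0 - m1, m1)   # coefficients of 1, X, X^2

# sanity check against the schoolbook product
c0, c1, c2 = karatsuba_linear(3, 2, 5, 4)
assert (c0, c1, c2) == (3 * 5, 3 * 4 + 2 * 5, 2 * 4)
```

Applied recursively to polynomials split in halves, this identity yields the sub-quadratic $O(n^{\log_2 3})$ multiplication algorithm.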

The two previous examples fall in the category of bilinear maps: these are functions of the form $\Phi: K^{m} \times K^{n} \rightarrow K^{l}$, given a field $K$, linear in each variable. Among the most classical bilinear maps, we have the multiplication of polynomials, matrices, or even elements of algebraic extensions of finite fields. Given a bilinear map $\Phi$, computing the minimal number of multiplications necessary to evaluate this map is an NP-hard problem. The purpose of this thesis is to propose algorithms minimizing this number of multiplications. Two angles of attack have been studied.

The first aspect of this thesis is to study the problem of computing the bilinear complexity through its reformulation in terms of the search for matrix subspaces of a given rank. This work led to an algorithm taking into account intrinsic properties of the considered products, such as matrix or polynomial products over finite fields. This algorithm allows one to find all the possible decompositions, over $\mathbb{F}_2$, for the product of polynomials modulo $X^{5}$ and the product of $3\times 2$ by $2\times 3$ matrices.

Another aspect of this thesis was the development of asymptotically fast methods for integer multiplication. A particular family of algorithms has been proposed following an article by Fürer published in 2007. This article proposed a first algorithm, relying on the fast Fourier transform (FFT), allowing one to multiply $n$-bit integers in $O(n \log n \, 2^{O(\log^{*}n)})$, where $\log^{*}$ is the iterated logarithm function. In this thesis, an algorithm relying on a number-theoretic conjecture has been proposed, involving the use of the FFT and generalized Fermat primes. With a careful complexity analysis of this algorithm, we obtain a complexity of $O(n \log n \, 4^{\log^{*}n})$.

# Monday June 4, 2018

In this talk, I will explain techniques to achieve fast and secure implementations. I will introduce a vector unit, which is a part of a CPU, and ways to utilize it. I will also briefly emphasize the importance of and ways to prevent software side-channel attacks. Then, I will explain how to optimize scalar multiplication in Curve41417 and polynomial multiplication in Streamlined NTRU Prime $4591^{761}$. Karatsuba’s method plays an important role in the former case, while combinations of Karatsuba’s method and Toom–Cook’s method are crucial in the latter case. Both implementations utilize the CPU’s vector unit.

# Friday June 1, 2018

# Thursday March 29, 2018

This talk focuses on the relation collection phase (or harvesting) in index-calculus algorithms for computing discrete logs on an algebraic curve. I will present results from my thesis improving two different types of harvesting. First, in the general case, we propose a sieving approach, a time/memory trade-off that can improve the practical running time of the algorithms. Second, when the target curve is defined over an extension of a finite field, harvesting can be presented as the resolution of multivariate systems. When the curve is hyperelliptic and the field has characteristic two, we show that several equations in the involved systems are “squares”. We then exploit this property to improve the asymptotic complexity of the harvesting phase. The impact on practical computations allows us to estimate the running time of a complete index calculus on a curve whose associated group has approximately $2^{184}$ elements. In terms of practical difficulty, this compares to recent records over finite fields.

Joint work with Vanessa Vitse and Jean-Charles Faugère.

# Thursday February 1, 2018

We study the geometry of units and ideals of cyclotomic rings, and derive an algorithm to find a mildly short vector in any given cyclotomic ideal lattice in quantum polynomial time, under some plausible number-theoretic assumptions. More precisely, it finds an approximation of the shortest vector by a subexponential factor. This result exposes an unexpected hardness gap between these structured lattices and general lattices: the best known polynomial time generic lattice algorithms can only reach an exponential approximation factor. Following a recent series of attacks, these results call into question the hardness of various problems over structured lattices, such as Ideal-SVP and Ring-LWE, upon which relies the security of a number of cryptographic schemes.

This is a joint work with Ronald Cramer, Léo Ducas, Chris Peikert, and Oded Regev.

# Monday December 11, 2017

A folklore conjecture (Cascudo-Cramer-Xing-Yang 2012, Lemma IV.4, and Ballet-Pieltant-R.-Sijsling 2017 §3 for counterexamples) states that, for every prime $p$ and even integer $2t$, there exists a family of function fields over the prime field $\mathbb{F}_{p}$ such that:

- (i) the genera tend to infinity;
- (ii) the ratio of two successive genera tends to 1 (density condition) and
- (iii) after field extension to $\mathbb{F}_{p^{2t}}$, the asymptotic number of points reaches the Ihara bound.

The only cases known so far are for $t = 1$, with the classical modular curves $X_{0}(N)$ over $\mathbb{F}_{p}$. We present here an explicit family of Shimura curves solving the case $p = 3$ and $2t = 6$.

The talk will recall facts on the descent of covering maps, and illustrate recent numerical methods of Sijsling-Voight.

# Tuesday November 7, 2017

# Thursday September 21, 2017

# Wednesday July 5, 2017

We recap the theory of modular polynomials and isogeny graphs of (ordinary) elliptic curves and give a natural generalisation to abelian $g$-folds (e.g. the Jacobian of a genus $g$ curve). This is joint work with Marco Streng.

We then recap the algorithm of Schoof, Elkies, and Atkin for efficient point counting on elliptic curves over finite fields, and show how to generalise the algorithm to genus 2 curves over finite fields (with maximal real multiplication), using our generalisation of modular polynomials to genus 2. Under some heuristic assumptions, this is the fastest known algorithm to count points on genus 2 curves over large prime fields. This is joint work with Sean Ballentine, Aurore Guillevic, Elisa Lorenzo-Garcia, Maike Massierer, Ben Smith, and Jaap Top.

# Thursday November 10, 2016

In this talk, we present recent results about fast algorithms for fundamental operations for matrices over K[X] (univariate polynomials). We mainly focus on the computation of normal forms, obtained by transforming the input matrix via elementary row operations. Depending on a degree measure specified by a “shift”, these normal forms range from the Hermite form, which is triangular, to the Popov form, which minimizes the degrees of the rows.

For Popov or Hermite forms of $m \times m$ nonsingular matrices with entries of degree $<d$, the previous fastest algorithms use $\widetilde O(m^\omega d)$ operations, where $\omega$ is the exponent of K-linear algebra. We will discuss improvements in three directions: (1) improving the cost bound to $\widetilde O(m^\omega D/m)$, where $D/m$ is a type of average degree of the input matrix; (2) derandomizing Hermite form computation [*]; (3) achieving fast computation for arbitrary shifts. The last point will lead us to consider the problem of solving systems of linear modular equations over $K[X]$, which generalizes Hermite–Padé approximation and rational reconstruction.

[*] joint work with George Labahn and Wei Zhou (U. Waterloo, ON, Canada).

# Friday June 17, 2016

# Tuesday April 19, 2016

# Monday November 23, 2015

Gröbner bases are an important tool in computational ideal theory, and the term ordering plays an important role in their theory. In particular, the common strategy to solve a polynomial system is to first compute a Gröbner basis of the ideal defined by the system w.r.t. the DRL ordering, change its ordering to LEX, and perhaps further convert the LEX Gröbner basis to triangular sets.

Given a zero-dimensional ideal $I \subset \mathbb{K}[x_{1},…,x_{n}]$ of degree $D$, the transformation of the ordering of its Gröbner basis from DRL to LEX turns out to be the bottleneck of the whole solving process. Thus it is of crucial importance to design efficient algorithms to perform this change of ordering.
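This DRL-to-LEX conversion is exactly what an FGLM-type change of ordering performs; sympy exposes one as `GroebnerBasis.fglm`, illustrated here on a small zero-dimensional ideal (a toy example, not one of the large instances from the talk):

```python
from sympy import symbols, groebner

x, y = symbols('x y')
F = [x**2 + y**2 - 2, x*y - 1]                # zero-dimensional ideal, degree 4
G_drl = groebner(F, x, y, order='grevlex')    # DRL = graded reverse lex
G_lex = G_drl.fglm('lex')                     # change of ordering via FGLM
# the result agrees with a direct LEX Groebner basis computation
assert G_lex == groebner(F, x, y, order='lex')
```

FGLM works by linear algebra in the quotient algebra $\mathbb{K}[x_1,\dots,x_n]/I$, which is why the sparsity of the multiplication matrices discussed below governs its cost.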

In this talk we present several efficient methods for the change of ordering which take advantage of the sparsity of multiplication matrices in the classical FGLM algorithm. Combining all these methods, we propose a deterministic top-level algorithm that automatically detects which method to use depending on the input. As a by-product, we have a fast implementation that is able to handle ideals of degree over 60000. Such an implementation outperforms those of Magma and Singular, as shown by our experiments.

First, for the shape position case, two methods are designed based on the Wiedemann algorithm: the first is probabilistic, and its complexity to complete the change of ordering is $O(D(N_{1}+n \log(D)^{2}))$, where $N_{1}$ is the number of nonzero entries of a multiplication matrix; the other is deterministic and computes the LEX Gröbner basis of $\sqrt I$ via the Chinese Remainder Theorem. Then, for the general case, the designed method relies on the Berlekamp–Massey–Sakata algorithm from coding theory to handle multi-dimensional linearly recurring relations. Complexity analyses of all proposed methods are also provided.

Furthermore, for generic polynomial systems, we present an explicit formula estimating the sparsity of one main multiplication matrix, and prove that its construction comes for free. With the asymptotic analysis of this sparsity, we are able to show that for generic systems the complexity above becomes $O(c\sqrt{6/(n\pi)}\,D^{2+(n-1)/n})$.

This talk is based on joint work with Jean-Charles Faugère.

# Monday November 16, 2015

We use a version of Viro’s method to construct polynomial systems with many positive solutions. We show that if a polytope admits a unimodular regular triangulation whose dual graph is bipartite, then there exists an unmixed polynomial system with this polytope as Newton polytope which is maximally positive, in that all its toric complex solutions are in fact real positive solutions. We present classical families of polytopes which admit such triangulations. These examples give evidence in favor of a conjecture due to Bihan which characterizes affine relations inside the support of a maximally positive polynomial system. We also use our construction to get polynomial systems with many positive solutions by considering a simplicial complex contained in a regular triangulation of the cyclic polytope. This is joint work with Pierre-Jean Spaenlehauer (INRIA Nancy).

# Wednesday October 14, 2015

Kedlaya’s algorithm computes the zeta function of a hyperelliptic curve over a finite field using the theory of $p$-adic cohomology. We have recently developed and implemented a generalisation of this algorithm that works for (almost) any curve. First, we will outline the theory involved. Then we will describe our algorithm and illustrate the main ideas by giving some examples. Finally, if time permits, we will talk about some current and future work of ours with various coauthors on improving the algorithm and applying it in other settings.

# Wednesday September 9, 2015

Advanced, bespoke cryptographic applications are emerging for large-scale use by the general population. A good example is cryptographic electronic voting systems, which make extensive use of sophisticated cryptographic techniques to help attain strong security properties (such as secrecy and verifiability) that are required due to the failure-critical nature of public elections. Recently in Australia, cryptographic e-voting systems were used in two state elections: iVote, an Internet voting system, was used in New South Wales, and vVote, a polling place voting system, was used in Victoria.

However, developing and deploying such complex and critical cryptographic applications involves a range of engineering challenges that have yet to be addressed in practice by industry and the research community. As with any complex, large-scale system, there were barriers to applying appropriately rigorous engineering practices in the two Australian e-voting systems. But since these e-voting systems are critical national infrastructure, such engineering practices are needed to provide high assurance of the systems and their required properties.

In this talk I will discuss some of the engineering challenges, practical barriers and issues, and what can be learned from the two recent Australian e-voting experiences.

# Thursday February 5, 2015

# Thursday November 20, 2014

We consider the discrete logarithm problem in the degree-0 Picard group of non-hyperelliptic curves of genus $g > 3$. A well known method to tackle this problem is by index calculus. In this talk we will present two algorithms based on the index calculus method. In both of them linear systems of small degree are generated and then used to produce relations. Analyzing these algorithms leads to several geometric questions. We will discuss some of them in more detail and state further open problems. At the end we will show some experimental results.

# Thursday November 20, 2014

# Monday November 17, 2014

Nowadays, when we want to factor large numbers, we use sieve algorithms like the number field sieve or the quadratic sieve. In these algorithms, there is an expensive part called the relation collection step. In this step, one searches a wide set of numbers to identify those which are smooth, i.e. integers all of whose prime divisors are less than a particular bound. In this search, a first step finds the smallest prime divisors with a sieving procedure, and a second step tries to factor the remaining integers which are no longer divisible by small primes, using factoring methods like P-1, P+1 and ECM. This talk will introduce a procedure, following Kleinjung, to optimize the second step.
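A toy sketch of the two-step test described above (not Kleinjung's procedure): trial division stands in for the sieving step, and Pollard's rho stands in for P-1/P+1/ECM when factoring the cofactor. The bounds `B1` (sieve bound) and `B2` (large-prime bound) are illustrative.

```python
from math import gcd

def remove_small_primes(n, bound):
    """First step: strip all prime factors below `bound`
    (trial division stands in for the sieving procedure)."""
    for p in range(2, bound):
        while n % p == 0:
            n //= p
    return n

def is_probable_prime(n):
    """Miller-Rabin with fixed bases; adequate for this toy range."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def pollard_rho(n):
    """Toy stand-in for the second step's factoring methods (P-1, P+1, ECM)."""
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + 1) % n
        y = (y * y + 1) % n
        y = (y * y + 1) % n
        d = gcd(abs(x - y), n)
    return d

def cofactor_is_smooth(n, B1, B2):
    """Sieve out primes < B1, then check that the remaining cofactor
    splits entirely into primes below the large-prime bound B2."""
    stack = [remove_small_primes(n, B1)]
    while stack:
        m = stack.pop()
        if m == 1:
            continue
        if is_probable_prime(m):
            if m >= B2:
                return False
            continue
        d = pollard_rho(m)
        stack += [d, m // d]
    return True
```

In the real relation collection step the second stage dominates the cost for numbers that survive sieving, which is exactly the part the talk optimizes.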

# Thursday October 23, 2014

The number field sieve (NFS) is the fastest publicly known algorithm for factoring RSA moduli. We show how the post sieving step, a compute-intensive part of the relation collection phase of NFS, can be farmed out to a graphics processing unit. Our implementation on a GTX 580 GPU, which is integrated with a state-of-the-art NFS implementation, can serve as a cryptanalytic co-processor for several Intel i7-3770K quad-core CPUs simultaneously. This allows those processors to focus on the memory-intensive sieving and results in more useful NFS-relations found in less time.

# Thursday September 11, 2014

In 1965 Buchberger introduced a first algorithmic approach to the computation of Groebner Bases. Over the last decades optimizations to this basic attempt were found. In this talk we discuss two main aspects of the computation of Groebner Bases: Predicting zero reductions is essential to keep the computational overhead and memory usage low. We show how Faugère’s idea, initially presented in the F5 algorithm, can be generalized and further improved. The second part of this talk is dedicated to the exploitation of the algebraic structures of a Groebner Basis. Thus we are not only able to replace polynomial reduction by linear algebra (Macaulay matrices, Faugère’s F4 algorithm), but we can also specialize the Gaussian Elimination process for our purposes.

# Thursday July 3, 2014

To help minimise the running time of the number field sieve, it is desirable to select polynomials with balanced degrees in the polynomial selection phase. I will discuss algorithms for generating polynomial pairs with this property: those based on Montgomery’s approach, which reduce the problem to the construction of small modular geometric progressions; and an algorithm which draws on ideas from coding theory.

# Thursday June 19, 2014

# Thursday April 24, 2014

# Thursday March 13, 2014

Given a linear/affine space of matrices $E$ with real entries, a data matrix $U\in E$ and a target rank $r$, the Structured Low-Rank Approximation Problem consists in computing a matrix $M\in E$ which is close to $U$ (with respect to the Frobenius norm) and has rank at most $r$. This problem appears with different flavors in a wide range of applications in Engineering Sciences and symbolic/numeric computations.

We propose an SVD-based numerical iterative method which converges locally towards such a matrix $M$. This iteration combines features of the alternating projections algorithm and of Newton’s method, leading to a proven local quadratic rate of convergence under mild transversality assumptions. We also present experimental results which indicate that, for some range of parameters, this general algorithm is competitive with numerical methods for approximate univariate GCDs and low-rank matrix completion (which are instances of Structured Low-Rank Approximation).

In a second part of the talk, we focus on the algebraic structure and on exact methods to compute symbolically the nearest structured low-rank matrix $M$ to a given matrix $U\in E$ with rational entries. We propose several ways to compute the algebraic degree of the problem and to recast it as a system of polynomial equations in order to solve it with algebraic methods.

The first part of the talk is a joint work with Eric Schost, the second part is a joint work with Giorgio Ottaviani and Bernd Sturmfels.

# Wednesday February 26, 2014

# Thursday December 19, 2013

Realizing in software the algebraic closure of a finite field $\mathbb{F}_{p}$ is equivalent to constructing so called “compatible lattices of finite fields”, i.e. a collection of finite extensions of $\mathbb{F}_{p}$ together with embeddings $\mathbb{F}_{p^{m}} \subset \mathbb{F}_{p^{n}}$ whenever $m\mid n$.

There are known algorithms to construct compatible lattices in deterministic polynomial time, but the status of the most practically efficient algorithms is still unclear. This talk will review the classical tools available, then present some new ideas towards the efficient construction of compatible lattices, possibly in quasi-optimal time.

# Friday December 6, 2013

# Thursday November 28, 2013

# Thursday November 7, 2013

# Friday July 19, 2013

# Wednesday July 17, 2013

# Tuesday July 16, 2013

# Monday July 15, 2013

Computing discrete logarithms in large cyclic groups using index-calculus-based methods, such as the number field sieve or the function field sieve, requires solving large sparse systems of linear equations modulo the group order. In this talk, we present how we use the Residue Number System (RNS) arithmetic to accelerate modular operations. The first part deals with the FFS case, where the matrix contains only small values. The second part discusses how we treat for NFS-DL, the particular dense columns corresponding to Schirokauer’s maps, where the values are large.
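A minimal illustration of the Residue Number System idea the talk builds on (not the implementation from the talk): a large integer is represented by its residues modulo pairwise-coprime moduli, addition and multiplication act component-wise on small words, and the integer is recovered by the Chinese Remainder Theorem. The moduli below are illustrative.

```python
from math import prod

# Pairwise-coprime moduli (here three Mersenne primes, chosen for illustration).
MODULI = (2**13 - 1, 2**17 - 1, 2**19 - 1)

def to_rns(x):
    """Represent x by its residues modulo each RNS channel."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    """Addition is independent per channel: no carries between words."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Multiplication likewise acts component-wise."""
    return tuple(ai * bi % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    """CRT reconstruction, valid for values below the product of the moduli."""
    M = prod(MODULI)
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)
    return x % M
```

The absence of carry propagation between channels is what makes RNS attractive for parallel modular arithmetic in the linear algebra step.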

# Tuesday May 28, 2013

# Friday April 5, 2013

# Thursday March 28, 2013

# Friday March 15, 2013

The decision Learning With Errors (LWE) problem, introduced by Regev in 2005, has proven an invaluable tool for designing provably secure cryptographic protocols. We show that LWE is classically at least as hard as standard worst-case lattice problems, even with polynomial modulus. Previously this was only known under quantum reductions.

Our techniques capture the tradeoff between the dimension and the modulus of LWE instances, leading to a much better understanding of the landscape of the problem. The proof is inspired by techniques from several recent cryptographic constructions, most notably fully homomorphic encryption schemes.

This work has been done with Z. Brakerski, C. Peikert, O. Regev and D. Stehlé.
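For concreteness, a toy generator of decision-LWE samples (parameters far too small for any security, chosen only to show the shape of the problem): each sample is a pair $(a, \langle a, s\rangle + e \bmod q)$ with $a$ uniform, $s$ a fixed secret and $e$ a small error.

```python
import random

def lwe_samples(n=8, q=97, num=5, error_bound=2, seed=1):
    """Generate `num` toy LWE samples (a, <a,s> + e mod q).
    All parameters are illustrative, not cryptographic."""
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]       # the secret vector
    samples = []
    for _ in range(num):
        a = [rng.randrange(q) for _ in range(n)]   # uniform public vector
        e = rng.randint(-error_bound, error_bound) # small error term
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples
```

The decision problem asks to distinguish such pairs from uniformly random ones; the hardness reductions in the talk relate this to worst-case lattice problems.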

# Friday February 22, 2013

# Wednesday February 13, 2013

# Thursday January 31, 2013

The hardness of the (hyper)elliptic curve discrete logarithm problem over extension fields lies in the trace zero variety. A compact representation of the points of this abelian variety is needed in order to accurately assess the hardness of the discrete logarithm problem there. Such representations have been proposed by Lange for genus 2 curves and by Silverberg for elliptic curves. We present a new approach for elliptic curves. It is made possible by a new equation for the variety derived from Semaev’s summation polynomials. The new representation is optimal in the sense that it reflects the size of the group, it is compatible with the structure of the variety, and it can be computed efficiently.

# Thursday January 17, 2013

# Thursday December 20, 2012

Classifying lattice walks in restricted lattices is an important problem in enumerative combinatorics. Recently, computer algebra methods have been used to explore and solve a number of difficult questions related to lattice walks. In this talk, we will give an overview of recent results on structural properties and explicit formulas for generating functions of walks in the quarter plane, with an emphasis on the algorithmic methodology.

# Thursday December 20, 2012

Real solving polynomial systems is a topical issue for many applications. Exact algorithms, using computer algebra techniques, have been deployed to answer various specifications such as deciding the existence of solutions, answering connectivity queries, or one-block real quantifier elimination. In this talk, we will review some recent on-going works whose aims are to exploit algebraic and geometric properties in order to provide faster algorithms in theory and in practice.

# Thursday November 29, 2012

# Friday November 23, 2012

# Thursday November 15, 2012

Polynomial systems of equations appearing in cryptography tend to have special structures that simplify their resolution. In this talk, we discuss a class of polynomial systems arising after deploying a multivariate polynomial equation over an extension field into a system of polynomial equations over the ground prime field (a technique commonly called Weil descent).

We provide theoretical and experimental evidence that the degrees of regularity of these systems are very low, in fact only slightly larger than the maximal degrees of the equations.

We then discuss cryptographic applications of (particular instances of) these systems to the hidden field equation (HFE) cryptosystem, to the factorization problem in $\operatorname{SL}(2, 2^{n})$ and to the elliptic curve discrete logarithm over binary fields. In particular, we show (under a classical heuristic assumption in algebraic cryptanalysis) that an elliptic curve index calculus algorithm due to Claus Diem has subexponential time complexity $O(2^{cn^{2/3} \log n})$ over the binary field $\operatorname{GF}(2^{n})$, where $c$ is a constant smaller than 2.

Based on joint work with Jean-Charles Faugère, Ludovic Perret, Jean-Jacques Quisquater and Guénaël Renault.

# Tuesday November 6, 2012

# Thursday September 27, 2012

Numerical linear algebra operations are ubiquitous in many challenging academic and industrial applications. This talk will give an overview of the evolution of numerical linear algebra algorithms and software, and the major changes such algorithms have undergone following the breakthroughs in the hardware of high performance computers. A specific focus of this talk will be on communication avoiding algorithms. This is a new and particularly promising class of algorithms, which was introduced in late 2008 as an attempt to address the exponentially increasing gap between computation time and communication time - one of the major challenges faced today by the high performance computing community. I will also discuss novel preconditioning techniques for accelerating the convergence of iterative methods.

# Friday July 20, 2012

Taking square roots over finite fields is a classical number-theoretical problem that has captured the attention of researchers across the centuries. Nowadays, the computation of square roots is especially relevant for elliptic curve cryptosystems, where hashing an arbitrary message to a random point that belongs to a given elliptic curve, point compression and point counting over elliptic curves are among its most relevant cryptographic applications.

In this talk, we present two new algorithms for computing square roots over finite fields of the form $\mathbb{F}_{q}$, with $q = p^{m}$ and where $p$ is a large odd prime and $m$ an even integer. The first algorithm is devoted to the case when $q \equiv 3 \pmod 4$, whereas the second handles the complementary case when $q \equiv 1 \pmod 4$. We include numerical comparisons showing the efficiency of our algorithms over the ones previously published in the open literature.
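The classical special case underlying the first of these two regimes can be stated in two lines: when $q \equiv 3 \pmod 4$, a square root of a quadratic residue $a$ is $a^{(q+1)/4}$, since by Euler's criterion $\left(a^{(q+1)/4}\right)^2 = a \cdot a^{(q-1)/2} = a$. A sketch over a prime field (the talk's algorithms target extension fields $\mathbb{F}_{p^m}$ with $m$ even, which this does not cover):

```python
def sqrt_3mod4(a, q):
    """Square root of a modulo q for q = 3 (mod 4), via one exponentiation.
    Raises ValueError if a is not a quadratic residue."""
    assert q % 4 == 3
    r = pow(a, (q + 1) // 4, q)
    if r * r % q != a % q:
        raise ValueError("a is not a quadratic residue mod q")
    return r
```

The complementary case $q \equiv 1 \pmod 4$ has no such one-exponentiation formula, which is why it is handled by a separate algorithm in the talk.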

# Friday June 29, 2012

We describe a unified framework to search for optimal formulae evaluating bilinear or quadratic maps. This framework applies to polynomial multiplication and squaring, finite field arithmetic, matrix multiplication, etc. We then propose a new algorithm to solve problems in this unified framework. With an implementation of this algorithm, we prove the optimality of various published upper bounds, and find improved upper bounds.
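A familiar instance of the kind of formula being searched for (context only, not the talk's algorithm): Karatsuba's rule evaluates the bilinear map "product of two linear polynomials" with 3 multiplications instead of the naive 4, and this count is known to be optimal.

```python
def karatsuba_deg1(a0, a1, b0, b1):
    """(a0 + a1*x)(b0 + b1*x) = c0 + c1*x + c2*x^2
    using 3 bilinear multiplications instead of 4."""
    p0 = a0 * b0
    p2 = a1 * b1
    p1 = (a0 + a1) * (b0 + b1)   # third and last multiplication
    return p0, p1 - p0 - p2, p2
```

Proving that no formula with fewer multiplications exists, for this and larger maps, is exactly the kind of optimality statement the framework addresses.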

# Friday June 29, 2012

In this paper we prove some divisibility properties of the cardinality of elliptic curves modulo primes. These proofs explain the good behavior of certain parameters when using Montgomery or Edwards curves in the setting of the elliptic curve method (ECM) for integer factorization. The ideas of the proofs help us to find new families of elliptic curves with good division properties which increase the success probability of ECM.

# Wednesday June 20, 2012

# Wednesday April 25, 2012

# Tuesday April 3, 2012

# Friday March 30, 2012

There is a strong interplay between computing efficiently Gröbner bases and linear algebra. In this talk, we focus on several aspects of this convergence:

- Algorithmic point of view: algorithms for efficiently computing Gröbner bases (F4, F5, FGLM, …) rely heavily on efficient linear algebra.
- The matrices generated by these algorithms have unusual properties: sparse, almost block triangular. We present a dedicated algorithm for computing Gaussian elimination of Gröbner bases matrices.
- By taking advantage of the sparsity of multiplication matrices in the classical FGLM algorithm we can design an efficient algorithm to change the ordering of a Gröbner basis. The algorithm is related to a multivariate generalization of the Wiedemann algorithm. When the matrices are not sparse, for generic systems, the complexity is $\widetilde{O}(D^{\omega})$ where $D$ is the number of solutions and $\omega \leq 2.3727$ is the linear algebra constant.
- Mixing Gröbner bases methods and linear algebra technique for solving sparse linear systems leads to an efficient algorithm to solve Boolean quadratic equations over $\mathbb{F}_{2}$; this algorithm is faster than exhaustive search by an exponential factor

- Application point of view: for instance, a generalization of the eigenvalue problem to several matrices – the MinRank problem – is at the heart of the security of many multivariate public key cryptosystems.
- Design of C library: we present a multi-core implementation of these new algorithms; the library contains specific algorithms to compute Gaussian elimination as well as specific internal representations of matrices. The efficiency of the new software is demonstrated by showing computational results for well-known benchmarks as well as some crypto applications.

Joint works with S. Lachartre, C. Mou, P. Gaudry, L. Huot, G. Renault, M. Bardet, B. Salvy, P.J. Spaenlehauer and M. Safey El Din.

# Tuesday March 20, 2012

One of the most widely used applications of elliptic-curve cryptography is digital signatures. In this talk I will present the EdDSA signature scheme that improves upon previous elliptic-curve-based signature schemes such as ECDSA or Schnorr signatures in several ways. In particular it makes use of fast and secure arithmetic on Edwards curves, it is resilient against hash-function collisions and it supports fast batch verification of signatures. I will furthermore present performance results of Ed25519, EdDSA with a particular set of parameters for the 128-bit security level.

# Wednesday March 14, 2012

# Wednesday March 7, 2012

# Wednesday February 29, 2012

# Thursday February 2, 2012

# Friday January 20, 2012

Public-key cryptography relies on the existence of computationally hard problems. It is now widely accepted that a public-key scheme is worthless without a “security proof”, i.e., a proof that if an attacker breaks the scheme, then she solves in passing an instance of an intractable computational problem. As long as the hard problem is intractable, the scheme is secure. The most well-known hardness assumptions of this kind are probably the hardness of integer factoring, or that of taking logarithms in certain groups.

In this talk we focus on multivariate cryptography, a label covering all the (mostly public-key) schemes explicitly relying on the hardness of solving systems of polynomial equations in several variables over a finite field. The problem, even when restricted to quadratic polynomials, is well-known to be NP-complete. In the quadratic case, it is called MQ. Interestingly, most schemes in this area are not “provably secure”, and a lot of them have been broken because they relied on another, less well-known, computational assumption, the hardness of Polynomial Linear Equivalence (PLE), which is a higher-degree generalization of the problem of testing whether two matrices are equivalent.

In this talk I will present the algorithms I designed to tackle these two hard problems. I will show that 80 bits of security are not enough for MQ to be practically intractable, and I will present faster-than-before, sometimes practical algorithms for various flavors of PLE.
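A toy illustration of the MQ problem over GF(2), solved by the exhaustive search that dedicated algorithms improve upon (all names and parameter sizes here are illustrative):

```python
import itertools
import random

def random_mq(n, m, rng):
    """m random quadratic equations in n variables over GF(2).
    A term (i, j) means x_i * x_j; i == j gives a linear term x_i."""
    eqs = []
    for _ in range(m):
        quad = [(i, j) for i in range(n) for j in range(i, n)
                if rng.random() < 0.5]
        eqs.append((quad, rng.randrange(2)))
    return eqs

def evaluate(eqs, x):
    """Value of each equation at the 0/1 assignment x."""
    return [(sum(x[i] & x[j] for i, j in quad) + c) % 2 for quad, c in eqs]

def brute_force(eqs, n):
    """The exhaustive-search baseline: try all 2^n assignments."""
    for x in itertools.product((0, 1), repeat=n):
        if not any(evaluate(eqs, x)):
            return x
    return None
```

The $2^n$ cost of `brute_force` is the benchmark against which "80 bits of security are not enough" is measured.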

# Wednesday November 30, 2011

In this talk, I will present a new implementation of the Elliptic Curve Method algorithm (ECM) for graphic cards (GPU). This implementation uses Montgomery’s curve, like GMP-ECM, but uses a different implementation and a different algorithm for the scalar multiplication. As there is no modular arithmetic library (like GMP) available for GPU, it was necessary to implement a modular arithmetic using arrays of unsigned integers from scratch, while taking into account constraints of GPU programming. The code, written for NVIDIA GPUs using CUDA, was optimized for factoring 1024-bit integers.
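A sketch of the kind of from-scratch multi-word arithmetic this entails (illustrative layout, not the talk's actual code): numbers stored as little-endian arrays of 32-bit limbs, with modular addition done limb by limb with carry propagation and one conditional subtraction, assuming both operands are already reduced.

```python
MASK = 2**32 - 1

def to_limbs(x, k):
    """Split x into k little-endian 32-bit limbs."""
    return [(x >> (32 * i)) & MASK for i in range(k)]

def from_limbs(limbs):
    """Reassemble an integer from its limbs."""
    return sum(v << (32 * i) for i, v in enumerate(limbs))

def ge(a, b):
    """Compare equal-length limb arrays, most significant limb first."""
    for x, y in zip(reversed(a), reversed(b)):
        if x != y:
            return x > y
    return True

def add_mod(a, b, n):
    """(a + b) mod n for equal-length limb arrays, assuming a, b < n:
    limb-wise addition with carry, then one conditional subtraction."""
    k, carry, r = len(a), 0, []
    for i in range(k):
        s = a[i] + b[i] + carry
        r.append(s & MASK)
        carry = s >> 32
    if carry or ge(r, n):
        borrow = 0
        for i in range(k):
            d = r[i] - n[i] - borrow
            r[i] = d & MASK
            borrow = 1 if d < 0 else 0
    return r
```

On a GPU the same logic is written without data-dependent branches where possible; this Python version only shows the limb/carry bookkeeping.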

# Friday October 28, 2011

Using Galois cohomology, Schmoyer characterizes cryptographic non-trivial self-pairings of the $\ell$-Tate pairing in terms of the action of the Frobenius on the $\ell$-torsion of the Jacobian of a genus 2 curve. We apply similar techniques to study the non-degeneracy of the $\ell$-Tate pairing restricted to subgroups of the $\ell$-torsion which are maximal isotropic with respect to the Weil pairing. First, we deduce a criterion to verify whether the Jacobian of a genus 2 curve has maximal endomorphism ring. Secondly, we derive a method to construct horizontal ($\ell$, $\ell$)-isogenies starting from a Jacobian with maximal endomorphism ring. This is joint work with Ben Smith.

# Thursday October 6, 2011

We consider the problem of short division — i.e., approximate quotient — of multiple-precision integers. We present ready-to-implement algorithms that yield an approximation of the quotient, with tight and rigorous error bounds. We exhibit speedups of up to 30% with respect to GMP division with remainder, and up to 10% with respect to GMP short division, with room for further improvements. This work enables one to implement fast correctly rounded division routines in multiple-precision software tools.
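A toy version of the idea (much cruder than the algorithms in the talk): approximate the quotient by truncating the divisor to its top bits, rounded up so the result never exceeds the true quotient. A real short division also truncates the dividend and the quotient computation itself; the talk is precisely about making such error bounds tight and rigorous.

```python
def short_div(a, b, keep_bits=64):
    """Approximate a // b using a divisor truncated to its top
    `keep_bits` bits. Rounding the truncated divisor up guarantees
    the approximation is a lower bound on the true quotient."""
    s = max(b.bit_length() - keep_bits, 0)
    b_up = ((b >> s) + 1) << s   # smallest `keep_bits`-bit truncation >= b
    return a // b_up
```

Since `b_up >= b` and `b_up / b <= 1 + 2^(1 - keep_bits)`, the relative error of the approximate quotient is bounded by roughly `2^(1 - keep_bits)`.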

# Thursday September 29, 2011

In this talk, we will describe an efficient software implementation of characteristic 2 fields making extensive use of vector instruction sets commonly found in desktop processors. Field elements are represented in a split form so performance-critical field operations can be formulated in terms of simple operations over 4-bit sets. In particular, we detail techniques for implementing field multiplication, squaring, square root extraction, half-trace and inversion and present a constant-memory lookup-based multiplication strategy. We illustrate performance with timings for scalar multiplication on a 251-bit curve and compare our results with publicly available benchmarking data.
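A hedged sketch of the window idea behind such 4-bit formulations: precompute the carry-less products of one operand with every 4-bit value, then combine them nibble by nibble. Real implementations do this with vector shuffle instructions on packed registers; here plain Python integers stand in for bit vectors, and reduction by the field polynomial is omitted.

```python
def clmul(a, b):
    """Carry-less (GF(2)[x]) multiplication, bit-by-bit reference version."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def clmul_window4(a, b):
    """Same product via a 16-entry lookup table indexed by 4-bit chunks
    of b, mirroring the constant-memory table strategy."""
    table = [clmul(a, v) for v in range(16)]
    r, shift = 0, 0
    while b:
        r ^= table[b & 0xF] << shift
        b >>= 4
        shift += 4
    return r
```

The table has constant size regardless of operand length, which is what makes the lookup strategy cache-friendly and timing-attack resistant in the vectorized setting.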

# Wednesday July 13, 2011

# Thursday June 30, 2011

# Thursday June 9, 2011

# Tuesday June 7, 2011

# Monday May 30, 2011

# Monday May 30, 2011

# Wednesday May 18, 2011

In 2008, Dolgachev and Lehavi published a method for constructing $(\ell,\ell)$-isogenies of Jacobians of genus 2 curves. The heart of their construction requires only elementary projective geometry and some basic facts about abelian varieties. In this talk, we put their method into practice, and consider what needs to be done to transform their method into a practical algorithm for curves over finite fields.

# Thursday May 5, 2011

Strong lattice reduction is the key element for most attacks against lattice-based cryptosystems. Between the strongest but impractical HKZ reduction and the weak but fast LLL reduction, there have been several attempts to find efficient trade-offs. Among them, the BKZ algorithm introduced by Schnorr and Euchner in 1991 seems to achieve the best time/quality compromise in practice. However, no reasonable time complexity upper bound was known so far for BKZ. We give a proof that after $\widetilde{O}(n^{3}/k^{2})$ calls to a $k$-dimensional HKZ reduction subroutine, $\operatorname{BKZ}_{k}$ returns a basis such that the norm of the first vector is at most $\sim \gamma_{k}^{n/2(k-1)} \times \det(L)^{1/n}$. The main ingredient of the proof is the analysis of a linear dynamic system related to the algorithm.

# Thursday April 21, 2011

# Thursday April 14, 2011

# Wednesday March 30, 2011

In this talk we will give an overview of the M4RI and M4RIE libraries. These open-source libraries are dedicated to efficient linear algebra over small finite fields with characteristic two. We will present and discuss implemented algorithms, implementation issues that arise for these fields and also some ongoing and future work. We will also demonstrate the viability of our approach by comparing the performance of our libraries with the implementation in Magma.
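An illustrative sketch (not M4RI's actual code) of why machine words suit GF(2) linear algebra: storing each matrix row as one bitmask makes row addition a single XOR, so Gaussian elimination costs one word operation per eliminated row.

```python
def rank_gf2(rows):
    """Rank of a GF(2) matrix whose rows are given as integer bitmasks;
    each elimination step is a single XOR of two rows."""
    rows = list(rows)
    rank = 0
    for bit in range(max(r.bit_length() for r in rows), -1, -1):
        pivot = next((i for i in range(rank, len(rows))
                      if rows[i] >> bit & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> bit & 1:
                rows[i] ^= rows[rank]   # row addition over GF(2) = XOR
        rank += 1
    return rank
```

M4RI goes further with the "Method of the Four Russians" (table-based multi-row elimination) and cache-aware blocking, which this sketch does not attempt.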

# Thursday February 17, 2011

In the last years the capacity of modern FPGA devices has increased to the point that they can be used with success for accelerating various floating-point computations. However, ease of programmability has never rhymed with FPGAs. Obtaining good accelerations was most of the time a laborious and error-prone process.

This talk addresses the programmability of arithmetic circuits on FPGAs. We present FloPoCo, a framework facilitating the design of custom arithmetic datapaths for FPGAs. Some of the features provided by FloPoCo are: an extensive library of highly optimized arithmetic operators, a unique methodology for separating arithmetic operator design from frequency-directed pipelining of the designed circuits, and a flexible test-bench generation suite for numerically validating the designs.

The framework is reaching maturity, so far having been tested with success for designing several complex arithmetic operators including the floating-point square-root, exponential and logarithm functions. Synthesis results capture the designed operators’ flexibility: automatic optimization for several Altera and Xilinx FPGAs, a wide range of target frequencies and several precisions ranging from single to quadruple precision.

# Thursday January 27, 2011

# Friday January 21, 2011

In this talk, I will review the concept of fully homomorphic encryption, describe its possibilities and then give a construction based on principal ideals in number rings. This is joint work with Nigel Smart.

# Thursday January 20, 2011

The embedded security community has been looking at ECC ever since it was introduced. Hardware designers are now challenged by limited area (<15 kGates), low power budgets (<100 μW) and sophisticated physical attacks. This talk will report on state-of-the-art ECC implementations for ultra-constrained devices. We take a passive RFID tag as our potential target. We will discuss the known techniques to realize ECC on such devices, and the challenges we face now and in the near future.

# Thursday December 9, 2010

# Thursday December 2, 2010

# Thursday November 25, 2010

# Monday November 8, 2010

# Thursday November 4, 2010

Following the NIST recommendations for Key Management (SP800-57) and the DRAFT recommendation for the Transitioning of Cryptographic Algorithms and Key Sizes (SP 800-131), the use of RSA-1024 will be deprecated from January 1, 2011. Major vendors and enterprises will start the transition to the next minimum RSA key size of 2048 bits, which is computationally much more expensive. This talk presents an implementation of RSA-2048 on GPUs and explores the possibility of alleviating the computational overhead by offloading the operations to GPUs.

# Thursday October 14, 2010

# Friday July 30, 2010

# Thursday July 15, 2010

# Thursday July 15, 2010

# Thursday June 17, 2010

# Thursday June 3, 2010

# Friday May 28, 2010

# Wednesday April 28, 2010

# Monday April 26, 2010

# Friday April 9, 2010

Performing numerical computations, yet being able to provide rigorous mathematical statements about the obtained result, is required in many domains like global optimization, ODE solving or integration. Taylor models, which associate to a function a pair made of a Taylor approximation polynomial and a rigorous remainder bound, are a widely used rigorous computation tool. This approach benefits from the advantages of numerical methods, but also gives the ability to make reliable statements about the approximated function.

A natural idea is to try to replace Taylor polynomials with better approximations such as the minimax approximation, Chebyshev truncated series or interpolation polynomials. Despite their features, an analogue of Taylor models based on such polynomials has not yet been well established in the field of validated numerics. In this talk we propose two approaches for computing such models based on interpolation polynomials at Chebyshev nodes.

We compare the quality of the obtained remainders and the performance of the approaches to the ones provided by Taylor models. We also present two practical examples where this tool can be used: supremum norm computation of approximation errors and rigorous quadrature.

This talk is based on a joint work with N. Brisebarre.
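A small numerical experiment in the spirit of the talk: interpolate a function at Chebyshev nodes and measure the remainder empirically on a dense grid. A genuine Chebyshev model would carry a certified remainder bound instead of this sampled estimate; everything below is a plain-float sketch.

```python
import math

def cheb_nodes(n, a=-1.0, b=1.0):
    """The n Chebyshev nodes of the first kind, mapped to [a, b]."""
    return [(a + b) / 2 + (b - a) / 2 * math.cos((2 * k + 1) * math.pi / (2 * n))
            for k in range(n)]

def lagrange_eval(xs, ys, t):
    """Evaluate the interpolation polynomial through (xs, ys) at t."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (t - xj) / (xi - xj)
        total += yi * w
    return total

f = math.exp
xs = cheb_nodes(8)                     # degree-7 interpolant of exp on [-1, 1]
ys = [f(x) for x in xs]
grid = [-1 + 2 * i / 1000 for i in range(1001)]
err = max(abs(f(t) - lagrange_eval(xs, ys, t)) for t in grid)
```

For `exp` on `[-1, 1]` the classical bound `max|f^(8)| / (8! * 2^7)` predicts an error near `5e-7`, and the sampled `err` lands in that range; turning such estimates into rigorous enclosures is the subject of the talk.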

# Wednesday March 31, 2010

In order to “increase the cryptographic community’s understanding and appreciation of the difficulty of the ECDLP”, Certicom issued several elliptic-curve discrete-logarithm challenges. After these challenges were published in 1997 the easy ones of less than 100 bits were soon solved — the last one in 1999.

In 2004 all challenges of size 109 bits were also solved, but none of the 131-bit challenges has been solved so far.

Since the end of 2009 a group of several institutions has been trying to solve the challenge ECC2K-130, a discrete-logarithm problem on a Koblitz curve over the field $\mathbb{F}_{2^{131}}$. In my talk I will describe the approach taken to solve this challenge and give details of the Pollard rho iteration function. Furthermore I will give implementation details on two different platforms, namely the Cell Broadband Engine and NVIDIA GPUs.
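A toy Pollard rho for discrete logarithms in a generic cyclic group, here the multiplicative group modulo a small prime, with Floyd cycle-finding and a 3-class pseudo-random walk. The actual ECC2K-130 computation works on a Koblitz curve, uses a different (carefully tuned) iteration function and distinguished points rather than Floyd's method; this sketch only shows the collision-to-logarithm principle.

```python
def pollard_rho_dlog(g, h, p, order):
    """Solve g^x = h in (Z/p)^*, where g has prime order `order`.
    Each walk state is x = g^a * h^b with tracked exponents (a, b)."""
    def step(x, a, b):
        cls = x % 3                     # crude partition into 3 classes
        if cls == 0:
            return x * g % p, (a + 1) % order, b
        if cls == 1:
            return x * x % p, 2 * a % order, 2 * b % order
        return x * h % p, a, (b + 1) % order

    x, a, b = g * h % p, 1, 1           # start at g^1 * h^1
    X, A, B = x, a, b
    while True:
        x, a, b = step(x, a, b)         # tortoise: one step
        X, A, B = step(*step(X, A, B))  # hare: two steps
        if x == X:
            break
    # Collision g^a h^b = g^A h^B gives log * (b - B) = A - a (mod order).
    denom = (b - B) % order
    if denom == 0:
        return None                     # rare failure; restart in practice
    return (A - a) * pow(denom, -1, order) % order
```

The expected cost is about `sqrt(pi * order / 2)` group operations, which is why a 131-bit group still requires a large distributed effort.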

# Friday March 26, 2010

# Friday March 19, 2010

Isogenies are an important tool in the study of elliptic curves. As such their applications in Elliptic Curve Cryptography are numerous, ranging from point counting to new cryptographic schemes.

The problem of finding explicit formulae expressing an isogeny between two elliptic curves has been studied by many. Vélu gave formulae for the case where the curves are defined over $\mathbb{C}$; these formulae have been extended in works by Morain, Atkin and Charlap, Coley & Robbins to compute isogenies in the case where the characteristic of the field is larger than the degree of the isogeny.

The small characteristic case requires another treatment. Algorithms by Couveignes, Lercier, Joux & Lercier, Lercier & Sirvent give solutions to different instances of the problem. We review these strategies, then we present an improved algorithm based over Couveignes’ ideas and we compare its performance to the other ones.

# Friday March 5, 2010

To generate a genus 2 curve that is suitable for use in cryptography, one approach is to repeatedly pick a curve at random until its Jacobian has prime (or almost prime) order. Naively, one would expect that the probability of success is comparable to the probability that a randomly chosen integer in the corresponding Weil interval is prime (or almost prime). However, in the elliptic curve case it was observed by Galbraith and McKee that large prime factors are disfavoured. They gave a conjectural formula that quantifies this effect, along with a heuristic proof, based on the Hurwitz–Kronecker class number formula. In this talk, I will provide alternative heuristics in favour of the Galbraith–McKee formula, that seem better-suited for generalizations to curves of higher genus. I will then elaborate this for genus 2. This is joint (and ongoing) research with Hendrik Hubrechts and Alessandra Rigato.
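A back-of-the-envelope check of the naive expectation mentioned above, at toy scale: a genus-2 Jacobian order has size about $q^2$, so the naive success probability per random curve is roughly $1/\log(q^2)$. The snippet counts primes in a small window near $q^2$ and compares with that density (the talk's point is that the true probability for Jacobian orders deviates from this baseline).

```python
import math

def is_prime_td(n):
    """Trial division; fine at this toy size."""
    if n < 2 or n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

q = 10007                        # a toy "field size"
lo, width = q**2, 2000           # window at the scale of a Jacobian order
count = sum(is_prime_td(n) for n in range(lo, lo + width))
expected = width / math.log(q**2)  # naive prime-density prediction
```

Here `expected` is about 108, and the actual count in the window is of the same order, as the prime number theorem predicts for plain integers.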

# Tuesday March 2, 2010

In the first part of my talk I am going to introduce the classical class invariants of Weber, and their generalizations, as quotients of values of "Thetanullwerte", which enables one to compute them more efficiently than as quotients of values of the Dedekind $\eta$-function. Moreover, the proof that most of the invariants introduced by Weber are actually units in the corresponding ring class fields will be given, which allows one to obtain better class invariants in some cases, and to give an algorithm that computes the unit group of the corresponding ring class fields.

In the second part, using a higher-degree reciprocity law, I am going to introduce the possibility of generalizing the algorithmic approach of determining class invariants for elliptic curves with CM to determining alternative class invariant systems for principally polarized simple abelian surfaces with CM.

# Thursday February 11, 2010

# Monday February 1, 2010

# Thursday December 10, 2009

# Friday September 25, 2009

We present an FDECM algorithm allowing one to remove — if they exist — all prime factors less than $2^{32}$ from a composite input number $n$. Trying to remove those factors naively, either by trial division or by multiplying together all primes less than $2^{32}$ and then taking a GCD with this product, proves extremely slow and impractical. We will show in this article that with FDECM it costs about a hundred well-chosen elliptic curves, which can be very fast in an optimized ECM implementation with optimized $B_{1}$ and $B_{2}$ smoothness bounds. The speed varies with the size of the input number $n$. Special attention has also been paid to make our FDECM as implementation-independent as possible by choosing a widespread elliptic-curve parametrization and carefully checking all results for smoothness with Magma. Finally, we have considered possible optimizations to FDECM, first by using a rational family of parameters for ECM and then by determining when it is best to switch from ECM to GCD depending on the size of the input number $n$. To the best of our knowledge, this is the first detailed description of a fully deterministic ECM algorithm.
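For scale intuition, here is the primorial-GCD baseline the abstract dismisses, at a toy bound of $10^4$ instead of $2^{32}$ (where the primorial would occupy gigabytes, which is exactly why FDECM uses elliptic curves instead). Names and the bound are illustrative.

```python
from math import gcd, prod

def primes_below(B):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * B
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(B**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, B, p)))
    return [i for i in range(B) if sieve[i]]

def strip_small_factors(n, B=10_000):
    """Remove every prime factor of n below B using gcds against the
    primorial of B. Feasible for this toy B; hopeless for B = 2^32."""
    g = gcd(n, prod(primes_below(B)))
    while g > 1:
        n //= g
        g = gcd(n, g)   # re-take gcd to remove repeated factors
    return n
```

The inner loop re-gcds against the previous gcd so that repeated prime factors (e.g. $2^{10}$) are fully removed.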

# Wednesday September 23, 2009

# Thursday September 17, 2009

This presentation is about a kind of ordinary elliptic curves introduced by Scott at INDOCRYPT 2005. These curves have CM discriminant -3 or -1, so they have an endomorphism that reduces Miller’s loop length by half. These curves are also restricted in terms of the form of the group order; therefore, they are generated by the Cocks-Pinch method. The Cocks-Pinch method is a general method to obtain elliptic curve parameters with rho-value approximately 2. This method enables one to fix the group order, CM discriminant and embedding degree in advance, as long as they meet the requirements. The curves introduced by Scott with CM discriminant -3 were investigated by Scott and Takashima, but those with CM discriminant -1 were not. In this presentation, we show the result of generating curve parameters with CM discriminant -1 and how many of the generated parameters meet the requirements.

# Tuesday June 23, 2009

One of the primary uses of lattice reduction algorithms is to approximate short vectors in a lattice. I present a new algorithm which produces approximations of short vectors in certain lattices. It does this by generating a reduced basis of a sub-lattice which is guaranteed to contain all short vectors in the given lattice. This algorithm has a complexity which is less dependent on the size of the input basis vectors and more dependent on the size of the output vectors.

To illustrate the usefulness of the new algorithm I will show how it can be used to give new complexity bounds for factoring polynomials in $\mathbb{Z}[x]$ and reconstructing algebraic numbers from approximations.
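
To make the second application concrete, here is a textbook LLL implementation with exact rational arithmetic (not the speaker's new algorithm), used to recover the minimal polynomial $x^2 - 2$ of $\sqrt{2}$ from an 8-digit decimal approximation; the scaling constant $K$ is an arbitrary choice.

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction, recomputing Gram-Schmidt each pass
    (simple and exact, not fast)."""
    B = [[Fraction(x) for x in row] for row in basis]
    n = len(B)
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    def gram_schmidt():
        Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = B[i][:]
            for j in range(i):
                mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                v = [x - mu[i][j] * y for x, y in zip(v, Bs[j])]
            Bs.append(v)
        return Bs, mu
    k = 1
    while k < n:
        Bs, mu = gram_schmidt()
        for j in range(k - 1, -1, -1):       # size reduction
            q = round(mu[k][j])
            if q:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
        Bs, mu = gram_schmidt()
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:                                # Lovasz condition failed
            B[k], B[k - 1] = B[k - 1], B[k]
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in B]

# Reconstruct the minimal polynomial of sqrt(2): a short vector
# (c0, c1, c2, eps) encodes c0 + c1*x + c2*x^2 ~ 0 at x ~ 1.41421356.
K = 10**8
approx = 141421356                  # round(K * sqrt(2))
rows = [[1, 0, 0, K],
        [0, 1, 0, approx],
        [0, 0, 1, approx * approx // K]]
reduced = lll(rows)
```

The first reduced vector is $\pm(-2, 0, 1, -1)$, i.e. the relation $-2 + x^2 \approx 0$: the minimal polynomial $x^2 - 2$ pops out of the short vector.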

# Monday May 18, 2009

# Thursday April 30, 2009

The Maximal Determinant Problem was first posed around 1898. It asks for a square matrix of largest possible determinant, with the entries of the matrix restricted to be drawn from the set {0, 1}, or equivalently {+1, -1}.

Empirical investigations show an intriguing amount of structure in this problem, both in the numerical sequence of maximal determinants and in the corresponding maximal determinant matrices themselves. But naive brute-force search becomes infeasible beyond very small orders, due to the exponential growth of the search space.
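
For the smallest orders the naive search is still instructive. The sketch below (our own illustration, not from the talk) exhaustively searches $\{+1,-1\}$ matrices, using row and column negations to fix the first row and column, and recovers the known maximal determinants for orders 2, 3 and 4.

```python
from itertools import permutations, product

def det(m):
    # Leibniz expansion; fine for tiny orders
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= m[i][perm[i]]
        total += term
    return total

def max_det_pm1(n):
    """Exhaustive search over {+1,-1} matrices of order n.  Negating a
    row or column only flips the sign of the determinant, so the first
    row and column can be fixed to +1, shrinking the search space from
    2^(n*n) to 2^((n-1)^2) matrices."""
    best = 0
    free = (n - 1) * (n - 1)
    for bits in product((1, -1), repeat=free):
        m = [[1] * n] + [
            [1] + list(bits[r * (n - 1):(r + 1) * (n - 1)])
            for r in range(n - 1)
        ]
        best = max(best, abs(det(m)))
    return best
```

Already at order 4 the normalized search visits only $2^9 = 512$ matrices instead of $2^{16}$, and finds the Hadamard bound value 16.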

High and maximal determinant matrices are useful in applications, particularly in statistics, which is one reason why it is desirable to have at hand a means of constructing them. For certain sparse infinite subsequences of orders, constructive algorithms have been found, some relating to finite fields. However, progress over the last one hundred years has been distinctly patchy, depending on elementary number-theoretic properties of the matrix order, particularly its remainder upon division by four.

We discuss ways of setting up computations which may be feasible with current computing power and yet still yield new maximal determinant matrices that would not be accessible to a naive search.

# Thursday March 26, 2009

Originally introduced in cryptography by Menezes, Okamoto and Vanstone (1993) then Frey and Rück (1994) to attack the discrete logarithm problem over a particular class of elliptic curves, pairings have since been put to constructive use in various cryptographic protocols such as short digital signatures or identity-based encryption. However, evaluating these pairings relies heavily on finite field arithmetic, and their computation in software is still expensive. Developing hardware accelerators is therefore crucial.

In the second part of this double-talk, we will focus on the other end of the hardware design spectrum. While the first part (given by Jean-Luc Beuchat) presented a co-processor which, although quite slow, would strive to minimize the amount of hardware resources required to compute the Tate pairing, in this second part we will describe another co-processor architecture, designed to achieve much lower computation times, at the expense of hardware resources.

# Thursday February 26, 2009

Originally introduced in cryptography by Menezes, Okamoto and Vanstone (1993) then Frey and Rück (1994) to attack the discrete logarithm problem over a particular class of elliptic curves, pairings have since been put to constructive use in various cryptographic protocols such as short digital signatures or identity-based encryption. However, evaluating these pairings relies heavily on finite field arithmetic, and their computation in software is still expensive. Developing hardware accelerators is therefore crucial.

In this talk, we will present a hardware co-processor designed to accelerate the computation of the Tate pairing in characteristics 2 and 3. As the title suggests, this talk will emphasize reducing the silicon footprint (or, in our case, the usage of FPGA resources) of the circuit to ensure scalability, while trying to minimize the impact on overall performance.

# Thursday November 6, 2008

# Thursday October 16, 2008

# Thursday June 26, 2008

Triangular representations are a versatile data structure; however, even basic arithmetic operations raise difficult questions with such objects. I will present an algorithm for multiplication modulo a triangular set that relies on deformation techniques and ultimately evaluation and interpolation. It features a quasi-linear running time (without hidden exponential factor), at least in some nice cases. More or less successful applications include polynomial multiplication, operations on algebraic numbers and arithmetic in Artin-Schreier extensions.
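
The univariate core of the evaluation-and-interpolation strategy can be sketched as follows (a plain illustration over $\mathbb{Q}$, not the deformation-based algorithm of the talk): multiply two polynomials by evaluating each at enough points, multiplying pointwise, and interpolating back.

```python
from fractions import Fraction

def evaluate(poly, x):
    # Horner evaluation; poly is a low-to-high coefficient list
    acc = Fraction(0)
    for c in reversed(poly):
        acc = acc * x + c
    return acc

def interpolate(points):
    """Lagrange interpolation through (x_i, y_i); returns low-to-high
    coefficients of the unique polynomial of degree < len(points)."""
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        basis = [Fraction(1)]        # i-th Lagrange basis numerator
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if i == j:
                continue
            basis = [Fraction(0)] + basis      # multiply by x
            for k in range(len(basis) - 1):
                basis[k] -= xj * basis[k + 1]  # ... then subtract xj
            denom *= xi - xj
        for k in range(n):
            coeffs[k] += yi * basis[k] / denom
    return coeffs

def mul_by_eval(f, g):
    """Multiply polynomials by evaluating at deg(f)+deg(g)+1 points,
    multiplying the values, and interpolating the product."""
    m = len(f) + len(g) - 1
    xs = [Fraction(i) for i in range(m)]
    points = [(x, evaluate(f, x) * evaluate(g, x)) for x in xs]
    return [int(c) for c in interpolate(points)]
```

With fast multipoint evaluation and interpolation the same scheme runs in quasi-linear time, which is the regime the talk targets for triangular sets.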

# Friday May 23, 2008

http://www.jjj.de/arctan/arctanpage.html

# Wednesday May 14, 2008

# Thursday April 10, 2008

# Tuesday March 25, 2008

# Thursday February 14, 2008

# Thursday February 7, 2008

# Thursday January 31, 2008

In 1996, Coppersmith introduced two lattice reduction based techniques to find small roots in polynomial equations. One technique works for modular univariate polynomials, the other for bivariate polynomials over the integers. Since then, these methods have been used in a huge variety of cryptanalytic applications. Some applications also use extensions of Coppersmith’s techniques on more variables. However, these extensions are heuristic methods.

In this presentation, we present and analyze a new variation of Coppersmith’s algorithm on three variables over the integers. We also study the applicability of our method to short RSA exponents attacks. In addition to lattice reduction techniques, our method also uses Gröbner bases computations. Moreover, at least in principle, it can be generalized to four or more variables.

# Thursday January 17, 2008

# Wednesday December 5, 2007

# Thursday November 29, 2007

# Thursday November 8, 2007

# Monday June 18, 2007

Since their introduction in constructive cryptographic applications, pairings over (hyper)elliptic curves are at the heart of an ever increasing number of protocols. Software implementations being rather slow, the study of hardware architectures became an active research area. In this talk, we first describe an accelerator for the $\eta_T$ pairing over $\mathbb{F}_3[x]/(x^{97}+x^{12}+2)$. Our architecture is based on a unified arithmetic operator which performs addition, multiplication, and cubing over $\mathbb{F}_{3^{97}}$. This methodology allows us to design a compact coprocessor ($1888$ slices on a Virtex-II Pro 4 FPGA) which compares favorably with other solutions described in the open literature. We then describe ways to extend our approach to any characteristic and any extension field.
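
A toy software model (ours, not the hardware operator) of arithmetic in $\mathbb{F}_{3^{97}} = \mathbb{F}_3[x]/(x^{97}+x^{12}+2)$ shows why cubing deserves dedicated support in characteristic 3: by the Frobenius identity it is just a coefficient spreading followed by one reduction, whereas multiplication is a full schoolbook product.

```python
import random

M = 97
# f = x^97 + x^12 + 2, the reduction trinomial from the talk
F = [0] * (M + 1)
F[0], F[12], F[97] = 2, 1, 1

def reduce_mod_f(c):
    # generic reduction of a coefficient list modulo f, over GF(3)
    c = [x % 3 for x in c]
    for d in range(len(c) - 1, M - 1, -1):
        if c[d]:
            t = c[d]
            for i, fi in enumerate(F):
                c[d - M + i] = (c[d - M + i] - t * fi) % 3
    return c[:M]

def add(a, b):
    return [(x + y) % 3 for x, y in zip(a, b)]

def mul(a, b):
    # schoolbook product of two length-M coefficient lists, reduced
    c = [0] * (2 * M - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                c[i + j] += ai * bj
    return reduce_mod_f(c)

def cube(a):
    # Frobenius: (sum a_i x^i)^3 = sum a_i x^(3i) over GF(3), since
    # a_i^3 = a_i; cubing is coefficient spreading plus one reduction
    c = [0] * (3 * (M - 1) + 1)
    for i, ai in enumerate(a):
        c[3 * i] = ai
    return reduce_mod_f(c)
```

The model confirms both that `cube` agrees with two multiplications and that cubing is $\mathbb{F}_3$-linear, the property the unified operator exploits.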

The talk will be based on the following research reports:

# Thursday June 14, 2007

Let $K$ be an imaginary quadratic field with Hilbert class field $H$ and maximal order $O_K$. We consider elliptic curves $E$ defined over $H$ with the properties that the endomorphism ring of $E$ is isomorphic to $O_K$ and $E$ is isogenous to $E^\sigma$ over $H$ for all $\sigma \in \mathrm{Gal}(H/K)$. Taking the Weil restriction $W_{H/K}$ of such an $E$ from $H$ to $K$, one obtains an abelian variety whose endomorphism algebra will be either a field or a quaternion algebra. The question of which quaternion algebras may be obtained in this way is one of our motivations.

For quaternion algebras to occur, the class group of $K$ must have a non-cyclic 2-Sylow subgroup, the simplest possible examples occurring when $K$ has class number 4. In this case, investigating when $W_{H/K}(E)$ has a non-abelian endomorphism algebra is closely related to finding extensions $L/H$ such that $\mathrm{Gal}(L/K)$ is either the dihedral or the quaternion group of order 8.

# Thursday June 7, 2007

# Tuesday June 5, 2007

The $j$-invariant of an elliptic curve with complex multiplication by $K$ is well-known to generate the Hilbert class field of $K$. Such $j$-invariants, or rather their minimal polynomials in $\mathbb Z[x]$, can be determined by means of complex analytic methods from a given CM lattice in $\mathbb C$. A construction of CM moduli by $p$-adic lifting techniques was introduced by Couveignes and Henocq. Efficient versions of one-dimensional $p$-adic lifting were developed by Bröker. These methods provide an alternative application of $p$-adic canonical lifts, as introduced by Satoh for determining the zeta function of an elliptic curve $E/\mathbb F_{p^n}$.

Construction of such defining polynomials for CM curves is an area of active interest for use in cryptographic constructions. Together with Gaudry, Houtmann, Ritzenthaler, and Weng, we generalised the elliptic curve CM construction to genus 2 curves using $2$-adic canonical lifts. The output of this algorithm is data specifying a defining ideal for the CM Igusa invariants $(j_1,j_2,j_3)$ in $\mathbb Z[x_1,x_2,x_3]$. In contrast to Mestre’s AGM algorithm for determining zeta functions of genus 2 curves $C/\mathbb F_{2^n}$, this construction pursues the alternative application of canonical lifts to CM constructions. With Carls and Lubicz, I developed an analogous $3$-adic CM construction using theta functions. In this talk I will report on recent progress and challenges in extending and improving these algorithms.
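
For the genus 1 case, the classical complex-analytic route can be sketched numerically: enumerate the reduced quadratic forms of a small discriminant, evaluate $j$ at the corresponding CM points via $q$-series, and round the coefficients of $\prod_i (x - j_i)$ to integers. The discriminant $-23$ (class number 3) and the 60-term series cutoff are our illustrative choices, and double-precision floats suffice only because the discriminant is tiny.

```python
import cmath
import math

D = -23  # fundamental discriminant with class number 3

def reduced_forms(D):
    # reduced binary quadratic forms (a, b, c) with b^2 - 4ac = D
    forms = []
    for a in range(1, int(math.isqrt(-D // 3)) + 1):
        for b in range(-a, a + 1):
            if (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c < a:
                continue
            if b < 0 and (a == -b or a == c):
                continue  # keep only b >= 0 on the boundary cases
            forms.append((a, b, c))
    return forms

def j_invariant(tau, terms=60):
    # j = E4^3 / Delta, both evaluated from their q-series
    q = cmath.exp(2j * cmath.pi * tau)
    sigma3 = lambda n: sum(d ** 3 for d in range(1, n + 1) if n % d == 0)
    e4 = 1 + 240 * sum(sigma3(n) * q ** n for n in range(1, terms))
    delta = q
    for n in range(1, terms):
        delta *= (1 - q ** n) ** 24
    return e4 ** 3 / delta

js = [j_invariant((-b + cmath.sqrt(D)) / (2 * a)) for a, b, c in reduced_forms(D)]

# expand prod (x - j_i) and round to the integer Hilbert class polynomial
poly = [1 + 0j]
for j in js:
    poly = [0j] + poly                 # multiply by x
    for k in range(len(poly) - 1):
        poly[k] -= j * poly[k + 1]     # ... then subtract j
hilbert = [round(c.real) for c in reversed(poly)]
```

The rounded output is the Hilbert class polynomial $H_{-23}(x) = x^3 + 3491750x^2 - 5151296875x + 12771880859375$; for cryptographic-size discriminants the coefficients explode, which is what motivates the $p$-adic alternatives above.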

# Thursday April 26, 2007

# Tuesday April 17, 2007

Schönhage proposed in the paper “Schnelle Multiplikation von Polynomen über Körpern der Charakteristik 2” (Acta Informatica, 1977) an O(n log(n) log(log(n))) algorithm to multiply polynomials over GF(2)[x]. We describe that algorithm and report on its implementation in NTL.
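
For reference, the representation such algorithms accelerate is bit-packed $\mathrm{GF}(2)[x]$, where bit $i$ holds the coefficient of $x^i$. The quadratic shift-and-XOR baseline below is what Schönhage's ternary-FFT method beats asymptotically; the fast algorithm itself is not reproduced here.

```python
def clmul(a, b):
    """Carry-less (GF(2)[x]) product of two bit-packed polynomials:
    addition of coefficients is XOR, so there are no carries."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r
```

For example, $(x^2+x+1)(x+1) = x^3+1$ over $\mathrm{GF}(2)$, since the middle terms cancel in characteristic 2.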

# Tuesday April 17, 2007

We describe a new multi-level blocking algorithm for distinct-degree factorization of polynomials over GF(2). The idea of the algorithm is to use one level of blocking to replace most GCD computations by multiplications, and a finer level of blocking to replace most multiplications by squarings (which are much faster than multiplications over GF(2)). As an application we give an algorithm that searches for all irreducible trinomials of given degree. Under plausible assumptions, the expected running time of this algorithm is much less than that of the classical algorithm. For example, our implementation gives a speedup of more than 60 over the classical algorithm for trinomials of degree 6972593 (a Mersenne exponent). [Joint work with Paul Zimmermann.]
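
The role of cheap squarings can be illustrated with the textbook irreducibility test for a $\mathrm{GF}(2)$ polynomial, whose cost is dominated by repeated squaring of $x$ modulo $f$; this is the unblocked baseline, not the multi-level blocking algorithm of the talk.

```python
def deg(p):
    return p.bit_length() - 1

def pmod(a, m):
    # remainder of bit-packed GF(2)[x] division of a by m
    while deg(a) >= deg(m):
        a ^= m << (deg(a) - deg(m))
    return a

def pmulmod(a, b, m):
    # shift-and-XOR multiplication, reducing as we go
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a = pmod(a << 1, m)
    return r

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

def is_irreducible(f):
    """f (bit-packed) is irreducible over GF(2) iff x^(2^n) = x mod f
    and gcd(x^(2^(n/p)) - x, f) = 1 for every prime p dividing
    n = deg f.  The dominant work is the chain of n squarings of x."""
    n = deg(f)
    powers = {}
    s = 0b10                       # the polynomial x
    for i in range(1, n + 1):
        s = pmulmod(s, s, f)       # s = x^(2^i) mod f
        powers[i] = s
    if powers[n] != 0b10:
        return False
    for p in range(2, n + 1):
        if n % p == 0 and all(p % d for d in range(2, p)):
            if pgcd(powers[n // p] ^ 0b10, f) != 1:
                return False
    return True
```

For trinomials $x^n + x^k + 1$ the reduction in `pmod` touches only two taps, which is one reason trinomial moduli are attractive in the first place.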

# Thursday March 15, 2007

Isogenies — surjective homomorphisms of algebraic groups with finite kernel — are of great interest in number theory and cryptography. Algorithms for computing with isogenies of elliptic curves are well-known, but in higher dimensions, the situation is more complicated, and few explicit examples of non-trivial isogenies are known. We will discuss some of the computational issues, and describe some examples and applications of isogenies of Jacobians of hyperelliptic curves.

# Thursday March 8, 2007

# Thursday February 8, 2007

# Tuesday January 16, 2007

# Tuesday January 16, 2007

# Tuesday January 16, 2007

# Wednesday November 29, 2006

Shift registers are very common hardware devices, always associated with combinatorial/sequential feedback. Linear Feedback Shift Registers (LFSRs) are certainly the most famous setup among those circuits. LFSRs are used everywhere in communication systems: scramblers, stream ciphers, spread spectrum, Built-In Self Test (BIST)… Despite their popularity, the impact of LFSR characteristics has never been clearly studied. I have studied LFSR synthesis on a Xilinx Spartan2E FPGA with several goals (area, critical path, throughput). Studying high-throughput synthesis is particularly interesting since it is a circuitous way to study software synthesis. I will describe which properties can be observed for both kinds of synthesis. In conclusion, other shift-register setups will be considered, such as Feedback with Carry Shift Registers (FCSRs) and Non-Linear Feedback Shift Registers (NLFSRs).
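
A minimal software model of a Galois-configuration LFSR (our illustration; the talk concerns hardware synthesis, and the tap-mask convention below is ours) shows how the feedback polynomial determines the period: a primitive degree-4 polynomial gives the full period $2^4 - 1 = 15$, while an irreducible but imprimitive one does not.

```python
def lfsr_period(mask, start=1):
    """Step a right-shift Galois LFSR (shift, then XOR the tap mask
    when the output bit is 1) until the state returns to `start`.
    mask 0b1100 models feedback polynomial x^4 + x^3 + 1."""
    state = start
    count = 0
    while True:
        out = state & 1
        state >>= 1
        if out:
            state ^= mask
        count += 1
        if state == start:
            return count
```

With mask `0b1100` (primitive $x^4+x^3+1$) the register cycles through all 15 nonzero states; with mask `0b1111` (modeling $x^4+x^3+x^2+x+1$, irreducible but of order 5) the period collapses to 5.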

# Tuesday October 10, 2006

Using Deuring’s theory of correspondences, we are able to construct homomorphisms between the degree-zero divisor class groups of function fields. Correspondences are divisors of function fields with a transcendental constant field of degree one. They form a ring which, for example in the case of elliptic function fields, is isomorphic to an order of an imaginary quadratic number field. In this talk we show how to compute endomorphisms of elliptic and hyperelliptic curves using correspondences.

# Friday April 14, 2006

# Thursday April 6, 2006

# Monday March 27, 2006

In a factor ring, i.e., a polynomial ring F[z]/(m) or the integer ring Z_n, the conventional way of performing a division a/b consists of two steps: first the inverse b^{-1} is computed, and then the product ab^{-1}. In this talk we describe a technique called “direct division”, which computes the quotient a/b for given a, b directly, using only addition and multiplication in the underlying structure, i.e., finite field operations in F for the polynomial ring F[z]/(m), and addition and multiplication by 2 in the integer ring Z_n. This technique requires that the modulus m not be divisible by z, and that the modulus n be odd.
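
The talk's exact algorithm is not reproduced here, but a classical halving-based division in Z_n illustrates why the modulus must be odd: division by 2 is then always possible by first adding n when the operand is odd. The sketch computes a/b mod n using only additions, subtractions and shifts, with no explicit inverse.

```python
def half_mod(x, n):
    # divide by 2 in Z/nZ: exact when x is even; otherwise adding
    # the odd modulus n makes x even without changing its class
    return x >> 1 if x % 2 == 0 else (x + n) >> 1

def direct_div(a, b, n):
    """Compute a/b in Z/nZ for odd n with gcd(b, n) = 1, using only
    add/subtract/halve.  Loop invariants (mod n):
        b * x1 = a * u   and   b * x2 = a * v,
    so when u (or v) reaches 1, x1 (or x2) equals a/b."""
    u, v = b % n, n
    x1, x2 = a % n, 0
    while u != 1 and v != 1:
        while u % 2 == 0:
            u >>= 1
            x1 = half_mod(x1, n)
        while v % 2 == 0:
            v >>= 1
            x2 = half_mod(x2, n)
        if u >= v:
            u -= v
            x1 = (x1 - x2) % n
        else:
            v -= u
            x2 = (x2 - x1) % n
    return x1 if u == 1 else x2
```

After each subtraction both operands were odd, so the difference is even and the halving loops make progress; the iteration count is linear in the bit length of n.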

# Thursday March 23, 2006

# Thursday March 16, 2006

TBA

# Thursday February 16, 2006

# Thursday January 5, 2006

# Thursday November 24, 2005

The enhanced standard continuation of the $p-1$, $p+1$ and ECM factoring methods chooses pairs $(a,b)$, where $a$ is a multiple of a suitably chosen $d$, so that every prime in the desired stage 2 interval can be written as $a-b$. Montgomery [1] showed how to include more than one prime per $(a,b)$ pair by instead evaluating $f(a)-f(b)$ so that this bivariate polynomial has algebraic factors. However, he restricts his analysis to the algebraic factors $a-b$ and $a+b$, and considers only prime values taken by these factors. We present a framework for generalising Montgomery’s ideas by choosing $(a,b)$ pairs as nodes in a partial cover of a bipartite graph, which allows utilising large prime factors of composite values, and algebraic factors of higher degree.

[1] P. L. Montgomery, Speeding the Pollard and elliptic curve methods of factorization, Math. Comp. 48 (177), 1987.
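
A toy $p-1$ implementation (ours; the parameters are illustrative) shows the baseline that Montgomery's ideas and the graph-based generalisation improve on: each stage 2 prime $q$ is written as $q = a - b$ with $a$ a multiple of $d$, so that $x^q \equiv 1$ is detected as a match $x^a \equiv x^b$ between one giant step and one baby step.

```python
from math import gcd

def pminus1(n, B1, B2, d=30, base=2):
    """Toy p-1: stage 1 raises the base to every prime power <= B1;
    stage 2 writes each prime q in (B1, B2] as q = a - b with d | a."""
    is_prime = lambda m: m > 1 and all(m % i for i in range(2, int(m**0.5) + 1))
    # stage 1
    x = base
    for p in range(2, B1 + 1):
        if is_prime(p):
            pk = p
            while pk * p <= B1:      # largest power of p below B1
                pk *= p
            x = pow(x, pk, n)
    g = gcd(x - 1, n)
    if 1 < g < n:
        return g
    # stage 2: baby steps x^b for 1 <= b < d, giant steps x^(a) for d | a
    baby = {b: pow(x, b, n) for b in range(1, d)}
    xd = pow(x, d, n)
    acc, a_cur, giant = 1, 0, 1
    for q in range(B1 + 1, B2 + 1):
        if not is_prime(q):
            continue
        a = ((q // d) + 1) * d       # smallest multiple of d above q
        b = a - q                    # so q = a - b with 1 <= b < d
        while a_cur < a:
            giant = giant * xd % n
            a_cur += d
        acc = acc * (giant - baby[b]) % n
    g = gcd(acc, n)
    return g if 1 < g < n else None
```

For $n = 1213 \cdot 2027$ (with $1212 = 2^2 \cdot 3 \cdot 101$), stage 1 at $B_1 = 10$ fails but stage 2 catches the prime $101 = 120 - 19$ and recovers the factor 1213; accumulating the differences and taking a single GCD is exactly the multiplication-for-GCD trade the abstract refers to.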

# Wednesday November 23, 2005

Let K be a p-adic field. We give an explicit characterization of the abelian extensions of K of degree p by relating the coefficients of the generating polynomials of extensions L/K of degree p to the norm group N_{L/K}(L^*). This is applied in the construction of class fields of degree p^m.

# Tuesday November 8, 2005

to be announced

# Thursday October 13, 2005

# Thursday October 6, 2005

The talk surveys various algorithms for computing in the divisor class groups of general nonsingular curves and discusses their running times.

# Thursday September 29, 2005

# Thursday September 29, 2005

# Thursday September 22, 2005

# Thursday September 15, 2005

# Tuesday June 21, 2005

# Tuesday June 21, 2005

# Friday June 3, 2005

# Thursday April 14, 2005

Digital signatures have the sometimes unwanted property of being universally verifiable by anybody with access to the signer’s public key. In recent work with F. Laguillaumie and P. Paillier, we have proposed a signature scheme where verification requires interaction with the signer. Its security relies on the “xyz” variant of the classical Diffie-Hellman problem. In this talk we present the underlying algorithmic problem in its cryptographic context, and give some assessment of its difficulty.

# Thursday April 7, 2005

# Wednesday March 23, 2005

# Thursday March 10, 2005

# Thursday March 3, 2005

A curve of genus $g$ defined over the complex field $\mathbb{C}$ is isomorphic to a torus with $g$ holes, or equivalently to a quotient of the form $\mathbb{C}^g/(\mathbb{Z}^g\oplus\mathbb Z^g\tau)$, $\tau$ being a $g\times g$ matrix called a Riemann matrix.

When the genus $g$ equals one, the computation of $\tau$ from the equation of an elliptic curve is one of the classical applications of the arithmetico-geometric mean (AGM). The AGM can be interpreted using functions called theta constants.
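
The genus 1 link can be made concrete with Gauss's relation $K(k) = \pi / (2\,\mathrm{AGM}(1, \sqrt{1-k^2}))$, which computes a complete elliptic integral (hence a period of the curve) with quadratic convergence; the brute-force quadrature below is only a sanity check, and the tolerance choices are ours.

```python
import math

def agm(a, b, tol=1e-15):
    # arithmetic-geometric mean: both sequences converge quadratically
    while abs(a - b) > tol * abs(a):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def elliptic_K_agm(k):
    """Complete elliptic integral K(k) = pi / (2 * AGM(1, sqrt(1-k^2)))."""
    return math.pi / (2 * agm(1.0, math.sqrt(1 - k * k)))

def elliptic_K_quad(k, steps=100000):
    # midpoint rule for int_0^{pi/2} dt / sqrt(1 - k^2 sin^2 t)
    h = (math.pi / 2) / steps
    return h * sum(1 / math.sqrt(1 - (k * math.sin((i + 0.5) * h)) ** 2)
                   for i in range(steps))
```

A handful of AGM iterations matches what the quadrature needs $10^5$ evaluations to achieve; the Borchardt mean mentioned below plays the same role for genus 2, with four sequences in place of two.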

We show how this special case extends to higher genus, using a generalization of the AGM known as the Borchardt mean.

In particular, we develop an algorithm for computing genus 2 Riemann matrices in almost linear time. This algorithm can be implemented easily.

As we also show, this technique allows for rapid computation of modular forms and functions, and we discuss the applications thereof (construction of CM curves, explicit computation of isogenies, …).