8 research outputs found

    Faster computation of isogenies of large prime degree

    Let $\mathcal{E}/\mathbb{F}_q$ be an elliptic curve, and $P$ a point in $\mathcal{E}(\mathbb{F}_q)$ of prime order $\ell$. Vélu's formulae let us compute a quotient curve $\mathcal{E}' = \mathcal{E}/\langle P \rangle$ and rational maps defining a quotient isogeny $\phi\colon \mathcal{E} \to \mathcal{E}'$ in $\tilde{O}(\ell)$ $\mathbb{F}_q$-operations, where the $\tilde{O}$ is uniform in $q$. This article shows how to compute $\mathcal{E}'$, and $\phi(Q)$ for $Q$ in $\mathcal{E}(\mathbb{F}_q)$, using only $\tilde{O}(\sqrt{\ell})$ $\mathbb{F}_q$-operations, where the $\tilde{O}$ is again uniform in $q$. As an application, this article speeds up some computations used in the isogeny-based cryptosystems CSIDH and CSURF.
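    To make the baseline concrete, here is a toy, self-contained Python sketch of the classical Vélu computation of the codomain curve, i.e. the $\tilde{O}(\ell)$ approach the paper improves on, not the paper's $\tilde{O}(\sqrt{\ell})$ algorithm. The field size, curve coefficients, and brute-force point search below are illustrative assumptions only.

```python
# Classical Velu codomain computation over a toy prime field: the O~(ell)
# baseline the paper improves on, NOT the paper's O~(sqrt(ell)) algorithm.
# All parameters (p, a, b) are hypothetical values chosen for demonstration.

p = 101          # toy prime field F_p (assumption)
a, b = 2, 3      # E : y^2 = x^3 + a*x + b over F_p (assumption; nonzero discriminant)

O = None         # the point at infinity

def add(P, Q):
    """Affine point addition on E(F_p)."""
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                      # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p)  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p)         # chord slope
    lam %= p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def order(P):
    """Order of P by repeated addition (fine at toy sizes)."""
    n, Q = 1, P
    while Q is not O:
        Q = add(Q, P)
        n += 1
    return n

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Brute-force the curve's points and pick one of odd prime order ell.
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - x ** 3 - a * x - b) % p == 0]
P = next(Q for Q in points if order(Q) % 2 == 1 and is_prime(order(Q)))
ell = order(P)

# Velu's formulae: sum t_Q = 3*x_Q^2 + a and u_Q + x_Q*t_Q (with u_Q = 2*y_Q^2)
# over every nonzero Q in <P>.  Codomain: E' : y^2 = x^3 + (a - 5t)x + (b - 7w).
t = w = 0
Q = P
for _ in range(ell - 1):
    xQ, yQ = Q
    tQ = (3 * xQ * xQ + a) % p
    t = (t + tQ) % p
    w = (w + 2 * yQ * yQ + xQ * tQ) % p
    Q = add(Q, P)

print(f"ell = {ell};  E' : y^2 = x^3 + {(a - 5 * t) % p}*x + {(b - 7 * w) % p}")
```

    Informally, the paper's contribution is to evaluate this kind of sum/product over the kernel in $\tilde{O}(\sqrt{\ell})$ operations via a baby-step giant-step strategy, rather than iterating over all $\ell - 1$ kernel points as above.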

    Primary Elements in Cyclotomic Fields with Applications to Power Residue Symbols, and More

    Higher-order power residues have enabled the construction of numerous public-key encryption schemes, authentication schemes, and digital signatures. Their explicit characterization is however challenging; an algorithm of Caranay and Scheidler computes $p$-th power residue symbols, with $p \le 13$ an odd prime, provided that primary elements in the corresponding cyclotomic field can be efficiently found. In this paper, we describe a new, generic algorithm to compute primary elements in cyclotomic fields, which we apply for $p = 3, 5, 7, 11, 13$. A key insight is a careful selection of fundamental units as put forward by Dénes. This solves an essential step in the Caranay–Scheidler algorithm. We give a unified view of the problem. Finally, we provide the first efficient deterministic algorithm for the computation of the 9-th and 16-th power residue symbols.
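    For background (standard theory, not specific to this paper): the $p$-th power residue symbol generalizes the Legendre symbol to $\mathbb{Z}[\zeta_p]$, and primary elements are, roughly, a congruence normalization modulo powers of $1 - \zeta_p$ that singles out a distinguished associate of an element so that reciprocity laws apply in a clean form. The symbol itself is characterized by an Euler-type congruence:

```latex
% Standard characterization of the p-th power residue symbol (background,
% not taken from the paper).  Here \mathfrak{p} is a prime ideal of
% \mathbb{Z}[\zeta_p] with \mathfrak{p} \nmid p\alpha; note that p divides
% N(\mathfrak{p}) - 1, so the exponent is an integer.
\[
  \left(\frac{\alpha}{\mathfrak{p}}\right)_{\!p}
    \;\equiv\; \alpha^{\frac{N(\mathfrak{p})-1}{p}} \pmod{\mathfrak{p}},
  \qquad
  \left(\frac{\alpha}{\mathfrak{p}}\right)_{\!p}
    \in \{1, \zeta_p, \zeta_p^2, \dots, \zeta_p^{p-1}\}.
\]
```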

    Part I:


    Public Key Infrastructure


    On the security of biquadratic C∗ public-key cryptosystems and its generalizations


    Assessing, testing, and challenging the computational power of quantum devices

    Randomness is an intrinsic feature of quantum theory. The outcome of any measurement is random, sampled from a probability distribution defined by the measured quantum state. The task of sampling from a prescribed probability distribution therefore seems to be a natural technological application of quantum devices. Indeed, certain random sampling tasks have been proposed as experimental demonstrations of the speedup of quantum over classical computation, so-called “quantum computational supremacy”. In the research presented in this thesis, I investigate the complexity-theoretic and physical foundations of quantum sampling algorithms.

    Using the theory of computational complexity, I assess the computational power of natural quantum simulators and close loopholes in the complexity-theoretic argument for the classical intractability of quantum samplers (Part I). In particular, I prove anticoncentration for quantum circuit families that give rise to a 2-design, and I review methods for proving average-case hardness. I present quantum random sampling schemes that are tailored to large-scale quantum simulation hardware and at the same time meet the highest standard of complexity-theoretic underpinning.

    Using methods from property testing and quantum system identification, I shed light on the question of how, and under which conditions, quantum sampling devices can be tested or verified in regimes that are not classically simulable (Part II). I present a no-go result that rules out efficient verification of quantum random sampling schemes, as well as approaches by which this no-go result can be circumvented. In particular, I develop fully efficient verification protocols in what I call the measurement-device-dependent scenario, in which single-qubit measurements are assumed to function with high accuracy.

    Finally, I try to understand the physical mechanisms governing the computational boundary between classical and quantum computing devices by challenging their computational power with tools from computational physics and the theory of computational complexity (Part III). I develop efficiently computable measures of the infamous Monte Carlo sign problem and assess them both in terms of their practicability as tools for alleviating or easing the sign problem and in terms of the computational complexity of that task.

    An overarching theme of the thesis is the quantum sign problem, which arises from destructive interference between paths, an intrinsically quantum effect. The (non-)existence of a sign problem serves as a criterion delineating the boundary between classical and quantum computing devices. I begin the thesis by identifying the quantum sign problem as a root of the computational intractability of quantum output probabilities. It turns out that the intricate structure of the probability distributions to which the sign problem gives rise prohibits their verification from few samples. In an ironic twist, I show that assessing the intrinsic sign problem of a quantum system is itself an intractable problem.
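    As a toy numerical illustration of the anticoncentration property mentioned above (my own construction, not taken from the thesis): for ideal Haar-random states, output probabilities follow the Porter-Thomas (exponential) shape, and the fraction of outcomes with $p(x) \ge 1/N$ approaches $1/e \approx 0.368$, so the distribution is provably not concentrated on a few outcomes. The sketch below checks this with NumPy; the qubit count and sample sizes are arbitrary choices.

```python
# Toy check of anticoncentration for Haar-random states (illustrative only).
# Porter-Thomas statistics predict that the fraction of outcomes x with
# p(x) >= 1/N converges to 1/e as the Hilbert-space dimension N grows.
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 8                     # arbitrary toy size
N = 2 ** n_qubits

def haar_state(N):
    """Sample a Haar-random pure state as a normalized complex Gaussian vector."""
    v = rng.normal(size=N) + 1j * rng.normal(size=N)
    return v / np.linalg.norm(v)

fractions = []
for _ in range(200):             # average over 200 random states
    probs = np.abs(haar_state(N)) ** 2
    fractions.append(np.mean(probs >= 1.0 / N))

print(f"fraction of outputs with p >= 1/N: {np.mean(fractions):.3f} "
      f"(1/e = {1 / np.e:.3f})")
```

    Running this should print a value close to 0.37, matching the Porter-Thomas prediction; the thesis proves such statements rigorously for circuit families forming a 2-design rather than for exactly Haar-random states.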