
    Multiplication in Finite Fields and Elliptic Curves

    Public-key cryptography makes it possible to exchange keys remotely, produce electronic signatures, authenticate remotely, etc. In this HDR thesis we present several contributions concerning the secure and efficient implementation of cryptographic protocols based on elliptic curves. The basic operation performed in these protocols is the scalar multiplication of a point on the curve. Each scalar multiplication requires several thousand operations in a finite field. In the first part of the manuscript we focus on multiplication in finite fields, since it is the most expensive and the most frequently used operation. We first present contributions on parallel multipliers over binary fields. A first result concerns the subquadratic approach in an optimal normal basis of type 2. More precisely, we improve a multiplier based on a Toeplitz matrix-vector product by using a recombination of the blocks that removes certain redundant computations. We also present a multiplier for binary fields based on an extension of an optimization of Karatsuba polynomial multiplication. We then present results concerning multiplication in a prime field. In particular, we present a Montgomery-style approach for multiplication in a basis adapted to modular arithmetic. This approach targets multiplication modulo a random prime. We then present a method for multiplication in the fields used in pairing-based cryptography: small-degree extensions of a random prime field. This method uses an adapted basis generated by a root of unity, which facilitates FFT-based polynomial multiplication. In the last part of this HDR thesis we turn to results concerning scalar multiplication on elliptic curves. We present a parallelization of the Montgomery binary ladder in the case of E(GF(2^n)). We also survey some contributions on division-by-3 formulas in E(GF(3^n)) and a (third, triple)-and-add parallelization. In the last chapter we develop some directions for future research. We first discuss possible extensions of the work done on binary fields. We also present research directions related to the randomization of the arithmetic, which provides protection against hardware attacks.
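    As a rough illustration of the Karatsuba idea mentioned in the abstract, the sketch below multiplies polynomials over GF(2) (stored as integer bit masks) using three half-size products per level, then reduces modulo an irreducible polynomial to land in GF(2^n). The representation, the cutoff, and the GF(2^8)/AES test vector are illustrative choices made here; this is not the optimized parallel multiplier presented in the thesis.

```python
# Minimal sketch of Karatsuba multiplication for polynomials over GF(2),
# the kind of carry-less multiplication that underlies binary-field
# multipliers such as those discussed above. Polynomials are stored as
# Python integers: bit i is the coefficient of x^i. Illustrative only;
# the thesis's optimized/parallel multipliers are not reproduced here.

def clmul_schoolbook(a: int, b: int) -> int:
    """Carry-less (GF(2)[x]) schoolbook product, used as the base case."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # coefficient addition is XOR in GF(2)
        a <<= 1
        b >>= 1
    return result

def clmul_karatsuba(a: int, b: int, cutoff_bits: int = 8) -> int:
    """Karatsuba carry-less product: 3 half-size products instead of 4."""
    n = max(a.bit_length(), b.bit_length())
    if n <= cutoff_bits:
        return clmul_schoolbook(a, b)
    half = n // 2
    mask = (1 << half) - 1
    a_lo, a_hi = a & mask, a >> half
    b_lo, b_hi = b & mask, b >> half
    lo = clmul_karatsuba(a_lo, b_lo, cutoff_bits)
    hi = clmul_karatsuba(a_hi, b_hi, cutoff_bits)
    mid = clmul_karatsuba(a_lo ^ a_hi, b_lo ^ b_hi, cutoff_bits) ^ lo ^ hi
    return (hi << (2 * half)) ^ (mid << half) ^ lo

def gf2n_mul(a: int, b: int, modulus: int) -> int:
    """Multiply in GF(2^n): carry-less product, then reduce mod an irreducible polynomial."""
    prod = clmul_karatsuba(a, b)
    deg = modulus.bit_length() - 1
    while prod.bit_length() - 1 >= deg:
        prod ^= modulus << (prod.bit_length() - 1 - deg)
    return prod

# Example: GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1.
print(hex(gf2n_mul(0x57, 0x83, 0x11B)))  # -> 0xc1, the standard AES test vector
```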

    Shear-induced rigidity of frictional particles: Analysis of emergent order in stress space

    Solids are distinguished from fluids by their ability to resist shear. In traditional solids, the resistance to shear is associated with the emergence of broken translational symmetry as exhibited by a non-uniform density pattern, which results from either minimizing the energy cost or maximizing the entropy or both. In this work, we focus on a class of systems where this paradigm is challenged. We show that shear-driven jamming in dry granular materials is a collective process controlled solely by the constraints of mechanical equilibrium. We argue that these constraints lead to a broken translational symmetry in a dual space that encodes the statistics of contact forces and the topology of the contact network. The shear-jamming transition is marked by the appearance of this broken symmetry. We extend our earlier work by comparing and contrasting real-space measures of rheology with those obtained from the dual space. We investigate the structure and behavior of the dual space as the system evolves through the rigidity transition in two different shear protocols. We analyze the robustness of the shear-jamming scenario with respect to protocol and packing fraction, and demonstrate that it is possible to define a protocol-independent order parameter in this dual space, which signals the onset of rigidity. Comment: 14 pages, 17 figures

    Methods for parallel quantum circuit synthesis, fault-tolerant quantum RAM, and quantum state tomography

    The pace of innovation in quantum information science has recently exploded due to the hope that a quantum computer will be able to solve a multitude of problems that are intractable using classical hardware. Current quantum devices are in what has been termed the "noisy intermediate-scale quantum", or NISQ, stage. Quantum hardware available today with 50-100 physical qubits may be among the first to demonstrate a quantum advantage. However, there are many challenges to overcome, such as dealing with noise, lowering error rates, improving coherence times, and scalability. We are at a time in the field where minimization of resources is critical so that we can run our algorithms sooner rather than later. Running quantum algorithms "at scale" incurs a massive amount of resources, from the number of qubits required to the circuit depth. A large amount of this is due to the need to implement operations fault-tolerantly using error-correcting codes. For one, to run an algorithm we must be able to efficiently read in and output data. Fault-tolerantly implementing quantum memories may become an input bottleneck for quantum algorithms, including many which would otherwise yield massive improvements in algorithm complexity. We will also need efficient methods for tomography to characterize and verify our processes and outputs. Researchers will require tools to automate the design of large quantum algorithms, to compile, optimize, and verify their circuits, and to do so in a way that minimizes operations that are expensive in a fault-tolerant setting. Finally, we will also need overarching frameworks to characterize the resource requirements themselves. Such tools must be easily adaptable to new developments in the field, and allow users to explore tradeoffs between their parameters of interest. This thesis contains three contributions to this effort: improving circuit synthesis using large-scale parallelization; designing circuits for quantum random-access memories and analyzing various time/space tradeoffs; and using the mathematical structure of discrete phase space to select subsets of tomographic measurements. For each topic the theoretical work is supplemented by a software package intended to allow other researchers to easily verify, use, and expand upon the techniques herein.
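    As a toy example of the kind of resource accounting the thesis motivates (minimizing operations that are expensive in a fault-tolerant setting), the sketch below takes a circuit given as a plain list of gates, counts T/T† gates, and computes circuit depth by greedy layering. The gate-list format and the example circuit are assumptions made here for illustration; this is not the thesis's software.

```python
# Toy resource accounting for a quantum circuit: count the gates that are
# expensive fault-tolerantly (T and T-dagger) and compute depth by assigning
# each gate to the earliest layer after the last gate on any of its qubits.
# Generic sketch, not the software packages developed in the thesis.

from collections import defaultdict

# A gate is (name, qubits it acts on); this simple format is an assumption.
circuit = [
    ("H", (0,)), ("T", (0,)), ("CNOT", (0, 1)),
    ("T", (1,)), ("Tdg", (1,)), ("CNOT", (1, 2)), ("H", (2,)),
]

def resource_counts(gates):
    """Return (total gates, T-count, depth) for a list of (name, qubits) gates."""
    t_count = 0
    qubit_layer = defaultdict(int)   # last occupied layer per qubit
    depth = 0
    for name, qubits in gates:
        if name in ("T", "Tdg"):
            t_count += 1             # T/T-dagger dominate fault-tolerant cost
        layer = 1 + max(qubit_layer[q] for q in qubits)
        for q in qubits:
            qubit_layer[q] = layer   # the gate occupies this layer on all its qubits
        depth = max(depth, layer)
    return len(gates), t_count, depth

print(resource_counts(circuit))      # -> (7, 3, 7)
```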

    Joint shape and motion estimation from echo-based sensor data

    Given a set of time-series data collected from echo-based ranging sensors, we study the problem of jointly estimating the shape and motion of the target under observation when the sensor positions are also unknown. Using an approach first described by Stuff et al., we model the target as a point configuration in Euclidean space and estimate geometric invariants of the configuration. The geometric invariants allow us to estimate the target shape, from which we can estimate the motion of the target relative to the sensor position. This work will unify the various geometric-invariant-based shape and motion estimation literature under a common framework, and extend that framework to include results for passive, bistatic sensor systems.
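    To make the geometric-invariant idea concrete, the sketch below uses the pairwise distance matrix of a point configuration, which is invariant under any rigid motion, and checks numerically that it is unchanged after a random rotation and translation. The configuration and motion are arbitrary examples; this is a toy demonstration, not the estimator of Stuff et al. or the framework developed in the thesis.

```python
# Illustrative sketch: the pairwise distance matrix of a point configuration
# is unchanged by any rigid motion (rotation + translation), so it is a shape
# descriptor decoupled from the unknown motion. Toy demonstration only.

import numpy as np

def pairwise_distances(points):
    """Rigid-motion invariant: matrix of distances between all point pairs."""
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rng = np.random.default_rng(0)
config = rng.normal(size=(5, 3))          # a 5-point configuration in 3D

# Apply an arbitrary rigid motion: random rotation (QR of a Gaussian matrix) + translation.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
rotation = q * np.sign(np.linalg.det(q))  # ensure a proper rotation (det = +1)
moved = config @ rotation.T + rng.normal(size=3)

# The invariant is numerically identical before and after the motion.
print(np.allclose(pairwise_distances(config), pairwise_distances(moved)))  # True
```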

    Study of compression techniques for partial differential equation solvers

    Partial Differential Equations (PDEs) are widely applied in many branches of science, and solving them efficiently, from a computational point of view, is one of the cornerstones of modern computational science. The finite element (FE) method is a popular numerical technique for calculating approximate solutions to PDEs. A not necessarily complex finite element analysis containing substructures can easily generate enormous quantities of elements that hinder and slow down simulations. Therefore, compression methods are required to decrease the amount of computational effort while retaining the significant dynamics of the problem. In this study, it was decided to apply a purely algebraic approach. Various methods will be included and discussed, ranging from research-level techniques to tools borrowed from apparently unrelated fields such as image compression, via the discrete Fourier transform (DFT), the Wavelet transform, and the Singular Value Decomposition (SVD).
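    A minimal sketch of one of the algebraic tools listed above, truncated SVD: a field sampled on a grid is replaced by its best rank-k approximation, keeping only the k largest singular values. The test field is an arbitrary smooth kernel chosen for illustration and is not data from the study.

```python
# Truncated-SVD compression of a 2D data set (e.g. a field sampled on a grid
# or a matrix of solution snapshots). Illustrative example, not the study's
# actual pipeline or data.

import numpy as np

def svd_compress(field, k):
    """Best rank-k approximation (in the least-squares sense) of a 2D array."""
    u, s, vt = np.linalg.svd(field, full_matrices=False)
    return u[:, :k] * s[:k] @ vt[:k, :]

x = np.linspace(0.0, 1.0, 200)
xx, yy = np.meshgrid(x, x)
field = np.exp(-((xx - yy) ** 2) / 0.1)   # smooth kernel with fast-decaying singular values

approx = svd_compress(field, k=5)
rel_error = np.linalg.norm(field - approx) / np.linalg.norm(field)
print(f"relative error at rank 5: {rel_error:.2e}")   # shrinks rapidly as k grows
```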

    Quark confinement: dual superconductor picture based on a non-Abelian Stokes theorem and reformulations of Yang-Mills theory

    The purpose of this paper is to review the recent progress in understanding quark confinement. The emphasis of this review is placed on how to obtain a manifestly gauge-independent picture for quark confinement supporting the dual superconductivity in the Yang-Mills theory, which should be compared with the Abelian projection proposed by 't Hooft. The basic tools are novel reformulations of the Yang-Mills theory based on change of variables extending the decomposition of the SU(N) Yang-Mills field due to Cho, Duan-Ge and Faddeev-Niemi, together with the combined use of extended versions of the Diakonov-Petrov version of the non-Abelian Stokes theorem for the SU(N) Wilson loop operator. Moreover, we give the lattice gauge theoretical versions of the reformulation of the Yang-Mills theory which enable us to perform numerical simulations on the lattice. In fact, we present some numerical evidence supporting the dual superconductivity for quark confinement. The numerical simulations include the derivation of the linear potential for the static interquark potential, i.e., non-vanishing string tension, in which the "Abelian" dominance and magnetic monopole dominance are established, confirmation of the dual Meissner effect by measuring the chromoelectric flux tube between a quark-antiquark pair, the induced magnetic-monopole current, and the type of dual superconductivity, etc. In addition, we give a direct connection between the topological configuration of the Yang-Mills field such as instantons/merons and the magnetic monopole. Comment: 304 pages; 62 figures and 13 tables; a version published in Physics Reports, including corrections of errors in v
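    For readers unfamiliar with the lattice observable behind the string-tension measurements mentioned above, the sketch below evaluates a rectangular Wilson loop, the normalized real trace of the path-ordered product of link matrices around the loop, on a small 2D lattice with random SU(2) links. The lattice size, dimension, gauge group, and the use of a single random configuration (no Monte Carlo updates) are simplifying assumptions made here; this is not the simulation framework reviewed in the paper.

```python
# Single-configuration evaluation of a rectangular Wilson loop on a small
# 2D lattice with random SU(2) link matrices. Illustration of the observable
# only; no Monte Carlo sampling, so no physical expectation value is claimed.

import numpy as np

rng = np.random.default_rng(1)
L = 8  # lattice extent in each direction

def random_su2():
    """Random SU(2) matrix built from a uniformly random unit quaternion."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3], a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

# links[x, y, mu] is the SU(2) matrix on the link leaving site (x, y) in direction mu.
links = np.array([[[random_su2() for mu in range(2)] for y in range(L)] for x in range(L)])

def wilson_loop(R, T):
    """(1/2) Re Tr of the ordered product of links around an R x T rectangle at the origin."""
    P = np.eye(2, dtype=complex)
    for i in range(R):                      # forward along x at y = 0
        P = P @ links[i % L, 0, 0]
    for j in range(T):                      # forward along y at x = R
        P = P @ links[R % L, j % L, 1]
    for i in reversed(range(R)):            # backward along x at y = T (daggered links)
        P = P @ links[i % L, T % L, 0].conj().T
    for j in reversed(range(T)):            # backward along y at x = 0 (daggered links)
        P = P @ links[0, j % L, 1].conj().T
    return 0.5 * np.trace(P).real

print(wilson_loop(2, 2))   # one sample of the loop observable on this configuration
```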