10 research outputs found

    Factoring Products of Braids via Garside Normal Form

    Get PDF
    Braid groups are infinite non-abelian groups naturally arising from geometric braids. For two decades they have been proposed for cryptographic use. In braid group cryptography, public braids often contain secret braids as factors, and it is hoped that rewriting the product of braid words hides the individual factors. We provide experimental evidence that this is in general not the case, and argue that under certain conditions parts of the Garside normal form of factors can be found in the Garside normal form of their product. This observation can be exploited to decompose products of braids of the form ABC when only B is known. Our decomposition algorithm yields a universal forgery attack on WalnutDSA™, which is one of the 20 proposed signature schemes being considered by NIST for standardization of quantum-resistant public-key cryptography. Our attack on WalnutDSA™ can universally forge signatures within seconds for both the 128-bit and 256-bit security levels, given one random message-signature pair. In our experiments, the attack succeeded on 99.8% and 100% of signatures at the 128-bit and 256-bit security levels, respectively. Furthermore, we show that the decomposition algorithm can be used to solve instances of the conjugacy search problem and the decomposition search problem in braid groups. These problems are at the heart of other cryptographic schemes based on braid groups.
    22nd IACR International Conference on Practice and Theory of Public-Key Cryptography (PKC 2019), Beijing, China, 14-17 April 2019. ISBN: 978-3-030-17258-9. Volume editors: Sako K., Lin D. Publisher: Springer Verlag.
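    As a loose illustration of why factors can survive in a product's normal form, here is a toy in a free group rather than a braid group; free reduction stands in for the far more involved Garside rewriting, and all helper names are ours. With little cancellation at the factor boundaries, long runs of B's reduced word remain visible inside the reduced word of A·B·C.

```python
import random

def reduce_word(word):
    """Freely reduce a word; generators are ints, inverses their negatives."""
    out = []
    for g in word:
        if out and out[-1] == -g:
            out.pop()                      # cancel g^{-1} g
        else:
            out.append(g)
    return out

def longest_surviving_run(product, factor):
    """Longest contiguous piece of `factor` appearing inside `product`."""
    best = 0
    for i in range(len(factor)):
        for j in range(i + best + 1, len(factor) + 1):
            piece = factor[i:j]
            if any(product[k:k + len(piece)] == piece
                   for k in range(len(product) - len(piece) + 1)):
                best = j - i
    return best

random.seed(1)
gens = [1, 2, 3, -1, -2, -3]
A, B, C = ([random.choice(gens) for _ in range(30)] for _ in range(3))
product = reduce_word(A + B + C)
# Typically most of B's reduced form survives inside the reduced product:
print(longest_surviving_run(product, reduce_word(B)), len(reduce_word(B)))
```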

    Does Fiat-Shamir Require a Cryptographic Hash Function?

    Get PDF
    The Fiat-Shamir transform is a general method for reducing interaction in public-coin protocols by replacing the random verifier messages with deterministic hashes of the protocol transcript. The soundness of this transformation is usually heuristic and lacks a formal security proof. Instead, to argue security, one can rely on the random oracle methodology, which informally states that whenever a random oracle soundly instantiates Fiat-Shamir, a hash function that is "sufficiently unstructured" (such as fixed-length SHA-2) should suffice. Finally, for some special interactive protocols, it is known how to (1) isolate a concrete security property of a hash function that suffices to instantiate Fiat-Shamir and (2) build a hash function satisfying this property under a cryptographic assumption such as Learning with Errors. In this work, we abandon this methodology and ask whether Fiat-Shamir truly requires a cryptographic hash function. Perhaps surprisingly, we show that in two of its most common applications --- building signature schemes as well as (general-purpose) non-interactive zero-knowledge arguments --- there are sound Fiat-Shamir instantiations using extremely simple and non-cryptographic hash functions such as sum-mod-p or bit decomposition. In some cases, we make idealized assumptions about the interactive protocol (i.e., we invoke the generic group model), while in others, we argue soundness in the plain model. At a high level, the security of each resulting non-interactive protocol derives from hard problems already implicit in the original interactive protocol. On the other hand, we also identify important cases in which a cryptographic hash function is provably necessary to instantiate Fiat-Shamir. We hope that this work leads to an improved understanding of the precise role of the hash function in the Fiat-Shamir transformation.
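    A minimal sketch of the transform on the toy Schnorr identification protocol, with the Fiat-Shamir hash left pluggable. The sum-mod-q option mirrors the "non-cryptographic" instantiations discussed above, but no security is claimed for this toy; the parameters and names are illustrative choices of ours.

```python
import hashlib

# Tiny Schnorr parameters for illustration only: p = 2q + 1, and g = 4
# generates the order-q subgroup of Z_p*.
p, q, g = 2039, 1019, 4

def fs_sha256(transcript: bytes) -> int:
    return int.from_bytes(hashlib.sha256(transcript).digest(), "big") % q

def fs_sum_mod_q(transcript: bytes) -> int:
    # Deliberately "non-cryptographic", in the spirit of sum-mod-p above.
    return sum(transcript) % q

def keygen(x=123):
    return x, pow(g, x, p)               # (secret key, public key)

def sign(x, msg: bytes, H, r=456):
    # r is fixed here only for reproducibility; a real signer must draw a
    # fresh uniform r for every signature.
    a = pow(g, r, p)                     # prover's commitment
    c = H(a.to_bytes(2, "big") + msg)    # verifier's challenge, now a hash
    z = (r + c * x) % q                  # prover's response
    return a, z

def verify(y, msg: bytes, sig, H) -> bool:
    a, z = sig
    c = H(a.to_bytes(2, "big") + msg)
    return pow(g, z, p) == (a * pow(y, c, p)) % p   # g^z = a * y^c

x, y = keygen()
for H in (fs_sha256, fs_sum_mod_q):
    assert verify(y, b"hello", sign(x, b"hello", H), H)
```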

    Digital Design of New Chaotic Ciphers for Ethernet Traffic

    Get PDF
    In recent years there has been great progress in the field of cryptography, and many encryption algorithms as well as other cryptographic functions have been proposed. Despite this progress, there is still great interest today in creating new cryptographic primitives or improving existing ones, for several reasons:
    • First, owing to the development of communication technologies, the amount of transmitted information is constantly growing. In this context, numerous applications require encrypting large amounts of data in real time or within a very short time interval; real-time encryption of high-resolution video is one example. Unfortunately, most encryption algorithms in use today cannot encrypt large amounts of data at high speed while maintaining high security standards.
    • Owing to the large increase in computing power, many algorithms traditionally considered secure can now be attacked by brute force in a reasonable amount of time. For example, when the DES (Data Encryption Standard) algorithm was first released, its key size was only 56 bits, whereas today NIST (National Institute of Standards and Technology) recommends that symmetric encryption algorithms use keys of at least 112 bits. Furthermore, significant advances are being made in quantum computing, and large-scale quantum computers are expected to be built in the future. If that happens, it has been shown that some algorithms in current use, such as RSA (Rivest Shamir Adleman), could be attacked successfully.
    • Alongside the development of cryptography there has also been great progress in cryptanalysis, so new vulnerabilities are found and new attacks are proposed constantly. It is therefore necessary to look for new algorithms that are robust against all known attacks, to replace those in which vulnerabilities have been found. Note that algorithms such as RSA and ElGamal rest on the assumption that problems such as factoring the product of two primes or computing discrete logarithms are hard to solve; it has not been ruled out that algorithms solving these problems quickly (in polynomial time) may be developed in the future.
    • Ideally, the keys used to encrypt data should be generated at random so that they are completely unpredictable. Since the sequences produced by PRNGs (Pseudo Random Number Generators) are predictable, they are potentially vulnerable to cryptanalysis, so keys are usually generated with TRNGs (True Random Number Generators). Unfortunately, TRNGs normally generate bits at a lower rate than PRNGs, and the generated sequences tend to have worse statistical properties, making a post-processing stage necessary. Using a low-quality TRNG to generate keys can compromise the security of the whole encryption system, as has already happened on several occasions, so the design of new TRNGs with good statistical properties is a topic of great interest.
    In summary, there are numerous important lines of research in cryptography. Since the field is very broad, this thesis focuses on three of them: the design of new TRNGs, the design of new fast and secure chaotic stream ciphers, and the implementation of new cryptosystems for Gigabit Ethernet optical communications at 1 Gbps and 10 Gbps. These cryptosystems are based on the proposed chaotic algorithms, adapted to perform the encryption at the physical layer while preserving the line-code format; in this way the systems not only encrypt the data but also prevent an attacker from telling whether a communication is taking place at all. The main topics covered in this thesis are the following:
    • A study of the state of the art, including the encryption algorithms in use today. This part analyzes the main problems of current standard encryption algorithms and the solutions that have been proposed; such a study is necessary in order to design new algorithms that overcome these problems.
    • The proposal of new TRNGs suitable for key generation. Two possibilities are explored: the noise generated by a MEMS (Microelectromechanical Systems) accelerometer and the noise generated by DNOs (Digital Nonlinear Oscillators). Both are analyzed in detail through several statistical tests on sequences sampled at different frequencies. A simple post-processing algorithm is also proposed and implemented to improve the randomness of the generated sequences, and the possibility of using these TRNGs as key generators is discussed.
    • The proposal of new encryption algorithms that are fast, secure, and implementable with a reduced amount of resources. Among all the possibilities, this thesis focuses on chaotic systems since, thanks to intrinsic properties such as ergodicity and random-like behavior, they can be a good alternative to classical encryption systems. To overcome the problems that arise when these systems are digitized, several strategies are proposed and studied: using a multi-encryption system, changing the control parameters of the chaotic systems, and perturbing the chaotic orbits (a toy illustration follows this abstract).
    • The implementation of the proposed algorithms on a Virtex-7 FPGA. The different implementations are analyzed and compared in terms of power consumption, area usage, encryption speed, and achieved security level. One of these designs is selected for implementation as an ASIC (Application Specific Integrated Circuit) in a 0.18 µm technology; in any case, the proposed solutions can also be implemented on other platforms and technologies.
    • Finally, the adaptation of the proposed algorithms to Gigabit Ethernet optical communications. In particular, cryptosystems that encrypt at the physical layer are implemented for 1 Gbps and 10 Gbps. To encrypt at the physical layer, the algorithms proposed in the previous sections are adapted to preserve the line-code format: 8b/10b for 1 Gb Ethernet and 64b/66b for 10 Gb Ethernet. In both cases the cryptosystems are implemented on a Virtex-7 FPGA, and an experimental setup is built that includes two SFP (Small Form-factor Pluggable) modules capable of transmitting at up to 10.3125 Gbps over 850 nm multimode fiber. With this setup it is verified that the encryption systems operate correctly and synchronously, that the encryption is sound (it passes all the security tests), and that the data traffic pattern is hidden.
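    The toy referenced above: a hedged illustration of the chaotic stream-cipher idea, ours rather than one of the thesis's designs, with no security claimed. A digitized logistic map generates the keystream, and its orbit is periodically perturbed, one of the digitization countermeasures listed in the abstract.

```python
def keystream(key, n, r=3.9999):
    """Byte keystream from a digitized logistic map; `key` is the initial
    condition in (0, 1)."""
    x = key
    for i in range(n):
        x = r * x * (1.0 - x)               # logistic map iteration
        if i % 16 == 15:                    # perturb the orbit to break the
            x = (x + ((i * 2654435769) % 997) / 1e6) % 1.0   # short cycles of
            x = min(max(x, 1e-9), 1.0 - 1e-9)                # finite precision
        yield int(x * 2**24) & 0xFF         # low byte of the scaled state

def xor_cipher(data, key):
    return bytes(b ^ k for b, k in zip(data, keystream(key, len(data))))

msg = b"gigabit ethernet payload"
ct = xor_cipher(msg, key=0.314159265)
assert xor_cipher(ct, key=0.314159265) == msg
```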

    Permuted Puzzles and Cryptographic Hardness

    Get PDF
    A permuted puzzle problem is defined by a pair of distributions D_0, D_1 over S^n. The problem is to distinguish samples from D_0, D_1, where the symbols of each sample are permuted by a single secret permutation p of [n]. The conjectured hardness of specific instances of permuted puzzle problems was recently used to obtain the first candidate constructions of Doubly Efficient Private Information Retrieval (DE-PIR) (Boyle et al. & Canetti et al., TCC '17). Roughly, in these works the distributions D_0, D_1 over F^n are evaluations of either a moderately low-degree polynomial or a random function. This new conjecture seems to be quite powerful, and is the foundation for the first DE-PIR candidates, almost two decades after the question was first posed by Beimel et al. (CRYPTO '00). While permuted puzzles are a natural and general class of problems, their hardness is still poorly understood. We initiate a formal investigation of the cryptographic hardness of permuted puzzle problems. Our contributions lie in three main directions: 1. Rigorous formalization. We formalize a notion of permuted puzzle distinguishing problems, extending and generalizing the proposed permuted puzzle framework of Boyle et al. (TCC '17). 2. Identifying hard permuted puzzles. We identify natural examples in which a one-time permutation provably creates cryptographic hardness, based on "standard" assumptions. In these examples, the original distributions D_0, D_1 are easily distinguishable, but the permuted puzzle distinguishing problem is computationally hard. We provide such constructions in the random oracle model, and in the plain model under the Decisional Diffie-Hellman (DDH) assumption. We additionally observe that the Learning Parity with Noise (LPN) assumption itself can be cast as a permuted puzzle. 3. Partial lower bound for the DE-PIR problem. We make progress towards better understanding the permuted puzzles underlying the DE-PIR constructions, by showing that a toy version of the problem, introduced by Boyle et al. (TCC '17), withstands a rich class of attacks, namely those that distinguish solely via statistical queries.
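    A minimal sketch of a permuted puzzle distinguishing game in the DE-PIR style described above, with toy parameters and helper names of ours: D_0 evaluates a random low-degree polynomial, D_1 a random function, and one secret permutation scrambles every sample.

```python
import random

p, n, deg = 101, 101, 3
pi = list(range(n))
random.shuffle(pi)                         # the one-time secret permutation

def sample(b):
    """Draw one puzzle sample from D_b, then permute its symbol positions."""
    if b == 0:                             # D_0: evaluations of a low-degree poly
        coeffs = [random.randrange(p) for _ in range(deg + 1)]
        vals = [sum(c * pow(t, e, p) for e, c in enumerate(coeffs)) % p
                for t in range(n)]
    else:                                  # D_1: a uniformly random function
        vals = [random.randrange(p) for _ in range(n)]
    return [vals[pi[j]] for j in range(n)]

# Without the permutation the two cases are easy to separate (interpolate and
# check the degree); after it, distinguishing is conjectured to be hard.
challenge = sample(random.randrange(2))
print(challenge[:8])
```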

    Shifted Eisenstein polynomials, irreducible compositions of polynomials and group key exchanges

    Full text link
    In my dissertation, I have covered multiple different topics. First, we consider the concept of natural density over the integers, and extend it to holomorphy rings over function fields. This allows us to give a function field analogue of Cesàro’s theorem, which gives the “probability” that an m-tuple of random elements of the holomorphy ring is coprime. We also generalize this and consider the density of k × m matrices over holomorphy rings which can be extended to unimodular m × m matrices. In the second part, we determine the natural density of shifted Eisenstein polynomials. This means that we compute the density of integer polynomials f(x) of a fixed degree n for which some shift f(x + i) for an integer i satisfies Eisenstein’s irreducibility criterion. We then also compute the density of affine Eisenstein polynomials. Thirdly, we consider an arbitrary set of monic quadratic polynomials over a finite field and ask ourselves which compositions of copies of them are irreducible. We first give a criterion to decide whether all such compositions are irreducible, and then show that in general, the irreducible compositions have the structure of a regular language. In the final chapter, we study cryptographic protocols for key exchange in ad-hoc groups. We first translate some protocols from the literature to the more general setting of semigroup actions, and then propose our own variants of these protocols, which aim to have improved security or efficiency. Then, we demonstrate a couple of active attacks on certain such protocols which are in some ways more powerful than man-in-the-middle attacks.
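    A minimal sketch of the shifted-Eisenstein property from the second part, assuming sympy; the helper names and the shift cutoff are illustrative choices of ours. f(x) counts as shifted Eisenstein if f(x + i) satisfies Eisenstein's criterion for some integer shift i.

```python
from sympy import Poly, primefactors, symbols

x = symbols("x")

def is_eisenstein(coeffs):
    """Eisenstein's criterion; coeffs are listed from the constant term up."""
    const, lead = coeffs[0], coeffs[-1]
    if const == 0:
        return False
    for p in primefactors(const):          # a witness prime must divide a_0
        if (all(c % p == 0 for c in coeffs[:-1])
                and lead % p != 0 and const % p**2 != 0):
            return True
    return False

def shifted(coeffs, i):
    """Coefficients of f(x + i), constant term first."""
    f = Poly(coeffs[::-1], x).as_expr()
    return Poly(f.subs(x, x + i), x).all_coeffs()[::-1]

def is_shifted_eisenstein(coeffs, bound=50):   # bound is an arbitrary cutoff
    return any(is_eisenstein(shifted(coeffs, i))
               for i in range(-bound, bound + 1))

# The 5th cyclotomic polynomial x^4 + x^3 + x^2 + x + 1 is not Eisenstein,
# but its shift by 1 is Eisenstein at the prime 5.
print(is_eisenstein([1, 1, 1, 1, 1]))          # False
print(is_shifted_eisenstein([1, 1, 1, 1, 1]))  # True
```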

    Optimization of Critical Infrastructure with Fluids

    Full text link
    Many of the world's most critical infrastructure systems control the motion of fluids. Despite their importance, the design, operation, and restoration of these infrastructures are sometimes carried out suboptimally. One reason for this is the intractability of optimization problems involving fluids, which are often constrained by partial differential equations or nonconvex physics. To address these challenges, this dissertation focuses on developing new mathematical programming and algorithmic techniques for optimization problems involving difficult nonlinear constraints that model a fluid's behavior. These new contributions bring many important problems within the realm of tractability. The first focus of this dissertation is on surface water systems. Specifically, we introduce the Optimal Flood Mitigation Problem, which optimizes the positioning of structural measures to protect critical assets with respect to a predefined flood scenario. Two solution approaches are then developed. The first leverages mathematical programming but does not tractably scale to realistic scenarios. The second uses a physics-inspired metaheuristic, which is found to compute good quality solutions for realistic scenarios. The second focus is on potable water distribution systems. Two foundational problems are considered. The first is the optimal water network design problem, for which we derive a novel convex reformulation, then develop an algorithm found to be more effective than the current state of the art on select instances. The second is the optimal pump scheduling (or Optimal Water Flow) problem, for which we develop a mathematical programming relaxation and various algorithmic techniques to improve convergence. The final focus is on natural gas pipeline systems. Two novel problems are considered. The first is the Maximal Load Delivery (MLD) problem for gas pipelines, which aims at finding a feasible steady-state operating point that maximizes load delivery for a severely damaged gas network. The second is the joint gas-power MLD problem, which couples damaged gas and power networks at gas-fired generators. In both problems, convex relaxations of nonconvex dynamical constraints are developed to increase tractability.
    PhD dissertation, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169849/1/tasseff_1.pd
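    A hedged illustration of the convex-relaxation idea, ours rather than the dissertation's formulation: for a single gas pipe with fixed flow direction, the nonconvex Weymouth relation between squared pressures and flow, pi_i - pi_j = w·f², admits the standard relaxation w·f² <= pi_i - pi_j <= w·F·f on f in [0, F], the upper bound being the secant overestimator of f². The sketch below, assuming cvxpy and illustrative numbers, maximizes delivered load subject to that relaxation.

```python
import cvxpy as cp

w, F = 0.05, 10.0                 # pipe resistance and flow capacity (toy values)
pi_i = cp.Variable()              # squared pressure at the source node
pi_j = cp.Variable()              # squared pressure at the demand node
f = cp.Variable(nonneg=True)      # pipe flow = delivered load
d_max = 8.0                       # requested load at the demand node

prob = cp.Problem(
    cp.Maximize(f),               # MLD-style objective: maximize delivery
    [
        w * cp.square(f) <= pi_i - pi_j,   # convex side of Weymouth
        pi_i - pi_j <= w * F * f,          # secant relaxation of the other side
        pi_i <= 9.0,                       # squared-pressure bounds
        pi_j >= 1.0,
        f <= d_max,
    ],
)
prob.solve()
print(f.value)                    # maximal load deliverable under the relaxation
```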

    Discrete particle swarm optimization for combinatorial problems with innovative applications.

    Get PDF
    Master of Science in Computer Science, University of KwaZulu-Natal, Durban, 2016. Abstract available in PDF file.

    Analyse et Conception d'Algorithmes de Chiffrement Légers

    Get PDF
    The work presented in this thesis has been completed as part of the FUI Paclido project, whose aim is to provide new security protocols and algorithms for the Internet of Things, and more specifically wireless sensor networks. As a result, this thesis investigates so-called lightweight authenticated encryption algorithms, which are designed to fit into the limited resources of constrained environments. The first main contribution focuses on the design of a lightweight cipher called Lilliput-AE, which is based on the extended generalized Feistel network (EGFN) structure and was submitted to the Lightweight Cryptography (LWC) standardization project initiated by NIST (National Institute of Standards and Technology). Another part of the work concerns theoretical attacks against existing solutions, including some candidates of the NIST LWC standardization process. In particular, it presents dedicated analyses of the Skinny and Spook algorithms, along with a more general study of boomerang attacks against ciphers following a Feistel construction.
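    A minimal sketch of the plain two-branch Feistel structure that EGFNs generalize and that the boomerang study targets; the round function and constants below are toy choices of ours, not Lilliput-AE.

```python
def feistel_encrypt(block, round_keys, F):
    left, right = block
    for k in round_keys:
        left, right = right, left ^ F(right, k)    # one Feistel round
    return left, right

def feistel_decrypt(block, round_keys, F):
    left, right = block
    for k in reversed(round_keys):
        left, right = right ^ F(left, k), left     # undo a round; F itself
    return left, right                             # never needs inverting

def toy_F(x, k):
    """Toy 16-bit round function: add the key, rotate, mix."""
    x = (x + k) & 0xFFFF
    return (((x << 3) | (x >> 13)) & 0xFFFF) ^ ((x * 0x2D) & 0xFFFF)

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
ct = feistel_encrypt((0x0123, 0x4567), keys, toy_F)
assert feistel_decrypt(ct, keys, toy_F) == (0x0123, 0x4567)
```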

    Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference

    Get PDF

    Classical and quantum algorithms for scaling problems

    Get PDF
    This thesis is concerned with scaling problems, which have a plethora of connections to different areas of mathematics, physics and computer science. Although many structural aspects of these problems are understood by now, we only know how to solve them efficiently in special cases. We give new algorithms for non-commutative scaling problems with complexity guarantees that match the prior state of the art. To this end, we extend the well-known (self-concordance based) interior-point method (IPM) framework to Riemannian manifolds, motivated by its success in the commutative setting. Moreover, the IPM framework does not obviously suffer from the same obstructions to efficiency as previous methods. It also yields the first high-precision algorithms for other natural geometric problems in non-positive curvature. For the (commutative) problems of matrix scaling and balancing, we show that quantum algorithms can outperform the (already very efficient) state-of-the-art classical algorithms. Their time complexity can be sublinear in the input size; in certain parameter regimes they are also optimal, whereas in others we show no quantum speedup over the classical methods is possible. Along the way, we provide improvements over the long-standing state of the art for searching for all marked elements in a list, and computing the sum of a list of numbers. We identify a new application in the context of tensor networks for quantum many-body physics. We define a computable canonical form for uniform projected entangled pair states (as the solution to a scaling problem), circumventing previously known undecidability results. We also show, by characterizing the invariant polynomials, that the canonical form is determined by evaluating the tensor network contractions on networks of bounded size.
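    For the commutative special case mentioned above, matrix scaling, a classical baseline is the Sinkhorn iteration; this toy loop is ours and carries none of the thesis's complexity guarantees, but it shows what the problem asks for: diagonal rescalings that make a positive matrix doubly stochastic.

```python
import numpy as np

def sinkhorn(A, iters=500):
    """Alternately normalize rows and columns of a positive matrix."""
    A = np.array(A, dtype=float)
    for _ in range(iters):
        A /= A.sum(axis=1, keepdims=True)   # rows sum to 1
        A /= A.sum(axis=0, keepdims=True)   # columns sum to 1
    return A

rng = np.random.default_rng(0)
S = sinkhorn(rng.random((4, 4)) + 0.1)      # strictly positive input
print(S.sum(axis=0))                        # ~ all ones
print(S.sum(axis=1))                        # ~ all ones
```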