    Assessing the robustness of genetic codes and genomes

    There are two main approaches to assessing the robustness of genetic codes and coding sequences. The statistical approach is based on empirical estimates of probabilities computed from random samples of permutations representing assignments of amino acids to codons, whereas the optimization-based approach relies on the optimization percentage, frequently computed using metaheuristics. We propose a method based on the first two moments of the distribution of robustness values over all possible genetic codes. Building on a polynomially solvable instance of the Quadratic Assignment Problem, we also propose an exact greedy algorithm for finding the minimum value of genome robustness. To reduce the number of operations needed to compute the scores and Cantelli's upper bound, we developed methods based on, among others, the neighborhood structure of the genetic code and pairwise comparisons between genetic codes. To assess the robustness of natural genetic codes and genomes, we chose 23 natural genetic codes, 235 amino acid properties, and 324 thermophilic and 418 non-thermophilic prokaryotes. Among our results, we found that although the standard genetic code is more robust than most genetic codes, some mitochondrial and nuclear genetic codes are more robust than the standard code at the third and first codon positions, respectively. We also observed that synonymous codon usage tends to be highly optimized to buffer the impact of single-base changes, mainly in thermophilic prokaryotes.
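    The Cantelli bound mentioned above reduces to a one-line computation once the first two moments of the all-codes robustness distribution are known. A minimal Python sketch of that step follows; the function name and the numbers fed to it are illustrative assumptions rather than values from the study, and a score convention where lower values mean more robust is assumed.

        def cantelli_upper_bound(score, mean, variance):
            """One-sided Cantelli bound on the probability that a random
            genetic code is at least as robust as `score`.

            With lower scores meaning more robust, P(X <= score) is bounded
            by var / (var + (mean - score)^2) for scores below the mean.
            """
            gap = mean - score
            if gap <= 0:
                return 1.0  # the bound is uninformative on the wrong side of the mean
            return variance / (variance + gap * gap)

        # Hypothetical moments of the all-codes score distribution and a
        # hypothetical natural-code score, purely for illustration.
        print(cantelli_upper_bound(score=2.1, mean=5.0, variance=1.8))  # ~0.176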

    Analysis and Design of Lightweight Encryption Algorithms

    The work presented in this thesis was completed as part of the FUI Paclido project, whose aim is to provide new security protocols and algorithms for the Internet of Things, and more specifically for wireless sensor networks. Accordingly, this thesis investigates so-called lightweight authenticated encryption algorithms, which are designed to fit the limited resources of constrained environments. The first main contribution focuses on the design of a lightweight cipher called Lilliput-AE, which is based on the extended generalized Feistel network (EGFN) structure and was submitted to the Lightweight Cryptography (LWC) standardization project initiated by NIST (National Institute of Standards and Technology). Another part of the work concerns theoretical attacks against existing solutions, including some candidates of the NIST LWC standardization process. Specific analyses of the Skinny and Spook algorithms are therefore presented, along with a more general study of boomerang attacks against ciphers following a Feistel construction.
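    As background for the Feistel constructions discussed above, the sketch below implements a toy balanced Feistel network in Python. It is not Lilliput-AE's actual EGFN round: the branch count, round function, and keys are invented for illustration, and the sketch only demonstrates the invertibility property that Feistel-style schemes, including EGFNs, are built on.

        def feistel_encrypt(left, right, round_keys, f):
            # Each round mixes one branch through the keyed function f
            # and swaps branches; decryption replays the keys in reverse.
            for k in round_keys:
                left, right = right, left ^ f(right, k)
            return left, right

        def feistel_decrypt(left, right, round_keys, f):
            for k in reversed(round_keys):
                left, right = right ^ f(left, k), left
            return left, right

        def toy_f(x, k):
            # Hypothetical 16-bit round function (add key, rotate left by 3).
            x = (x + k) & 0xFFFF
            return ((x << 3) | (x >> 13)) & 0xFFFF

        keys = [0x1234, 0xBEEF, 0x0F0F, 0x7777]
        l, r = feistel_encrypt(0xABCD, 0x9876, keys, toy_f)
        assert feistel_decrypt(l, r, keys, toy_f) == (0xABCD, 0x9876)

    Note that the round function itself need not be invertible; the structure guarantees decryption, which is one reason Feistel constructions suit resource-constrained designs.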

    Dynamic system characterization and design using mechanical impedance representations

    Vibration testing is a critical aspect of the qualification of fieldable hardware, as dynamic environments are typically design drivers. However, it is difficult to provide representative boundary conditions for component testing, and an ill-matched boundary condition can alter test outcomes. To achieve more realistic boundary conditions, test fixtures can be strategically designed to emulate the impedance of the next level of assembly. The body of work presented herein proposes various strategies for matching the drive-point impedance of a target frequency response function (FRF) using undamped lumped-parameter emulators. Two primary techniques were developed to accomplish this impedance matching: a constrained exhaustive search algorithm and a constrained optimization algorithm. The constrained exhaustive search exploits newly identified high- and low-frequency limits to minimize the number of parameters that must be searched. The optimization algorithm provides an innovative methodology for identifying a comprehensive and bounded design space and presents a novel implementation of particle swarm optimization that produces an optimized set of parameters for every identified physically realizable topology. The resulting emulator design space provides a basis from which low-complexity, low-cost fixtures can be constructed, offering an attainable path to better-matched boundary conditions and more representative vibration testing.
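    To make the emulator idea concrete, the Python sketch below computes the drive-point receptance FRF of an undamped lumped-parameter chain, the kind of quantity the exhaustive search and optimization algorithms would compare against a target FRF. The topology, masses, and stiffnesses here are illustrative assumptions, not designs from this work.

        import numpy as np

        def drive_point_frf(masses, stiffnesses, freqs_hz, dof=0):
            # Undamped receptance H(w) = (K - w^2 M)^-1, evaluated at the
            # driven DOF, for a grounded chain of masses and springs.
            n = len(masses)
            M = np.diag(np.asarray(masses, dtype=float))
            K = np.zeros((n, n))
            for i, k in enumerate(stiffnesses):
                K[i, i] += k  # spring i ties mass i to mass i-1 (ground for i = 0)
                if i > 0:
                    K[i - 1, i - 1] += k
                    K[i - 1, i] -= k
                    K[i, i - 1] -= k
            unit = np.zeros(n)
            unit[dof] = 1.0
            return np.array([np.linalg.solve(K - (2 * np.pi * f) ** 2 * M, unit)[dof]
                             for f in freqs_hz])

        # Hypothetical 2-DOF emulator; a constrained search or particle swarm
        # would vary these parameters to minimize an error metric between this
        # FRF and the measured target FRF of the next level of assembly.
        freqs = np.linspace(10.0, 500.0, 200)  # Hz
        H = drive_point_frf([1.2, 0.4], [8.0e4, 2.0e4], freqs)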

    Integer Sparse Distributed Memory and Modular Composite Representation

    Challenging AI applications, such as cognitive architectures, natural language understanding, and visual object recognition, share some basic operations, including pattern recognition, sequence learning, clustering, and association of related data. Both the representations used and the structure of a system significantly influence which tasks and problems are most readily supported. A memory model and a representation that facilitate these basic tasks would greatly improve the performance of these challenging AI applications.

    Sparse Distributed Memory (SDM), based on large binary vectors, has several desirable properties, including auto-associativity, content addressability, distributed storage, and robustness to noisy inputs, that would facilitate the implementation of challenging AI applications. Here I introduce two variations on the original SDM, the Extended SDM and the Integer SDM, that significantly improve these desirable properties, as well as a new form of reduced description representation named MCR.

    Extended SDM, which uses word vectors larger than its address vectors, enhances hetero-associativity, improving the storage of sequences of vectors as well as of other data structures. A novel sequence learning mechanism is introduced, and several experiments demonstrate the capacity and sequence learning capability of this memory.

    Integer SDM uses modular integer vectors rather than binary vectors, improving the representation capabilities of the memory and its noise robustness. Several experiments show its capacity and noise robustness, and theoretical analyses of its capacity and fidelity are also presented.

    A reduced description represents a whole hierarchy using a single high-dimensional vector, which can recover individual items and be used directly in complex calculations and procedures, such as making analogies; furthermore, the hierarchy can be reconstructed from the single vector. Modular Composite Representation (MCR), a new reduced description model for the representations used in challenging AI applications, provides an attractive tradeoff between expressiveness and simplicity of operations. A theoretical analysis of its noise robustness, several experiments, and comparisons with similar models are presented.

    My implementations of these memories include an object-oriented version using a RAM cache, a version for distributed and multi-threaded execution, and a GPU version for fast vector processing.
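    The modular integer vectors behind Integer SDM and MCR lend themselves to a short sketch of binding and similarity. The Python below is a toy illustration in the spirit of MCR rather than the thesis implementation; the dimension, modulus, and mean-circular-distance metric are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        DIM, MOD = 1024, 16  # illustrative dimension and modulus

        def random_vec():
            return rng.integers(0, MOD, size=DIM)

        def bind(a, b):
            # Binding by elementwise modular addition; undone by unbind.
            return (a + b) % MOD

        def unbind(c, b):
            return (c - b) % MOD

        def distance(a, b):
            # Mean per-component circular distance, in [0, MOD/2].
            d = np.abs(a - b) % MOD
            return np.minimum(d, MOD - d).mean()

        role, filler = random_vec(), random_vec()
        pair = bind(role, filler)
        assert distance(unbind(pair, role), filler) == 0
        # Unrelated random vectors sit near the expected distance MOD/4.
        print(distance(random_vec(), random_vec()))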

    Design, Cryptanalysis and Protection of Symmetric Encryption Algorithms

    This thesis covers results from several areas related to symmetric cryptography and its secure, efficient implementation, and is divided into four main parts. In Part II, Benchmarking of AEAD, two articles are presented, showing the results of the FELICS framework for authenticated encryption algorithms and a multi-architecture benchmarking of permutations used as building blocks of AEAD algorithms. The Sparkle family of hash and AEAD algorithms is presented in Part III; Sparkle is currently a finalist of the NIST call for standardization of lightweight hash and AEAD algorithms. Part IV, Cryptanalysis of ARX ciphers, discusses two cryptanalysis techniques based on differential trails, applied to ARX ciphers. The first technique, called Meet-in-the-Filter, uses an offline trail record, combined with a fixed trail and a reverse differential search, to propose long differential trails that are useful for key recovery. The second technique is an extension of ARX analysis tools that can automate the generation of truncated trails from existing non-truncated ones and compute the exact probability of those truncated trails. In Part V, Masked AES for Microcontrollers, a new method is shown to efficiently compute a side-channel-protected AES, based on the masking scheme described by Rivain and Prouff. This method introduces table and execution-order optimizations, as well as practical security proofs.
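    To illustrate the additive (Boolean) sharing that Rivain-Prouff-style masking builds on, the Python sketch below splits a byte into XOR shares and applies a linear operation share-wise. It is a toy model under stated assumptions: every name is invented for illustration, and the secure nonlinear S-box computation (the expensive part such schemes optimize), along with the table and execution-order optimizations from this work, is not shown.

        import secrets

        def share(x, n_shares, bits=8):
            # Split x into n_shares random Boolean shares whose XOR equals x.
            shares = [secrets.randbits(bits) for _ in range(n_shares - 1)]
            last = x
            for s in shares:
                last ^= s
            return shares + [last]

        def unshare(shares):
            out = 0
            for s in shares:
                out ^= s
            return out

        def add_round_key_shared(state_shares, key_byte):
            # Linear layers act share-wise; only one share absorbs the
            # constant, so the XOR of all shares still tracks the state.
            return [state_shares[0] ^ key_byte] + state_shares[1:]

        x = 0x3A
        sh = share(x, n_shares=3)
        assert unshare(add_round_key_shared(sh, 0x5C)) == x ^ 0x5C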