
    Studying Quantum Hashing Cryptographic Strength

    This research addresses the problem of studying the cryptographic strength of quantum hashing. The most important criteria to consider are the resistance of quantum hashing to collisions and the irreversibility of quantum hash functions. The collision resistance of a given quantum hash function depends on many numeric parameters, so a corresponding optimization problem must be solved. To achieve this goal, the research conducts a comparative analysis of known methods and proposes new ones. In the course of the work, several algorithms were used and modified to ensure the cryptographic strength of quantum hash functions, and an algorithm based on linear codes was developed to find a solution in the case of high dimensionality.
    Keywords: quantum computing, quantum cryptography, quantum hashing
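    As an illustration of the optimization problem described above, the sketch below computes the collision parameter of one well-known quantum hash construction (an Ablayev-Vasiliev style hash, assumed here for concreteness: x is mapped to a superposition whose amplitudes are cos(2*pi*b_j*x/q) and sin(2*pi*b_j*x/q) for a parameter set B = {b_1, ..., b_d}) and searches for a good B by naive random sampling. Both the construction and the search strategy are illustrative assumptions; the paper itself develops a linear-code-based method for high dimensions.

    import math, random

    def collision_parameter(b, q):
        # delta(B) = max over nonzero z of |(1/d) * sum_j cos(2*pi*b_j*z/q)|;
        # this bounds the overlap |<psi(x)|psi(y)>| for any distinct x, y,
        # so a smaller delta means better collision resistance.
        d = len(b)
        return max(
            abs(sum(math.cos(2 * math.pi * bj * z / q) for bj in b)) / d
            for z in range(1, q)
        )

    def random_search(q, d, trials=200):
        # Naive optimization: sample random parameter sets, keep the best one.
        best_b, best_delta = None, 1.0
        for _ in range(trials):
            b = random.sample(range(1, q), d)
            delta = collision_parameter(b, q)
            if delta < best_delta:
                best_b, best_delta = b, delta
        return best_b, best_delta

    b, delta = random_search(q=101, d=8)
    print("parameters:", b, "collision parameter:", round(delta, 3))

    Random search degrades quickly as q and d grow, which is what motivates structured constructions such as the linear-code approach for high dimensionality.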

    Inactivation Decoding of LT and Raptor Codes: Analysis and Code Design

    In this paper we analyze LT and Raptor codes under inactivation decoding. A first-order analysis is introduced, which provides the expected number of inactivations for an LT code as a function of the output distribution, the number of input symbols and the decoding overhead. The analysis is then extended to the calculation of the distribution of the number of inactivations. In both cases, random inactivation is assumed. The developed analytical tools are then exploited to design LT and Raptor codes, enabling tight control of the decoding complexity vs. failure probability trade-off. The accuracy of the approach is confirmed by numerical simulations. Comment: Accepted for publication in IEEE Transactions on Communications.
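    As a concrete illustration of random inactivation (the decoding-graph model and the degree distribution below are toy assumptions, not taken from the paper), the following simulation peels degree-one output symbols while possible and, whenever the decoder stalls, inactivates a random unresolved input symbol:

    import random

    def simulate_inactivations(k, n, degree_dist):
        # degree_dist: list of (degree, probability) pairs for output symbols.
        degrees, probs = zip(*degree_dist)
        neighbors = [set(random.sample(range(k), random.choices(degrees, probs)[0]))
                     for _ in range(n)]
        resolved = set()      # input symbols recovered by peeling or inactivated
        inactivations = 0
        while len(resolved) < k:
            progress = True
            while progress:   # peeling: use output symbols of reduced degree 1
                progress = False
                for nb in neighbors:
                    nb -= resolved
                    if len(nb) == 1:
                        resolved |= nb
                        progress = True
            if len(resolved) < k:
                # Decoder stalled: inactivate a random unresolved input symbol.
                inactivations += 1
                resolved.add(random.choice(sorted(set(range(k)) - resolved)))
        return inactivations

    dist = [(1, 0.05), (2, 0.50), (3, 0.15), (4, 0.30)]   # made-up distribution
    runs = [simulate_inactivations(k=100, n=110, degree_dist=dist) for _ in range(50)]
    print("mean inactivations:", sum(runs) / len(runs))

    Averaging the returned counts over many runs estimates the expected number of inactivations that the paper's first-order analysis predicts in closed form; each inactivation later costs a row in a Gaussian-elimination step, which is why this count governs decoding complexity.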

    ICA algorithms over finite alphabets: a comparative study

    Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2019.
    In recent years, Independent Component Analysis (ICA) algorithms over finite alphabets, and over finite fields as a particular case, have been proposed. Given the untested scenarios and the value of replicating previous results, we compare these algorithms with each other, and we verify how the more generalist algorithms perform compared to those that assume a finite-field structure. Thus, in this dissertation we evaluate, through stochastic simulations, the performance of linear ICA algorithms applied to the Blind Source Separation (BSS) problem over finite fields. Two metrics were considered: execution time and Total Source Separation, a more pessimistic separation metric. Although the AMERICA, SA4ICA and GLICA algorithms converge to 100% Total Separation as the number of observed samples grows, the SA4ICA algorithm exhibits anomalous behavior that breaks this pattern, which had not been reported previously. Additionally, we implemented the GLICA algorithm; it achieved practically the same separation performance as the AMERICA algorithm, although its execution time is higher than that of the more established technique.
    In addition, we conducted an experiment applying ICA for its own sake, that is, for the minimization of mutual information. The linear algorithms previously applied to BSS are now placed in a context where the observed samples are not generated by a linear mixture, which in principle should not favor these algorithms over a nonlinear one. However, when compared with the nonlinear QICA algorithm, whose premise is precisely to handle these more generic generative models, QICA performs worse than the linear algorithms in most scenarios, and the linear algorithms show practically identical results among themselves in all scenarios. Moreover, the QICA algorithm takes far longer to execute than all the other algorithms. The results thus suggest that the linear algorithms hold both a runtime and a performance advantage over this new algorithm. In both experiments we relied on statistical inference to validate our results.
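    As a concrete, if naive, illustration of linear ICA over a finite field (a brute-force baseline written for this summary, not AMERICA, SA4ICA, GLICA or QICA), the sketch below recovers an unmixing matrix over GF(2) by exhaustively minimizing the sum of marginal entropies, which for an invertible linear map is equivalent to minimizing the mutual information between the outputs:

    import itertools, math
    import numpy as np

    def marginal_entropy_sum(Y):
        # Sum of empirical binary entropies of the rows of Y.
        total = 0.0
        for row in Y:
            p = row.mean()
            for q in (p, 1 - p):
                if 0 < q < 1:
                    total -= q * math.log2(q)
        return total

    def brute_force_ica_gf2(X):
        # Enumerate all invertible n x n matrices over GF(2) (tiny n only)
        # and keep the unmixing matrix minimizing the marginal entropy sum.
        n = X.shape[0]
        best_W, best_h = None, float("inf")
        for bits in itertools.product((0, 1), repeat=n * n):
            W = np.array(bits, dtype=np.uint8).reshape(n, n)
            if round(np.linalg.det(W)) % 2 == 0:   # singular over GF(2)
                continue
            h = marginal_entropy_sum(W @ X % 2)
            if h < best_h:
                best_W, best_h = W, h
        return best_W

    rng = np.random.default_rng(0)
    S = (rng.random((2, 2000)) < np.array([[0.1], [0.3]])).astype(np.uint8)  # sources
    A = np.array([[1, 1], [0, 1]], dtype=np.uint8)                           # mixing
    X = A @ S % 2
    print(brute_force_ica_gf2(X))   # ideally recovers A's inverse over GF(2)

    The exhaustive enumeration is only feasible for very small dimensions, which is precisely the gap the dedicated algorithms compared in this dissertation are designed to close.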

    Optimal Networks from Error Correcting Codes

    To address growth challenges facing large data centers and supercomputing clusters, a new construction is presented for scalable, high-throughput, low-latency networks. The resulting networks require 1.5-5 times fewer switches and 2-6 times fewer cables, and have 1.2-2 times lower latency, with correspondingly lower congestion and packet losses, than the best present or proposed networks providing the same number of ports at the same total bisection. These advantage ratios increase with network size. The key new ingredient is the exact equivalence discovered between the problem of maximizing network bisection for large classes of practically interesting Cayley graphs and the problem of maximizing codeword distance for linear error-correcting codes. The resulting translation recipe converts existing optimal error-correcting codes into optimal-throughput networks. Comment: 14 pages, accepted at the ANCS 2013 conference.
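    The code-to-network translation can be made concrete at toy scale. In the sketch below the network is the Cayley graph on Z_2^n whose edges flip a node address by one generator vector, and a breadth-first search from vertex 0 gives the diameter, a proxy for worst-case hop latency. The generator sets are hypothetical; the paper's recipe derives them from the generator matrices of optimal error-correcting codes.

    from collections import deque

    def cayley_diameter(n, generators):
        # Cayley graph on Z_2^n: vertices are n-bit addresses, and u ~ v
        # iff u XOR v is in the generator set. The graph is vertex-transitive,
        # so a BFS from vertex 0 yields the diameter.
        dist = {0: 0}
        queue = deque([0])
        while queue:
            u = queue.popleft()
            for g in generators:
                v = u ^ g
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        assert len(dist) == 1 << n, "generator set must span Z_2^n"
        return max(dist.values())

    hypercube = [0b001, 0b010, 0b100]                      # plain 3-cube basis
    augmented = hypercube + [0b011, 0b101, 0b110, 0b111]   # denser generator set
    print(cayley_diameter(3, hypercube))    # 3 hops worst case
    print(cayley_diameter(3, augmented))    # 1 hop: the complete graph on 8 nodes

    Denser generator sets buy shorter paths at the cost of higher switch radix; choosing the set from an optimal code is what lets the paper maximize bisection for a given radix budget.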

    Sampling Techniques for Boolean Satisfiability

    Boolean satisfiability (SAT) has played a key role in diverse areas spanning testing, formal verification, planning, optimization, inferencing and the like. Apart from the classical problem of checking Boolean satisfiability, the problems of generating satisfying assignments uniformly at random, and of counting the total number of satisfying assignments, have also attracted significant theoretical and practical interest over the years. Prior work offered heuristic approaches with very weak or no performance guarantees, and theoretical approaches with proven guarantees but poor performance in practice. We propose a novel approach based on limited-independence hashing that allows us to design algorithms for both problems, with strong theoretical guarantees and scalability extending to thousands of variables. Based on this approach, we present two practical algorithms, UniformWitness, a near-uniform generator, and ApproxMC, the first scalable approximate model counter, along with reference implementations. Our algorithms work by issuing polynomially many calls to a SAT solver. We demonstrate the scalability of our algorithms over a large set of benchmarks arising from different application domains. Comment: MS thesis submitted to Rice University.
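    The hashing idea admits a compact toy illustration. The sketch below mimics the ApproxMC-style counting scheme in miniature: add m random XOR constraints, each of which keeps a given model with probability 1/2, until the surviving cell is small enough to count directly, then scale by 2^m. The brute-force enumerator stands in for a SAT solver, and the threshold logic is a simplified assumption; the thesis uses a real solver and statistically rigorous parameters.

    import itertools, random

    def solutions(clauses, n):
        # Brute-force stand-in for a SAT solver (toy sizes only). Literals
        # are signed 1-based variable indices, as in DIMACS format.
        for bits in itertools.product([False, True], repeat=n):
            if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
                yield bits

    def xor_cell_count(clauses, n, m, limit):
        # Count models that also satisfy m random XOR constraints,
        # stopping early once the count exceeds `limit`.
        xors = [(random.sample(range(n), random.randint(1, n)), random.getrandbits(1))
                for _ in range(m)]
        count = 0
        for bits in solutions(clauses, n):
            if all(sum(bits[i] for i in vs) % 2 == b for vs, b in xors):
                count += 1
                if count > limit:
                    break
        return count

    def approx_count(clauses, n, limit=8):
        # Add XORs until the surviving cell is small, then scale back up.
        for m in range(n + 1):
            c = xor_cell_count(clauses, n, m, limit)
            if 0 < c <= limit:
                return c * (2 ** m)
        return 0

    clauses = [[1, 2], [-1, 3], [2, -3]]   # (x1 v x2)(~x1 v x3)(x2 v ~x3)
    print(approx_count(clauses, n=3))      # small enough to count exactly: 3

    Near-uniform generation works on the same principle: pick a random XOR cell of the right size and draw a model from within it.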

    Fountain Codes under Maximum Likelihood Decoding

    This dissertation focuses on fountain codes under maximum likelihood (ML) decoding. First, LT codes are considered under a practical and widely used ML decoding algorithm known as inactivation decoding. Different analysis techniques are presented to characterize the decoding complexity. Next, an upper bound on the probability of decoding failure of Raptor codes under ML decoding is provided. Then, the distance properties of an ensemble of fixed-rate Raptor codes with linear random outer codes are analyzed. Finally, a novel class of fountain codes is presented, which consists of a parallel concatenation of a block code with a linear random fountain code. Comment: PhD thesis.
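    On the erasure channel, ML decoding of a binary fountain code reduces to solving a linear system over GF(2): each received output symbol is an XOR of input symbols, and decoding succeeds exactly when the system has full rank. The sketch below (with a hand-picked toy set of received equations, not a construction from the dissertation) performs this via Gaussian elimination:

    import numpy as np

    def ml_decode_gf2(A, y):
        # ML decoding on the erasure channel = solving A x = y over GF(2)
        # by Gaussian elimination; returns None on rank deficiency (failure).
        A, y = A.copy(), y.copy()
        n, k = A.shape
        row = 0
        for col in range(k):
            pivots = np.nonzero(A[row:, col])[0]
            if len(pivots) == 0:
                return None
            p = row + pivots[0]
            A[[row, p]] = A[[p, row]]
            y[[row, p]] = y[[p, row]]
            for r in range(n):
                if r != row and A[r, col]:
                    A[r] ^= A[row]
                    y[r] ^= y[row]
            row += 1
        return y[:k]

    # Toy run: k = 4 input bits, 6 received output symbols (XORs of inputs).
    A = np.array([[1, 0, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 0, 0, 1],
                  [1, 1, 1, 1]], dtype=np.uint8)
    x = np.array([1, 0, 1, 1], dtype=np.uint8)
    y = A @ x % 2
    print(x, ml_decode_gf2(A, y))   # the decoded vector should equal x

    The probability that such a system is rank deficient is what failure-probability bounds for ML decoding characterize, and inactivation decoding is an efficient way of organizing the same elimination.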

    Cybersecurity and Quantum Computing: friends or foes?

    The abstract is provided in the attachment.

    A review of clustering techniques and developments

    © 2017 Elsevier B.V. This paper presents a comprehensive study of clustering: existing methods and the developments made over time. Clustering is defined as an unsupervised learning task in which objects are grouped on the basis of some similarity inherent among them. There are different methods for clustering objects, such as hierarchical, partitional, grid-based, density-based and model-based methods. The approaches used in these methods are discussed together with their respective state of the art and applicability. The measures of similarity, as well as the evaluation criteria, which are the central components of clustering, are also presented in the paper. The applications of clustering in fields such as image segmentation, object and character recognition, and data mining are highlighted.
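    As a minimal illustration of the hierarchical family (single linkage with Euclidean distance, one arbitrary choice among the similarity measures the review surveys), the sketch below starts from singleton clusters and repeatedly merges the two clusters whose closest members are nearest:

    import math

    def single_linkage(points, k):
        # Agglomerative clustering: merge the pair of clusters with the
        # smallest distance between their closest members until k remain.
        clusters = [[p] for p in points]
        while len(clusters) > k:
            best = None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    d = min(math.dist(a, b)
                            for a in clusters[i] for b in clusters[j])
                    if best is None or d < best[0]:
                        best = (d, i, j)
            _, i, j = best
            clusters[i] += clusters.pop(j)   # j > i, so index i stays valid
        return clusters

    pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
    print(single_linkage(pts, k=2))   # two well-separated groups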