762 research outputs found

    Security Enhanced Symmetric Key Encryption Employing an Integer Code for the Erasure Channel

    An instance of the framework for cryptographic security enhancement of symmetric-key encryption employing a dedicated error-correction encoding is addressed. The main components of the proposal are (i) a dedicated error-correction code and (ii) a dedicated simulator of the noisy channel. The proposed error-correction code is designed for the binary erasure channel in which at most one bit is erased in each codeword byte. The proposed encryption has been evaluated in the traditional scenario where we consider the advantage of an attacker in correctly deciding to which of two known messages a given ciphertext corresponds. The evaluation shows that the proposed encryption reduces the considered attacker’s advantage in comparison with the initial encryption setting. The implementation complexity of the proposed encryption is also considered, and the scheme offers a suitable trade-off between increased security and increased implementation complexity.
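
    The paper's dedicated code and channel simulator are not spelled out in the abstract, so the following is only a minimal sketch of the erasure-channel ingredient: a single even-parity bit per byte is enough to recover any one erased bit whose position is known, matching the stated channel model of at most one erasure per codeword byte. The parity code, the function names, and the always-erase-one-bit simulator are illustrative assumptions, not the authors' construction.

        import random

        def encode_byte(data7):
            # Append an even-parity bit: any single erased bit in the resulting
            # 8-bit codeword can be recovered once its position is known.
            bits = [(data7 >> i) & 1 for i in range(7)]
            bits.append(sum(bits) % 2)
            return bits

        def erasure_channel(codeword):
            # Simulated binary erasure channel: here exactly one bit per byte is
            # erased (the paper allows "at most one"); None marks the erasure.
            received = list(codeword)
            received[random.randrange(len(received))] = None
            return received

        def decode_byte(received):
            # Even parity forces the erased bit to equal the XOR of the known bits.
            bits = list(received)
            if None in bits:
                i = bits.index(None)
                bits[i] = sum(b for b in bits if b is not None) % 2
            return sum(bits[i] << i for i in range(7))

        msg = 0x5A & 0x7F
        assert decode_byte(erasure_channel(encode_byte(msg))) == msg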

    Universal lossless source coding with the Burrows Wheeler transform

    The Burrows-Wheeler transform (BWT, 1994) is a reversible sequence transformation used in a variety of practical lossless source-coding algorithms. In each, the BWT is followed by a lossless source code that attempts to exploit the natural ordering of the BWT coefficients. BWT-based compression schemes are widely touted as low-complexity algorithms giving lossless coding rates better than those of the Ziv-Lempel codes (commonly known as LZ'77 and LZ'78) and almost as good as those achieved by prediction by partial matching (PPM) algorithms. To date, the coding performance claims have been made primarily on the basis of experimental results. This work gives a theoretical evaluation of BWT-based coding. The main results of this theoretical evaluation include: (1) statistical characterizations of the BWT output on both finite strings and sequences of length n → ∞, (2) a variety of very simple new techniques for BWT-based lossless source coding, and (3) proofs of the universality and bounds on the rates of convergence of both new and existing BWT-based codes for finite-memory and stationary ergodic sources. The end result is a theoretical justification and validation of the experimentally derived conclusions: BWT-based lossless source codes achieve universal lossless coding performance that converges to the optimal coding performance more quickly than the rate of convergence observed in Ziv-Lempel style codes and, for some BWT-based codes, within a constant factor of the optimal rate of convergence for finite-memory sources.
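
    As a small, hedged illustration of the pipeline the abstract describes (BWT followed by a simple code that exploits the ordering of the BWT output), the sketch below uses a naive rotation-sort BWT and a move-to-front stage. The O(n^2 log n) sort and the sentinel handling are for clarity only and are not the low-complexity implementations analyzed in the paper.

        def bwt(s, sentinel="\x00"):
            # Forward Burrows-Wheeler transform via naive sorting of all rotations.
            s += sentinel
            rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
            return "".join(rot[-1] for rot in rotations)

        def move_to_front(s):
            # Move-to-front coding: clustered symbols in the BWT output map to
            # small integers, which a final entropy coder can compress well.
            alphabet = sorted(set(s))
            out = []
            for ch in s:
                i = alphabet.index(ch)
                out.append(i)
                alphabet.insert(0, alphabet.pop(i))
            return out

        print(repr(bwt("banana")))        # 'annb\x00aa': equal symbols cluster
        print(move_to_front(bwt("banana")))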

    Two-Dimensional Source Coding by Means of Subblock Enumeration

    A technique of lossless compression via substring enumeration (CSE) attains compression ratios comparable to those of popular lossless compressors for one-dimensional (1D) sources. The CSE utilizes a probabilistic model built from the circular string of an input source for encoding the source. The CSE is applicable to two-dimensional (2D) sources such as images by treating a line of pixels of a 2D source as a symbol of an extended alphabet. At the initial step of the CSE encoding process, we need to output the number of occurrences of all symbols of the extended alphabet, so the time complexity increases exponentially as the size of the source grows. To reduce the time complexity, we propose a new CSE which encodes a 2D source block by block instead of line by line. The proposed CSE utilizes the flat torus of an input 2D source as a probabilistic model for encoding the source instead of the circular string of the source. Moreover, we analyze the limit of the average codeword length of the proposed CSE for general sources. Comment: 5 pages, Submitted to ISIT201
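
    The CSE encoder itself is not reproduced here; as a rough sketch of the flat-torus model the abstract refers to, the code below enumerates the occurrence counts of all h-by-w subblocks of a small binary image with wrap-around boundaries. The function name, the block-shape parameters, and the binary alphabet are assumptions made for illustration.

        from collections import Counter

        def subblock_counts(grid, h, w):
            # Count every h-by-w subblock of a 2D source, reading the input as a
            # flat torus (indices wrap around at the right and bottom edges).
            rows, cols = len(grid), len(grid[0])
            counts = Counter()
            for r in range(rows):
                for c in range(cols):
                    block = tuple(
                        tuple(grid[(r + dr) % rows][(c + dc) % cols] for dc in range(w))
                        for dr in range(h)
                    )
                    counts[block] += 1
            return counts

        grid = [[0, 1, 1],
                [1, 0, 1],
                [1, 1, 0]]
        print(subblock_counts(grid, 2, 2))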

    Description of Complex Systems in terms of Self-Organization Processes of Prime Integer Relations

    In the paper we present a description of complex systems in terms of self-organization processes of prime integer relations. A prime integer relation is an indivisible element made up of integers as the basic constituents following a single organizing principle. The prime integer relations control correlation structures of complex systems and may describe complex systems in a strong scale-covariant form. It is possible to geometrize the prime integer relations as two-dimensional patterns and isomorphically express the self-organization processes through transformations of the geometric patterns. As a result, prime integer relations can be measured by corresponding geometric patterns specifying the dynamics of complex systems. Determined by arithmetic only, the self-organization processes of prime integer relations can describe complex systems by information not requiring further explanations. This gives the possibility to develop an irreducible theory of complex systems. Comment: 8 pages, 4 figures, index corrected, minor changes mainly of stylistic character

    On an Irreducible Theory of Complex Systems

    In the paper we present results to develop an irreducible theory of complex systems in terms of self-organization processes of prime integer relations. Based on the integers and controlled by arithmetic only, the self-organization processes can describe complex systems by information not requiring further explanations. Important properties of the description are revealed. The description points to a special type of correlations that do not depend on the distances between parts, local times, and physical signals, and thus proposes a perspective on quantum entanglement. Through a concept of structural complexity, the description also computationally suggests the possibility of a general optimality condition of complex systems. The computational experiments indicate that the performance of a complex system may behave as a concave function of the structural complexity. A connection between the optimality condition and the majorization principle in quantum algorithms is identified. A global symmetry of complex systems belonging to the system as a whole, but not necessarily applying to its embedded parts, is presented. As arithmetic fully determines the breaking of the global symmetry, there is no further need to explain why the resulting gauge forces exist the way they do and not even slightly differently. Comment: 8 pages, 3 figures, typos are corrected, some changes and additions are made

    Power Laws, Highly Optimized Tolerance, and Generalized Source Coding

    We introduce a family of robust design problems for complex systems in uncertain environments which are based on tradeoffs between resource allocations and losses. Optimized solutions yield the “robust, yet fragile” features of highly optimized tolerance and exhibit power-law tails in the distributions of events for all but the special case of Shannon coding for data compression. In addition to data compression, we construct specific solutions for World Wide Web traffic and forest fires, and obtain excellent agreement with measured data.
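
    The web-traffic and forest-fire constructions are specific to the paper, but the underlying tradeoff can be sketched: allocate a fixed resource budget across events to minimize the expected loss when each event's loss decays as a power of its allocated resource. The loss model l(r) = r**(-beta), the Dirichlet toy probabilities, and the variable names below are illustrative assumptions; the closed-form allocation follows from the Lagrange condition for this toy problem.

        import numpy as np

        def hot_allocation(p, beta=1.0, budget=1.0):
            # Minimize sum_i p_i * r_i**(-beta) subject to sum_i r_i = budget.
            # The Lagrange condition gives r_i proportional to p_i**(1/(1+beta)).
            w = p ** (1.0 / (1.0 + beta))
            r = budget * w / w.sum()
            return r, r ** (-beta)              # resources and per-event losses

        rng = np.random.default_rng(0)
        p = rng.dirichlet(np.ones(1000))        # toy event probabilities
        r, losses = hot_allocation(p, beta=1.0)
        # loss_i scales like p_i**(-beta/(1+beta)): a power-law relation, so the
        # log-log correlation between losses and probabilities is essentially -1.
        print(np.corrcoef(np.log(losses), np.log(p))[0, 1])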