10 research outputs found

    Linear Equivalence of Block Ciphers with Partial Non-Linear Layers: Application to LowMC

    LowMC is a block cipher family designed in 2015 by Albrecht et al. It is optimized for practical instantiations of multi-party computation, fully homomorphic encryption, and zero-knowledge proofs. LowMC is used in the Picnic signature scheme, submitted to NIST's post-quantum standardization project, and is a substantial building block in other novel post-quantum cryptosystems. Many LowMC instances use a relatively recent design strategy (initiated by Gérard et al. at CHES 2013) of applying the non-linear layer to only a part of the state in each round, where the shortage of non-linear operations is partially compensated by heavy linear algebra. Since the high linear algebra complexity has been a bottleneck in several applications, one of the open questions raised by the designers was to reduce it, without introducing additional non-linear operations (or compromising security). In this paper, we consider LowMC instances with block size n, partial non-linear layers of size s ≤ n and r encryption rounds. We redesign LowMC's linear components in a way that preserves its specification, yet improves LowMC's performance in essentially every aspect. Most of our optimizations are applicable to all SP-networks with partial non-linear layers and shed new light on this relatively new design methodology. Our main result shows that when s < n, each LowMC instance belongs to a large class of equivalent instances that differ in their linear layers. We then select a representative instance from this class for which encryption (and decryption) can be implemented much more efficiently than for an arbitrary instance. This yields a new encryption algorithm that is equivalent to the standard one, but reduces the evaluation time and storage of the linear layers from r⋅n² bits to about r⋅n² − (r−1)(n−s)² bits. Additionally, we reduce the size of LowMC's round keys and constants and optimize its key schedule and instance generation algorithms. All of these optimizations give substantial improvements for small s and a reasonable choice of r. Finally, we formalize the notion of linear equivalence of block ciphers and prove the optimality of some of our results. Comprehensive benchmarking of our optimizations in various LowMC applications (such as Picnic) reveals improvements by factors that typically range between 2x and 40x in runtime and memory consumption.
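    A quick back-of-the-envelope check of the storage formula quoted above, as a minimal sketch rather than the paper's own code: the standard representation stores one dense n×n matrix per round (r⋅n² bits), while the equivalent representation needs only about r⋅n² − (r−1)(n−s)² bits. The parameters below are illustrative only (roughly a Picnic-style instance with a 128-bit block, ten 3-bit S-boxes per round and 20 rounds), not figures taken from the paper.

```python
def linear_layer_bits(n: int, s: int, r: int) -> tuple[int, int]:
    """Bits needed to store the round linear layers: the standard r*n^2
    versus the reduced r*n^2 - (r-1)*(n-s)^2 quoted in the abstract."""
    standard = r * n * n
    optimized = r * n * n - (r - 1) * (n - s) ** 2
    return standard, optimized

# Illustrative, Picnic-like parameters (assumed here, not quoted from the paper).
std, opt = linear_layer_bits(n=128, s=30, r=20)
print(std, opt, round(std / opt, 2))  # 327680, 145204, ~2.26x smaller
```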

    Towards the fast scrambling conjecture

    Many proposed quantum mechanical models of black holes include highly nonlocal interactions. The time required for thermalization to occur in such models should reflect the relaxation times associated with classical black holes in general relativity. Moreover, the time required for a particularly strong form of thermalization to occur, sometimes known as scrambling, determines the time scale on which black holes should start to release information. It has been conjectured that black holes scramble in a time logarithmic in their entropy, and that no system in nature can scramble faster. In this article, we address the conjecture from two directions. First, we exhibit two examples of systems that do indeed scramble in logarithmic time: Brownian quantum circuits and the antiferromagnetic Ising model on a sparse random graph. Unfortunately, both fail to be truly ideal fast scramblers for reasons we discuss. Second, we use Lieb-Robinson techniques to prove a logarithmic lower bound on the scrambling time of systems with finite-norm terms in their Hamiltonian. The bound holds in spite of any nonlocal structure in the Hamiltonian, which might permit every degree of freedom to interact directly with every other one. Comment: 34 pages. v2: typo corrected.
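    For orientation, the fast scrambling conjecture referenced above is usually written as a logarithmic lower bound on the scrambling time. The schematic form below is a sketch of the commonly quoted statement, not a formula from this paper; the order-one constant and whether one counts the entropy S or the number of degrees of freedom vary between formulations.

```latex
% Fast scrambling conjecture (schematic): for a system at inverse
% temperature \beta with entropy S, the scrambling time obeys
%   t_* \gtrsim C \, \beta \log S,
% with C an order-one constant; black holes are conjectured to saturate it.
t_* \;\gtrsim\; C\,\beta \log S
```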

    A review of elliptical and disc galaxy structure, and modern scaling laws

    A century ago, in 1911 and 1913, Plummer and then Reynolds introduced their models to describe the radial distribution of stars in 'nebulae'. This article reviews the progress since then, providing both an historical perspective and a contemporary review of the stellar structure of bulges, discs and elliptical galaxies. The quantification of galaxy nuclei, such as central mass deficits and excess nuclear light, plus the structure of dark matter halos and cD galaxy envelopes, are discussed. Issues pertaining to spiral galaxies, including dust, bulge-to-disc ratios, bulgeless galaxies, bars and the identification of pseudobulges, are also reviewed. An array of modern scaling relations involving sizes, luminosities, surface brightnesses and stellar concentrations is presented, many of which are shown to be curved. These 'redshift zero' relations not only quantify the behavior and nature of galaxies in the Universe today, but are the modern benchmark for evolutionary studies of galaxies, whether based on observations, N-body simulations or semi-analytical modelling. For example, it is shown that some of the recently discovered compact elliptical galaxies at 1.5 < z < 2.5 may be the bulges of modern disc galaxies. Comment: Condensed version (due to contract) of an invited review article to appear in "Planets, Stars and Stellar Systems" (www.springer.com/astronomy/book/978-90-481-8818-5). 500+ references, incl. many somewhat forgotten pioneer papers. Original submission to Springer: 07-June-2011.
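    As a point of reference for the opening sentence, the Plummer (1911) model mentioned there has the density profile below. This is the standard textbook form, with total mass M and scale radius a, quoted here for context rather than reproduced from the review itself.

```latex
% Plummer (1911) sphere: density profile with total mass M and scale radius a.
\rho(r) = \frac{3M}{4\pi a^{3}} \left(1 + \frac{r^{2}}{a^{2}}\right)^{-5/2}
```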

    Invariance principles for non-uniform random mappings and trees

    In the context of uniform random mappings of an n-element set to itself, Aldous and Pitman (1994) established a functional invariance principle, showing that many n → ∞ limit distributions can be described as distributions of suitable functions of reflecting Brownian bridge. To study non-uniform cases, in this paper we formulate a sampling invariance principle in terms of iterates of a fixed number of random elements. We show that the sampling invariance principle implies many, but not all, of the distributional limits implied by the functional invariance principle. We give direct verifications of the sampling invariance principle in two successive generalizations of the uniform case, to p-mappings (where elements are mapped to i.i.d. non-uniform elements) and P-mappings (where elements are mapped according to a Markov matrix). We compare with parallel results in the simpler setting of random trees.
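    To make the notion of a p-mapping and of iterating a fixed number of random elements concrete, here is a small illustrative simulation (an assumed construction for exposition, not code from the paper): each element of {0, ..., n−1} is sent to an independent draw from a fixed non-uniform distribution p, and we then follow the orbits of a few randomly sampled starting points.

```python
import random

def random_p_mapping(p: list[float]) -> list[int]:
    """A p-mapping: the image of each element is drawn i.i.d. from p."""
    n = len(p)
    return [random.choices(range(n), weights=p)[0] for _ in range(n)]

def iterate(mapping: list[int], start: int, steps: int) -> list[int]:
    """Follow the orbit of a single starting element under the mapping."""
    path, x = [start], start
    for _ in range(steps):
        x = mapping[x]
        path.append(x)
    return path

n = 1000
p = [2 * (i + 1) / (n * (n + 1)) for i in range(n)]  # an arbitrary non-uniform law
f = random_p_mapping(p)
starts = random.sample(range(n), k=3)  # a fixed number of random starting elements
for s in starts:
    print(iterate(f, s, steps=20))
```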

    Branching Processes and Their Applications in the Analysis of Tree Structures and Tree Algorithms
