488 research outputs found

    Discovering the roots: Uniform closure results for algebraic classes under factoring

    Newton iteration (NI) is an almost 350-year-old recursive formula that approximates a simple root of a polynomial quite rapidly. We generalize it to a matrix recurrence (allRootsNI) that approximates all the roots simultaneously. In this form, the process yields better circuit complexity in the case when the number of roots $r$ is small but the multiplicities are exponentially large. Our method sets up a linear system in $r$ unknowns and iteratively builds the roots as formal power series. For an algebraic circuit $f(x_1,\ldots,x_n)$ of size $s$ we prove that each factor has size at most a polynomial in $s$ and the degree of the squarefree part of $f$. Consequently, if $f_1$ is a $2^{\Omega(n)}$-hard polynomial, then any nonzero multiple $\prod_i f_i^{e_i}$ is equally hard for arbitrary positive $e_i$'s, assuming that $\sum_i \deg(f_i)$ is at most $2^{O(n)}$. It is an old open question whether the class of poly($n$)-sized formulas (resp. algebraic branching programs) is closed under factoring. We show that given a polynomial $f$ of degree $n^{O(1)}$ and formula (resp. ABP) size $n^{O(\log n)}$, we can find a similar-size formula (resp. ABP) factor in randomized poly($n^{\log n}$) time. Consequently, if the determinant requires formulas of size $n^{\Omega(\log n)}$, then the same can be said about any of its nonzero multiples. As part of our proofs, we identify a new property of multivariate polynomial factorization: under a random linear transformation $\tau$, $f(\tau\overline{x})$ completely factors via power series roots. Moreover, the factorization adapts well to circuit complexity analysis. These techniques, together with allRootsNI, help us make progress towards the old open problems, supplementing the large body of classical results and concepts in algebraic circuit factorization (e.g. Zassenhaus, J. NT 1969; Kaltofen, STOC 1985-7; Bürgisser, FOCS 2001).
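    For reference, the classical scalar Newton iteration that the abstract generalizes is the recurrence $x_{t+1} = x_t - f(x_t)/f'(x_t)$. The following is a minimal numeric sketch of that scalar form only; the allRootsNI matrix recurrence and the formal power series version used in the paper are not reproduced here.

```python
def newton_root(f, df, x0, steps=8):
    """Classical Newton iteration: x <- x - f(x)/f'(x).
    Converges quadratically to a simple root when started close enough."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

# Example: the simple real root of y^3 - 2y - 5 (a classic Newton example), near 2.0946.
root = newton_root(lambda y: y**3 - 2*y - 5, lambda y: 3*y**2 - 2, x0=2.0)
print(root)
```

    The paper's allRootsNI instead tracks all $r$ roots at once, as formal power series obtained from a linear system in $r$ unknowns.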

    Hedegård - a rich village and cemetery complex of the Early Iron Age on the Skjern river: An interim report


    Bison-mediated seed dispersal in a tallgrass prairie reconstruction

    Bison-mediated seed dispersal may be a critical ecological process that has been eliminated from grassland ecosystems by the removal of this keystone species. In this study of epizoochory and endozoochory by bison, we installed funnel seed traps along fifty 50-m transects on the Neal Smith National Wildlife Refuge in south-central Iowa to compare the composition and density of seed species dispersed by bison with the abiotic seed rain in a tallgrass prairie reconstruction. Seed trap, dung, and shed hair samples were collected monthly from April 2011 through November 2013. Hair samples were clipped directly from bison at the end of the plant growing season during annual November round-ups. Seeds were identified and classified as native or non-native, by plant functional group, and by diaspore characteristics. A diverse mix of epizoochorous seeds, wind-dispersed propagules, and seeds with smooth, rounded diaspores in bison dung, shed hair, and attached to the animals suggests that bison are generalist dispersers of both forbs and graminoids. Bison dung contained seeds in similar proportions to those collected in seed traps. Shed bison hair contained a significantly greater proportion of both native species and grass seeds than were found in bison dung or seed trap samples. Seed composition in shed hair and dung appeared to be influenced by the phenology of seed production, the foraging behavior of bison, and the movements of bison through a variety of vegetation types. Bison are the dominant grazers in many large public and private grasslands of western North America. Conservation herds are growing and are being reintroduced to both newly reconstructed prairies and remnant prairies that have been without this keystone species for over a century. Our study provides needed information concerning the potential for bison to act as seed dispersal agents in these often fragmented ecosystems.

    Graph-based techniques for compression and reconstruction of sparse sources

    The main goal of this thesis is to develop lossless compression schemes for analog and binary sources. All the considered compression schemes share a common feature: the encoder can be represented by a graph, so they can be studied with tools from modern coding theory. In particular, this thesis focuses on two compression problems: group testing and noiseless compressed sensing. Although the two problems may seem unrelated, the thesis shows that they are very close. Furthermore, group testing has the same mathematical formulation as non-linear binary source compression schemes that use the OR operator, and the thesis exploits the similarities between these problems. The group testing problem aims to identify the defective subjects of a population with as few tests as possible. Group testing schemes can be divided into two groups: adaptive and non-adaptive. The former generate tests sequentially and exploit partial decoding results to try to reduce the overall number of tests required to label all members of the population, whereas non-adaptive schemes perform all the tests in parallel and attempt to label as many subjects as possible. Our contributions to the group testing problem are both theoretical and practical. We propose a novel adaptive scheme aimed at performing the testing process efficiently. Furthermore, we develop tools to predict the performance of both adaptive and non-adaptive schemes when the number of subjects to be tested is large; these tools make it possible to characterize the performance of adaptive and non-adaptive group testing schemes without simulating them. The goal of the noiseless compressed sensing problem is to retrieve a signal from its linear projection onto a lower-dimensional space. This is possible only when the number of zero components of the original signal is large enough. Compressed sensing deals with the design of sampling schemes and reconstruction algorithms that manage to reconstruct the original signal vector with as few samples as possible. In this thesis we pose the compressed sensing problem within a probabilistic framework, as opposed to the classical compressed sensing formulation; recent results in the state of the art show that this approach is more efficient than the classical one. Our contributions to noiseless compressed sensing are both theoretical and practical. We deduce a necessary and sufficient condition on the matrix design that guarantees lossless reconstruction. Regarding the design of practical schemes, we propose two novel reconstruction algorithms based on message passing over the sparse representation of the matrix, one of them with very low computational complexity.
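    The OR-based measurement model mentioned above can be made concrete with a minimal sketch (not code from the thesis; the Bernoulli test design, the parameter values, and the simple "clear anyone who appears in a negative test" decoder are illustrative assumptions):

```python
import random

def run_or_tests(pools, defectives):
    """Noiseless group testing: a test is positive iff its pool contains a defective (OR)."""
    return [any(s in defectives for s in pool) for pool in pools]

def decode_negatives(pools, outcomes, population):
    """Simplest decoder (COMP rule): anyone appearing in a negative test is cleared;
    the rest are declared defective. Guaranteed to return a superset of the true
    defective set; with enough tests it is usually exact."""
    cleared = set()
    for pool, positive in zip(pools, outcomes):
        if not positive:
            cleared.update(pool)
    return [s for s in population if s not in cleared]

# Toy instance: 100 subjects, 2 defectives, 50 random pools (Bernoulli design, p = 0.2).
population = list(range(100))
defectives = {13, 57}
pools = [[s for s in population if random.random() < 0.2] for _ in range(50)]
outcomes = run_or_tests(pools, outcomes if False else defectives)  # run the OR tests
print(decode_negatives(pools, outcomes, population))  # usually prints [13, 57]
```

    The same OR structure is what ties non-adaptive group testing to the non-linear binary compression schemes discussed in the abstract.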

    Functional Immune Anatomy of the Liver - as an allograft


    A petroleum source rock study of a Kimmeridgian section in Dorsetshire, England


    Decryption Failure Attacks on Post-Quantum Cryptography

    This dissertation mainly discusses new cryptanalytical results related to the secure implementation of the next generation of asymmetric cryptography, or Public-Key Cryptography (PKC). PKC, as it has been deployed until today, depends heavily on the integer factorization and discrete logarithm problems. Unfortunately, it has been well known since the mid-90s that these mathematical problems can be solved by Peter Shor's algorithm for quantum computers in polynomial time. The recently accelerated pace of R&D towards quantum computers, eventually of sufficient size and power to threaten cryptography, has led the crypto research community towards a major shift of focus. A project towards standardization of Post-Quantum Cryptography (PQC) was launched by the US-based standardization organization NIST. PQC is the name given to algorithms designed to run on classical hardware/software whilst being resistant to attacks from quantum computers, and it is well suited to replace the current asymmetric schemes. A primary motivation for the project is to guide publicly available research toward the singular goal of finding weaknesses in the proposed next generation of PKC. For public-key encryption (PKE) or digital signature (DS) schemes to be considered secure, they must be shown to rely heavily on well-known mathematical problems, with theoretical proofs of security under established models such as indistinguishability under chosen-ciphertext attack (IND-CCA). They must also withstand serious attack attempts by well-renowned cryptographers, both concerning theoretical security and the actual software/hardware instantiations. It is well known that security models such as IND-CCA are not designed to capture the intricacies of inner-state leakages. Such leakages are called side channels, currently a major topic of interest in the NIST PQC project. This dissertation focuses on two questions in general: 1) how does the low but non-zero probability of decryption failures affect the cryptanalysis of these new PQC candidates? And 2) how might side-channel vulnerabilities inadvertently be introduced when going from theory to the practice of software/hardware implementations? Of main concern are PQC algorithms based on lattice theory and coding theory. The primary contributions are the discovery of novel decryption-failure side-channel attacks, improvements on existing attacks, an alternative implementation of part of a PQC scheme, and some more theoretical cryptanalytical results.
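    To make the notion of a rare decryption failure concrete, here is a deliberately insecure, toy LWE-style bit encryption sketch (not any NIST candidate; the parameters q, n, m and noise_bound are illustrative assumptions, tuned so that failures are rare but observable):

```python
import random

q, n, m, noise_bound = 257, 16, 32, 10   # toy parameters, NOT secure

def sample_error():
    return random.randint(-noise_bound, noise_bound)

def keygen():
    s = [random.randrange(q) for _ in range(n)]                      # secret vector
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]  # public samples
    b = [(sum(a * si for a, si in zip(row, s)) + sample_error()) % q for row in A]
    return (A, b), s

def encrypt(pk, bit):
    A, b = pk
    subset = [i for i in range(m) if random.random() < 0.5]   # random subset sum
    u = [sum(A[i][j] for i in subset) % q for j in range(n)]
    v = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return u, v

def decrypt(sk, ct):
    u, v = ct
    noisy = (v - sum(ui * si for ui, si in zip(u, sk))) % q   # accumulated error + bit*q/2
    centred = noisy if noisy <= q // 2 else noisy - q
    # Decryption FAILS (the bit flips) whenever the accumulated error exceeds about q/4.
    return 0 if abs(centred) < q // 4 else 1

pk, sk = keygen()
failures = sum(decrypt(sk, encrypt(pk, bit)) != bit
               for bit in (random.randint(0, 1) for _ in range(10000)))
print(f"decryption failures: {failures}/10000")   # small but non-zero with these parameters
```

    In real schemes the failure probability is made astronomically small, but when failures can be triggered or observed they correlate with the secret-dependent noise, which is what decryption failure attacks exploit.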