488 research outputs found
Discovering the roots: Uniform closure results for algebraic classes under factoring
Newton iteration (NI) is an almost 350-year-old recursive formula that
approximates a simple root of a polynomial quite rapidly. We generalize it to a
matrix recurrence (allRootsNI) that approximates all the roots simultaneously.
In this form, the process yields a better circuit complexity in the case when
the number of roots r is small but the multiplicities are exponentially
large. Our method sets up a linear system in r unknowns and iteratively
builds the roots as formal power series. For an algebraic circuit
f(x_1, ..., x_n) of size s we prove that each factor has size at most a
polynomial in: s and the degree of the squarefree part of f. Consequently,
if f_1 is a 2^{Ω(n)}-hard polynomial then any nonzero multiple ∏_i f_i^{e_i}
is equally hard for arbitrary positive e_i's, assuming
that ∑_i deg(f_i) is at most 2^{o(n)}.
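The classical scalar recurrence that allRootsNI generalizes can be sketched in a few lines. This is a minimal illustration of plain Newton iteration only, not of the matrix recurrence; the polynomial and starting point are arbitrary toy choices:

```python
# Classical Newton iteration: x_{t+1} = x_t - p(x_t) / p'(x_t).
# Near a simple root the error roughly squares at every step
# (quadratic convergence), which is why few iterations suffice.

def newton(p, dp, x0, steps=10):
    x = x0
    for _ in range(steps):
        x = x - p(x) / dp(x)
    return x

# Example: approximate the simple root sqrt(2) of p(x) = x^2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Starting from x_0 = 1, ten steps are far more than enough to reach machine precision for this example.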
It is an old open question whether the class of poly(n)-sized formulas
(resp. algebraic branching programs) is closed under factoring. We show that
given a polynomial f of degree n^{O(1)} and formula (resp. ABP) size n^{O(log n)}
we can find a similar-size formula (resp. ABP) factor in
randomized poly(n^{log n})-time. Consequently, if the determinant requires
n^{Ω(log n)}-size formulas, then the same can be said about any of its
nonzero multiples.
As part of our proofs, we identify a new property of multivariate polynomial
factorization. We show that under a random linear transformation τ, f(τx)
completely factors via power series roots. Moreover, the
factorization adapts well to circuit complexity analysis. This with allRootsNI
are the techniques that help us make progress towards the old open problems,
supplementing the large body of classical results and concepts in algebraic
circuit factorization (e.g. Zassenhaus, J. NT 1969, Kaltofen, STOC 1985-7 &
Burgisser, FOCS 2001).
Comment: 33 pages, no figures
Hedegård - a rich village and cemetery complex of the Early Iron Age on the Skjern river: An interim report
Bison-mediated seed dispersal in a tallgrass prairie reconstruction
Bison-mediated seed dispersal may be a critical ecological process that has been eliminated in grassland ecosystems by the removal of this keystone species. In this study of epizoochory and endozoochory by bison, we installed funnel seed traps on fifty 50-m transects on the Neal Smith National Wildlife Refuge in south-central Iowa to compare the composition and density of seed species dispersed by bison with the abiotic seed rain in a tallgrass prairie reconstruction. Seed trap, dung, and shed hair samples were collected monthly from April 2011 through November 2013. Hair samples were clipped directly from bison at the end of the plant growing season during annual November round-ups. Seeds were identified and classified as native or non-native, by plant functional group, and by diaspore characteristics. A diverse mix of epizoochorous seeds, wind-dispersed propagules, and seeds with smooth, rounded diaspores in bison dung, shed hair, and attached to the animals suggests that bison are generalist dispersers of both forbs and graminoids. Bison dung contained seeds in similar proportions to those collected in seed traps. Shed bison hair contained a significantly greater proportion of both native species and grass seeds than were found in bison dung or seed trap samples. Seed compositions in shed hair and dung appeared to be influenced by the phenology of seed production, the foraging behavior of bison, and the movements of bison through a variety of vegetation types. Bison are the dominant grazers in many large public and private grasslands of western North America. Conservation herds are growing and are being reintroduced to both newly reconstructed prairies and remnant prairies that have been without this keystone species for over a century. Our study provides needed information concerning the potential for bison to act as seed dispersal agents in these often fragmented ecosystems.
Graph-based techniques for compression and reconstruction of sparse sources
The main goal of this thesis is to develop lossless compression schemes for analog and binary sources. All the compression schemes considered share a common feature: the encoder can be represented by a graph, so they can be studied with tools from modern coding theory.
In particular, this thesis focuses on two compression problems: group testing and noiseless compressed sensing. Although the two problems may seem unrelated, the thesis shows that they are closely related. Furthermore, group testing has the same mathematical formulation as non-linear binary source compression schemes that use the OR operator. This thesis exploits the similarities between these problems.
The group testing problem aims to identify the defective subjects of a population with as few tests as possible. Group testing schemes can be divided into two groups: adaptive and non-adaptive. Adaptive schemes generate tests sequentially and exploit partial decoding results to try to reduce the overall number of tests required to label all members of the population, whereas non-adaptive schemes perform all the tests in parallel and attempt to label as many subjects as possible.
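As a concrete illustration of the non-adaptive setting, the sketch below pools subjects into OR-type tests and decodes with the standard COMP rule (any subject appearing in a negative test is cleared). The pools and population are invented toy data, not a design from the thesis:

```python
def run_tests(pools, defectives):
    # Non-adaptive OR-type testing: a pool is positive iff it
    # contains at least one defective subject.
    return [any(s in defectives for s in pool) for pool in pools]

def comp_decode(pools, outcomes, n):
    # COMP rule: anyone appearing in a negative pool is certainly
    # non-defective; everyone left over is flagged as defective.
    cleared = set()
    for pool, positive in zip(pools, outcomes):
        if not positive:
            cleared.update(pool)
    return set(range(n)) - cleared

# Toy population of 6 subjects with one defective (subject 2),
# labelled with 5 pooled tests instead of 6 individual ones.
pools = [{0, 1, 2}, {2, 3}, {0, 4}, {1, 5}, {3, 4, 5}]
outcomes = run_tests(pools, {2})
decoded = comp_decode(pools, outcomes, 6)
```

Here the three negative pools clear subjects {0, 1, 3, 4, 5}, leaving exactly subject 2 flagged.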
Our contributions to the group testing problem are both theoretical and practical. We propose a novel adaptive scheme that performs the testing process efficiently. Furthermore, we develop tools to predict the performance of both adaptive and non-adaptive schemes when the number of subjects to be tested is large. These tools make it possible to characterize the performance of adaptive and non-adaptive group testing schemes without simulating them.
The goal of the noiseless compressed sensing problem is to retrieve a signal from its linear projection into a lower-dimensional space. This is possible only when the number of zero components of the original signal is large enough. Compressed sensing deals with the design of sampling schemes and reconstruction algorithms that manage to reconstruct the original signal vector from as few samples as possible.
In this thesis we pose the compressed sensing problem within a probabilistic framework, as opposed to the classical compressed sensing formulation. Recent results in the state of the art show that this approach is more efficient than the classical one.
Our contributions to noiseless compressed sensing are both theoretical and practical. We deduce a necessary and sufficient condition on the matrix design that guarantees lossless reconstruction. Regarding practical schemes, we propose two novel reconstruction algorithms based on message passing over the sparse graph representation of the matrix, one of them with very low computational complexity.
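A minimal baseline for the noiseless recovery problem can be sketched with greedy orthogonal matching pursuit. This is a standard textbook algorithm used purely for illustration, not one of the message-passing reconstruction algorithms proposed in the thesis, and the matrix size and sparsity level are arbitrary toy choices:

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: repeatedly pick the column most
    # correlated with the current residual, then refit the signal on
    # the chosen support by least squares.
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# Toy instance: a 2-sparse signal in dimension 60 observed through
# 40 random linear projections.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 60))
x = np.zeros(60)
x[3], x[17] = 1.5, -2.0
x_hat = omp(A, A @ x, k=2)
```

With far more measurements than nonzero components, the greedy search almost surely identifies the true support, after which the least-squares refit recovers the signal exactly.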
The effects of Morris water maze learning on the number, morphology and molecular composition of rat hippocampal dentate gyrus synapses
Spatial long-term memory formation is dependent upon the hippocampus and associated brain structures in mammals. Memory storage is believed to involve changes in the way information is exchanged between neurons, and this is principally governed by their synaptic connections. Changes can occur in the functional properties of individual synapses, but evidence suggests that morphological changes may also occur. Research described in this thesis used the Morris water maze, a behavioural paradigm that requires rodents to form long-term memories about a spatial environment; this learning task involves the function of the hippocampus. Electron microscopy was used to investigate the ultrastructural morphology and composition of synapses in the hippocampal dentate gyrus in several groups of animals. Three time-points were investigated, 3, 9 and 24 hours after the start of training, which also corresponded to small, intermediate and large amounts of training, as well as two different types of control, naïve and swim-only. Animals investigated 3 hours after the start of training did not show significant long-term memory for the task, whereas animals investigated 9 and 24 hours after the start of learning displayed long-term memory recall when measured by the quadrant analysis test (probe trial). Hippocampal dimensions and dentate granule cell densities were similar between all animal groups. No significant changes to synaptic ultrastructural morphology were evident in the 3-hour group. In the 9-hour group, significant increases in synapse density and synapse-to-neuron ratio were observed, with a simultaneous decrease in mean synapse height and average area of PSD (post-synaptic density) per synapse. No significant changes were observed in the exercise-matched swim-only controls, suggesting that the changes were related to long-term memory formation.
Morphological changes were not evident in the 24-hour group, despite long-term memory recall, suggesting that the morphological changes following spatial learning in the Morris water maze are transient. The total amount of synaptic membrane was not significantly different between any of the groups, suggesting that although new, smaller synapses may be formed as a result of learning, changes also occur to existing synapses, which may result in their re-categorisation or even removal. Analysis of ionotropic glutamate receptors following training proved inconclusive, particularly for NMDA receptors, but did suggest that AMPA receptors are increased in the initial stages of learning, which may be a mechanism of short-term memory storage.
Decryption Failure Attacks on Post-Quantum Cryptography
This dissertation mainly discusses new cryptanalytic results related to securely implementing the next generation of asymmetric cryptography, or public-key cryptography (PKC). PKC, as deployed until today, depends heavily on the integer factorization and discrete logarithm problems. Unfortunately, it has been well known since the mid-90s that these mathematical problems can be solved by Peter Shor's algorithm for quantum computers, which finds the answers in polynomial time. The recently accelerated pace of R&D towards quantum computers, eventually of sufficient size and power to threaten cryptography, has led the crypto research community towards a major shift of focus. A project towards the standardization of post-quantum cryptography (PQC) was launched by the US-based standardization organization NIST. PQC is the name given to algorithms designed to run on classical hardware/software while being resistant to attacks from quantum computers. PQC is well suited to replacing the current asymmetric schemes. A primary motivation for the project is to guide publicly available research toward the singular goal of finding weaknesses in the proposed next generation of PKC. For public-key encryption (PKE) or digital signature (DS) schemes to be considered secure, they must be shown to rely heavily on well-known mathematical problems, with theoretical proofs of security under established models such as indistinguishability under chosen-ciphertext attack (IND-CCA). They must also withstand serious attack attempts by well-renowned cryptographers, concerning both theoretical security and the actual software/hardware instantiations. It is well known that security models such as IND-CCA are not designed to capture the intricacies of inner-state leakages. Such leakages are called side-channels, currently a major topic of interest in the NIST PQC project. This dissertation focuses on two questions, in general: 1) how does the low but non-zero probability of decryption failures affect the cryptanalysis of these new PQC candidates? And 2) how might side-channel vulnerabilities inadvertently be introduced when going from theory to the practice of software/hardware implementations? Of main concern are PQC algorithms based on lattice theory and coding theory. The primary contributions are the discovery of novel decryption-failure side-channel attacks, improvements on existing attacks, an alternative implementation of a part of a PQC scheme, and some more theoretical cryptanalytic results.