
    A Combinatorial Commutative Algebra Approach to Complete Decoding

    This thesis explores the connection between the algebraic structure of a linear code and the process of complete decoding. Complete decoding of arbitrary linear codes is known to be NP-complete, even if preprocessing of the data is allowed. Our aim is an algebraic analysis of the decoding process; to that end we associate different mathematical structures with certain families of codes. From a computational point of view our description does not provide an efficient algorithm, since we face a problem of NP nature. We do, however, propose alternative algorithms and new techniques that relax the conditions of the problem, reducing the space and time resources needed to handle the underlying algebraic structure. (Departamento de Álgebra, Geometría y Topología)
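    To make the computational barrier concrete, here is a minimal, self-contained Python sketch of complete decoding by exhaustive search for a toy binary code (the [6,3] generator matrix below is our own illustrative choice, not one from the thesis); the exponential sweep over all codewords is exactly what an efficient algorithm cannot afford in general.

```python
import itertools
import numpy as np

# Generator matrix of a toy [6,3] binary linear code (hypothetical example).
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

# Enumerate all 2^3 codewords.
codewords = [(np.array(m) @ G) % 2 for m in itertools.product((0, 1), repeat=3)]

def complete_decode(received):
    # Complete decoding: return a codeword at minimal Hamming distance.
    return min(codewords, key=lambda c: int(np.sum(c != received)))

r = np.array([1, 1, 0, 1, 1, 0])   # a received word with one flipped bit
print(complete_decode(r))          # -> [1 0 0 1 1 0]
```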

    On the hardness of the hidden subspaces problem with and without noise. Cryptanalysis of Aaronson-Christiano’s quantum money scheme

    The internet boom marked the beginning of the digital age, which has brought with it a spectacular development of information and communication technologies, among which cryptography is queen. Current public-key cryptography rests mainly on two problems that the cryptographic community assumes to be hard: factoring and the discrete logarithm. If a sufficiently powerful quantum computer were ever built, however, this hardness would disappear. Quantum computing would thus put modern cryptography in serious difficulty and, since the recent trajectory of the field suggests it could become a reality in the not-too-distant future, the cryptographic community has begun exploring other options so as to be ready should an efficient quantum computer be built. This has given impetus to what is known as post-quantum cryptography, whose hardness would not be affected by this new computing paradigm and which is based on so-called quantum-resistant problems. Post-quantum cryptography has attracted much interest recently and is currently undergoing standardisation, so at the time this thesis began it was relevant to study problems believed to resist quantum computation.

    The central part of this thesis is the analysis of the hardness of the hidden subspaces problem (HSP) and of the noisy hidden subspaces problem (NHSP), two problems that are quantum-resistant according to their authors. Beyond the relevance that their alleged quantum resistance confers on them, these two problems are also important because the security of both versions of the first public-key quantum money scheme with a security proof rests on their hardness. That first scheme is Aaronson-Christiano's, which implements public-key quantum money, a kind of money that exploits the laws of quantum mechanics to create unforgeable money that anyone can verify. The results obtained on the hardness of the HSP and the NHSP have a direct impact on the security of the Aaronson-Christiano scheme, which motivated us to centre this thesis on these two problems.

    Chapter 3 contains our results on the hidden subspaces problem and is mainly based on our work [Conde Pena et al., 2015]. The authors of the HSP originally defined it over the binary field, but we extend the definition to any finite field of prime order, always assuming that instances are generated as the authors propose. After modelling the HSP as a system of equations with good properties, we use algebraic cryptanalysis techniques to explore the system in depth. For the HSP over any non-binary field we design an algorithm that efficiently solves instances satisfying a certain condition. Using different techniques, we construct a heuristic algorithm, supported by theoretical arguments, that efficiently solves instances of the HSP over the binary field. Both algorithms compromise the hardness of the HSP whenever the instances of the problem are chosen as Aaronson-Christiano propose. As a consequence, our algorithms break the security of the noise-free version of the scheme.
    Chapter 4 contains our results on the noisy hidden subspaces problem and is mainly based on our work [Conde Pena et al., 2018]. As with the HSP, we extend the definition of the NHSP to any field of prime order and consider instances generated as Aaronson-Christiano specify. We show that the NHSP can be reduced to the HSP over any prime field other than the binary one for certain instances, while the NHSP over the binary field can be solved with probability higher than the one assumed by the authors in the conjecture on which the security of their noisy scheme rests. Although our results are obtained from a purely non-quantum standpoint, during the development of this thesis another author proved that there is a quantum reduction from the NHSP to the HSP in the binary case as well. Hence the hardness of the NHSP and the security of the noisy version of the Aaronson-Christiano scheme are compromised by our findings on the HSP.
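    For intuition about the objects involved, the following Python sketch (our own toy illustration over the binary field; it is not the Aaronson-Christiano instance generator) builds a secret subspace $S \subseteq \mathbb{F}_2^n$ and a quadratic polynomial that vanishes on it, formed as the product of a linear form from the annihilator of $S$ with a random linear form.

```python
import numpy as np

rng = np.random.default_rng(0)

def nullspace_gf2(B):
    """Basis of {c : B c = 0 (mod 2)} via Gaussian elimination over GF(2)."""
    B = B.copy() % 2
    rows, cols = B.shape
    pivots, r = [], 0
    for c in range(cols):
        candidates = [i for i in range(r, rows) if B[i, c]]
        if not candidates:
            continue
        B[[r, candidates[0]]] = B[[candidates[0], r]]   # swap pivot row up
        for i in range(rows):
            if i != r and B[i, c]:
                B[i] ^= B[r]                            # eliminate column c
        pivots.append(c)
        r += 1
    basis = []
    for f in (c for c in range(cols) if c not in pivots):
        v = np.zeros(cols, dtype=np.uint8)
        v[f] = 1
        for i, c in enumerate(pivots):
            v[c] = B[i, f]
        basis.append(v)
    return basis

n = 8
S = rng.integers(0, 2, size=(n // 2, n), dtype=np.uint8)  # basis of the secret subspace
ann = nullspace_gf2(S)                                    # linear forms vanishing on S
l = ann[0]
r = rng.integers(0, 2, size=n, dtype=np.uint8)            # random linear form
p = lambda x: (l @ x % 2) * (r @ x % 2)                   # quadratic, zero on all of S

x = S.T @ rng.integers(0, 2, size=n // 2) % 2             # random element of S
print(p(x))                                               # -> 0
```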

    Algorithms in Intersection Theory in the Plane

    This thesis presents an algorithm to find the local structure of intersections of plane curves. More precisely, we address the question of describing the scheme of the quotient ring of a bivariate zero-dimensional ideal $I \subseteq \mathbb{K}[x,y]$, \textit{i.e.} finding the points (maximal ideals of $\mathbb{K}[x,y]/I$) and describing the regular functions on those points. A natural way to address this problem is via Gr\"obner bases, as they reduce the problem of finding the points to a problem of factorisation, and the sheaf of rings of regular functions can be studied with those bases through the division algorithm and localisation.

    Let $I \subseteq \mathbb{K}[x,y]$ be an ideal generated by $\mathcal{F}$, a subset of $\mathbb{A}[x,y]$ with $\mathbb{A} \hookrightarrow \mathbb{K}$ and $\mathbb{K}$ a field. We present an algorithm with quadratic convergence to find a Gr\"obner basis of $I$ or of its primary component at the origin. We introduce an $\mathfrak{m}$-adic Newton iteration to lift the lexicographic Gr\"obner basis of any finite intersection of zero-dimensional primary components of $I$ if $\mathfrak{m} \subseteq \mathbb{A}$ is a \textit{good} maximal ideal. It relies on a structural result about the syzygies in such a basis due to Conca & Valla (2008), from which arises an explicit map between ideals in a stratum (or Gr\"obner cell) and points in the associated moduli space. We also qualify what makes a maximal ideal $\mathfrak{m}$ suitable for our filtration. When the field $\mathbb{K}$ is \textit{large enough}, endowed with an Archimedean or ultrametric valuation, and admits a fraction reconstruction algorithm, we use this result to give a complete $\mathfrak{m}$-adic algorithm to recover $\mathcal{G}$, the Gr\"obner basis of $I$. We observe that previous results of Lazard, which use Hermite normal forms to compute Gr\"obner bases for ideals with two generators, can be generalised to a set of $n$ generators. We use this result to obtain a bound on the height of the coefficients of $\mathcal{G}$ and to control the probability of choosing a \textit{good} maximal ideal $\mathfrak{m} \subseteq \mathbb{A}$ to build the $\mathfrak{m}$-adic expansion of $\mathcal{G}$. Inspired by Pardue (1994), we also give a constructive proof characterising a Zariski open set of $\mathrm{GL}_2(\mathbb{K})$ (with action on $\mathbb{K}[x,y]$) whose elements change coordinates in such a way as to ensure the initial term ideal of a zero-dimensional $I$ becomes Borel-fixed when $|\mathbb{K}|$ is sufficiently large. This sharpens our analysis to obtain, when $\mathbb{A} = \mathbb{Z}$ or $\mathbb{A} = k[t]$, a complexity less than cubic in the dimension of $\mathbb{Q}[x,y]/\langle \mathcal{G} \rangle$ and softly linear in the height of the coefficients of $\mathcal{G}$. We adapt the resulting method and present the analysis to find the $\langle x,y \rangle$-primary component of $I$. We also discuss the transition towards other primary components via linear mappings, called \emph{untangling} and \emph{tangling}, introduced by van der Hoeven and Lecerf (2017). The two maps form one isomorphism to find points with an isomorphic local structure and, at the origin, bind them. We give a slightly faster tangling algorithm and discuss new applications of these techniques. We show how to extend these ideas to bivariate settings and give a bound on the arithmetic complexity for certain algebras.
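    As a small illustration of the Gr\"obner-basis viewpoint mentioned above (a generic sympy computation, not the $\mathfrak{m}$-adic algorithm of the thesis): for a zero-dimensional bivariate ideal, a lexicographic basis ends in a univariate polynomial, and factoring it locates the points.

```python
from sympy import symbols, groebner, factor

x, y = symbols('x y')
# A zero-dimensional bivariate ideal (toy example).
G = groebner([x**2 + y**2 - 5, x*y - 2], x, y, order='lex')
eliminant = list(G)[-1]      # last element of a lex basis is univariate in y
print(list(G))
print(factor(eliminant))     # (y - 2)(y - 1)(y + 1)(y + 2): y-coordinates of the points
```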

    Toric Varieties and Numerical Algorithms for Solving Polynomial Systems

    This work utilizes toric varieties for solving systems of polynomial equations. In particular, it includes two homotopy continuation algorithms for numerically solving such systems. The first algorithm, the Cox homotopy, solves a system of equations on a compact toric variety. The Cox homotopy tracks points in the total coordinate space of the toric variety and can be viewed as a homogeneous version of the polyhedral homotopy of Huber and Sturmfels. The second algorithm, the Khovanskii homotopy, solves a system of equations on a variety in the presence of a finite Khovanskii basis. This homotopy takes advantage of Anderson's flat degeneration to a toric variety. The Khovanskii homotopy utilizes the Newton-Okounkov body of the system, whose normalized volume gives a bound on the number of solutions to the system. Both homotopy algorithms provide the computational advantage of tracking paths in a compact space while also minimizing the total number of paths tracked. The Khovanskii homotopy is optimal with respect to the number of paths tracked, and the Cox homotopy is optimal when the system is Bernstein-general
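    Neither homotopy is reproduced here, but both rest on standard predictor-corrector path tracking. The numpy sketch below (our own toy example: a straight-line total-degree homotopy with the usual gamma trick, not the Cox or Khovanskii homotopy) tracks the four start solutions of $g$ to the four solutions of the target system $f$.

```python
import numpy as np

def f(v):                       # target system: x^2 + y^2 = 5, x*y = 2
    x, y = v
    return np.array([x**2 + y**2 - 5, x*y - 2])

def Jf(v):
    x, y = v
    return np.array([[2*x, 2*y], [y, x]])

def g(v):                       # total-degree start system with known roots (±1, ±1)
    x, y = v
    return np.array([x**2 - 1, y**2 - 1])

def Jg(v):
    x, y = v
    return np.array([[2*x, 0], [0, 2*y]])

gamma = np.exp(0.7j)            # random complex constant ("gamma trick")

def H(v, t):                    # H(v, 0) = gamma*g(v), H(v, 1) = f(v)
    return (1 - t) * gamma * g(v) + t * f(v)

def JH(v, t):
    return (1 - t) * gamma * Jg(v) + t * Jf(v)

def track(v, steps=200):
    ts = np.linspace(0.0, 1.0, steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        dHdt = f(v) - gamma * g(v)                        # dH/dt along the path
        v = v + np.linalg.solve(JH(v, t0), -dHdt * (t1 - t0))   # Euler predictor
        for _ in range(3):                                # Newton corrector at t1
            v = v - np.linalg.solve(JH(v, t1), H(v, t1))
    return v

starts = [np.array([sx, sy], dtype=complex) for sx in (1, -1) for sy in (1, -1)]
for s in starts:
    print(np.round(track(s), 6))    # endpoints: (1,2), (2,1), (-1,-2), (-2,-1)
```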

    Algorithms for Analytic Combinatorics in Several Variables

    Given a multivariate rational generating function, we are interested in computing asymptotic formulas for the sequences encoded by its coefficients. In this thesis we apply the theory of analytic combinatorics in several variables (ACSV) to this problem and build algorithms that seek to compute asymptotic formulas automatically and to aid in understanding the theory. Under certain assumptions on a given rational multivariate generating series, we demonstrate two algorithms which compute an asymptotic formula for the coefficients. The first algorithm applies numerical methods for polynomial system solving to compute the minimal points which are essential to asymptotics, while the second algorithm leverages the geometry of a so-called height map in two variables to compute asymptotics even in the absence of minimal points. We also provide software for computing gradient flows on the height maps of rational generating functions. These flows are useful for understanding the deformations of integral contours that arise in the analysis of rational generating functions
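    A standard first example of ACSV (our own numerical check, not taken from the thesis): for $F(x,y) = 1/(1-x-y)$ the diagonal coefficients are $\binom{2n}{n}$, and the minimal critical point at $(1/2, 1/2)$ yields the asymptotic formula $4^n/\sqrt{\pi n}$, which the script below verifies numerically.

```python
from fractions import Fraction
from math import comb, pi, sqrt

# Diagonal of 1/(1 - x - y): a_n = C(2n, n), predicted asymptotics 4^n / sqrt(pi*n).
for n in (10, 100, 1000):
    ratio = float(Fraction(comb(2 * n, n), 4**n)) * sqrt(pi * n)
    print(n, ratio)   # ratio of exact to asymptotic value tends to 1
```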

    Decidability and complexity of equivalences for simple process algebras

    In this thesis I study decidability, complexity and structural properties of strong and weak bisimilarity with respect to two process algebras, Basic Process Algebras and Basic Parallel Process Algebras. The decidability of strong bisimilarity for both algebras is an established result. For the subclasses of normed BPA-processes and BPP there even exist polynomial decision procedures. The complexity of deciding strong bisimilarity for the whole class of BPP is unsatisfactory, since it is not bounded by any primitive recursive function. Here we present a new approach that encodes BPP as special polynomials and expresses strong bisimulation in terms of polynomial ideals, and then uses a theorem about polynomial ideals (Hilbert's Basis Theorem) and an algorithm from computer algebra (Gröbner bases) to construct a new decision procedure. For weak bisimilarity, Hirshfeld found a decision procedure for the subclasses of totally normed BPA-processes and BPP, and Esparza demonstrated a semidecision procedure for general BPP. The remaining questions are still unsolved. Here we provide some lower bounds on the computational complexity of a decision procedure that might exist. For BPP we show that the decidability problem is NP-hard (even for the class of totally normed BPP); for BPA-processes we show that the decidability problem is PSPACE-hard. Finally we study the notion of weak bisimilarity in terms of its inductive definition. We start from the relation containing all pairs of processes and then form a non-increasing chain of relations by eliminating pairs that do not satisfy a certain expansion condition. These relations are labelled by ordinal numbers and are called approximants. We know that this chain eventually converges at some ordinal $\alpha$, i.e. $\approx_\alpha = \approx_\beta = \approx$ for all $\beta \geq \alpha$; for BPA-processes $\alpha \leq \omega^\omega$, and for BPPA $\alpha \geq \omega \cdot 2$. For some restricted classes of BPA and BPPA we show that $\approx = \approx_{\omega \cdot 2}$
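    To illustrate the flavour of the encoding (a toy sympy sketch with made-up relations, not the decision procedure of the thesis): parallel compositions of BPP variables become monomials, candidate identities between processes become binomials, and equality modulo the induced ideal is checked by Gröbner-basis reduction.

```python
from sympy import symbols, groebner

X, Y, Z = symbols('X Y Z')
# Hypothetical identity between processes: X behaves like the parallel composition Y | Z.
relations = [X - Y * Z]
G = groebner(relations, X, Y, Z, order='lex')

# Two parallel compositions, written as monomials.
p, q = X**2 * Z, Y**2 * Z**3
_, remainder = G.reduce(p - q)     # divide p - q by the Gröbner basis
print(remainder == 0)              # True: p and q are identified by the relations
```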

    Implementation and Evaluation of Algorithmic Skeletons: Parallelisation of Computer Algebra Algorithms

    This thesis presents design and implementation approaches for parallel algorithms of computer algebra. We use algorithmic skeletons as well as further approaches, like data parallel arithmetic and actors. We have implemented skeletons for divide and conquer algorithms and for some special parallel loops that we call ‘repeated computation with a possibility of premature termination’. We introduce in this thesis a rational data parallel arithmetic. We focus on parallel symbolic computation algorithms; for these algorithms our arithmetic provides a generic parallelisation approach. The implementation is carried out in Eden, a parallel functional programming language based on Haskell. This choice enables us to encode both the skeletons and the programs in the same language. Moreover, it allows us to refrain from using two different languages (one for the implementation and one for the interface) in our implementation of computer algebra algorithms.

    Further, this thesis presents methods for evaluation and estimation of parallel execution times. We partition the parallel execution time into two components: one accounts for the quality of the parallelisation, which we call the ‘parallel penalty’; the other is the sequential execution time. For the estimation we predict both components separately, using statistical methods. This enables very confident estimations while using drastically fewer measurement points than other methods. We have applied both our evaluation and our estimation approaches to the parallel programs presented in this thesis, and we have also used existing estimation methods.

    We developed divide and conquer skeletons for the implementation of fast parallel multiplication. We have implemented the Karatsuba algorithm, Strassen’s matrix multiplication algorithm and the fast Fourier transform. The latter was used to implement polynomial convolution, which leads to a further fast multiplication algorithm. Especially for our implementation of Strassen’s algorithm we designed and implemented a divide and conquer skeleton based on actors. For the parallel fast Fourier transform we not only used new divide and conquer skeletons but also developed a map-and-transpose skeleton, which enables good parallelisation of the Fourier transform. The parallelisation of Karatsuba multiplication shows very good performance. We have analysed the parallel penalty of our programs and compared it to the serial fraction, an approach known from the literature. We also performed execution time estimations of our divide and conquer programs.

    This thesis also presents a parallel map+reduce skeleton scheme. It allows us to combine the usual parallel map skeletons, like parMap, farm and workpool, with a premature termination property. We use this to implement the so-called ‘parallel repeated computation’, a special form of a speculative parallel loop. We have implemented two probabilistic primality tests, the Rabin–Miller test and the Jacobi sum test, and parallelised both with our approach. We analysed the task distribution and determined fitting configurations of the Jacobi sum test. We have shown formally that the Jacobi sum test can be implemented in parallel; subsequently we parallelised it, analysed the load balancing issues, and produced an optimisation. The latter enabled a good implementation, as verified using the parallel penalty. We have also estimated the performance of the tests for further input sizes and numbers of processing elements.
    The parallelisation of the Jacobi sum test and our generic parallelisation scheme for repeated computation are original contributions. The data parallel arithmetic was defined not only for integers, which was already known, but also for rationals. We handle the common factors of the numerator or denominator of a fraction with the modulus in a novel manner; this is required to obtain a true multiple-residue arithmetic, a novel result of our research. Using these mathematical advances, we parallelised determinant computation using Gauß elimination. As before, we performed a task distribution analysis and an estimation of the parallel execution time of our implementation. A similar computation in Maple emphasised the potential of our approach. Data parallel arithmetic enables parallelisation of entire classes of computer algebra algorithms. Summarising, this thesis presents and thoroughly evaluates new and existing design decisions for high-level parallelisation of computer algebra algorithms
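    As a language-neutral sketch of the divide and conquer skeleton idea (sequential Python for illustration; the thesis's skeletons are written in Eden, where the recursive calls would be evaluated in parallel): the skeleton is a higher-order function, and Karatsuba multiplication is one instantiation of it. The intent of the pattern is that only the skeleton, not the instantiation, changes when parallelism is introduced.

```python
def dc(trivial, solve, divide, combine, p):
    """Generic divide and conquer skeleton (sequential sketch of the pattern)."""
    if trivial(p):
        return solve(p)
    return combine(p, [dc(trivial, solve, divide, combine, s) for s in divide(p)])

def split_at(p):
    x, y = p
    return max(x.bit_length(), y.bit_length()) // 2

def divide(p):
    # Split x = x1*2^m + x0, y = y1*2^m + y0; three subproducts suffice.
    x, y = p
    m = split_at(p)
    x1, x0 = x >> m, x & ((1 << m) - 1)
    y1, y0 = y >> m, y & ((1 << m) - 1)
    return [(x1, y1), (x0, y0), (x1 + x0, y1 + y0)]

def combine(p, results):
    z2, z0, zmix = results
    m = split_at(p)
    z1 = zmix - z2 - z0          # Karatsuba's trick: middle term for free
    return (z2 << (2 * m)) + (z1 << m) + z0

def karatsuba(x, y):
    return dc(lambda p: p[0] < 2**16 or p[1] < 2**16,   # base case: direct multiply
              lambda p: p[0] * p[1],
              divide, combine, (x, y))

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```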