
    TOPICS IN COMPUTATIONAL NUMBER THEORY AND CRYPTANALYSIS - On Simultaneous Chinese Remaindering, Primes, the MiNTRU Assumption, and Functional Encryption

    This thesis reports on four independent projects that lie in the intersection of mathematics, computer science, and cryptology.

    Simultaneous Chinese Remaindering: The classical Chinese Remainder Problem asks to find all integer solutions to a given system of congruences where each congruence is defined by one modulus and one remainder. The Simultaneous Chinese Remainder Problem is a direct generalization of its classical counterpart in which, for each modulus, the single remainder is replaced by a non-empty set of remainders. The solutions of a Simultaneous Chinese Remainder Problem instance are completely defined by a set of minimal positive solutions, called primitive solutions, which are upper bounded by the lowest common multiple of the considered moduli. However, contrary to its classical counterpart, which has at most one primitive solution, the Simultaneous Chinese Remainder Problem may have an exponential number of primitive solutions, so that any general-purpose solving algorithm requires exponential time. Furthermore, through a direct reduction from the 3-SAT problem, we prove first that deciding whether a solution exists is NP-complete, and second that, even when the existence of solutions is guaranteed, deciding whether a solution of a particular size exists remains NP-complete. Despite these discouraging results, we study methods to find the minimal solution of Simultaneous Chinese Remainder Problem instances and we uncover some interesting statistical properties.

    A Conjecture On Primes In Arithmetic Progressions And Geometric Intervals: Dirichlet's theorem on primes in arithmetic progressions states that for any positive integer q and any coprime integer a, there are infinitely many primes in the arithmetic progression a + nq (n ∈ N); however, it does not indicate where those primes can be found. Linnik's theorem states that the first such prime p0 can be found in the interval [0;q^L], where L denotes an absolute and explicitly computable constant. Although only L = 5 has been proven, it is widely believed that L ≤ 2. We generalize Linnik's theorem by conjecturing that for any integers q ≥ 2, 1 ≤ a ≤ q − 1 with gcd(q, a) = 1, and t ≥ 1, there exists a prime p such that p ∈ [q^t;q^(t+1)] and p ≡ a mod q. Subsequently, we prove the conjecture for all sufficiently large exponents t, we computationally verify it for all sufficiently small moduli q, and we investigate its relation to other mathematical results such as Carmichael's totient function conjecture.

    On The (M)iNTRU Assumption Over Finite Rings: The inhomogeneous NTRU (iNTRU) assumption is a recent computational hardness assumption, which claims that first adding a random low-norm error vector to a known gadget vector and then multiplying the result with a secret vector is sufficient to obfuscate the considered secret vector. The matrix inhomogeneous NTRU (MiNTRU) assumption essentially replaces vectors with matrices. Although these assumptions are strongly reminiscent of the well-known learning-with-errors (LWE) assumption, their hardness has not yet been studied in full detail. We provide an elementary analysis of the corresponding decision assumptions and break them in their base case using an elementary q-ary lattice reduction attack. Concretely, we restrict our study to vectors over finite integer rings, which leads to a problem that we call (M)iNTRU. Starting from a challenge vector, we construct a particular q-ary lattice that contains an unusually short vector whenever the challenge vector follows the (M)iNTRU distribution. Thereby, elementary lattice reduction allows us to distinguish a random challenge vector from a synthetically constructed one.

    A Conditional Attack Against Functional Encryption Schemes: Functional encryption emerged as an ambitious cryptographic paradigm supporting function evaluations over encrypted data that reveal the result in the clear. The result is either a valid output or a special error symbol. We develop a conditional selective chosen-plaintext attack against the indistinguishability security notion of functional encryption. Intuitively, indistinguishability in the public-key setting is based on the premise that no adversary can distinguish between the encryptions of two known plaintext messages. As functional encryption allows function evaluations over encrypted messages, the adversary is restricted to evaluations resulting in the same output only. To ensure consistency with other primitives, the decryption procedure of a functional encryption scheme is allowed to fail and output an error. We observe that an adversary may exploit the special role of these errors to craft challenge messages that can be used to win the indistinguishability game. Indeed, the adversary can choose the messages such that their functional evaluation leads to the common error symbol while their intermediate computation values differ. A formal decomposition of the underlying functionality into a mathematical function and an error trigger reveals this dichotomy. Finally, we outline the impact of this observation on multiple DDH-based inner-product functional encryption schemes when we restrict them to bounded-norm evaluations only.
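
As a concrete illustration of the Simultaneous Chinese Remainder Problem described in this abstract, here is a small brute-force Python sketch that enumerates all primitive solutions of a toy instance by trying every combination of remainders. The helper names, the moduli and the remainder sets are illustrative choices of ours, not the thesis' algorithms; the exponential cost of the enumeration is consistent with the hardness results stated above.

```python
# Brute-force sketch of the Simultaneous Chinese Remainder Problem (SCRP):
# each modulus m_i comes with a *set* of admissible remainders R_i, and the
# primitive solutions are the solutions lying in [0, lcm(m_1, ..., m_k)).
from itertools import product
from math import lcm

def crt(remainders, moduli):
    """Classical CRT for one fixed choice of remainders; returns the unique
    solution modulo lcm(moduli), or None if the choice is inconsistent
    (the moduli need not be pairwise coprime)."""
    x, m = 0, 1
    for r, mod in zip(remainders, moduli):
        for t in range(mod):                 # search one period of the merged modulus
            cand = x + t * m
            if cand % mod == r % mod:
                x, m = cand, lcm(m, mod)
                break
        else:
            return None                      # incompatible congruences
    return x

def primitive_solutions(moduli, remainder_sets):
    """All primitive solutions of the SCRP instance, found by trying every
    combination of remainders -- exponential in the number of congruences."""
    sols = {crt(choice, moduli) for choice in product(*remainder_sets)}
    sols.discard(None)
    return sorted(sols)

# A toy instance: x = 1 or 3 (mod 4), x = 1 or 5 (mod 6), x = 4 or 7 (mod 9).
print(primitive_solutions([4, 6, 9], [{1, 3}, {1, 5}, {4, 7}]))   # -> [7, 13, 25, 31]
```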

    Formation Shape Control Based on Distance Measurements Using Lie Bracket Approximations

    We study the problem of distance-based formation control in autonomous multi-agent systems in which only distance measurements are available. This means that the target formations as well as the sensed variables are both determined by distances. We propose a fully distributed distance-only control law, which requires neither a time synchronization of the agents nor storage of measured data. The approach is applicable to point agents in the Euclidean space of arbitrary dimension. Under the assumption of infinitesimal rigidity of the target formations, we show that the proposed control law induces local uniform asymptotic stability. Our approach involves sinusoidal perturbations in order to extract information about the negative gradient direction of each agent's local potential function. An averaging analysis reveals that the gradient information originates from an approximation of Lie brackets of certain vector fields. The method is based on a recently introduced approach to the problem of extremum seeking control; we discuss this relation in the paper.
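
The following minimal simulation sketches the idea behind such a distance-only control law for a single agent that must reach prescribed distances to two fixed beacons: the agent evaluates its local potential from distance measurements only, and each coordinate is driven by sinusoidal dithers whose Lie bracket average approximates the negative gradient of that potential. Beacon positions, gains, frequencies and step sizes are illustrative choices of ours, not the distributed multi-agent law or the parameter values analysed in the paper.

```python
# Extremum-seeking sketch with Lie-bracket averaging: each coordinate follows
#   x_j' = sqrt(w_j) * ( a*cos(w_j*t)*J(x) + b*sin(w_j*t) ),
# whose averaged dynamics approximate the gradient flow x' = -(a*b/2)*grad J(x),
# so the agent only ever evaluates J (built from distance measurements).
import numpy as np

beacons = np.array([[0.0, 0.0], [2.0, 0.0]])      # two fixed reference points
d_star = np.array([np.sqrt(2.0), np.sqrt(2.0)])   # desired distances to them

def potential(x):
    """Local potential computed from distance measurements only."""
    d2 = np.sum((beacons - x) ** 2, axis=1)       # squared measured distances
    return float(np.sum((d2 - d_star ** 2) ** 2))

a, b = 0.15, 1.0                  # dither gains (illustrative)
omega = np.array([40.0, 55.0])    # distinct dither frequencies per coordinate
dt, T = 1e-4, 40.0                # integration step and horizon

x = np.array([1.6, 1.4])          # initial agent position
print("initial distance errors:", np.linalg.norm(beacons - x, axis=1) - d_star)

t = 0.0
while t < T:
    J = potential(x)              # the only sensed quantity
    x = x + dt * np.sqrt(omega) * (a * np.cos(omega * t) * J + b * np.sin(omega * t))
    t += dt

print("final distance errors:  ", np.linalg.norm(beacons - x, axis=1) - d_star)
```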

    The Design and Implementation of a High-Performance Polynomial System Solver

    This thesis examines the algorithmic and practical challenges of solving systems of polynomial equations. We discuss the design and implementation of triangular decomposition to solve polynomial systems exactly by means of symbolic computation. Incremental triangular decomposition solves one equation from the input list of polynomials at a time. Each step may produce several different components (points, curves, surfaces, etc.) of the solution set. Independent components imply that the solving process may proceed on each component concurrently. This so-called component-level parallelism is a theoretical and practical challenge characterized by irregular parallelism. Parallelism is not an algorithmic property but rather a geometrical property of the particular input system's solution set. Despite these challenges, we have effectively applied parallel computing to triangular decomposition through the layering and cooperation of many parallel code regions. This parallel computing is supported by our generic object-oriented framework based on the dynamic multithreading paradigm. Meanwhile, the required polynomial algebra is supported by an object-oriented framework for algebraic types which allows type safety and mathematical correctness to be determined at compile time. Our software is implemented in C/C++, and we have extensively tested the implementation for correctness and performance on over 3000 polynomial systems that have arisen in practice. The parallel framework has been re-used in the implementation of Hensel factorization as a parallel pipeline to compute roots of a polynomial with multivariate power series coefficients. Hensel factorization is one step toward computing the non-trivial limit points of quasi-components.
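
The component-level parallelism described above can be caricatured with a small Python sketch: each time the next equation splits a partial solution into several components, every component becomes an independent task in a dynamic task pool. This is only a toy of ours, written with sympy rather than the thesis' C/C++ framework; it sidesteps real triangular decomposition by assuming every equation becomes univariate once the earlier partial solution has been substituted.

```python
# Toy sketch of component-level parallelism in incremental solving: each time
# the next equation splits the current component into several sub-components,
# every sub-component becomes an independent task.  This is NOT the BPAS /
# RegularChains triangular decomposition.
from concurrent.futures import ThreadPoolExecutor, as_completed
import sympy as sp

x, y, z = sp.symbols('x y z')

def step(component, polys):
    """Process one equation on one partial solution; return
    (finished solutions, child components still to be processed)."""
    if not polys:
        return [component], []
    p = sp.expand(polys[0].subs(component))
    rest = polys[1:]
    if p == 0:                          # equation already satisfied here
        return [], [(component, rest)]
    if not p.free_symbols:              # nonzero constant: inconsistent branch
        return [], []
    var = sorted(p.free_symbols, key=str)[0]
    return [], [({**component, var: r}, rest) for r in sp.roots(p, var)]

def solve_componentwise(system, max_workers=4):
    solutions = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        pending = {pool.submit(step, {}, list(system))}
        while pending:                  # dynamic task pool over components
            done = next(as_completed(pending))
            pending.remove(done)
            finished, children = done.result()
            solutions.extend(finished)
            pending.update(pool.submit(step, comp, rest) for comp, rest in children)
    return solutions

if __name__ == "__main__":
    for sol in solve_componentwise([x**2 - 1, y**2 - x, z - x*y]):
        print(sol)
```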

    On the Complexity of Computing with Planar Algebraic Curves

    In this paper, we give improved bounds for the computational complexity of computing with planar algebraic curves. More specifically, for arbitrary coprime polynomials f, g ∈ Z[x,y] and an arbitrary polynomial h ∈ Z[x,y], each of total degree less than n and with integer coefficients of absolute value less than 2^τ, we show that each of the following problems can be solved in a deterministic way with a number of bit operations bounded by Õ(n^6 + n^5 τ), where we ignore polylogarithmic factors in n and τ: (1) the computation of isolating regions in ℂ² for all complex solutions of the system f = g = 0, (2) the computation of a separating form for the solutions of f = g = 0, (3) the computation of the sign of h at all real valued solutions of f = g = 0, and (4) the computation of the topology of the planar algebraic curve C defined as the real valued vanishing set of the polynomial f. Our bound improves upon the best currently known bounds for the first three problems by a factor of n^2 or more and closes the gap to the state-of-the-art randomized complexity for the last problem.

    Digital watermark technology in security applications

    With the rising emphasis on security and the number of fraud-related crimes around the world, authorities are looking for new technologies to tighten the security of identity. Among many modern electronic technologies, digital watermarking has unique advantages to enhance document authenticity. At the current status of development, digital watermarking technologies are not as mature as other competing technologies in supporting identity authentication systems. This work presents improvements in the performance of two classes of digital watermarking techniques and investigates the issue of watermark synchronisation. Optimal performance can be obtained if the spreading sequences are designed to be orthogonal to the cover vector. In this thesis, two classes of orthogonalisation methods that generate binary sequences quasi-orthogonal to the cover vector are presented. One method, namely "Sorting and Cancelling", generates sequences that have a high level of orthogonality to the cover vector. The Hadamard-matrix-based orthogonalisation method, namely "Hadamard Matrix Search", is able to realise overlapped embedding, so that the watermarking capacity and image fidelity can be improved compared to using short watermark sequences. The results are compared with traditional pseudo-randomly generated binary sequences. The advantages of both classes of orthogonalisation methods are significant. Another watermarking method introduced in the thesis is based on the writing-on-dirty-paper theory. The method is presented with biorthogonal codes, which give the best robustness. The advantages and trade-offs of using biorthogonal codes with this watermark coding method are analysed comprehensively. Comparisons between orthogonal and non-orthogonal codes used in this watermarking method are also made. It is found that fidelity and robustness are contradictory and it is not possible to optimise them simultaneously. Comparisons are also made between all proposed methods, focused on three major performance criteria: fidelity, capacity and robustness. From two different viewpoints, the conclusions are not the same. From a fidelity-centric viewpoint, the dirty-paper coding method using biorthogonal codes has a very strong advantage in preserving image fidelity, and its advantage in capacity performance is also significant. However, from the power-ratio point of view, the orthogonalisation methods demonstrate a significant advantage in capacity and robustness. The conclusions are contradictory but together they summarise the performance resulting from different design considerations. Synchronisation of the watermark is first provided by high-contrast frames around the watermarked image. Edge detection filters are used to detect the high-contrast borders of the captured image. By scanning the pixels from the border to the centre, the locations of detected edges are stored. An optimal linear regression algorithm is used to estimate the watermarked image frames. Estimation of the regression function provides the rotation angle as the slope of the rotated frames. The scaling is corrected by re-sampling the upright image to the original size. A theoretically studied method that is able to synchronise the captured image to sub-pixel accuracy is also presented. By using invariant transforms and the "symmetric phase only matched filter", the captured image can be corrected accurately to its original geometric size. The method uses repeating watermarks to form an array in the spatial domain of the watermarked image; the locations of the array elements reveal information about rotation, translation and scaling via two filtering processes.
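
As background for the orthogonalisation idea discussed in this abstract, the following numpy sketch shows why a spreading sequence that is quasi-orthogonal to the cover vector helps correlation detection in additive spread-spectrum watermarking. The greedy sign-assignment used here is a generic illustration of ours, not the thesis' "Sorting and Cancelling" or "Hadamard Matrix Search" procedures.

```python
# In additive spread-spectrum watermarking the detector correlates the
# received vector with the spreading sequence, so any correlation between
# the sequence and the cover acts as interference; a (quasi-)orthogonal
# sequence removes that interference.
import numpy as np

rng = np.random.default_rng(0)
cover = rng.normal(0.0, 10.0, size=256)          # cover-image coefficients

# Greedy quasi-orthogonal +/-1 sequence: visit coefficients in order of
# decreasing magnitude and pick the sign that keeps the running inner
# product with the cover as close to zero as possible.
seq = np.zeros_like(cover)
running = 0.0
for i in np.argsort(-np.abs(cover)):
    seq[i] = -1.0 if running * cover[i] > 0 else 1.0
    running += seq[i] * cover[i]

bit, alpha = 1, 0.8                              # payload bit and embedding strength
marked = cover + alpha * bit * seq               # additive embedding

# With an orthogonalised sequence the cover term almost vanishes and the
# normalised detector statistic is dominated by alpha * bit.
print("cover interference:", float(seq @ cover))
print("detector statistic:", float(seq @ marked) / len(seq))
```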

    Cache-Friendly, Modular and Parallel Schemes For Computing Subresultant Chains

    The RegularChains library in Maple offers a collection of commands for solving polynomial systems symbolically, taking advantage of the theory of regular chains. The primary goal of this thesis is to make algorithmic contributions, in particular high-performance computational schemes for subresultant chains and their underlying routines, extending those of RegularChains in a C/C++ open-source library. Subresultants are one of the most fundamental tools in computer algebra. They are at the core of numerous algorithms including, but not limited to, polynomial GCD computations, polynomial system solving, and symbolic integration. When the subresultant chain of two polynomials is involved in a client procedure, not all polynomials of the chain, or not all coefficients of a given subresultant, may be needed. Based on that observation, we design so-called speculative and caching strategies which yield great performance improvements within our polynomial system solver. Our implementation of these techniques has been highly optimized. We have implemented optimized core arithmetic routines and multithreaded subresultant algorithms for univariate, bivariate and multivariate polynomials. We further examine memory access patterns and data locality for computing subresultants of multivariate polynomials, and study different optimization techniques for the fraction-free LU decomposition algorithm to compute subresultants based on determinants of Bézout matrices. Our code is publicly available at www.bpaslib.org as part of the Basic Polynomial Algebra Subprograms (BPAS) library, which is mainly written in C, with concurrency support and user interfaces written in C++.
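
The fraction-free elimination mentioned in this abstract can be illustrated with the classical Bareiss algorithm, in which every intermediate division is exact. The sketch below is a generic Python illustration of ours: it computes the resultant of two small univariate integer polynomials as the Bareiss determinant of their Sylvester matrix, whereas the thesis works with Bézout matrices and full subresultant chains inside BPAS.

```python
# Fraction-free (Bareiss) elimination: every intermediate entry stays an exact
# integer because the division by the previous pivot is always exact.

def bareiss_det(M):
    """Determinant of an integer matrix by fraction-free Gaussian elimination."""
    A = [row[:] for row in M]
    n = len(A)
    prev, sign = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:                       # pivot: swap in a nonzero row
            for r in range(k + 1, n):
                if A[r][k] != 0:
                    A[k], A[r] = A[r], A[k]
                    sign = -sign
                    break
            else:
                return 0                       # whole column is zero: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # exact integer division: prev always divides the numerator
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]

def sylvester(f, g):
    """Sylvester matrix of f, g given as integer coefficient lists (highest degree first)."""
    m, n = len(f) - 1, len(g) - 1
    S = [[0] * (m + n) for _ in range(m + n)]
    for i in range(n):
        S[i][i:i + m + 1] = f
    for i in range(m):
        S[n + i][i:i + n + 1] = g
    return S

# Resultant of x^2 - 1 and x^2 - 2x + 1 is 0, since they share the root x = 1.
print(bareiss_det(sylvester([1, 0, -1], [1, -2, 1])))
```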

    Wafer Stage Motion Control: from Experiment Design to Robust Performance
