272 research outputs found

    Gradual sub-lattice reduction and a new complexity for factoring polynomials

    Get PDF
    We present a lattice algorithm specifically designed for some classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors are boundably short. For such applications, the complexity of the algorithm improves on traditional lattice reduction by replacing some dependence on the bit-length of the input vectors with a dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then our algorithm is only linear in the bit-length of the input entries, which is an improvement over the quadratic-complexity floating-point LLL algorithms. To illustrate the usefulness of this algorithm, we show that a direct application to factoring univariate polynomials over the integers leads to the first complexity bound improvement since 1984. A second application is algebraic number reconstruction, where a new complexity bound is obtained as well.
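    As a rough illustration of the knapsack-type lattice structure mentioned above (a sketch only, not the paper's gradual sub-lattice reduction algorithm), the toy basis below pairs an identity block with a scaled column of large entries, so the sought combination shows up as a short vector even though the inputs are long. It assumes the fpylll library is available; the entries and the scaling constant C are invented for the example.

```python
# Knapsack-type lattice sketch: rows are (e_i | C*a_i). A short reduced vector
# (m_1, ..., m_n, 0) encodes a small integer combination with sum(m_i * a_i) = 0.
from fpylll import IntegerMatrix, LLL

a1 = 1234577
a = [a1, 2 * a1 + 3, 3 * a1 - 7]   # toy inputs hiding the relation (-23, 7, 3)
C = 10**6                          # weight that forces the last coordinate to 0
n = len(a)

B = IntegerMatrix(n, n + 1)
for i in range(n):
    B[i, i] = 1                    # identity block records the coefficients m_i
    B[i, n] = C * a[i]             # scaled knapsack column

LLL.reduction(B)
print(B[0])                        # expected: a vector like (-23, 7, 3, 0)
```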

    The Role of Benchmarking in Symbolic Computation: (Position Paper)

    Get PDF
    There is little doubt that, in the minds of most symbolic computation researchers, the ideal paper consists of a problem statement, a new algorithm, a complexity analysis and preferably a few validating examples. There are many such great papers. This paradigm has served computer algebra well for many years, and indeed continues to do so where it is applicable. However, it is much less applicable to sparse problems, where there are many NP-hardness results, or to many problems coming from algebraic geometry, where the worst-case complexity seems to be rare. We argue that, in these cases, the field should take a leaf out of the practices of the SAT-solving community, and adopt systematic benchmarking, and benchmarking contests, as a way of measuring (and stimulating) progress. This would involve a change of culture.

    Factorization in Cybersecurity: A Dual Role of Defense and Vulnerability in the Age of Quantum Computing

    Get PDF
    One of the most critical questions for modern cryptography, and thus cybersecurity, is whether large integers can be factored quickly and efficiently. RSA encryption, one of the most widely used schemes, rests largely on the assumption that factoring large numbers is computationally infeasible for humans and computers alike. With quantum computers, however, Shor's algorithm can perform the same task exponentially faster than any classical device ever could. This investigation examines the strength and vulnerability of RSA encryption, through the power of factorization, in an age of quantum computers. We start by reviewing the foundations of both classical and quantum factoring, looking in greater detail at the number field sieve (NFS) and Shor's algorithm. We examine the mathematical background of each topic and the associated algorithms, and conclude with theoretical analysis and experimental simulations that address the difficulty and implications of the above-mentioned algorithms for cryptography. Finally, I discuss the current state of quantum computing and how it could threaten the cryptographic systems we use every day, the need for post-quantum cryptography, and the algorithms currently being designed to resist attacks even from large-scale quantum computers. This investigation shows the changing dynamics of cybersecurity in the quantum era and helps us understand the challenges ahead and the need to modernize current cryptographic systems.
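    To make the role of order finding concrete, the toy sketch below factors a small RSA-style modulus using the classical reduction that Shor's algorithm relies on: find the multiplicative order r of a random a modulo N, and if r is even and a^(r/2) is not -1 mod N, then gcd(a^(r/2) - 1, N) is a nontrivial factor. The order is found by brute force here, which is exactly the step a quantum computer performs exponentially faster; the code is an illustration, not part of the work described above.

```python
# Classical rendition of the number-theoretic core behind Shor's algorithm:
# factoring N once the multiplicative order of a mod N is known.
from math import gcd
from random import randrange

def order(a, n):
    """Smallest r > 0 with a**r == 1 (mod n), found by brute force."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n):
    """Pick random a until the order-finding reduction yields a factor of n."""
    while True:
        a = randrange(2, n)
        g = gcd(a, n)
        if g > 1:                      # lucky: a already shares a factor with n
            return g, n // g
        r = order(a, n)
        if r % 2:                      # need an even order
            continue
        y = pow(a, r // 2, n)
        if y == n - 1:                 # a**(r/2) == -1 (mod n) gives nothing
            continue
        p = gcd(y - 1, n)              # y^2 == 1 (mod n), y != +-1 => split
        if 1 < p < n:
            return p, n // p

print(factor_via_order(3233))          # 3233 = 61 * 53, a classic toy RSA modulus
```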

    Detecting Simultaneous Integer Relations for Several Real Vectors

    Full text link
    An algorithm is presented which, for given t real n-dimensional vectors x_1, ..., x_t, either finds a nonzero integer vector m such that x_i^T m = 0 for all i, or proves that no such integer vector with norm less than a given bound exists. The cost of the algorithm is at most O(n^4 + n^3 log λ(X)) exact arithmetic operations in the dimension n and the least Euclidean norm λ(X) of such integer vectors. This matches the best complexity upper bound known for the problem. Experimental data show that the algorithm outperforms an existing algorithm in the literature. As an application, the algorithm is used to obtain a complete method for finding the minimal polynomial of an unknown complex algebraic number from its approximation, which runs even faster than the corresponding Maple built-in function. Comment: 10 pages.
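    The minimal-polynomial application can be illustrated with mpmath's PSLQ routine (a different integer-relation algorithm from the one in the paper): the minimal polynomial of an algebraic number x is recovered by detecting an integer relation among the powers 1, x, ..., x^d computed from a numerical approximation.

```python
# Recover the minimal polynomial of sqrt(2) + sqrt(3) from a 50-digit
# approximation by finding an integer relation among its powers with PSLQ.
from mpmath import mp, sqrt, pslq

mp.dps = 50                          # working precision: 50 decimal digits
x = sqrt(2) + sqrt(3)                # the "unknown" algebraic number

powers = [x**k for k in range(5)]    # 1, x, x^2, x^3, x^4
rel = pslq(powers, maxcoeff=10**6)
print(rel)   # [1, 0, -10, 0, 1] (up to sign), i.e. x^4 - 10*x^2 + 1 = 0
```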

    On The Applications of Lifting Techniques

    Get PDF
    Lifting techniques are among the main tools for solving a variety of computational problems in computer algebra. In this thesis, we consider two fundamental problems in computational algebraic geometry and number theory and try to find more efficient algorithms for them. The first problem, solving systems of polynomial equations, is one of the most fundamental problems in computational algebraic geometry. We discuss how to solve bivariate polynomial systems over either k(T) or Q using a combination of lifting and modular composition techniques, and show that one can find an equiprojectable decomposition of a bivariate polynomial system with a better time complexity than the best known algorithms in the field, both in theory and in practice. The second problem, polynomial factorization over number fields, is one of the oldest problems in number theory. It has many applications in related problems, and there have been many attempts to solve it efficiently, at least in practice. Finding the p-adic factors of a univariate polynomial over a number field uses lifting techniques, and improving this step can reduce the total running time of the factorization in practice. We first introduce a multivariate version of the Belabas factorization algorithm over number fields. We then compare the running-time complexity of the factorization problem under two different representations of a number field, univariate vs. multivariate, and finally, as an application, we show the improvement gained in computing the splitting field of a univariate polynomial over the rationals.
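    As a concrete (and deliberately simplified) example of the p-adic lifting step mentioned above, the sketch below lifts a root of a polynomial mod p to a root mod a high power of p by quadratic Newton/Hensel steps. This is only the linear-factor case of Hensel lifting, shown to make the idea of gaining p-adic precision explicit; the thesis itself lifts full factorizations over number fields.

```python
# Quadratic Hensel/Newton lifting of a simple root: precision doubles each step.
def lift_root(coeffs, r, p, k):
    """Lift a simple root r of f mod p (coeffs low-degree-first, f'(r) a unit
    mod p) to a root mod p**(2**k)."""
    def f(x, m):
        return sum(c * pow(x, i, m) for i, c in enumerate(coeffs)) % m

    def df(x, m):
        return sum(i * c * pow(x, i - 1, m) for i, c in enumerate(coeffs) if i) % m

    m = p
    for _ in range(k):
        m *= m                                   # precision doubles: p^j -> p^(2j)
        r = (r - f(r, m) * pow(df(r, m), -1, m)) % m
    return r, m

# f(x) = x^2 - 7 has the root 1 mod 3; lift it to a square root of 7 mod 3^16.
root, modulus = lift_root([-7, 0, 1], 1, 3, 4)
print(root, modulus, (root * root - 7) % modulus)   # last value is 0
```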
