
    Practical Integer Division with Karatsuba Complexity

    Combining Karatsuba multiplication with a technique developed by Krandick for computing the high-order part of the quotient, we obtain an integer division algorithm which is, on average, only twice as slow as Karatsuba multiplication. The main idea is to delay part of the dividend update until it can be done by a multiplication between large, balanced operands. An implementation under SACLIB is faster than classical multiplication at 40 words, and becomes twice as fast at 250 words.

    Introduction. The Karatsuba method for long integer multiplication [4] is probably the only asymptotically fast algorithm of practical use for integer arithmetic. Depending on the implementation, the break-even point against the classical algorithm typically lies between 5 and 50 words. However, integer division with remainder does not benefit from this algorithm: although division theoretically has the same time complexity as multiplication (see e.g. [5], p. 275), a division algorithm designed…
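
    The complexity claim above rests on Karatsuba multiplication itself. The Python sketch below shows the standard split-in-half scheme, in which three recursive products replace four and yield the O(n^log2 3) ≈ O(n^1.585) cost that the division algorithm inherits up to roughly a factor of two. It is illustrative only, not the paper's SACLIB implementation, and the cutoff_bits threshold is an arbitrary placeholder for the break-even point mentioned above.

        def karatsuba(x: int, y: int, cutoff_bits: int = 64) -> int:
            """Multiply non-negative integers with the Karatsuba scheme.

            Below the (arbitrary) cutoff the builtin product stands in for
            the classical algorithm; above it, each operand is split in half
            and the result is assembled from three recursive products.
            """
            if x < (1 << cutoff_bits) or y < (1 << cutoff_bits):
                return x * y                                  # classical base case
            half = max(x.bit_length(), y.bit_length()) // 2
            x_hi, x_lo = x >> half, x & ((1 << half) - 1)     # x = x_hi*2^half + x_lo
            y_hi, y_lo = y >> half, y & ((1 << half) - 1)     # y = y_hi*2^half + y_lo
            p_hi = karatsuba(x_hi, y_hi, cutoff_bits)
            p_lo = karatsuba(x_lo, y_lo, cutoff_bits)
            # One extra product of the half-sums recovers the middle term.
            p_mid = karatsuba(x_hi + x_lo, y_hi + y_lo, cutoff_bits) - p_hi - p_lo
            return (p_hi << (2 * half)) + (p_mid << half) + p_lo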

    Foundational Factorization Algorithms for the Efficient Roundoff-Error-Free Solution of Optimization Problems

    LU and Cholesky factorizations play a central role in solving linear and mixed-integer programs. In many documented cases, the round-off errors accrued during the construction and implementation of these factorizations cause suboptimal solutions to be misclassified as optimal, and infeasible problems as feasible, and vice versa. Such erroneous outputs call the reliability of optimization solvers into question, and it is therefore imperative to eliminate these round-off errors altogether, and to do so efficiently enough to remain practical.

    Firstly, this work introduces two roundoff-error-free (REF) factorizations constructed exclusively in integer arithmetic: the REF LU and REF Cholesky factorizations. It also develops supplementary integer-preserving substitution algorithms, thereby providing a complete tool set for solving systems of linear equations (SLEs) exactly and efficiently. An inherent property of the REF factorization algorithms is that their entries' bit-length (i.e., the number of bits required to express them) is bounded polynomially. Unlike the exact rational-arithmetic methods used in practice, however, the algorithms presented here require no greatest common divisor operations to guarantee this pivotal property.

    Secondly, this work derives various useful theoretical results and reports computational tests demonstrating that the REF factorization framework is considerably superior to the rational-arithmetic LU factorization approach in both computational performance and storage requirements. This is significant because the latter approach is the solution-validation tool of choice of state-of-the-art exact linear programming solvers, owing to its ability to handle both numerically difficult and intricate problems. An additional theoretical contribution and further computational tests also demonstrate the superiority of the featured framework over Q-matrices, an alternative integer-preserving approach that relies on the basis adjunct matrix.

    Thirdly, this work develops special algorithms for updating the REF factorizations. This is necessary because applying the traditional updating approach to the REF factorizations is inefficient in terms of entry growth and computational effort; in fact, these inefficiencies virtually wipe out all the computational savings commonly expected of factorization updates. Hence, the current work develops REF update algorithms that differ significantly from their traditional counterparts. The featured REF updates are column/row addition, deletion, and replacement.
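
    The flavor of an integer-preserving factorization can be conveyed by fraction-free (Bareiss-style) Gaussian elimination, sketched below in Python. This is not the dissertation's REF LU/Cholesky factorizations or their update routines; the function name and the no-pivoting assumption are the sketch's own. What it shares with them is that every division is exact, so the entries remain integers of polynomially bounded bit-length without any GCD operations or rational arithmetic.

        def fraction_free_elimination(A):
            """Bareiss-style integer-preserving elimination (illustrative).

            A is a square integer matrix given as a list of lists.  Every
            division below is exact by Sylvester's identity, so the entries
            stay integers; the last diagonal entry of the result equals
            det(A).  Pivoting is omitted for brevity, so nonzero leading
            principal minors are assumed.
            """
            n = len(A)
            M = [row[:] for row in A]          # work on a copy
            prev_pivot = 1
            for k in range(n - 1):
                pivot = M[k][k]
                for i in range(k + 1, n):
                    for j in range(k + 1, n):
                        # Exact integer division: no rounding, no fractions.
                        M[i][j] = (pivot * M[i][j] - M[i][k] * M[k][j]) // prev_pivot
                    M[i][k] = 0                # entry eliminated
                prev_pivot = pivot
            return M

        # Example: for [[2, 1], [1, 3]] the bottom-right entry becomes det = 5.
        assert fraction_free_elimination([[2, 1], [1, 3]])[1][1] == 5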