An O(M(n) log n) algorithm for the Jacobi symbol
The best known algorithm to compute the Jacobi symbol of two n-bit integers
runs in time O(M(n) log n), using Sch\"onhage's fast continued fraction
algorithm combined with an identity due to Gauss. We give a different O(M(n)
log n) algorithm based on the binary recursive gcd algorithm of Stehl\'e and
Zimmermann. Our implementation - which to our knowledge is the first to run in
time O(M(n) log n) - is faster than GMP's quadratic implementation for inputs
larger than about 10000 decimal digits. Comment: Submitted to ANTS IX (Nancy, July 2010)
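As context for the quadratic GMP baseline mentioned above, the textbook binary-style Jacobi algorithm (quadratic in the bit length, not the O(M(n) log n) method of the paper; the function name is ours) can be sketched as:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via the classical
    algorithm: strip factors of 2 with the (2/n) rule, then swap
    using quadratic reciprocity.  Quadratic in the bit length --
    a baseline, not the paper's O(M(n) log n) algorithm."""
    if n <= 0 or n % 2 == 0:
        raise ValueError("n must be a positive odd integer")
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):   # (2/n) = -1 iff n = 3 or 5 (mod 8)
                result = -result
        a, n = n, a               # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0
```

For odd primes p, jacobi(a, p) agrees with Euler's criterion a^((p-1)/2) mod p, which makes a convenient sanity check.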
How Fast Can We Multiply Large Integers on an Actual Computer?
We provide two complexity measures that can be used to measure the running
time of algorithms that multiply long integers. The random
access machine with unit or logarithmic cost is not adequate for measuring the
complexity of a task like multiplication of long integers. The Turing machine
is more useful here, but fails to take into account the multiplication
instruction for short integers, which is available on physical computing
devices. An interesting outcome is that the proposed refined complexity
measures do not rank the well known multiplication algorithms the same way as
the Turing machine model. Comment: To appear in the proceedings of Latin 2014. Springer LNCS 839
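To make the model-dependence concrete, here is a sketch (our own, not from the paper) of Karatsuba's method: asymptotically it beats schoolbook multiplication on a Turing machine, yet its base case bottoms out in exactly the short-integer hardware multiply that the refined measures account for. The threshold 1024 is an arbitrary illustrative choice:

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with Karatsuba's trick:
    three recursive half-size products instead of four, giving
    O(n^log2(3)) ~ O(n^1.585) digit operations."""
    if x < 1024 or y < 1024:           # base case: hardware multiply
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)
    yh, yl = y >> m, y & ((1 << m) - 1)
    hh = karatsuba(xh, yh)
    ll = karatsuba(xl, yl)
    # (xh+xl)(yh+yl) - hh - ll = xh*yl + xl*yh, with one multiply
    mid = karatsuba(xh + xl, yh + yl) - hh - ll
    return (hh << (2 * m)) + (mid << m) + ll
```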
Combining All Pairs Shortest Paths and All Pairs Bottleneck Paths Problems
We introduce a new problem that combines the well known All Pairs Shortest
Paths (APSP) problem and the All Pairs Bottleneck Paths (APBP) problem to
compute the shortest paths for all pairs of vertices for all possible flow
amounts. We call this new problem the All Pairs Shortest Paths for All Flows
(APSP-AF) problem. We firstly solve the APSP-AF problem on directed graphs with
unit edge costs and real edge capacities in $\tilde{O}(\sqrt{t}\,n^{(\omega+9)/4})$
time, where $n$ is the number of vertices, $t$ is the number of distinct edge
capacities (flow amounts) and $O(n^{\omega})$ is the time taken
to multiply two $n$-by-$n$ matrices over a ring. Secondly we extend the problem
to graphs with positive integer edge costs and present an algorithm with
$\tilde{O}(\sqrt{t}\,(cn)^{(\omega+9)/4})$ worst case time complexity, where $c$ is
the upper bound on edge costs.
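As a reference point for the problem statement, APSP-AF on unit-cost digraphs can be solved naively by running one BFS per vertex for each distinct capacity value. The sketch below is our own baseline, not the matrix-multiplication algorithm of the paper, and the name apsp_all_flows is ours:

```python
from collections import deque

def apsp_all_flows(n, edges):
    """Naive APSP-AF baseline for a digraph with unit edge costs:
    for each distinct capacity value f, keep only edges of capacity
    >= f and BFS from every vertex.  Runs in O(t * n * (n + m)).
    edges: list of (u, v, capacity).  Returns {f: dist} where
    dist[u][v] is the shortest u->v path carrying flow f (inf if none)."""
    INF = float('inf')
    flows = sorted({c for _, _, c in edges})
    result = {}
    for f in flows:
        adj = [[] for _ in range(n)]
        for u, v, c in edges:
            if c >= f:                 # only edges that can carry flow f
                adj[u].append(v)
        dist = [[INF] * n for _ in range(n)]
        for s in range(n):
            dist[s][s] = 0
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if dist[s][v] == INF:
                        dist[s][v] = dist[s][u] + 1
                        q.append(v)
        result[f] = dist
    return result
```

The output exposes the trade-off the problem captures: a path that is shortest for a small flow may be unusable for a larger one.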
Chains of large gaps between primes
Let $p_n$ denote the $n$-th prime, and for any $k \geq 1$ and sufficiently
large $X$, define the quantity
$G_k(X) := \max_{p_{n+k} \leq X} \min(p_{n+1}-p_n, \ldots, p_{n+k}-p_{n+k-1})$,
which measures the occurrence of
chains of $k$ consecutive large gaps of primes. Recently, with Green and
Konyagin, the authors showed that
$G_1(X) \gg \frac{\log X \log\log X \log\log\log\log X}{\log\log\log X}$
for sufficiently large $X$. In this
note, we combine the arguments in that paper with the Maier matrix method to
show that
$G_k(X) \gg \frac{1}{k^2}\,\frac{\log X \log\log X \log\log\log\log X}{\log\log\log X}$
for any fixed $k$ and sufficiently large $X$. The
implied constant is effective and independent of $k$. Comment: 16 pages, no figures
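For small X the quantity can be computed directly. The sketch below takes G_k(X) to be the maximum, over n with p_{n+k} <= X, of the minimum of the k consecutive gaps p_{n+1}-p_n, ..., p_{n+k}-p_{n+k-1} (our reading of the definition, since the formula was garbled in extraction):

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes returning all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b'\x00\x00'
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, limit + 1, i)))
    return [i for i in range(limit + 1) if sieve[i]]

def G(k, X):
    """Brute-force G_k(X): the largest g such that some k consecutive
    prime gaps, among primes up to X, are all at least g."""
    p = primes_up_to(X)
    best = 0
    for n in range(len(p) - k):
        gaps = [p[n + i + 1] - p[n + i] for i in range(k)]
        best = max(best, min(gaps))
    return best
```

This is only feasible for tiny X, of course; the point of the paper is the asymptotic lower bound.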
A one-dimensional Vlasov-Maxwell equilibrium for the force-free Harris sheet
In this paper the first non-linear force-free Vlasov-Maxwell equilibrium is
presented. One component of the equilibrium magnetic field has the same spatial
structure as the Harris sheet, but whereas the Harris sheet is kept in force
balance by pressure gradients, in the force-free solution presented here force
balance is maintained by magnetic shear. Magnetic pressure, plasma pressure and
plasma density are constant. The method used to find the equilibrium is based
on the analogy of the one-dimensional Vlasov-Maxwell equilibrium problem to the
motion of a pseudo-particle in a two-dimensional conservative potential. This
potential is equivalent to one of the diagonal components of the plasma
pressure tensor. After finding the appropriate functional form for this
pressure tensor component, the corresponding distribution functions can be
found using a Fourier transform method. The force-free solution can be
generalized to a complete family of equilibria that describe the transition
from the purely pressure-balanced Harris sheet to the force-free Harris
sheet. Comment: 10 pages, 2 figures, submitted to PRL, revised version
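The constant-pressure property is easy to check numerically. The sketch below uses the force-free Harris sheet field in the form commonly written for this equilibrium, B = B0 (tanh(z/L), sech(z/L), 0); the normalization B0 = L = mu0 = 1 is our assumption for illustration:

```python
import math

B0, L, mu0 = 1.0, 1.0, 1.0   # illustrative units, not from the paper

def B(z):
    """Force-free Harris sheet field: Bx has the Harris tanh profile,
    while the shear component By = B0 sech(z/L) keeps |B| constant."""
    return (B0 * math.tanh(z / L), B0 / math.cosh(z / L), 0.0)

def lorentz_force_z(z, h=1e-5):
    """(j x B)_z for a field depending only on z.  Ampere's law gives
    j_x = -By'/mu0 and j_y = Bx'/mu0, so
    (j x B)_z = jx*By - jy*Bx = -(d/dz)(B^2/2)/mu0,
    which vanishes because tanh^2 + sech^2 = 1."""
    Bx, By, _ = B(z)
    dBx = (B(z + h)[0] - B(z - h)[0]) / (2 * h)   # central differences
    dBy = (B(z + h)[1] - B(z - h)[1]) / (2 * h)
    return (-dBy * By - dBx * Bx) / mu0
```

Since |B|^2 is uniform, the magnetic pressure B^2/(2 mu0) is constant and the current is everywhere parallel to B, which is precisely the force-free condition maintained by magnetic shear.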
Gradual sub-lattice reduction and a new complexity for factoring polynomials
We present a lattice algorithm specifically designed for some classical
applications of lattice reduction. The applications are for lattice bases with
a generalized knapsack-type structure, where the target vectors are boundably
short. For such applications, the complexity of the algorithm improves
traditional lattice reduction by replacing some dependence on the bit-length of
the input vectors by some dependence on the bound for the output vectors. If
the bit-length of the target vectors is unrelated to the bit-length of the
input, then our algorithm is only linear in the bit-length of the input
entries, which is an improvement over the quadratic complexity floating-point
LLL algorithms. To illustrate the usefulness of this algorithm we show that a
direct application to factoring univariate polynomials over the integers leads
to the first complexity bound improvement since 1984. A second application is
algebraic number reconstruction, where a new complexity bound is obtained as
well.
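As a toy illustration of what lattice reduction does (the two-dimensional Lagrange-Gauss special case, not the gradual sub-lattice algorithm of the paper; the function name is ours):

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a rank-2 integer lattice basis:
    the two-dimensional analogue of LLL.  Alternately size-reduces
    one vector against the other and swaps, until the first vector
    is a shortest nonzero vector of the lattice."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        # subtract the nearest-integer Gram coefficient multiple of u
        m = round(dot(u, v) / dot(u, u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v
        u, v = v, u
```

The operations are unimodular, so the lattice (and the absolute value of the basis determinant) is preserved while the basis vectors shrink; the applications in the paper exploit the same principle on knapsack-type bases of higher rank.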
Computing with and without arbitrary large numbers
In the study of random access machines (RAMs) it has been shown that the
availability of an extra input integer, having no special properties other than
being sufficiently large, is enough to reduce the computational complexity of
some problems. However, this has only been shown so far for specific problems.
We provide a characterization of the power of such extra inputs for general
problems. To do so, we first correct a classical result by Simon and Szegedy
(1992) as well as one by Simon (1981). In the former we show mistakes in the
proof and correct these by an entirely new construction, with no great change
to the results. In the latter, the original proof direction stands with only
minor modifications, but the new results are far stronger than those of Simon
(1981). In both cases, the new constructions provide the theoretical tools
required to characterize the power of arbitrary large numbers. Comment: 12 pages (main text) + 30 pages (appendices), 1 figure. Extended
abstract. The full paper was presented at TAMC 2013. (Reference given is for
the paper version, as it appears in the proceedings.)
Chromatic number, clique subdivisions, and the conjectures of Haj\'os and Erd\H{o}s-Fajtlowicz
For a graph $G$, let $\chi(G)$ denote its chromatic number and $\sigma(G)$
denote the order of the largest clique subdivision in $G$. Let H(n) be the
maximum of $\chi(G)/\sigma(G)$ over all $n$-vertex graphs $G$. A famous
conjecture of Haj\'os from 1961 states that $\sigma(G) \geq \chi(G)$ for every
graph $G$. That is, $H(n) \leq 1$ for all positive integers $n$. This
conjecture was disproved by Catlin in 1979. Erd\H{o}s and Fajtlowicz further
showed by considering a random graph that $H(n) \geq c\sqrt{n}/\log n$ for some
absolute constant $c$. In 1981 they conjectured that this bound is tight up
to a constant factor in that there is some absolute constant $C$ such that
$\chi(G)/\sigma(G) \leq C\sqrt{n}/\log n$ for all $n$-vertex graphs $G$. In this
paper we prove the Erd\H{o}s-Fajtlowicz conjecture. The main ingredient in our
proof, which might be of independent interest, is an estimate on the order of
the largest clique subdivision which one can find in every graph on $n$
vertices with independence number $\alpha$. Comment: 14 pages