A Still Simpler Way of Introducing the Interior-Point Method for Linear Programming
Linear programming is now included in algorithms courses for undergraduate and
postgraduate computer science majors. We give a self-contained
treatment of an interior-point method which is particularly tailored to the
typical mathematical background of CS students. In particular, only limited
knowledge of linear algebra and calculus is assumed.
Comment: Updates and replaces arXiv:1412.065
Computing Real Roots of Real Polynomials
Computing the roots of a univariate polynomial is a fundamental and
long-studied problem of computational algebra with applications in mathematics,
engineering, computer science, and the natural sciences. For isolating as well
as for approximating all complex roots, the best algorithm known is based on an
almost optimal method for approximate polynomial factorization, introduced by
Pan in 2002. Pan's factorization algorithm goes back to the splitting-circle
method of Schoenhage from 1982. The main drawbacks of Pan's method are that it
is quite involved and that all roots have to be computed at the same time. For
the important special case, where only the real roots have to be computed, much
simpler methods are used in practice; however, they considerably lag behind
Pan's method with respect to complexity.
In this paper, we resolve this discrepancy by introducing a hybrid of the
Descartes method and Newton iteration, denoted ANEWDSC, which is simpler than
Pan's method, but achieves a run-time comparable to it. Our algorithm computes
isolating intervals for the real roots of any real square-free polynomial,
given by an oracle that provides arbitrarily good approximations of the
polynomial's coefficients. ANEWDSC can also be used to isolate only the roots
in a given interval and to refine the isolating intervals to arbitrarily small
size; it achieves near-optimal complexity for the latter task.
Comment: to appear in the Journal of Symbolic Computation
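The Descartes half of such a hybrid rests on Descartes' rule of signs: the number of sign changes in a polynomial's coefficient sequence bounds its number of positive real roots. A minimal illustrative sketch of the sign-variation count (this is the textbook rule, not the ANEWDSC algorithm itself):

```python
def sign_variations(coeffs):
    """Count sign changes in a coefficient sequence, skipping zeros.

    By Descartes' rule of signs, this bounds the number of positive
    real roots of the polynomial and matches it modulo 2; bisection
    methods apply it to transformed polynomials to isolate roots.
    """
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# p(x) = x^3 - x^2 - 2x + 2 = (x - 1)(x^2 - 2):
# positive roots are 1 and sqrt(2)
print(sign_variations([1, -1, -2, 2]))  # 2 -> at most 2 positive roots
```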
The Cost of Address Translation
Modern computers are not random access machines (RAMs). They have a memory
hierarchy, multiple cores, and virtual memory. In this paper, we address the
computational cost of address translation in virtual memory. Starting point for
our work is the observation that the analysis of some simple algorithms (random
scan of an array, binary search, heapsort) in either the RAM model or the EM
model (external memory model) does not correctly predict growth rates of actual
running times. We propose the VAT model (virtual address translation) to
account for the cost of address translations and analyze the algorithms
mentioned above and others in the model. The predictions agree with the
measurements. We also analyze the VAT-cost of cache-oblivious algorithms.
Comment: An extended abstract of this paper was published in the proceedings of
ALENEX 2013, New Orleans, USA
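The source of the translation cost is that resolving a virtual address walks a multi-level page table, so the number of levels, and hence the cost of a single access, grows logarithmically in the largest address used. A toy sketch of that growth; the 9-bits-per-level, 4 KiB-page layout is an assumption modeled on common x86-64 paging, not a parameter taken from the paper:

```python
def translation_steps(vaddr, bits_per_level=9, page_offset_bits=12):
    """Page-table levels needed to translate vaddr: one step per
    bits_per_level-bit index above the page offset. Illustrates the
    logarithmic dependence on address size; real hardware fixes the
    depth and hides most walks behind a TLB."""
    idx_bits = max(vaddr.bit_length() - page_offset_bits, 0)
    return -(-idx_bits // bits_per_level)  # ceiling division

print(translation_steps(0xFFFF))   # small address: 1 level suffices
print(translation_steps(1 << 40))  # large address: 4 levels
```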
Arbitrary weight changes in dynamic trees
We describe an implementation of dynamic weighted trees, called D-trees. Given a set {B_0, ..., B_n} of objects and access frequencies q_0, q_1, ..., q_n, one wants to store the objects in a binary tree such that the average access time is nearly optimal and changes of the access frequencies require only small changes of the tree. In D-trees the changes are always limited to the path of search, and hence update time is at most proportional to search time
Cache-Oblivious VAT-Algorithms
The VAT-model (virtual address translation model) extends the EM-model
(external memory model) and takes the cost of address translation in virtual
memories into account. In this model, the cost of a single memory access may be
logarithmic in the largest address used. We show that the VAT-cost of
cache-oblivious algorithms is only by a constant factor larger than their
EM-cost; this requires a somewhat more stringent tall-cache assumption than for
the EM-model
An efficient algorithm for constructing nearly optimal prefix codes
A new algorithm for constructing nearly optimal prefix codes in the case of unequal letter costs and unequal probabilities is presented. A bound on the maximal deviation from the optimum is derived and numerical examples are given. The algorithm has running time O(t·n) where t is the number of letters and n is the number of probabilities
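For the special case of equal letter costs, the classic Huffman construction already gives optimal prefix codes; the paper's contribution is handling unequal letter costs. A sketch of the equal-cost baseline for comparison (standard Huffman via repeated merging, not the paper's algorithm):

```python
import heapq
from itertools import count

def huffman_lengths(probs):
    """Code lengths of an optimal binary prefix code for the equal-
    letter-cost case: repeatedly merge the two lightest subtrees,
    deepening every symbol they contain by one."""
    tie = count()  # tie-breaker so tuples never compare lists
    heap = [(p, next(tie), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    depth = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            depth[i] += 1
        heapq.heappush(heap, (p1 + p2, next(tie), s1 + s2))
    return depth

print(huffman_lengths([0.5, 0.25, 0.125, 0.125]))  # [1, 2, 3, 3]
```

The dyadic example shows the optimum: each code length equals the negative log of its probability, so the expected length meets the entropy bound exactly.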
Sorting presorted files
A new sorting algorithm is presented. Its running time is O(n(1 + log(F/n))), where F = |{(i,j) : i < j and x_i > x_j}| is the total number of inversions in the input sequence x_1, x_2, ..., x_n. In other words, presorted sequences are sorted quickly, and completely unsorted sequences are sorted in O(n log n) steps. Note that F < n^2/2 always. Furthermore, the constant of proportionality is fairly small, and hence the sorting method is competitive with existing methods for not too large n
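The presortedness measure F can itself be computed in O(n log n) by a merge sort that counts crossings; this standard exercise, not the paper's adaptive algorithm, makes the definition concrete:

```python
def count_inversions(a):
    """Return (sorted copy of a, F) where F is the number of pairs
    i < j with a[i] > a[j], counted during merge sort."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, fl = count_inversions(a[:mid])
    right, fr = count_inversions(a[mid:])
    merged, f, i, j = [], fl + fr, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            f += len(left) - i  # every remaining left element inverts with right[j]
    merged += left[i:] + right[j:]
    return merged, f

print(count_inversions([2, 4, 1, 3, 5]))  # ([1, 2, 3, 4, 5], 3)
```

A sorted input gives F = 0, matching the O(n) bound the abstract promises for presorted files; a reversed input gives the maximal F close to n^2/2.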
Engineering DFS-Based Graph Algorithms
Depth-first search (DFS) is the basis for many efficient graph algorithms. We
introduce general techniques for the efficient implementation of DFS-based
graph algorithms and exemplify them on three algorithms for computing strongly
connected components. The techniques lead to speed-ups by a factor of two to
three compared to the implementations provided by LEDA and BOOST.
We have obtained similar speed-ups for biconnected components algorithms. We
also compare the graph data types of LEDA and BOOST
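One standard engineering step for DFS-based algorithms on large graphs is replacing recursion with an explicit stack, so deep graphs cannot overflow the call stack. A minimal sketch of that technique (illustrative only, not the paper's implementation or the LEDA/BOOST interfaces):

```python
def dfs_preorder(adj, start):
    """Iterative depth-first search returning vertices in preorder.
    adj maps each vertex to its list of neighbors. The explicit
    stack holds (vertex, neighbor-iterator) pairs, so each edge is
    examined exactly once, as in the recursive version."""
    seen = {start}
    order = [start]
    stack = [(start, iter(adj[start]))]
    while stack:
        v, it = stack[-1]
        for w in it:
            if w not in seen:
                seen.add(w)
                order.append(w)
                stack.append((w, iter(adj[w])))
                break  # descend; resume v's iterator on return
        else:
            stack.pop()  # all neighbors of v done
    return order

g = {0: [1, 2], 1: [3], 2: [], 3: []}
print(dfs_preorder(g, 0))  # [0, 1, 3, 2]
```

Keeping a live iterator per stack entry is what preserves the one-pass-per-adjacency-list behavior; pushing all neighbors eagerly instead would touch edges twice and change the memory profile.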