43,618 research outputs found

    Improved Incremental Randomized Delaunay Triangulation

    Get PDF
    We propose a new data structure to compute the Delaunay triangulation of a set of points in the plane. It combines good worst-case complexity, fast behavior on real data, and small memory occupation. The location structure is organized into several levels: the lowest level consists of the triangulation itself, and each higher level contains the triangulation of a small sample of the level below. Point location is done by marching in a triangulation to determine the nearest neighbor of the query at that level; the march then restarts from that neighbor at the level below. Using a small sample (3%) keeps the memory occupation low, while the march and the use of the nearest neighbor to change levels locate the query quickly.
    Comment: 19 pages, 7 figures. Proc. 14th Annu. ACM Sympos. Comput. Geom., 106--115, 1998
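    The multilevel location scheme lends itself to a compact sketch. The Python below is a toy illustration under stated assumptions, not the paper's implementation: scipy's Delaunay triangulation stands in for the incremental structure, each level is a ~3% random sample of the one below, and a greedy walk along Delaunay edges plays the role of the march (in a Delaunay triangulation, the only local minimum of the distance to the query over the vertices is the true nearest neighbor, so the walk terminates correctly). The names build_hierarchy, greedy_nn, and locate are ours.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def build_hierarchy(pts, ratio=0.03, min_size=16, seed=0):
        # Level 0 holds all point indices; each higher level is a ~3% sample.
        rng = np.random.default_rng(seed)
        levels = [np.arange(len(pts))]
        while len(levels[-1]) > min_size:
            k = max(min_size, int(len(levels[-1]) * ratio))
            levels.append(rng.choice(levels[-1], size=k, replace=False))
        tris = [Delaunay(pts[idx]) for idx in levels]
        return levels, tris

    def greedy_nn(tri, q, start):
        # March along Delaunay edges toward q, always moving to a closer
        # neighbor; stop at the vertex nearest to q.
        indptr, nbrs = tri.vertex_neighbor_vertices
        cur = start
        while True:
            d = np.sum((tri.points[cur] - q) ** 2)
            cand = min(nbrs[indptr[cur]:indptr[cur + 1]],
                       key=lambda v: np.sum((tri.points[v] - q) ** 2))
            if np.sum((tri.points[cand] - q) ** 2) >= d:
                return cur
            cur = cand

    def locate(pts, levels, tris, q):
        # Nearest neighbor at the top (smallest) level, then restart the
        # march from that point at each level below.
        q = np.asarray(q, dtype=float)
        start, found = 0, None
        for lvl in range(len(levels) - 1, -1, -1):
            local = greedy_nn(tris[lvl], q, start)
            found = levels[lvl][local]
            if lvl > 0:
                start = int(np.where(levels[lvl - 1] == found)[0][0])
        return found
    ```

    For example, after levels, tris = build_hierarchy(pts), calling locate(pts, levels, tris, (0.5, 0.5)) returns the index in pts of the query's nearest neighbor while visiting only a few vertices per level.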

    Accelerating moderately stiff chemical kinetics in reactive-flow simulations using GPUs

    Full text link
    The chemical kinetics ODEs arising from operator-split reactive-flow simulations were solved on GPUs using explicit integration algorithms. Nonstiff chemical kinetics of a hydrogen oxidation mechanism (9 species and 38 irreversible reactions) were computed using the explicit fifth-order Runge-Kutta-Cash-Karp method; the GPU-accelerated version ran faster than single- and six-core CPU versions by factors of 126 and 25, respectively, for 524,288 ODEs. Moderately stiff kinetics, represented by mechanisms for hydrogen/carbon-monoxide (13 species and 54 irreversible reactions) and methane (53 species and 634 irreversible reactions) oxidation, were computed using the stabilized explicit second-order Runge-Kutta-Chebyshev (RKC) algorithm. With the hydrogen/carbon-monoxide mechanism, the GPU-based RKC implementation ran nearly 59 and 10 times faster than the single- and six-core CPU-based RKC algorithms for problem sizes of 262,144 ODEs and larger. With the methane mechanism, RKC-GPU ran more than 65 and 11 times faster than the single- and six-core RKC-CPU versions for problem sizes of 131,072 ODEs and larger, and up to 57 times faster than the six-core CPU-based implicit VODE algorithm on 65,536 ODEs. In the presence of more severe stiffness, such as ethylene oxidation (111 species and 1,566 irreversible reactions), RKC-GPU ran more than 17 times faster than six-core RKC-CPU for 32,768 ODEs and larger, and at best 4.5 times faster than VODE on six CPU cores for 65,536 ODEs. With a larger time step size, RKC-GPU ran at best 2.5 times slower than six-core VODE for 8,192 ODEs and larger. The need to develop new strategies for integrating stiff chemistry on GPUs is therefore discussed.
    Comment: 27 pages, LaTeX; corrected typos in Appendix equations A.10 and A.1
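    To make the nonstiff integrator concrete, here is a minimal NumPy sketch of one Runge-Kutta-Cash-Karp step using the standard published tableau; it is our illustration, not the paper's CUDA code. Stacking many independent states along the leading axis of y loosely mimics the paper's one-thread-per-ODE GPU layout.

    ```python
    import numpy as np

    def rkck_step(f, t, y, h):
        # One explicit 5th-order Cash-Karp step with an embedded
        # 4th-order solution for local error estimation.
        k1 = f(t, y)
        k2 = f(t + h/5,    y + h*(k1/5))
        k3 = f(t + 3*h/10, y + h*(3*k1/40 + 9*k2/40))
        k4 = f(t + 3*h/5,  y + h*(3*k1/10 - 9*k2/10 + 6*k3/5))
        k5 = f(t + h,      y + h*(-11*k1/54 + 5*k2/2 - 70*k3/27 + 35*k4/27))
        k6 = f(t + 7*h/8,  y + h*(1631*k1/55296 + 175*k2/512 + 575*k3/13824
                                  + 44275*k4/110592 + 253*k5/4096))
        y5 = y + h*(37*k1/378 + 250*k3/621 + 125*k4/594 + 512*k6/1771)
        y4 = y + h*(2825*k1/27648 + 18575*k3/48384 + 13525*k4/55296
                    + 277*k5/14336 + k6/4)
        return y5, np.abs(y5 - y4)   # new state and local error estimate
    ```

    The embedded error estimate drives step-size control; the RKC scheme used for the stiff cases instead picks its number of Chebyshev stages from an estimate of the Jacobian's spectral radius, which is what keeps it stable at larger step sizes.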

    The K-Server Dual and Loose Competitiveness for Paging

    Full text link
    This paper has two results. The first is based on the surprising observation that the well-known ``least-recently-used'' paging algorithm and the ``balance'' algorithm for weighted caching are both linear-programming primal-dual algorithms. This observation leads to a strategy (called ``Greedy-Dual'') that generalizes them both and has an optimal performance guarantee for weighted caching. For the second result, the paper presents empirical studies of paging algorithms, documenting that in practice, on ``typical'' cache sizes and sequences, the performance of paging strategies is much better than their worst-case analyses in the standard model suggest. The paper then presents theoretical results that support and explain this. For example: on any input sequence, for almost all cache sizes, either the performance guarantee of least-recently-used is O(log k) or the fault rate (in an absolute sense) is insignificant. Both of these results are strengthened and generalized in ``On-line File Caching'' (1998).
    Comment: conference version: ``On-Line Caching as Cache Size Varies'', SODA (1991)
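    The Greedy-Dual strategy is compact enough to sketch directly. The Python below is our illustration of the strategy as commonly stated, using one of its allowed hit-handling choices; the names are ours. Each cached page carries a credit; a fault charges the minimum credit against every cached page and evicts a page whose credit reaches zero.

    ```python
    def greedy_dual(requests, cost, k):
        # Greedy-Dual for weighted caching: credit[q] is the remaining
        # credit of cached page q.
        credit, faults = {}, 0
        for p in requests:
            if p in credit:
                credit[p] = cost[p]      # allowed choice: restore credit on a hit
                continue
            faults += 1
            if len(credit) == k:         # cache full: charge the minimum credit...
                m = min(credit.values())
                credit = {q: c - m for q, c in credit.items()}
                victim = next(q for q, c in credit.items() if c == 0)
                del credit[victim]       # ...and evict a page with zero credit
            credit[p] = cost[p]          # bring p in with credit = its fetch cost
        return faults
    ```

    For instance, greedy_dual("abacabd", {c: 1 for c in "abcd"}, k=2) counts faults under unit costs, where restoring credit on each hit makes the policy behave like LRU up to tie-breaking; leaving credits untouched on hits instead gives balance-like behavior for non-uniform costs.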

    Quantum Computers and Dissipation

    Get PDF
    We analyse dissipation in quantum computation and its destructive impact on the efficiency of quantum algorithms. Using a general model of decoherence, we study the time evolution of a quantum register of arbitrary length coupled to an environment of arbitrary coherence length. We discuss relations between decoherence and computational complexity and show that the quantum factorization algorithm must be modified in order to be regarded as efficient and realistic.
    Comment: 20 pages, LaTeX, 7 PostScript figures
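    As a toy numerical companion to such a model, the sketch below (our illustration; the paper's model is more general, allowing an arbitrary environment coherence length) applies independent dephasing to each qubit of a register: the coherence between basis states i and j then decays as exp(-gamma * t * hamming(i, j)), so the long-range superpositions that quantum algorithms exploit are the first to go.

    ```python
    import numpy as np

    def dephase(rho, gamma, t):
        # Independent (zero-coherence-length) dephasing of each qubit:
        # off-diagonal elements of the density matrix decay with the
        # Hamming distance between the two basis states.
        dim = rho.shape[0]
        idx = np.arange(dim)
        ham = np.array([[bin(i ^ j).count("1") for j in idx] for i in idx])
        return rho * np.exp(-gamma * t * ham)

    # Uniform superposition over a 3-qubit register, then dephasing.
    n = 3
    psi = np.full(2 ** n, 2 ** (-n / 2))
    rho = np.outer(psi, psi)
    print(dephase(rho, gamma=1.0, t=1.0).round(3))
    ```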

    Probabilistic alternatives for competitive analysis

    Get PDF
    In the last 20 years, competitive analysis has become the main tool for analyzing the quality of online algorithms. Despite this, competitive analysis has also been criticized: it sometimes cannot discriminate between algorithms that exhibit significantly different empirical behavior, and it may even favor an algorithm that is worse from an empirical point of view. There have therefore been several approaches to circumvent these drawbacks. In this survey, we discuss probabilistic alternatives for competitive analysis.
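    The criticism is easy to reproduce empirically. LRU and FIFO are both k-competitive in the standard model, yet on request sequences with locality of reference their fault counts differ markedly; the sketch below (our illustration, not taken from the survey) generates such a sequence and compares the two.

    ```python
    import random
    from collections import OrderedDict, deque

    def lru_faults(seq, k):
        cache, faults = OrderedDict(), 0
        for p in seq:
            if p in cache:
                cache.move_to_end(p)           # refresh recency on a hit
            else:
                faults += 1
                if len(cache) == k:
                    cache.popitem(last=False)  # evict least recently used
                cache[p] = None
        return faults

    def fifo_faults(seq, k):
        cache, order, faults = set(), deque(), 0
        for p in seq:
            if p not in cache:
                faults += 1
                if len(cache) == k:
                    cache.discard(order.popleft())  # evict oldest arrival
                cache.add(p)
                order.append(p)
        return faults

    # 90% of requests hit a small working set, 10% are scattered.
    random.seed(0)
    seq = [random.randrange(5) if random.random() < 0.9 else random.randrange(100)
           for _ in range(10_000)]
    print(lru_faults(seq, 8), fifo_faults(seq, 8))
    ```

    Worst-case competitive analysis assigns both algorithms the same ratio k, which is exactly the kind of indistinguishability the probabilistic alternatives discussed in the survey aim to fix.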