5 research outputs found

    A lower bound for linear approximate compaction

    The $\lambda$-approximate compaction problem is: given an input array of $n$ values, each either 0 or 1, place each value in an output array so that all the 1's are in the first $(1+\lambda)k$ array locations, where $k$ is the number of 1's in the input; $\lambda$ is an accuracy parameter. This problem is of fundamental importance in parallel computation because of its applications to processor allocation and approximate counting. When $\lambda$ is a constant, the problem is called Linear Approximate Compaction (LAC). On the CRCW PRAM model, there is an algorithm that solves approximate compaction in $O((\log\log n)^3)$ time for $\lambda = \frac{1}{\log\log n}$, using $\frac{n}{(\log\log n)^3}$ processors. Our main result shows that this is close to the best possible. Specifically, we prove that LAC requires $\Omega(\log\log n)$ time using $O(n)$ processors. We also give a tradeoff between $\lambda$ and the processing time. For $\epsilon < 1$ and $\lambda = n^{\epsilon}$, the time required is $\Omega(\log \frac{1}{\epsilon})$.
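
    To make the problem statement concrete, here is a minimal sequential sketch in Python. It is an illustration only, not the CRCW PRAM algorithm or the lower-bound argument from the paper, and the function names are hypothetical: a checker for the $(1+\lambda)k$ condition, plus trivial exact compaction, which satisfies the condition for every $\lambda \ge 0$.

def is_valid_compaction(input_bits, output_bits, lam):
    # Valid if no 1 is lost and every 1 lies within the first
    # (1 + lam) * k output positions, where k counts the 1's.
    k = sum(input_bits)
    bound = int((1 + lam) * k)
    return sum(output_bits) == k and sum(output_bits[:bound]) == k

def exact_compaction(input_bits):
    # Trivial sequential solution: move all 1's to the front.
    # Exact compaction (lam = 0) is also a valid lambda-approximate
    # compaction for every lam >= 0.
    k = sum(input_bits)
    return [1] * k + [0] * (len(input_bits) - k)

if __name__ == "__main__":
    bits = [0, 1, 0, 0, 1, 1, 0, 1]
    out = exact_compaction(bits)
    print(out, is_valid_compaction(bits, out, lam=0.5))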

    Progress Report: 1991–1994


    22. Workshop Komplexitätstheorie und effiziente Algorithmen

    This publication contains abstracts of the 22nd workshop on complexity theory and efficient algorithms. The workshop was held on February 8, 1994, at the Max-Planck-Institut für Informatik, Saarbrücken, Germany.

    Approximate and Exact Deterministic Parallel Selection

    The selection problem of size $n$ is, given a set of $n$ elements drawn from an ordered universe and an integer $k$ with $1 \le k \le n$, to identify the $k$th smallest element in the set. We study approximate and exact selection on deterministic concurrent-read concurrent-write parallel RAMs, where approximate selection with relative accuracy $\lambda \ge 0$ asks for any element whose true rank differs from $k$ by at most $\lambda n$. Our main results are: (1) Exact selection problems of size $n$ can be solved in $O(\log n/\log\log n)$ time with $O(n\log\log n/\log n)$ processors. This running time is the best possible (using only a polynomial number of processors), and the number of processors is optimal for the given running time (optimal speedup); the best previous algorithm achieves optimal speedup with a running time of $O(\log n\log^* n/\log\log n)$. (2) For all $t$ with $(\log\log n)^4 \le t \le \log n$, approximate selection problems of size $n$ can be solved in $O(t)$ time with optimal speedup with relative accuracy $2^{-t\log\log\log n/(\log\log n)}$ ..
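
    As a concrete illustration of these definitions (not of the paper's CRCW PRAM algorithms), the following Python sketch computes the exact $k$th smallest element and checks the approximate-selection condition that a candidate's true rank differs from $k$ by at most $\lambda n$; the helper names are hypothetical, and the candidate is assumed to be an element of the set.

def exact_select(elements, k):
    # Return the k-th smallest element (1 <= k <= n).
    return sorted(elements)[k - 1]

def is_approximate_selection(elements, k, candidate, lam):
    # Acceptable with relative accuracy lam if the candidate's
    # true rank differs from k by at most lam * n.
    n = len(elements)
    rank = sorted(elements).index(candidate) + 1
    return abs(rank - k) <= lam * n

if __name__ == "__main__":
    xs = [9, 3, 7, 1, 5, 8, 2, 6, 4, 0]
    print(exact_select(xs, 4))                          # 3
    print(is_approximate_selection(xs, 4, 5, lam=0.2))  # rank 6, |6 - 4| <= 2 -> True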