12,795 research outputs found

    A Lower Bound Technique for Communication in BSP

    Communication is a major factor determining the performance of algorithms on current computing systems; it is therefore valuable to provide tight lower bounds on the communication complexity of computations. This paper presents a lower bound technique for the communication complexity, in the bulk-synchronous parallel (BSP) model, of a given class of DAG computations. The derived bound is expressed in terms of the switching potential of a DAG, that is, the number of permutations that the DAG can realize when viewed as a switching network. The proposed technique yields tight lower bounds for the fast Fourier transform (FFT) and for any sorting or permutation network. A stronger bound is also derived for the periodic balanced sorting network, by applying the technique to suitable subnetworks. Finally, we demonstrate that the switching potential captures communication requirements even in computational models different from BSP, such as the I/O model and the LPRAM.
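    As a toy illustration of the switching potential (ours, not from the paper, which treats general DAG computations), the sketch below brute-forces the number of distinct permutations a tiny two-stage butterfly realizes when each 2x2 node is independently set to pass or swap:

    ```python
    from itertools import product

    def switching_potential(n, stages):
        """Count the distinct permutations a switching network can realize.

        `stages` is a list of stages, each a list of disjoint wire pairs;
        every pair is a 2x2 switch independently set to pass or swap.
        """
        switches = [pair for stage in stages for pair in stage]
        perms = set()
        for setting in product((False, True), repeat=len(switches)):
            wires = list(range(n))  # wires[i] = the input currently on wire i
            for (a, b), swap in zip(switches, setting):
                if swap:
                    wires[a], wires[b] = wires[b], wires[a]
            perms.add(tuple(wires))
        return len(perms)

    # A 4-wire, 2-stage butterfly: stage 0 pairs wires differing in bit 0,
    # stage 1 pairs wires differing in bit 1.
    butterfly = [[(0, 1), (2, 3)], [(0, 2), (1, 3)]]
    print(switching_potential(4, butterfly))  # 16 (out of 4! = 24 permutations)
    ```

    Since 16 < 24, this small butterfly is not a full permutation network; in the paper's framework, a larger switching potential translates into a higher BSP communication lower bound.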

    Communication Lower Bounds for Distributed-Memory Computations

    In this paper we propose a new approach to the study of the communication requirements of distributed computations, one that advocates removing the restrictive assumptions under which earlier results were derived. We illustrate the approach by giving tight lower bounds on the communication complexity required to solve several computational problems on a distributed-memory parallel machine, namely standard matrix multiplication, stencil computations, comparison sorting, and the Fast Fourier Transform. Our bounds rely only on a mild assumption on work distribution, and significantly strengthen previous results, which require either that the computation be balanced among the processors, or specific initial distributions of the input data, or an upper bound on the size of the processors' local memories.
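    The abstract does not spell out its argument; for orientation only, here is the classical Loomis-Whitney counting step that yields bounds of this shape for $n \times n$ matrix multiplication, with notation of our choosing:

    ```latex
    % Not the paper's own proof: the standard counting step behind
    % communication lower bounds for n x n matrix multiplication.
    % A processor performing F of the n^3 elementary products, while
    % touching N_A, N_B, N_C entries of A, B, C, satisfies
    \[
      F \;\le\; \sqrt{N_A \, N_B \, N_C},
    \]
    % so if some processor performs F >= n^3/p products (a mild
    % work-distribution assumption), it must access at least
    \[
      N_A + N_B + N_C \;\ge\; 3\left(\frac{n^3}{p}\right)^{2/3}
        \;=\; \Omega\!\left(\frac{n^2}{p^{2/3}}\right)
    \]
    % words in total, most of which must be communicated whenever the
    % local memory is smaller than this quantity.
    ```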

    Cumulative Memory Lower Bounds for Randomized and Quantum Computation

    Cumulative memory -- the sum of the space used per step over the duration of a computation -- is a fine-grained measure of time-space complexity that was introduced to analyze cryptographic applications like password hashing. It is a more accurate cost measure for algorithms that have infrequent spikes in memory usage and are run in environments such as cloud computing that allow dynamic allocation and de-allocation of resources during execution, or when many instances of an algorithm are interleaved in parallel. We prove the first lower bounds on cumulative memory complexity for both sequential classical computation and quantum circuits. Moreover, we develop general paradigms for bounding cumulative memory complexity inspired by the standard paradigms for proving time-space tradeoff lower bounds, which can only lower bound the maximum space used during an execution. The resulting lower bounds on cumulative memory are just as strong as the best time-space tradeoff lower bounds, which are very often known to be tight. Although previous results for pebbling and random-oracle models have yielded time-space tradeoff lower bounds larger than the cumulative memory complexity, our results show that in general computational models such separations cannot follow from known lower bound techniques and are not true for many functions. Among many possible applications of our general methods, we show that any classical sorting algorithm with success probability at least $1/\text{poly}(n)$ requires cumulative memory $\tilde{\Omega}(n^2)$, any classical matrix multiplication algorithm running in time $T$ requires cumulative memory $\Omega(n^6/T)$, any quantum sorting circuit requires cumulative memory $\Omega(n^3/T)$, and any quantum circuit that finds $k$ disjoint collisions in a random function requires cumulative memory $\Omega(k^3 n/T^2)$. Comment: 42 pages, 4 figures, accepted to track A of ICALP 202
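    As a toy illustration (ours, not the paper's formal model), cumulative memory simply sums the live space at every step, so a run with a brief spike scores far below the peak-space-times-time product that a standard time-space tradeoff would charge:

    ```python
    def cumulative_memory(trace):
        """Return (cumulative memory, peak space) for a space-usage trace.

        `trace[t]` is the number of words the algorithm holds at step t;
        cumulative memory is the sum of that space over all steps.
        """
        return sum(trace), max(trace)

    # Toy trace: 1000 steps at 10 words each, except for a 100-step spike
    # at 1000 words (say, one large buffer that is quickly freed).
    trace = [10] * 900 + [1000] * 100
    cm, peak = cumulative_memory(trace)
    print(cm)                  # 109000 word-steps of cumulative memory
    print(peak * len(trace))   # 1000000 word-steps charged by peak space x time
    ```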

    Average-Case Complexity of Shellsort

    We prove a general lower bound on the average-case complexity of Shellsort: the average number of data movements (and comparisons) made by a $p$-pass Shellsort, for any increment sequence, is $\Omega(p n^{1+1/p})$ for all $p \leq \log n$. Using similar arguments, we analyze the average-case complexity of several other sorting algorithms. Comment: 11 pages. Submitted to ICALP'9
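    For reference, here is a standard $p$-pass Shellsort (our sketch, not code from the paper) parameterized by an increment sequence, counting the data movements that the theorem bounds from below by $\Omega(p n^{1+1/p})$ on average:

    ```python
    import random

    def shellsort(a, gaps):
        """In-place Shellsort with the given decreasing increment sequence.

        Returns the number of data movements (element shifts) performed;
        the last increment must be 1 so that the final pass fully sorts.
        """
        moves = 0
        for h in gaps:                          # one pass per increment
            for i in range(h, len(a)):          # gapped insertion sort
                x, j = a[i], i
                while j >= h and a[j - h] > x:
                    a[j] = a[j - h]             # each shift is one data movement
                    moves += 1
                    j -= h
                a[j] = x
        return moves

    a = random.sample(range(100_000), 10_000)
    print(shellsort(a, gaps=[121, 40, 13, 4, 1]))  # Knuth's increments, p = 5
    assert a == sorted(a)
    ```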

    On the Distributed Complexity of Large-Scale Graph Computations

    Motivated by the increasing need to understand the distributed algorithmic foundations of large-scale graph computations, we study some fundamental graph problems in a message-passing model for distributed computing where $k \geq 2$ machines jointly perform computations on graphs with $n$ nodes (typically, $n \gg k$). The input graph is assumed to be initially randomly partitioned among the $k$ machines, a common implementation in many real-world systems. Communication is point-to-point, and the goal is to minimize the number of communication rounds of the computation. Our main contribution is the General Lower Bound Theorem, which can be used to show non-trivial lower bounds on the round complexity of distributed large-scale data computations. The General Lower Bound Theorem is established via an information-theoretic approach that relates the round complexity to the minimal amount of information required by the machines to solve the problem. Our approach is generic, and the theorem can be used in a "cookbook" fashion to show distributed lower bounds for several problems, including non-graph problems. We present two applications by showing (almost) tight lower bounds on the round complexity of two fundamental graph problems, namely PageRank computation and triangle enumeration. Our approach, as demonstrated in the case of PageRank, can yield tight lower bounds for problems (including, and especially, under a stochastic partition of the input) where communication complexity techniques are not obvious. Our approach, as demonstrated in the case of triangle enumeration, can also yield stronger round lower bounds, as well as message-round tradeoffs, compared to approaches that use communication complexity techniques.
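    The model's input assumption, a random partition of the graph across the $k$ machines, is easy to state in code (a minimal sketch with names of our choosing; the exact bookkeeping in the paper's model may differ):

    ```python
    import random
    from collections import defaultdict

    def random_vertex_partition(n, edges, k, seed=0):
        """Randomly partition an n-node graph across k machines.

        Each vertex is assigned to a machine uniformly at random, and
        each machine stores the edges incident to its own vertices.
        """
        rng = random.Random(seed)
        home = [rng.randrange(k) for _ in range(n)]   # home[v] = v's machine
        local = defaultdict(list)
        for u, v in edges:
            local[home[u]].append((u, v))
            if home[v] != home[u]:          # edge known to both endpoints
                local[home[v]].append((u, v))
        return home, local

    # Each machine holds about n/k vertices in expectation.
    home, local = random_vertex_partition(8, [(0, 1), (1, 2), (2, 3)], k=2)
    print(dict(local))
    ```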

    Quantum complexities of ordered searching, sorting, and element distinctness

    We consider the quantum complexities of the following three problems: searching an ordered list, sorting an unordered list, and deciding whether the numbers in a list are all distinct. Letting $N$ be the number of elements in the input list, we prove a lower bound of $\frac{1}{\pi}(\ln N - 1)$ accesses to the list elements for ordered searching, a lower bound of $\Omega(N \log N)$ binary comparisons for sorting, and a lower bound of $\Omega(\sqrt{N} \log N)$ binary comparisons for element distinctness. The previously best known lower bounds are $\frac{1}{12}\log_2 N - O(1)$ due to Ambainis, $\Omega(N)$, and $\Omega(\sqrt{N})$, respectively. Our proofs are based on a weighted all-pairs inner product argument. In addition to our lower bound results, we give a quantum algorithm for ordered searching using roughly $0.631 \log_2 N$ oracle accesses. Our algorithm uses a quantum routine for traversing a binary search tree faster than is possible classically, and it is of a nature very different from a faster algorithm due to Farhi, Goldstone, Gutmann, and Sipser. Comment: This new version contains new results. To appear at ICALP '01. Some of the results have previously been presented at QIP '01. This paper subsumes the papers quant-ph/0009091 and quant-ph/000903
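    Putting the abstract's numbers side by side (our arithmetic only): classical binary search uses about $\log_2 N$ accesses, the paper's algorithm about $0.631 \log_2 N$, and no quantum algorithm can beat $\frac{1}{\pi}(\ln N - 1) \approx 0.221 \log_2 N$:

    ```python
    import math

    for exp in (10, 20, 30):
        N = 2 ** exp
        classical = math.log2(N)                  # binary search baseline
        quantum_ub = 0.631 * math.log2(N)         # the paper's algorithm
        quantum_lb = (math.log(N) - 1) / math.pi  # the paper's lower bound
        print(f"N = 2^{exp}: classical ~ {classical:.1f}, "
              f"quantum <= {quantum_ub:.1f}, quantum >= {quantum_lb:.1f}")
    ```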
