5,484 research outputs found

    A sweep algorithm for massively parallel simulation of circuit-switched networks

    A new massively parallel algorithm is presented for simulating large asymmetric circuit-switched networks, controlled by a randomized-routing policy that includes trunk reservation. A single instruction multiple data (SIMD) implementation is described, and corresponding experiments on a 16,384-processor MasPar parallel computer are reported. A multiple instruction multiple data (MIMD) implementation is also described, and corresponding experiments on an Intel iPSC/860 parallel computer, using 16 processors, are reported. By exploiting parallelism, our algorithm increases the possible execution rate of such complex simulations by as much as an order of magnitude.
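    The abstract does not spell out the trunk-reservation rule, but a common form of the policy admits an alternatively routed call on a trunk group only if the number of free circuits exceeds a reservation threshold, while directly routed calls may use any free circuit. The Python sketch below illustrates that idea; the class and parameter names (TrunkGroup, reservation, route_call) are illustrative assumptions, not the paper's implementation.

```python
import random

class TrunkGroup:
    """Illustrative model of one trunk group with trunk reservation.

    Assumed parameters (not from the paper): `capacity` is the number of
    circuits; `reservation` is the number of circuits kept free for
    directly routed calls.
    """
    def __init__(self, capacity, reservation):
        self.capacity = capacity
        self.reservation = reservation
        self.busy = 0

    def free(self):
        return self.capacity - self.busy

    def try_admit(self, direct):
        # Direct calls may use any free circuit; alternatively routed
        # calls are admitted only if more than `reservation` circuits
        # remain free (the trunk-reservation rule).
        threshold = 0 if direct else self.reservation
        if self.free() > threshold:
            self.busy += 1
            return True
        return False

def route_call(direct_group, alternate_groups):
    """Randomized-routing sketch: try the direct trunk group first,
    otherwise pick one alternate route at random and apply reservation."""
    if direct_group.try_admit(direct=True):
        return "direct"
    alt = random.choice(alternate_groups)
    if alt.try_admit(direct=False):
        return "alternate"
    return "blocked"
```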

    Boosting the Basic Counting on Distributed Streams

    We revisit the classic basic counting problem in the distributed streaming model that was studied by Gibbons and Tirthapura (GT). In the solution for maintaining an $(\epsilon,\delta)$-estimate, as GT's method does, we make the following new contributions: (1) For a bit stream of size $n$, where each bit has probability at least $\gamma$ of being 1, we exponentially reduce the average total processing time from GT's $\Theta(n \log(1/\delta))$ to $O((1/(\gamma\epsilon^2))(\log^2 n)\log(1/\delta))$, thus providing the first sublinear-time streaming algorithm for this problem. (2) In addition to an overall much faster processing speed, our method provides a new tradeoff: a lower accuracy demand (a larger value of $\epsilon$) promises a faster processing speed, whereas GT's processing speed is $\Theta(n \log(1/\delta))$ in any case and for any $\epsilon$. (3) The worst-case total time cost of our method matches GT's $\Theta(n \log(1/\delta))$, which is necessary but rarely occurs in our method. (4) The space usage overhead of our method is a lower-order term compared with GT's space usage; it occurs only $O(\log n)$ times during the stream processing and is too small to be detected by the operating system in practice. We further validate these theoretical results with experiments on both real-world and synthetic data, showing that our method is faster than GT's by a factor ranging from several to several thousand, depending on the stream size and accuracy demands, without any detectable space usage overhead. Our method is based on a faster sampling technique that we design for boosting GT's method, and we believe this technique can be of independent interest.
    Comment: 32 pages
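    For intuition, an $(\epsilon,\delta)$-estimate of the number of 1s in a bit stream can be maintained by sampling each 1-bit with a probability that is halved whenever the sample grows too large, then scaling the sample size back up by the inverse sampling probability. The Python sketch below shows this GT-style sampling idea for a single stream; it is a simplified illustration under assumed constants, not the paper's faster sampling technique.

```python
import random

class BasicCountingSketch:
    """Simplified, single-stream illustration of GT-style sampling for
    basic counting. Keeps a random sample of the 1-bits seen so far;
    the sampling probability is halved whenever the sample exceeds a
    capacity chosen from the target accuracy epsilon. This is an
    assumed sketch for intuition, not the paper's algorithm."""

    def __init__(self, epsilon):
        # Capacity on the order of 1/epsilon^2 keeps the relative error
        # near epsilon with constant probability; the constant 24 is
        # illustrative, not taken from the paper.
        self.capacity = max(1, int(24 / epsilon**2))
        self.sample_size = 0   # number of sampled 1-bits at current level
        self.level = 0         # each 1-bit is kept with probability 2**-level

    def update(self, bit):
        if bit != 1:
            return
        if random.random() < 2.0 ** (-self.level):
            self.sample_size += 1
        if self.sample_size > self.capacity:
            # Subsample: each kept bit survives with probability 1/2,
            # which moves the sketch to the next sampling level.
            self.sample_size = sum(1 for _ in range(self.sample_size)
                                   if random.random() < 0.5)
            self.level += 1

    def estimate(self):
        # Scale the sample back up by the inverse sampling probability.
        return self.sample_size * (2 ** self.level)
```

    Boosting the failure probability down to $\delta$ is typically done by running $\Theta(\log(1/\delta))$ independent copies and returning the median estimate; per the abstract, the paper's speedup comes from a faster sampling technique for this kind of boosted estimate rather than from the basic scheme sketched above.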