
    Dynamic sharing of a multiple access channel

    In this paper we consider the mutual exclusion problem on a multiple access channel. Mutual exclusion is one of the fundamental problems in distributed computing. In the classic version of this problem, n processes execute a concurrent program that occasionally triggers some of them to use a shared resource, such as memory, a communication channel, or a device. The goal is to design a distributed algorithm that controls entries to and exits from the shared resource so that at any time at most one process is accessing it. We consider both the classic version and a slightly weaker variant, called ε-mutual-exclusion, in which, for each period a process spends in the critical section, the probability that some other process is simultaneously in the critical section is at most ε. We show that there are channel settings in which classic mutual exclusion is infeasible even for randomized algorithms, while ε-mutual-exclusion is feasible. In more relaxed channel settings, we prove an exponential gap between the makespan complexity of the classic mutual exclusion problem and its weaker ε-exclusion variant. We also show how to guarantee fairness of mutual exclusion algorithms, i.e., that each process that wants to enter the critical section eventually succeeds.
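
    To make the two correctness notions concrete, here is a minimal toy simulation contrasting classic mutual exclusion with the relaxed ε-guarantee. It is not the paper's channel model or algorithm: processes simply flip a biased coin each round (a stand-in for some randomized entry protocol), and we empirically estimate the per-visit violation probability. All parameter names and values are illustrative assumptions.

```python
import random

def simulate(n_processes=8, rounds=10_000, p_enter=0.01, seed=0):
    """Estimate how often a critical-section visit overlaps another process."""
    rng = random.Random(seed)
    overlaps = 0
    visits = 0
    for _ in range(rounds):
        # Each process independently occupies the critical section this round
        # with probability p_enter (stand-in for a randomized entry rule).
        inside = [p for p in range(n_processes) if rng.random() < p_enter]
        visits += len(inside)
        if len(inside) > 1:
            # Every process present in a crowded round experiences a violation.
            overlaps += len(inside)
    # Classic mutual exclusion demands zero overlaps; the ε-variant only asks
    # that this per-visit violation probability stay below ε.
    return overlaps / visits if visits else 0.0

if __name__ == "__main__":
    print(f"estimated per-visit violation probability: {simulate():.4f}")
```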

    Self-Stabilizing Repeated Balls-into-Bins

    We study the following synchronous process that we call "repeated balls-into-bins". The process starts by assigning n balls to n bins in an arbitrary way. In every subsequent round, one ball is chosen from each non-empty bin according to some fixed strategy (random, FIFO, etc.) and re-assigned to one of the n bins uniformly at random. We call a configuration "legitimate" if its maximum load is O(log n). We prove that, starting from any configuration, the process converges to a legitimate configuration in linear time and then takes on only legitimate configurations over a period whose length is bounded by any polynomial in n, with high probability (w.h.p.). This implies that the process is self-stabilizing and that every ball traverses all bins in O(n log^2 n) rounds, w.h.p.
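
    The process is easy to simulate directly, which may help make the self-stabilization claim concrete. The sketch below follows the round structure described in the abstract (one ball leaves each non-empty bin, then all moving balls are re-assigned uniformly at random); the FIFO strategy, the "all balls in one bin" starting configuration, and the parameter values are illustrative choices, not taken from the paper.

```python
import random
from collections import deque

def repeated_balls_into_bins(n=1000, rounds=5000, seed=0):
    """Simulate the repeated balls-into-bins process and track the max load."""
    rng = random.Random(seed)
    bins = [deque() for _ in range(n)]
    for ball in range(n):              # adversarial-looking start: everything in bin 0
        bins[0].append(ball)
    max_loads = []
    for _ in range(rounds):
        moving = [b.popleft() for b in bins if b]    # one ball leaves each non-empty bin
        for ball in moving:
            bins[rng.randrange(n)].append(ball)      # uniform re-assignment
        max_loads.append(max(len(b) for b in bins))
    return max_loads

if __name__ == "__main__":
    loads = repeated_balls_into_bins()
    print("max load after last round:", loads[-1])
```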

    Randomized versus Deterministic Implementations of Concurrent Data Structures

    One of the key trends in computing over the past two decades has been increased distribution, both at the processor level, where multi-core architectures are now the norm, and at the system level, where many key services are currently distributed over multiple machines. Thus, understanding the power and limitations of computing in a concurrent, distributed setting is one of the major challenges in Computer Science. In this thesis, we analyze the complexity of implementing concurrent data structures in asynchronous shared memory systems. We focus on the complexity of a classic distributed coordination task called renaming, in which a set of processes need to pick distinct names from a small set of identifiers. We present the first tight bounds for the time complexity of this problem, both for deterministic and randomized implementations, solving a long-standing open problem in the field. For deterministic algorithms, we prove a tight linear lower bound; for randomized solutions, we provide logarithmic upper and lower bounds on time complexity. Together, these results show an exponential separation between deterministic and randomized renaming solutions. Importantly, the lower bounds extend to implementations of practical shared-memory data structures, such as queues, stacks, and counters. From a technical perspective, this thesis highlights new connections between the distributed renaming problem and other fundamental objects, such as sorting networks, mutual exclusion, and counters. In particular, we show that sorting networks can be used to obtain optimal randomized solutions to renaming, and that, in turn, the existence of these solutions implies a linear lower bound on the complexity of the problem. In sum, the results in this thesis suggest that deterministic implementations of shared-memory data structures do not scale well in terms of worst-case time complexity. On the positive side, we emphasize randomization as a natural alternative, which can circumvent the deterministic lower bounds with high probability. Thus, a promising direction for future work is to extend our randomized renaming techniques to obtain efficient implementations of concurrent data structures.
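
    For readers unfamiliar with renaming, the following sketch shows the task in its simplest randomized form: each process repeatedly picks a random slot from a small namespace and tries to claim it with a test-and-set-like operation, its name being the index of the slot it wins. This is only an illustration of the problem statement, not the thesis's constructions; the namespace size, retry strategy, and use of non-blocking lock acquisition as a stand-in for an atomic test-and-set register are all assumptions.

```python
import threading
import random

def renaming(k=8, m=16, seed=0):
    """k processes acquire distinct names from a namespace of size m (m >= k)."""
    slots = [threading.Lock() for _ in range(m)]   # each lock models a test-and-set register
    names = [None] * k

    def worker(pid):
        rng = random.Random(seed + pid)
        while True:
            candidate = rng.randrange(m)
            if slots[candidate].acquire(blocking=False):   # test-and-set succeeds
                names[pid] = candidate                      # candidate becomes pid's name
                return
            # slot already taken; retry with a fresh random choice

    threads = [threading.Thread(target=worker, args=(p,)) for p in range(k)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert len(set(names)) == k, "names must be distinct"
    return names

if __name__ == "__main__":
    print(renaming())
```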

    Value Iteration for Long-run Average Reward in Markov Decision Processes

    Markov decision processes (MDPs) are standard models for probabilistic systems with non-deterministic behaviours. Long-run average rewards provide a mathematically elegant formalism for expressing long-term performance. Value iteration (VI) is one of the simplest and most efficient algorithmic approaches to MDPs with other objectives, such as reachability. Unfortunately, a naive extension of VI does not work for MDPs with long-run average rewards, as there is no known stopping criterion. In this work, our contributions are threefold. (1) We refute a conjecture related to stopping criteria for MDPs with long-run average rewards. (2) We present two practical algorithms for MDPs with long-run average rewards based on VI. First, we show that a combination of applying VI locally for each maximal end-component (MEC) and VI for reachability objectives can provide approximation guarantees. Second, extending the above approach with a simulation-guided on-demand variant of VI, we present an anytime algorithm that is able to deal with very large models. (3) Finally, we present experimental results showing that our methods significantly outperform the standard approaches on several benchmarks.
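
    As background, the building block the abstract refers to is value iteration for a reachability objective, which does come with a natural (if one-sided) stopping rule. The sketch below is a minimal version of that classic procedure; the MDP encoding (state -> action -> list of (probability, successor)), the example model, and the threshold are illustrative assumptions, not the paper's algorithms or benchmarks.

```python
def reachability_vi(mdp, targets, eps=1e-8, max_iters=100_000):
    """Iterate maximal reachability values until successive updates are below eps."""
    v = {s: (1.0 if s in targets else 0.0) for s in mdp}
    for _ in range(max_iters):
        delta = 0.0
        for s in mdp:
            if s in targets:
                continue
            best = max(
                sum(p * v[t] for p, t in transitions)
                for transitions in mdp[s].values()
            )
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        # Note: this delta-based test only certifies a lower bound on the true
        # values, which is part of why stopping criteria are delicate in general.
        if delta < eps:
            break
    return v

if __name__ == "__main__":
    # Tiny example: from s0, action 'a' reaches the goal with prob 0.5 and
    # otherwise loops; action 'b' moves to a sink.
    mdp = {
        "s0": {"a": [(0.5, "goal"), (0.5, "s0")], "b": [(1.0, "sink")]},
        "sink": {"stay": [(1.0, "sink")]},
        "goal": {"stay": [(1.0, "goal")]},
    }
    print(reachability_vi(mdp, targets={"goal"}))
```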

    Fast Deterministic Consensus in a Noisy Environment

    It is well known that the consensus problem cannot be solved deterministically in an asynchronous environment, but that randomized solutions are possible. We propose a new model, called noisy scheduling, in which an adversarial schedule is perturbed randomly, and show that in this model randomness in the environment can substitute for randomness in the algorithm. In particular, we show that a simplified, deterministic version of Chandra's wait-free shared-memory consensus algorithm (PODC, 1996, pp. 166-175) solves consensus in time at most logarithmic in the number of active processes. The proof of termination is based on showing that a race between independent delayed renewal processes produces a winner quickly. In addition, we show that the protocol finishes in constant time using quantum and priority-based scheduling on a uniprocessor, suggesting that it is robust against the choice of model over a wide range.
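
    The termination argument can be visualized with a small simulation of the race it mentions: several independent delayed renewal processes (a random initial delay followed by i.i.d. inter-renewal times) are run until one pulls ahead of all the others by a fixed margin. The exponential timing distributions, the margin of 2, and the interpretation of "winning" are illustrative assumptions, not the paper's exact setting.

```python
import random

def time_until_winner(n_procs=4, margin=2, seed=0):
    """Return (time, index) when one renewal process first leads all others by `margin`."""
    rng = random.Random(seed)
    next_time = [rng.expovariate(1.0) for _ in range(n_procs)]  # initial delays
    count = [0] * n_procs                                        # renewal counts
    while True:
        i = min(range(n_procs), key=lambda j: next_time[j])      # earliest renewal fires
        t = next_time[i]
        count[i] += 1
        next_time[i] = t + rng.expovariate(1.0)                  # schedule next renewal
        others = max(count[j] for j in range(n_procs) if j != i)
        if count[i] >= others + margin:
            return t, i

if __name__ == "__main__":
    elapsed, winner = time_until_winner()
    print(f"process {winner} wins after time {elapsed:.2f}")
```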

    Doubly infinite separation of quantum information and communication

    We prove the existence of (one-way) communication tasks with a subconstant versus superconstant asymptotic gap, which we call "doubly infinite," between their quantum information and communication complexities. We do so by studying the exclusion game [C. Perry et al., Phys. Rev. Lett. 115, 030504 (2015)], for which there exist instances where the quantum information complexity tends to zero as the size of the input n increases. By showing that the quantum communication complexity of these games scales at least logarithmically in n, we obtain our result. We further show that the established lower bounds and gaps still hold even if we allow a small probability of error. However, in this case, the n-qubit quantum message of the zero-error strategy can be compressed polynomially.