5 research outputs found

    Robust and Listening-Efficient Contention Resolution

    This paper shows how to achieve contention resolution on a shared communication channel using only a small number of channel accesses -- both for listening and sending -- and the resulting algorithm is resistant to adversarial noise. The shared channel operates over a sequence of synchronized time slots, and in any slot agents may attempt to broadcast a packet. An agent's broadcast succeeds if no other agent broadcasts during that slot. If two or more agents broadcast in the same slot, then the broadcasts collide and all of them fail. An agent listening on the channel during a slot receives ternary feedback, learning whether that slot had silence, a successful broadcast, or a collision. Agents are (adversarially) injected into the system over time. The goal is to coordinate the agents so that each is able to successfully broadcast its packet. A contention-resolution protocol is measured both in terms of its throughput and the number of slots during which an agent broadcasts or listens. Most prior work assumes that listening is free and only tries to minimize the number of broadcasts. This paper answers two foundational questions. First, is constant throughput achievable when using polylogarithmic channel accesses per agent, both for listening and broadcasting? Second, is constant throughput still achievable when an adversary jams some slots by broadcasting noise in them? Specifically, for N packets arriving over time and J jammed slots, we give an algorithm that, with high probability in N+J, guarantees Θ(1) throughput and achieves on average O(polylog(N+J)) channel accesses against an adaptive adversary. We also have per-agent high-probability guarantees on the number of channel accesses -- either O(polylog(N+J)) or O((J+1) polylog(N)), depending on how quickly the adversary can react to what is being broadcast.
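    As an illustration of the channel model described in this abstract (synchronized slots and ternary silence/success/collision feedback), here is a minimal Python simulation sketch. The backoff rule used -- each pending agent broadcasts with probability 1/estimate and doubles or halves its estimate from the shared feedback -- is a generic placeholder for illustration only, not the paper's algorithm, and all names (run_slots, estimates, etc.) are hypothetical.

```python
# Minimal sketch of a shared slotted channel with ternary feedback.
# The adaptive rule below is illustrative, NOT the paper's protocol.
import random

SILENCE, SUCCESS, COLLISION = "silence", "success", "collision"

def run_slots(num_agents=64, max_slots=20000, seed=0):
    rng = random.Random(seed)
    estimates = {a: 2.0 for a in range(num_agents)}  # private contention estimates
    pending = set(range(num_agents))                 # agents still holding a packet
    finish_slot = {}

    for slot in range(max_slots):
        if not pending:
            break
        # Each pending agent broadcasts with probability 1/estimate.
        senders = [a for a in pending if rng.random() < 1.0 / estimates[a]]
        if not senders:
            feedback = SILENCE
        elif len(senders) == 1:
            feedback = SUCCESS
        else:
            feedback = COLLISION

        if feedback == SUCCESS:
            winner = senders[0]
            pending.discard(winner)
            finish_slot[winner] = slot

        # Every listening agent sees the same ternary feedback and adapts:
        # a collision suggests contention is higher than estimated,
        # silence suggests it is lower.
        for a in pending:
            if feedback == COLLISION:
                estimates[a] *= 2.0
            elif feedback == SILENCE:
                estimates[a] = max(2.0, estimates[a] / 2.0)
    return finish_slot

if __name__ == "__main__":
    done = run_slots()
    last = max(done.values()) if done else None
    print(f"{len(done)} of 64 agents succeeded; last success at slot {last}")
```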

    Computer science I like proceedings of miniconference on 4.11.2011


    Contention resolution with bounded delay

    When many distributed processes contend for a single shared resource that can service at most one process per time slot, the key problem is devising a good distributed protocol for contention resolution. This has been studied in the context of multiple-access channels (e.g. ALOHA, Ethernet), and recently for PRAM emulation and routing in optical computers. Under a stochastic model of continuous request generation from a set of n synchronous processes, Raghavan and Upfal have recently shown a protocol which is stable if the request rate is at most λ0 for some fixed λ0 < 1; their main result is that for any given resource request, its expected delay (expected time to get serviced) is O(log n). Assuming further that the initial clock times of the processes are within a known bound B of each other, we present a stable protocol, again for some fixed positive request rate λ1, 0 < λ1 < 1, wherein the expected delay for each request is O(1), independent of n. We derive this by showing an analogous result for an infinite number of processes, assuming that all processes agree on the time; this is the first such result. We also present tail bounds which show that for every given resource request, it is unlikely to remain unserviced for much longer than expected, and extend our results to other classes of input distributions.
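    To make the delay metric from this abstract concrete, the following sketch simulates a single shared resource that services at most one request per slot under Bernoulli arrivals, and reports the average delay. The fixed-probability slotted-ALOHA rule and the parameters (p_send, lam) are assumptions chosen for illustration; this is not the O(1)-expected-delay protocol described in the paper.

```python
# Illustrative measurement of per-request delay in a slotted shared-resource
# model; the fixed-probability access rule is a placeholder, not the paper's.
import random

def average_delay(n=100, lam=0.2, p_send=0.01, slots=100_000, seed=1):
    rng = random.Random(seed)
    queues = [[] for _ in range(n)]   # arrival slot of each waiting request, per process
    delays = []
    for t in range(slots):
        # Bernoulli arrivals: total rate lam per slot, split evenly over n processes.
        for i in range(n):
            if rng.random() < lam / n:
                queues[i].append(t)
        # Each process with a waiting request attempts the resource with prob. p_send.
        attempts = [i for i in range(n) if queues[i] and rng.random() < p_send]
        if len(attempts) == 1:        # exactly one attempt -> that request is serviced
            delays.append(t - queues[attempts[0]].pop(0))
    return sum(delays) / len(delays) if delays else float("inf")

if __name__ == "__main__":
    print(f"average delay over the run: {average_delay():.1f} slots")
```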

    Contention resolution with bounded delay

    No full text
    When distributed processes contend for a shared resource, we need a good distributed contention resolution protocol, e.g., for multiple-access channels (ALOHA, Ethernet), PRAM emulation, and optical routing. Under a stochastic model of request generation from n synchronous processes, Raghavan & Upfal (1995) have shown a protocol which is stable for a positive request rate; their main result is that for every resource request, its expected delay (time to get serviced) is O(log n). Assuming that the initial clock times of the processes are within a known bound of each other, we present a stable protocol, wherein the expected delay for each request is O(1). We derive this by showing an analogous result for an infinite number of processes, assuming that all processes agree on the time.