
    Tight local approximation results for max-min linear programs

    In a bipartite max-min LP, we are given a bipartite graph G = (V ∪ I ∪ K, E), where each agent v ∈ V is adjacent to exactly one constraint i ∈ I and exactly one objective k ∈ K. Each agent v controls a variable x_v. For each i ∈ I we have a nonnegative linear constraint on the variables of adjacent agents. For each k ∈ K we have a nonnegative linear objective function of the variables of adjacent agents. The task is to maximise the minimum of the objective functions. We study local algorithms where each agent v must choose x_v based on input within its constant-radius neighbourhood in G. We show that for every ε > 0 there exists a local algorithm achieving the approximation ratio Δ_I (1 − 1/Δ_K) + ε. We also show that this result is the best possible: no local algorithm can achieve the approximation ratio Δ_I (1 − 1/Δ_K). Here Δ_I is the maximum degree of a vertex i ∈ I, and Δ_K is the maximum degree of a vertex k ∈ K. As a methodological contribution, we introduce the technique of graph unfolding for the design of local approximation algorithms.
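    For concreteness, the underlying optimisation problem can be restated in LP form as sketched below; the coefficient names a_{iv}, c_{kv} are illustrative and not fixed by the abstract.

```latex
% Minimal sketch of the max-min LP: maximise the worst objective value \omega.
% The nonnegative coefficients a_{iv}, c_{kv} are assumed names for the
% constraint and objective rows, respectively.
\begin{align*}
\text{maximise }   & \omega \\
\text{subject to } & \textstyle\sum_{v :\, \{i,v\} \in E} a_{iv} x_v \le 1
                     && \text{for each constraint } i \in I, \\
                   & \textstyle\sum_{v :\, \{k,v\} \in E} c_{kv} x_v \ge \omega
                     && \text{for each objective } k \in K, \\
                   & x_v \ge 0 && \text{for each agent } v \in V.
\end{align*}
```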

    Visualization of Distributed Algorithms Based on Graph Relabelling Systems (This work has been supported by the European TMR research network GETGRATS and by the Conseil Régional d'Aquitaine.)

    In this paper, we present a uniform approach to simulate and visualize distributed algorithms encoded by graph relabelling systems. In particular, we use the distributed applications of local relabelling rules to automatically display the execution of the whole distributed algorithm. We have developed a Java prototype tool for implementing and visualizing distributed algorithms. We illustrate the different aspects of our framework using various distributed algorithms, including election and spanning trees.
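    To illustrate the kind of system being visualized (not the paper's tool itself), here is a minimal sketch of a single graph relabelling rule that builds a spanning tree when applied asynchronously; the node labels, the rule, and the random scheduler are assumptions made for this example.

```python
import random

# Minimal graph relabelling sketch: one local rule builds a spanning tree.
# Rule: if an edge joins a node labelled 'A' (already in the tree) to a node
# labelled 'N' (not yet reached), relabel the 'N' node to 'A' and keep the edge.
def spanning_tree_by_relabelling(nodes, edges, root):
    labels = {v: 'N' for v in nodes}
    labels[root] = 'A'                      # the root starts the computation
    tree_edges = set()
    while True:
        enabled = [(u, v) for (u, v) in edges
                   if {labels[u], labels[v]} == {'A', 'N'}]
        if not enabled:                     # no rule applicable: terminate
            break
        u, v = random.choice(enabled)       # asynchronous, adversarial scheduler
        newcomer = v if labels[v] == 'N' else u
        labels[newcomer] = 'A'              # relabel the node
        tree_edges.add((u, v))              # relabel the edge as a tree edge
    return tree_edges

# Example: a 4-cycle; the result is a spanning tree with 3 edges.
print(spanning_tree_by_relabelling(
    nodes=[0, 1, 2, 3],
    edges=[(0, 1), (1, 2), (2, 3), (3, 0)],
    root=0))
```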

    Relaxed Byzantine Vector Consensus

    The exact Byzantine consensus problem requires that non-faulty processes reach agreement on a decision (or output) that is in the convex hull of the inputs at the non-faulty processes. It is well known that exact consensus is impossible in an asynchronous system in the presence of faults, and that in a synchronous system n >= 3f+1 is a tight bound on the number of processes needed to achieve exact Byzantine consensus with scalar inputs in the presence of up to f Byzantine faulty processes. Recent work has shown that when the inputs are d-dimensional vectors of reals, n >= max(3f+1, (d+1)f+1) is tight for exact Byzantine consensus in synchronous systems, and n >= (d+2)f+1 is tight for approximate Byzantine consensus in asynchronous systems. Due to the dependence of the lower bound on the vector dimension d, the number of processes necessary becomes large when the vector dimension is large. With the hope of reducing the lower bound on n, we consider two relaxed versions of Byzantine vector consensus: k-relaxed Byzantine vector consensus and (delta,p)-relaxed Byzantine vector consensus. In k-relaxed consensus, the validity condition requires that the output must be in the convex hull of the projection of the inputs onto any subset of k dimensions of the vectors. For (delta,p)-consensus, the validity condition requires that the output must be within distance delta of the convex hull of the inputs of the non-faulty processes, where the L_p norm is used as the distance metric. For (delta,p)-consensus, we consider two versions: in one version, delta is a constant, and in the second version, delta is a function of the inputs themselves. We show that for k-relaxed consensus and (delta,p)-consensus with constant delta >= 0, the bound on n is identical to the bound stated above for the original vector consensus problem. On the other hand, when delta depends on the inputs, we show that the bound on n is smaller when d >= 3.
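    To make the relaxed validity condition concrete, here is a small sketch of checking k-relaxed validity for the special case k = 1, reading "any subset of k dimensions" as "every subset of size k"; for a 1-dimensional projection the convex hull is simply an interval. The function name and the example values are illustrative, not taken from the paper.

```python
# 1-relaxed validity check: the output's value in a given coordinate must lie
# in the convex hull of the non-faulty inputs projected onto that coordinate,
# which for one dimension is just the interval [min, max].
def valid_1_relaxed(output, nonfaulty_inputs, coordinate):
    values = [x[coordinate] for x in nonfaulty_inputs]
    return min(values) <= output[coordinate] <= max(values)

# Illustrative 3-dimensional inputs of three non-faulty processes.
inputs = [(0.0, 1.0, 2.0), (1.0, 3.0, 0.0), (2.0, 2.0, 1.0)]
output = (1.5, 2.5, 0.5)

# Check the relaxed validity condition on every single coordinate.
print(all(valid_1_relaxed(output, inputs, c) for c in range(3)))  # True
```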

    Distributed Computing with Adaptive Heuristics

    We use ideas from distributed computing to study dynamic environments in which computational nodes, or decision makers, follow adaptive heuristics (Hart 2005), i.e., simple and unsophisticated rules of behavior, e.g., repeatedly "best replying" to others' actions and minimizing "regret", that have been extensively studied in game theory and economics. We explore when convergence of such simple dynamics to an equilibrium is guaranteed in asynchronous computational environments, where nodes can act at any time. Our research agenda, distributed computing with adaptive heuristics, lies on the borderline of computer science (including distributed computing and learning) and game theory (including game dynamics and adaptive heuristics). We exhibit a general non-termination result for a broad class of heuristics with bounded recall, that is, simple rules of behavior that depend only on the recent history of interaction between nodes. We consider implications of our result across a wide variety of interesting and timely applications: game theory, circuit design, social networks, routing and congestion control. We also study the computational and communication complexity of asynchronous dynamics and present some basic observations regarding the effects of asynchrony on no-regret dynamics. We believe that our work opens a new avenue for research in both distributed computing and game theory.
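    As a toy illustration of the kind of dynamics studied (our own example, not a construction from the paper), the sketch below runs best-reply dynamics on matching pennies under an asynchronous scheduler; since this game has no pure equilibrium, the action profile never stabilizes.

```python
import random

# Matching pennies: player 0 wants to match, player 1 wants to mismatch.
# Best-reply dynamics on this game cannot converge: no pure equilibrium exists.
def best_reply(player, actions):
    if player == 0:
        return actions[1]                # matcher: copy the opponent
    return 1 - actions[0]                # mismatcher: differ from the opponent

actions = [0, 1]
history = []
for step in range(20):
    p = random.randrange(2)              # asynchronous: one node acts at a time
    actions[p] = best_reply(p, actions)
    history.append(tuple(actions))

print(history)                           # the profile never settles down
```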

    Communication Algorithms with Advice

    We study the amount of knowledge about a communication network that must be given to its nodes in order to efficiently disseminate information. Our approach is quantitative: we investigate the minimum total number of bits of information (minimum size of advice) that has to be available to nodes, regardless of the type of information provided. We compare the size of advice needed to perform broadcast and wakeup (the latter is a broadcast in which nodes can transmit only after getting the source information), both using a linear number of messages (which is optimal). We show that the minimum size of advice permitting wakeup with a linear number of messages in an n-node network is Θ(n log n), while broadcast with a linear number of messages can be achieved with advice of size O(n). We also show that the latter size of advice is almost optimal: no advice of size o(n) suffices to broadcast with a linear number of messages. Thus, under the constraint of a linear number of messages, wakeup requires asymptotically more advice than broadcast.
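    As a rough sketch of how advice can buy message efficiency (not the paper's O(n)-bit encoding, which is more subtle), suppose each node is told which of its edges lead to its children in some spanning tree rooted at the source; broadcast then uses exactly n − 1 messages. The network and the advice format below are assumptions for illustration.

```python
from collections import deque

# Broadcast guided by tree advice: each node forwards only to its advised
# children, so exactly n - 1 messages are sent. This sketch only counts
# messages; it does not model how compactly the advice itself is encoded.
def broadcast_with_tree_advice(children, source):
    messages = 0
    queue = deque([source])
    informed = {source}
    while queue:
        u = queue.popleft()
        for v in children[u]:            # advice: forward only to tree children
            messages += 1
            informed.add(v)
            queue.append(v)
    return messages, informed

# Hypothetical 5-node network whose advice encodes a tree rooted at node 0.
children = {0: [1, 2], 1: [3], 2: [4], 3: [], 4: []}
print(broadcast_with_tree_advice(children, 0))   # 4 messages, all nodes informed
```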

    Asynchronous stigmergic sorting of binary matrix patterns: applications of classical distributed computing ideas

    Multi-agent stigmergy forms the basis of explanatory theories for various self-organized biological phenomena, and also serves as an implementation strategy for several important artificial applications. While a number of sophisticated techniques have been used in the modeling and analysis of stigmergic processes, none of them, as yet, seems to be drawn from the toolbox of classical distributed computing. Our goals are to investigate and lay the groundwork for the use of classical distributed computing ideas in reasoning about stigmergic computation. Specifically, we investigate case studies drawn from the domain of binary matrix pattern sorting. Here, a 'swarm' of memoryless and non-communicating agents follows a set of local stigmergic rules to asynchronously sort the binary states of cells in a 2-D grid so as to satisfy some global pattern specification. This domain is attractive as a test-bed because it serves as an abstraction for instances of biological pattern sorting and also because of its stigmergic expressiveness. We demonstrate the application of the following four distributed computing concepts in the modeling and analysis of our case studies: (1) execution serializability, (2) local checking, (3) variant functions, and (4) indistinguishability (the last as an impossibility proof technique). Based on our preliminary experience with this particular domain, it seems to us that classical distributed computing techniques could be applied further in reasoning about stigmergic systems, perhaps leading to the formulation of a generalized stigmergic computational paradigm based on the principles of distributed computing.
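    To ground the concepts, here is a minimal sketch (our own illustration, not one of the paper's case studies) of memoryless agents asynchronously pushing 1s to the left of each row of a binary grid, together with a variant function, the number of out-of-order pairs, that strictly decreases with every applied rule and therefore certifies termination.

```python
import random

# Variant function: number of (0,1) pairs in a row with the 0 left of the 1.
# Every applied swap removes exactly one such pair, so the value decreases.
def variant(grid):
    return sum(1
               for row in grid
               for i in range(len(row))
               for j in range(i + 1, len(row))
               if row[i] == 0 and row[j] == 1)

def stigmergic_sort(grid, rng=random):
    # Memoryless agents: repeatedly pick a random cell; if it holds a 1 and its
    # left neighbour holds a 0, swap them (a purely local rule on the grid).
    while variant(grid) > 0:
        r = rng.randrange(len(grid))
        c = rng.randrange(1, len(grid[0]))
        if grid[r][c] == 1 and grid[r][c - 1] == 0:
            grid[r][c - 1], grid[r][c] = 1, 0
    return grid

grid = [[0, 1, 0, 1],
        [1, 0, 0, 1]]
print(stigmergic_sort(grid))   # every row becomes 1s followed by 0s
```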

    Multi-object cooperation in distributed object bases

    It is an emerging trend to build large information systems in a component-based fashion where the components follow the concept of an object. Applications are constructed by organizing pre-built objects such that they cooperate with each other to perform some task. However, considerable programming effort is required to express multi-object constraints in terms of the traditional message-passing mechanism. This observation has led many authors to suggest communication abstractions in object models. One promising approach is to separate multi-object constraints from the objects and collect them into a separate construct. We call this construct an alliance. Unlike other approaches, we allow alliances to involve large sets of long-lived objects, which may vary dynamically during the (also potentially long) lifetime of the alliance. Alliances are not only visible at the specification level but are also computational entities which enforce multi-object constraints at run-time. They do so in an unreliable world, i.e., we do not assume that objects will always meet their obligations in a cooperation. Since objects may often be distributed across a network, we demonstrate that alliances are an ideal place to deal with aspects of distribution in an application-specific manner. We illustrate our thesis with one of the key questions of distributed object management: where should objects be located, and when should they migrate to which node? We show that alliances allow for customized distribution policies which are neither "hardwired" into the objects nor necessitate centralized distribution control.
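    As a rough sketch of the idea (the class names, API, and placement policy below are our own illustrative assumptions, not the paper's design), an alliance can be modelled as a separate construct that registers participating objects, re-checks a multi-object constraint after each update, and decides where objects should live.

```python
# Illustrative sketch of an "alliance": it holds a multi-object constraint over
# its participants and an application-specific distribution policy.
# All names and behaviour here are assumptions made for illustration only.
class Alliance:
    def __init__(self, constraint, place):
        self.constraint = constraint      # predicate over the set of participants
        self.place = place                # policy mapping an object to a node
        self.members = {}                 # object id -> (object, current node)

    def join(self, obj_id, obj):
        self.members[obj_id] = (obj, self.place(obj))

    def update(self, obj_id, obj):
        # Re-check the multi-object constraint at run time; the alliance, not
        # the objects themselves, reacts to a violation.
        self.members[obj_id] = (obj, self.place(obj))
        objs = [o for o, _ in self.members.values()]
        if not self.constraint(objs):
            raise ValueError("multi-object constraint violated")

# Example: account balances must sum to 100; large accounts live on node-A.
alliance = Alliance(
    constraint=lambda objs: sum(o["balance"] for o in objs) == 100,
    place=lambda o: "node-A" if o["balance"] >= 50 else "node-B",
)
alliance.join("x", {"balance": 60})
alliance.join("y", {"balance": 40})
try:
    alliance.update("x", {"balance": 70})  # violates the invariant
except ValueError as err:
    print(err)                             # "multi-object constraint violated"
```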