    Rendezvous of Heterogeneous Mobile Agents in Edge-weighted Networks

    We introduce a variant of the deterministic rendezvous problem for a pair of heterogeneous agents operating in an undirected graph, which differ in the time they require to traverse particular edges of the graph. Each agent knows the complete topology of the graph and the initial positions of both agents. An agent also knows its own traversal times for all of the edges of the graph, but is unaware of the corresponding traversal times of the other agent. The goal of the agents is to meet on an edge or at a node of the graph. In this scenario, we study the time required by the agents to meet, compared to the meeting time T_OPT in the offline scenario in which the agents have complete knowledge about each other's speed characteristics. When no additional assumptions are made, we show that rendezvous in our model can be achieved after time O(n · T_OPT) in an n-node graph, and that this bound is, in some cases, essentially the best possible. However, we prove that the rendezvous time can be reduced to Θ(T_OPT) when the agents are allowed to exchange Θ(n) bits of information at the start of the rendezvous process. We then show that, under a natural assumption about the traversal times of edges, the hardness of the heterogeneous rendezvous problem can be substantially decreased, both in terms of the time required for rendezvous without communication and the communication complexity of achieving rendezvous in time Θ(T_OPT).
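
    As a rough illustration of the offline benchmark T_OPT mentioned above, the sketch below computes the earliest time at which both agents can be co-located at a node when each agent's traversal times are fully known. It is a minimal sketch under simplifying assumptions: meetings inside edges (which the paper also allows) are ignored, and the graph representation and function names are illustrative rather than taken from the paper.

        import heapq

        def dijkstra(adj, src):
            # Single-source shortest traversal times from src.
            # adj: {node: [(neighbor, traversal_time), ...]}
            dist = {src: 0}
            pq = [(0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in adj.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(pq, (nd, v))
            return dist

        def node_meeting_time(adj_a, adj_b, start_a, start_b):
            # Earliest time at which both agents can occupy a common node:
            # min over nodes v of max(dist_a[v], dist_b[v]).
            dist_a = dijkstra(adj_a, start_a)
            dist_b = dijkstra(adj_b, start_b)
            return min(max(dist_a[v], dist_b[v]) for v in dist_a if v in dist_b)

        # Example: a path a-b-c where the two agents traverse edges at different speeds.
        adj_a = {"a": [("b", 2)], "b": [("a", 2), ("c", 5)], "c": [("b", 5)]}
        adj_b = {"a": [("b", 1)], "b": [("a", 1), ("c", 1)], "c": [("b", 1)]}
        print(node_meeting_time(adj_a, adj_b, "a", "c"))  # prints 2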

    Circle formation by asynchronous opaque robots on infinite grid

    This paper presents a distributed algorithm for the circle formation problem on the infinite grid by asynchronous, opaque mobile robots. Initially, all the robots occupy distinct positions, and they have to form a circle on the grid. Movements of the robots are restricted to the grid lines. The robots do not share any global coordinate system and are controlled by an asynchronous adversarial scheduler that operates in Look-Compute-Move cycles. The robots are indistinguishable and have no memory of their past configurations or previous actions. We consider the problem under the luminous model, in which robots communicate via lights; apart from the lights, they have no other means of communication. Our protocol solves the circle formation problem using seven colors. A subroutine of our algorithm also solves the line formation problem using three colors.
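
    The sketch below only illustrates the structure of a Look-Compute-Move cycle for luminous robots on a grid, as described above. It is a simplified, hypothetical sketch: the seven-color palette is named arbitrarily, obstruction between opaque robots is ignored, and the placeholder decision rule is not the paper's protocol.

        from typing import Dict, Tuple

        Pos = Tuple[int, int]

        # Illustrative seven-color palette; the paper's actual color semantics
        # and transition rules are not reproduced here.
        COLORS = ("C0", "C1", "C2", "C3", "C4", "C5", "C6")

        def look(me: int, world: Dict[int, Tuple[Pos, str]]):
            # Snapshot of the other robots' positions and lights. Opacity
            # (robots blocking the view of collinear robots) is ignored here.
            return {r: state for r, state in world.items() if r != me}

        def compute(my_pos: Pos, my_color: str, snapshot):
            # Placeholder rule: keep the current light and take one grid step
            # along the x axis toward the leftmost visible robot.
            if not snapshot:
                return my_color, (0, 0)
            target_x = min(pos[0] for pos, _ in snapshot.values())
            dx = 0 if target_x == my_pos[0] else (1 if target_x > my_pos[0] else -1)
            return my_color, (dx, 0)

        def lcm_cycle(me: int, world: Dict[int, Tuple[Pos, str]]) -> None:
            # One Look-Compute-Move cycle; movement is restricted to grid lines,
            # so each move is a single axis-aligned step.
            pos, color = world[me]
            new_color, (dx, dy) = compute(pos, color, look(me, world))
            world[me] = ((pos[0] + dx, pos[1] + dy), new_color)

        world = {0: ((0, 0), COLORS[0]), 1: ((3, 2), COLORS[0]), 2: ((-1, 4), COLORS[0])}
        lcm_cycle(0, world)
        print(world[0])  # robot 0 moved one step toward x = -1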

    Collision-Free Pattern Formation

    Shoals of small fish can change their collective shape and form a specific pattern. They do so efficiently (in parallel) and without collision. In this paper, we study the analogous problem of distributed pattern formation. A set of processes needs to move from a set of initial positions to a set of final positions. The processes are oblivious (they have no internal memory) and must preserve, at any time, a minimal distance between them. A naive solution would be to move the processes one by one, but this would take too long. The difficulty here is to move the processes simultaneously in clearly delimited phases, no matter how unfavorable the initial configuration may be. We solve this by treating the problem "dimension by dimension": the processes first form 1D trails, then gather into a 2D shape (this technique can be generalized to higher dimensions). We present an optimal algorithm whose time complexity depends linearly on the radius of the smallest circle containing both the initial and final positions. The algorithm is self-stabilizing, as the processes are oblivious and the initial positions are arbitrary.
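
    The following is a minimal sketch of the "dimension by dimension" idea described above: processes first align one coordinate with their targets, then the other. It assumes processes are already matched to targets and omits the collision-avoidance and minimal-distance machinery of the actual algorithm; all names are illustrative.

        def step_toward(value: float, target: float, max_step: float = 1.0) -> float:
            # Move value toward target by at most max_step.
            delta = target - value
            if abs(delta) <= max_step:
                return target
            return value + max_step * (1 if delta > 0 else -1)

        def form_pattern(positions, targets, max_step=1.0):
            # Dimension-by-dimension movement: all processes first align their
            # x-coordinates with their targets, then their y-coordinates.
            # The matching of processes to targets and the minimal-distance
            # invariant of the actual algorithm are not handled here.
            positions = [list(p) for p in positions]
            for axis in (0, 1):                               # one phase per dimension
                while any(p[axis] != t[axis] for p, t in zip(positions, targets)):
                    for p, t in zip(positions, targets):      # simultaneous moves
                        p[axis] = step_toward(p[axis], t[axis], max_step)
            return [tuple(p) for p in positions]

        print(form_pattern([(0.0, 0.0), (5.0, 1.0)], [(2.0, 3.0), (4.0, -1.0)]))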

    Distributed reinforcement learning for self-reconfiguring modular robots

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. By Paulina Varshavskaya. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 101-106).
    In this thesis, we study distributed reinforcement learning in the context of automating the design of decentralized control for groups of cooperating, coupled robots. Specifically, we develop a framework and algorithms for automatically generating distributed controllers for self-reconfiguring modular robots using reinforcement learning. The promise of self-reconfiguring modular robots is that of robustness, adaptability and versatility. Yet most state-of-the-art distributed controllers are laboriously handcrafted and task-specific, due to the inherent complexities of distributed, local-only control. In this thesis, we propose and develop a framework for using reinforcement learning for the automatic generation of such controllers. The approach is profitable because reinforcement learning methods search for good behaviors during the lifetime of the learning agent, and are therefore applicable to online adaptation as well as automatic controller design. However, we must overcome the challenges posed by the fundamental partial observability inherent in a distributed system such as a self-reconfiguring modular robot. We use a family of policy search methods that we adapt to our distributed problem. The outcome of a local search is always influenced by the search space dimensionality, its starting point, and the amount and quality of available exploration through experience. We undertake a systematic study of the effects that certain robot and task parameters, such as the number of modules, presence of exploration constraints, availability of nearest-neighbor communications, and partial behavioral knowledge from previous experience, have on the speed and reliability of learning through policy search in self-reconfiguring modular robots. In the process, we develop novel algorithmic variations and compact search space representations for learning in our domain, which we test experimentally on a number of tasks. This thesis is an empirical study of reinforcement learning in a simulated lattice-based self-reconfiguring modular robot domain. However, our results contribute to the broader understanding of automatic generation of group control and design of distributed reinforcement learning algorithms.
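
    As a generic illustration of the kind of local-only policy search discussed in the abstract, the sketch below gives each module a small softmax policy over local actions and a REINFORCE-style update driven by a shared scalar reward. This is not the thesis's algorithm; the class, observation encoding, and learning rate are hypothetical.

        import math
        import random

        class ModulePolicy:
            # Per-module stochastic policy over a few local actions, conditioned
            # on a discrete local observation; trained with a simple
            # REINFORCE-style update from a shared scalar reward.
            def __init__(self, n_obs: int, n_actions: int, lr: float = 0.1):
                self.theta = [[0.0] * n_actions for _ in range(n_obs)]
                self.lr = lr

            def _probs(self, obs: int):
                # Softmax over the logits for this observation.
                logits = self.theta[obs]
                m = max(logits)
                exps = [math.exp(l - m) for l in logits]
                z = sum(exps)
                return [e / z for e in exps]

            def act(self, obs: int) -> int:
                probs = self._probs(obs)
                return random.choices(range(len(probs)), weights=probs)[0]

            def update(self, trajectory, reward: float) -> None:
                # trajectory: list of (obs, action) pairs taken by this module.
                # Gradient of log softmax: 1{a = action} - pi(a | obs).
                for obs, action in trajectory:
                    probs = self._probs(obs)
                    for a in range(len(probs)):
                        grad = (1.0 if a == action else 0.0) - probs[a]
                        self.theta[obs][a] += self.lr * reward * grad

        # Usage: each module runs its own policy on local observations and
        # applies the same shared reward at the end of an episode.
        policy = ModulePolicy(n_obs=4, n_actions=3)
        traj = [(0, policy.act(0)), (2, policy.act(2))]
        policy.update(traj, reward=1.0)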