
    Plane Formation by Synchronous Mobile Robots in the Three Dimensional Euclidean Space

    Creating a swarm of mobile computing entities, frequently called robots, agents, or sensor nodes, with self-organization ability is a contemporary challenge in distributed computing. Motivated by this, we investigate the plane formation problem, which requires a swarm of robots moving in the three-dimensional Euclidean space to land on a common plane. The robots are fully synchronous and endowed with visual perception, but they have no identifiers, no access to a global coordinate system, and no means of explicit communication with each other. Although there are plenty of results on agreement problems for robots in the two-dimensional plane, for example the point formation problem and the pattern formation problem, this is the first result for robots in three-dimensional space. This paper presents a necessary and sufficient condition for fully synchronous robots to solve the plane formation problem that does not depend on obliviousness, i.e., the availability of local memory at the robots. An implication of the result is somewhat counter-intuitive: the robots cannot form a plane from most of the semi-regular polyhedra, while they can form a plane from every regular polyhedron (except the regular icosahedron), whose symmetry is usually considered to be higher than that of any semi-regular polyhedron.
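
    As a rough, hedged illustration of the model used above (fully synchronous robots in 3D, each observing the others only in its own local coordinate system), the Python sketch below simulates one synchronous Look-Compute-Move round; the `toward_centroid` rule is a placeholder for illustration only, not the paper's plane formation algorithm.

```python
import numpy as np

def one_synchronous_round(positions, rotations, compute):
    """One fully synchronous Look-Compute-Move round in 3D.

    positions : (n, 3) global positions (known only to the simulator).
    rotations : list of 3x3 rotation matrices; robot i perceives the world
                rotated by rotations[i], with itself at its local origin.
    compute   : rule mapping the local snapshot (n, 3) to a local target.
    """
    new_positions = np.empty_like(positions)
    for i, R in enumerate(rotations):
        snapshot = (positions - positions[i]) @ R.T         # Look (local view)
        target_local = compute(snapshot)                    # Compute
        new_positions[i] = positions[i] + target_local @ R  # Move (back to global)
    return new_positions

# Placeholder rule (not the paper's algorithm): head to the snapshot centroid.
def toward_centroid(snapshot):
    return snapshot.mean(axis=0)
```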

    Plane Formation by Synchronous Mobile Robots without Chirality

    We consider a distributed system consisting of autonomous mobile computing entities, called robots, moving in the three-dimensional space (3D-space). The robots are anonymous, oblivious, fully synchronous, and have neither access to a global coordinate system nor any explicit communication medium. Each robot cooperates with the other robots by observing their positions in its local coordinate system. One of the most fundamental agreement problems in 3D-space is the plane formation problem, which requires the robots to land on a common plane that is not predefined. This problem is not always solvable because of the impossibility of symmetry breaking. While existing results assume that the robots agree on the handedness of their local coordinate systems, we remove this assumption and consider robots without chirality. Robots without chirality can never break the symmetry consisting of rotation symmetry and reflection symmetry. Such symmetry in 3D-space is fully described by 17 symmetry types, each of which forms a group. We extend the notion of symmetricity [Suzuki and Yamashita, SIAM J. Comput. 1999] [Yamauchi et al., PODC 2016] to cover these 17 symmetry groups. We then give a characterization, in terms of symmetricity, of the initial configurations from which fully synchronous robots without chirality can form a plane.
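
    To make the role of chirality concrete: a configuration that coincides with its own mirror image cannot be distinguished from that image by robots without chirality. The sketch below (written for this listing, not taken from the paper) tests whether a 3D configuration is invariant under reflection through a candidate plane passing through its centroid.

```python
import numpy as np

def has_reflection_symmetry(points, normal, tol=1e-9):
    """Check whether the point set is invariant under reflection through
    the plane through its centroid with the given normal vector."""
    points = np.asarray(points, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    q = points - points.mean(axis=0)
    # Householder reflection across the plane with unit normal n.
    mirrored = q - 2.0 * np.outer(q @ n, n)
    # The configuration is symmetric if the mirrored set equals the original
    # as a set (robots are anonymous, so the matching may permute them).
    used = np.zeros(len(points), dtype=bool)
    for m in mirrored:
        d = np.linalg.norm(q - m, axis=1)
        d[used] = np.inf
        j = np.argmin(d)
        if d[j] > tol:
            return False
        used[j] = True
    return True

# Example: a regular tetrahedron admits some mirror symmetries but not others.
tetra = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
print(has_reflection_symmetry(tetra, normal=(1, 1, 0)))   # True
print(has_reflection_symmetry(tetra, normal=(1, 0, 0)))   # False
```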

    TuringMobile: A Turing Machine of Oblivious Mobile Robots with Limited Visibility and Its Applications

    In this paper we investigate the computational power of a set of mobile robots with limited visibility. At each iteration, a robot takes a snapshot of its surroundings, uses the snapshot to compute a destination point, and moves toward that destination. Each robot is punctiform and memoryless, operates in R^m, has a local reference system independent of those of the other robots, and is activated asynchronously by an adversarial scheduler. Moreover, the robots are non-rigid, in that they may be stopped by the scheduler during each move before reaching their destination (but are guaranteed to travel at least a fixed unknown distance before being stopped). We show that, despite these strong limitations, it is possible to arrange 3m + 3k of these weak entities in R^m to simulate the behavior of a stronger robot that is rigid (i.e., it always reaches its destination) and is endowed with k registers of persistent memory, each of which can store a real number. We call this arrangement a TuringMobile. In its simplest form, a TuringMobile consisting of only three robots can travel in the plane and store and update a single real number. We also prove that this task is impossible with fewer than three robots. Among the applications of the TuringMobile, we focus on Near-Gathering (all robots have to gather in a small-enough disk) and Pattern Formation (of which Gathering is a special case) with limited visibility. Interestingly, our investigation implies that both problems are solvable in Euclidean spaces of any dimension, even if the visibility graph of the robots is initially disconnected, provided that a small number of these robots are arranged to form a TuringMobile. In the special case of the plane, a basic TuringMobile of only three robots suffices.
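
    The paper's actual TuringMobile construction is considerably more involved, but as a rough intuition one can picture a real number being encoded in the geometry of three robots, e.g. as a ratio of pairwise distances, which is invariant under translation, rotation, and uniform scaling of a local coordinate system. The `encode`/`decode` helpers below are hypothetical and serve only to illustrate that idea, not the TuringMobile protocol.

```python
import numpy as np

def encode(value, base=1.0):
    """Place three robots A, B, C so that |AC| / |AB| = value (value > 0)."""
    a = np.array([0.0, 0.0])
    b = np.array([base, 0.0])
    c = np.array([0.0, base * value])
    return a, b, c

def decode(a, b, c):
    """Recover the stored value from the three robot positions; the ratio is
    unchanged if the whole triple is translated, rotated, or scaled."""
    return np.linalg.norm(c - a) / np.linalg.norm(b - a)

a, b, c = encode(3.14159)
print(decode(a, b, c))   # ~3.14159, regardless of the observer's local frame
```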

    Pattern Formation for Fat Robots with Memory

    Given a set of n ≥ 1 autonomous, anonymous, indistinguishable, silent, and possibly disoriented mobile unit-disk (i.e., fat) robots operating in Look-Compute-Move cycles in the Euclidean plane, we consider the Pattern Formation problem: from arbitrary starting positions, the robots must reposition themselves to form a given target pattern. This problem arises under obstructed visibility, where a robot cannot see another robot if a third robot lies on the straight line segment between the two. We assume that a robot's movement cannot be interrupted by an adversary and that robots have a small, O(1)-sized memory that they can use to store information but cannot communicate to the other robots. To solve this problem, we present an algorithm that works in three steps: first it establishes mutual visibility, then it elects one robot to be the leader, and finally it forms the required pattern. The whole algorithm runs in O(n) + O(q log n) rounds, where q > 0 is related to leader election, which takes O(q log n) rounds with probability at least 1 - n^{-q}. The algorithms are collision-free and do not require knowledge of the number of robots. (arXiv admin note: text overlap with arXiv:2306.1444)
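
    The stated bounds can be made concrete with a small sketch; the phase breakdown in the comments simply restates the three steps from the abstract, and the constants in `rounds_upper_bound` are illustrative assumptions, not values from the paper.

```python
import math

# Phase structure restated from the abstract:
#   1. establish mutual visibility          -- O(n) rounds
#   2. elect a leader (randomized)          -- O(q log n) rounds
#   3. form the target pattern              -- remaining O(n) rounds

def rounds_upper_bound(n, q, c1=1.0, c2=1.0):
    """Evaluate the stated bound O(n) + O(q log n) for concrete inputs;
    the constants c1 and c2 are illustrative only."""
    return c1 * n + c2 * q * math.log2(n)

def failure_probability(n, q):
    """Leader election succeeds with probability at least 1 - n**(-q),
    so the failure probability is at most n**(-q)."""
    return n ** (-q)

print(rounds_upper_bound(n=1024, q=3))    # 1024 + 3 * 10 = 1054.0 with unit constants
print(failure_probability(n=1024, q=3))   # 1024**(-3) ≈ 9.3e-10
```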

    Robots with Lights: Overcoming Obstructed Visibility Without Colliding

    Robots with lights is a model of autonomous mobile computational entities operating in the plane in Look-Compute-Move cycles: each agent has an externally visible light which can assume colors from a fixed set; the lights are persistent (i.e., the color is not erased at the end of a cycle), but otherwise the agents are oblivious. The investigation of computability in this model, initially suggested by Peleg, is under way, and several results have recently been established. In these investigations, however, an agent is assumed to be capable of seeing through another agent. In this paper we start the study of computing when visibility is obstructable, and investigate the most basic problem for this setting, Complete Visibility: the agents must reach, within finite time, a configuration in which they can all see each other, and terminate. We make no assumption on a priori knowledge of the number of agents, on rigidity of movements, or on chirality. The local coordinate system of an agent may change at each activation. Also, by definition of lights, an agent can communicate and remember only a constant number of bits in each cycle. In spite of these weak conditions, we prove that Complete Visibility is always solvable, even in the asynchronous setting, without collisions and using a small constant number of colors. The proof is constructive. We also show how to extend our protocol for Complete Visibility so that, with the same number of colors, the agents solve the (non-uniform) Circle Formation problem under obstructed visibility.
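
    As a minimal sketch of the lights model (an assumed structure for illustration; this is not the Complete Visibility protocol), the only state a luminous robot carries between cycles is its persistent color, and its behavior is a rule mapping a snapshot of visible (position, color) pairs plus its own color to a destination and a new color.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Color = str                            # drawn from a small fixed palette
Point = Tuple[float, float]
Snapshot = List[Tuple[Point, Color]]   # visible robots: local position + light

@dataclass
class LuminousRobot:
    """A robot with a light: the color persists across cycles and is the
    only memory the robot has (otherwise it is oblivious)."""
    color: Color

    def cycle(self, snapshot: Snapshot,
              rule: Callable[[Snapshot, Color], Tuple[Point, Color]]) -> Point:
        destination, self.color = rule(snapshot, self.color)
        return destination

# Placeholder rule (not the paper's protocol): step slightly away from the
# centroid of the observed robots and keep the current light color.
def example_rule(snapshot: Snapshot, my_color: Color) -> Tuple[Point, Color]:
    if not snapshot:
        return (0.0, 0.0), my_color
    cx = sum(p[0] for p, _ in snapshot) / len(snapshot)
    cy = sum(p[1] for p, _ in snapshot) / len(snapshot)
    # Positions are local, so the robot itself sits at the origin.
    return (-0.1 * cx, -0.1 * cy), my_color
```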

    Collisionless Pattern Discovery in Robot Swarms Using Deep Reinforcement Learning

    We present a deep reinforcement learning-based framework for automatically discovering patterns available in any given initial configuration of fat robot swarms. In particular, we model the problems of collision-less gathering and mutual visibility in fat robot swarms and discover patterns for solving them using our framework. We show that by shaping reward signals based on certain constraints, such as mutual visibility and safe proximity, the robots can discover collision-less trajectories leading to well-formed gathering and visibility patterns.
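
    The reward shaping described above can be pictured with a sketch like the following; the penalty for unsafe proximity, the bonus for unobstructed pairwise visibility, and every constant in it are assumptions made for illustration, not the paper's reward function.

```python
import numpy as np

def shaped_reward(positions, radius, safe_margin=0.1,
                  collision_penalty=10.0, visibility_bonus=1.0):
    """Illustrative reward for a swarm of fat (disk) robots.

    positions : (n, 2) array of disk centers
    radius    : common disk radius

    Penalize pairs closer than 2 * radius + safe_margin (unsafe proximity)
    and reward pairs whose line of sight is not blocked by a third disk.
    """
    n = len(positions)
    reward = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            if d < 2 * radius + safe_margin:
                reward -= collision_penalty
            elif line_of_sight_clear(positions, radius, i, j):
                reward += visibility_bonus
    return reward

def line_of_sight_clear(positions, radius, i, j):
    """True if no third disk intersects the segment between centers i and j."""
    a, b = positions[i], positions[j]
    ab = b - a
    for k in range(len(positions)):
        if k in (i, j):
            continue
        # Distance from center k to the segment a-b.
        t = np.clip(np.dot(positions[k] - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        if np.linalg.norm(positions[k] - (a + t * ab)) < radius:
            return False
    return True
```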