
    Heterogeneous Teams of Modular Robots for Mapping and Exploration

    The definitive article is published in Autonomous Robots and is available at http://www.springerlink.com (DOI: 10.1023/A:1008933826411). © Springer-Verlag.
    In this article, we present the design of a team of heterogeneous, centimeter-scale robots that collaborate to map and explore unknown environments. The robots, called Millibots, are configured from modular components that include sonar and IR sensors, camera, communication, computation, and mobility modules. Robots with different configurations collaboratively apply their special capabilities to accomplish a given task. For mapping and exploration with multiple robots, it is critical to know the relative positions of the robots with respect to one another. We have developed a novel localization system that uses sonar-based distance measurements to determine the positions of all the robots in the group. With their positions known, we use an occupancy-grid Bayesian mapping algorithm to combine the sensor data from multiple robots with different sensing modalities. Finally, we present the results of several mapping experiments conducted by a user-guided team of five robots operating in a room containing multiple obstacles.
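The occupancy-grid fusion step described above can be sketched in log-odds form, where independent readings from different robots are combined by simple addition. The sensor-model probabilities below are made-up illustrations, not the Millibots' actual sonar and IR models.

```python
import math

def log_odds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def probability(l):
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

def update_cell(l, p_occ_given_reading):
    """Fuse one sensor reading into a cell by adding its log-odds term."""
    return l + log_odds(p_occ_given_reading)

# Two robots observe the same cell: a sonar model reports P(occupied) = 0.7
# and an IR model reports 0.8 (illustrative values). Starting from the 0.5
# prior (log-odds 0), the fused belief is:
l = 0.0
l = update_cell(l, 0.7)   # sonar reading from robot A
l = update_cell(l, 0.8)   # IR reading from robot B
print(round(probability(l), 3))
```

Because the update is additive, readings from robots with different sensing modalities fuse into the same grid without any per-modality bookkeeping beyond the inverse sensor model.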

    Millibots: The Development of a Framework and Algorithms for a Distributed Heterogeneous Robot Team

    The definitive article was published in IEEE Robotics and Automation Magazine, Volume 9, Issue 4, and is available at http://ieeexplore.ieee.org/ (DOI: 10.1109/MRA.2002.1160069). © Institute of Electrical and Electronics Engineers (IEEE).

    Fault Tolerant Localization for Teams of Distributed Robots

    Presented at the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems, Maui, HI, October 29 - November 3. The definitive paper is located at http://ieeexplore.ieee.org (DOI: 10.1109/IROS.2001.976309). © IEEE.
    To combine sensor information from distributed robot teams, it is critical to know the locations of all the robots relative to each other. This paper presents a novel fault tolerant localization algorithm developed for centimeter-scale robots, called Millibots. To determine their locations, the Millibots measure the distances between themselves with an ultrasonic distance sensor. They then combine these distance measurements with dead reckoning in a maximum likelihood estimator. The focus of this paper is on detecting and isolating measurement faults that commonly occur in this localization system. Such failures include dead reckoning errors when the robots collide with undetected obstacles, and distance measurement errors due to destructive interference between direct and multi-path ultrasound wave fronts. Simulations show that the fault tolerance algorithm accurately detects erroneous measurements and significantly improves the reliability and accuracy of the localization system.
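The localize-then-isolate idea can be sketched as follows. This is a minimal stand-in, not the paper's estimator: the beacon layout, the injected +2 m multipath-style range error, the coarse-to-fine least-squares search, and the leave-one-out isolation rule are all illustrative assumptions.

```python
import math

def estimate_position(beacons, distances):
    """Least-squares position fit via a coarse-to-fine grid search."""
    def sse(p):
        return sum((math.hypot(p[0] - x, p[1] - y) - d) ** 2
                   for (x, y), d in zip(beacons, distances))
    best, step = (0.0, 0.0), 1.0
    for _ in range(4):                    # refine down to 0.001 resolution
        bx, by = best
        best = min(((bx + i * step, by + j * step)
                    for i in range(-10, 11)
                    for j in range(-10, 11)), key=sse)
        step /= 10.0
    return best

def isolate_fault(beacons, distances):
    """Leave-one-out isolation: drop each measurement in turn; the drop
    that leaves the smallest residual identifies the suspect reading."""
    best = None
    for k in range(len(distances)):
        keep = [i for i in range(len(distances)) if i != k]
        est = estimate_position([beacons[i] for i in keep],
                                [distances[i] for i in keep])
        resid = sum((math.hypot(est[0] - beacons[i][0],
                                est[1] - beacons[i][1]) - distances[i]) ** 2
                    for i in keep)
        if best is None or resid < best[2]:
            best = (k, est, resid)
    return best  # (suspect index, position estimate, residual)

beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0), (4.0, 3.0)]
true_pos = (1.0, 1.0)
distances = [math.hypot(true_pos[0] - x, true_pos[1] - y) for x, y in beacons]
distances[2] += 2.0   # inject a multipath-style range error
suspect, est, _ = isolate_fault(beacons, distances)
print(suspect, (round(est[0], 3), round(est[1], 3)))
```

With three consistent ranges remaining after the faulty one is excluded, the position estimate snaps back to the true location, which is what makes the excluded measurement identifiable as the fault.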

    Optimal sensor placement for cooperative distributed vision

    This paper describes a method for observing maneuvering targets using a group of mobile robots equipped with video cameras. These robots are part of a team of small-size (7x7x7 cm) robots configured from modular components that collaborate to accomplish a given task. The cameras seek to observe the target while facing it as much as possible from their respective viewpoints. This work considers the problem of scheduling and maneuvering the cameras based on an evaluation of their current positions in terms of how well they can maintain a frontal view of the target. We describe our approach, which distributes the task among several robots and avoids excessive energy consumption on a single robot. We explore the concept in simulation and present results.
    Keywords: sensor placement; cooperative sensors; distributed vision; automatic surveillance.
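The viewpoint-evaluation step can be illustrated with a simple frontal-view score: the alignment between the target's facing direction and the direction from target to camera. The camera positions and target pose below are made-up values, and a real scheduler would also weigh distance, occlusion, and energy.

```python
import math

def frontal_score(camera, target_pos, target_heading):
    """1.0 = camera directly in front of the target, -1.0 = directly behind."""
    dx = camera[0] - target_pos[0]
    dy = camera[1] - target_pos[1]
    dist = math.hypot(dx, dy)
    # Cosine of the angle between the target's heading and the
    # target-to-camera direction.
    return (dx * math.cos(target_heading) + dy * math.sin(target_heading)) / dist

# Three hypothetical camera robots around a target at the origin facing +x:
cameras = [(2.0, 0.0), (0.0, 2.0), (-2.0, 0.0)]
target_pos, target_heading = (0.0, 0.0), 0.0
scores = [frontal_score(c, target_pos, target_heading) for c in cameras]
best = max(range(len(cameras)), key=lambda i: scores[i])
print(best, [round(s, 2) for s in scores])
```

A scheduler could hand the observation task to the highest-scoring robot and re-evaluate as the target maneuvers, which is the distribution-of-effort idea the abstract describes.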

    Robot navigation from human demonstration: learning control behaviors with environment feature maps

    When working alongside human collaborators in dynamic and unstructured environments, such as disaster recovery or military operations, fast field adaptation is necessary for an unmanned ground vehicle (UGV) to perform its duties or learn novel tasks. In these scenarios, personnel and equipment are constrained, making training with minimal human supervision a desirable learning attribute. We address the problem of making UGVs more reliable and adaptable teammates with a novel framework that uses visual perception and inverse optimal control to learn traversal costs for environment features. Through extensive evaluation in a real-world environment, we show that our framework requires few human demonstrated trajectory exemplars to learn feature costs that reliably encode several different traversal behaviors. Additionally, we present an on-line version of the framework that allows a human teammate to intervene during live operation to correct deteriorated behavior or to adapt behavior to dynamic changes in complex and unstructured environments.
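A heavily simplified sketch of learning traversal costs from demonstration, in the spirit of the inverse-optimal-control framing above: each cell carries a feature vector, traversal cost is a weighted sum of features, and one subgradient step moves the weights so the demonstrated path becomes cheaper than the planner's. The feature names, world, weights, and update rule are illustrative assumptions, not the paper's actual method.

```python
def path_feature_counts(path, feature_map):
    """Sum the per-cell feature vectors along a path."""
    totals = [0.0, 0.0]
    for cell in path:
        totals = [t + f for t, f in zip(totals, feature_map[cell])]
    return totals

def update_weights(w, demo_path, planned_path, feature_map, lr=0.1):
    """One subgradient step: raise the cost of features the planner
    over-used and lower the cost of features the human chose."""
    f_demo = path_feature_counts(demo_path, feature_map)
    f_plan = path_feature_counts(planned_path, feature_map)
    return [wi + lr * (fp - fd) for wi, fd, fp in zip(w, f_demo, f_plan)]

# Hypothetical 1-D world; features per cell are (grass, pavement):
feature_map = {0: (1, 0), 1: (1, 0), 2: (0, 1), 3: (0, 1)}
demo_path = [2, 3]       # the human demonstrator drove on pavement
planned_path = [0, 1]    # the planner's current weights preferred grass
w = [1.0, 1.0]
w = update_weights(w, demo_path, planned_path, feature_map)
print(w)
```

After the update, pavement cells cost less than grass cells, so a cost-minimizing planner replans onto the demonstrated behavior; the paper's on-line variant corresponds to applying such corrections during live operation.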

    Predictive Mover Detection and Tracking in Cluttered Environments

    This paper describes the design and experimental evaluation of a system that enables a vehicle to detect and track moving objects in real-time. The approach investigated in this work detects objects in LADAR scan lines and tracks these objects (people or vehicles) over time. The system can fuse data from multiple scanners for 360° coverage. The resulting tracks are then used to predict the most likely future trajectories of the detected objects. The predictions are intended to be used by a planner for dynamic object avoidance. The perceptual capabilities of our system form the basis for safe and robust navigation in robotic vehicles, necessary to safeguard soldiers and civilians operating in the vicinity of the robot.
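The detect-track-predict loop can be sketched with the simplest possible components: greedy nearest-neighbor association of detections to tracks, a velocity estimate from successive positions, and constant-velocity extrapolation. These are generic placeholders for the system's actual LADAR pipeline; the gating radius and time step are assumptions.

```python
import math

def step_tracks(tracks, detections, dt=0.1, gate=1.0):
    """Associate each track with its nearest detection inside the gate;
    unmatched detections start new tracks, unmatched tracks are dropped."""
    updated = []
    unused = set(range(len(detections)))
    for pos, vel in tracks:
        if not unused:
            break
        di = min(unused, key=lambda i: math.dist(pos, detections[i]))
        if math.dist(pos, detections[di]) <= gate:
            unused.discard(di)
            new_pos = detections[di]
            new_vel = ((new_pos[0] - pos[0]) / dt,
                       (new_pos[1] - pos[1]) / dt)
            updated.append((new_pos, new_vel))
    updated += [(detections[i], (0.0, 0.0)) for i in sorted(unused)]
    return updated

def predict(track, horizon):
    """Constant-velocity extrapolation of a track's future position."""
    (x, y), (vx, vy) = track
    return (x + vx * horizon, y + vy * horizon)

# One mover advancing along +x at 1 m/s, observed every 0.1 s:
tracks = [((0.0, 0.0), (0.0, 0.0))]
tracks = step_tracks(tracks, [(0.1, 0.0)])
print(predict(tracks[0], 1.0))   # extrapolated position 1 s ahead
```

A planner would query such predictions over its planning horizon to avoid where the mover will be, rather than where it currently is.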