
    An Implementation Approach and Performance Analysis of Image Sensor Based Multilateral Indoor Localization and Navigation System

    Optical camera communication (OCC) is of considerable importance in various indoor camera-based services such as smart homes and robot-based automation. An Android smartphone camera mounted on a mobile robot (MR) offers a uniform communication distance when the camera remains at the same level, which can reduce the communication error rate. Indoor mobile robot navigation (MRN) is considered a promising OCC application, in which white light-emitting diodes (LEDs) and an MR camera are used as the transmitters and receiver, respectively. Positioning is a key issue in MRN systems in terms of accuracy, data rate, and distance. We propose a combined indoor navigation and positioning algorithm and evaluate its performance. An Android application is developed to support data acquisition from multiple simultaneous transmitter links. Experimentally, we received data from four links, which are required to ensure higher positioning accuracy.
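As an illustrative sketch of how a receiver position can be recovered from four simultaneous LED links, the following least-squares multilateration solves for a 2D position from anchor coordinates and measured ranges. This is a generic textbook formulation, not the combined algorithm proposed in the paper; the anchor layout and measurement model are hypothetical.

```python
import math

def multilaterate(anchors, dists):
    """Least-squares 2D position from anchor points and range measurements.

    Linearizes by subtracting the first range equation from the others,
    then solves the 2x2 normal equations for (x, y).
    """
    (x1, y1), d1 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        # 2(xi-x1)x + 2(yi-y1)y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
        A.append((2 * (xi - x1), 2 * (yi - y1)))
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    # Normal equations: (A^T A) p = A^T b, solved in closed form.
    s11 = sum(a[0] * a[0] for a in A)
    s12 = sum(a[0] * a[1] for a in A)
    s22 = sum(a[1] * a[1] for a in A)
    t1 = sum(a[0] * bi for a, bi in zip(A, b))
    t2 = sum(a[1] * bi for a, bi in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

With four anchors (e.g., ceiling LEDs at known coordinates), the overdetermined system averages out individual range errors, which is one reason more links improve positioning accuracy.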

    A Low-Cost Experimental Testbed for Multi-Agent System Coordination Control

    A multi-agent system can be defined as a coordinated network of mobile, physical agents that execute complex tasks beyond their individual capabilities. Observations of biological multi-agent systems in nature reveal that these "super-organisms" accomplish large-scale tasks by leveraging the inherent advantages of a coordinated group. With this in mind, such systems have the potential to positively impact a wide variety of engineering applications (e.g., surveillance, self-driving cars, and mobile sensor networks). The current state of research in the area of multi-agent systems is quickly evolving from the theoretical development of coordination control algorithms and their computer simulations to experimental validation on proof-of-concept testbeds using small-scale mobile robotic platforms. An in-house testbed allows for rapid prototyping and validation of control algorithms and can lead to new research directions spawned by experimentally observed issues. To this end, a custom experimental testbed, TIGER Square, has been designed, developed, built, and tested at Louisiana State University. In this work, the completed design and test results for a centralized testbed are presented. That is, the individual robots follow an overarching control entity and rely on a global structure, such as a central processing computer. As part of the validation process, a series of formation control experiments was executed to assess the performance of the testbed. In order to eliminate single-point failures, a multi-agent system must be fully decentralized or distributed, meaning that the responsibilities of processing, localization, and communication are distributed to each agent. Therefore, this work concludes with the introduction of a prototype localization module that will be integrated into the existing centralized testbed. This initial step allows for the future decentralization of TIGER Square and opens the path to a fully capable multi-agent system testbed.

    Distance-Based Formation Control using Decentralized Sensing with Infrared Photodiodes

    This study presents an onboard sensor system for determining the relative positions of mobile robots, which is used in decentralized distance-based formation controllers for multi-agent systems. The sensor system uses infrared photodiodes and LEDs; its effective use requires coordination between the emitting and detecting robots. A technique is introduced for calculating relative positions from photodiode readings, and an automated calibration system is designed for future maintenance. By measuring the relative positions of its neighbors, each robot is capable of running an onboard formation controller that is independent of both a centralized controller and a global positioning-like system (e.g., in GPS-denied environments). This independence is referred to as decentralization. It was demonstrated with three formation acquisition experiments, which were compared to equivalent experiments using a global positioning system and a centralized control station designed in prior studies. The centralized and decentralized experiments resulted in similar formation outcomes, but the steady-state error for the decentralized system increased; this was an expected consequence of the uncertainty in decentralized localization measurements.

    PHALANX: Expendable Projectile Sensor Networks for Planetary Exploration

    Technologies enabling long-term, wide-ranging measurement in hard-to-reach areas are a critical need for planetary science inquiry. Phenomena of interest include flows or variations in volatiles, gas composition or concentration, particulate density, or even simply temperature. Improved measurement of these processes enables understanding of exotic geologies and distributions, or correlating indicators of trapped water or biological activity. However, such data are often needed in unsafe areas such as caves, lava tubes, or steep ravines, which are not easily reached by current spacecraft and planetary robots. To address this capability gap, we have developed miniaturized, expendable sensors that can be ballistically lobbed from a robotic rover or static lander, or even dropped during a flyover. These projectiles can perform sensing during flight and after anchoring to terrain features. By augmenting exploration systems with these sensors, we can extend situational awareness, perform long-duration monitoring, and reduce utilization of primary mobility resources, all of which are crucial in surface missions. We call the integrated payload that includes a cold gas launcher, smart projectiles, planning software, network discovery, and science sensing: PHALANX. In this paper, we introduce the mission architecture for PHALANX and describe an exploration concept that pairs projectile sensors with a rover mothership. Science use cases explored include reconnaissance using ballistic cameras, volatiles detection, and building time-lapse maps of temperature and illumination conditions. Strategies to autonomously coordinate constellations of deployed sensors to self-discover and localize with peer ranging (i.e., a local GPS) are summarized, thus providing communications infrastructure beyond line-of-sight (BLOS) of the rover. Capabilities were demonstrated through both simulation and physical testing with a terrestrial prototype.
The approach to developing the terrestrial prototype is discussed, including the design of the launching mechanism, projectile optimization, micro-electronics fabrication, and sensor selection. Results from early testing and characterization of commercial off-the-shelf (COTS) components are reported. Nodes passed burn-in tests over 48 hours at full logging duty cycle. Integrated field tests were conducted in the Roverscape, a half-acre planetary analog environment at NASA Ames, where we tested up to 10 sensor nodes simultaneously coordinating with an exploration rover. Ranging accuracy was demonstrated to be within +/-10 cm over 20 m using commodity radios, compared against high-resolution laser scanner ground truth. The evolution of the design is described, including progressive miniaturization of the electronics and iterated modifications of the enclosure housing for streamlining and optimized radio performance. Finally, lessons learned to date, gaps toward eventual flight mission implementation, and continuing future development plans are discussed.
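As a back-of-the-envelope illustration of planning a ballistic lob, the vacuum projectile-range formula r = v^2 sin(2*theta) / g can be inverted for a launch angle. This ignores drag, terrain relief, and other bodies' gravity, and is not PHALANX's actual planning software; it only conveys the basic kinematics of placing a lobbed sensor at a desired range.

```python
import math

def launch_angle(v, r, g=9.81):
    """Low-trajectory launch angle (radians) to reach range r at speed v.

    Uses the flat-terrain vacuum ballistics relation r = v^2 sin(2t) / g.
    Returns None if the target range exceeds the maximum reach v^2 / g.
    """
    s = g * r / (v * v)
    if s > 1.0:
        return None  # target is out of ballistic range at this speed
    return 0.5 * math.asin(s)
```

The high-trajectory solution (pi/2 minus the returned angle) reaches the same range with a longer flight time, which could matter if in-flight sensing is desired.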

    Do-It-Yourself Single Camera 3D Pointer Input Device

    We present a new algorithm for single-camera 3D reconstruction, or 3D input for human-computer interfaces, based on precise tracking of an elongated object, such as a pen, having a pattern of colored bands. To configure the system, the user provides no more than one labelled image of a handmade pointer, measurements of its colored bands, and the camera's pinhole projection matrix. Other systems are of much higher cost and complexity, requiring combinations of multiple cameras, stereo cameras, and pointers with sensors and lights. Instead of relying on information from multiple devices, we examine our single view more closely, integrating geometric and appearance constraints to robustly track the pointer in the presence of occlusion and distractor objects. By probing objects of known geometry with the pointer, we demonstrate acceptable accuracy of 3D localization. Comment: 8 pages, 6 figures, 2018 15th Conference on Computer and Robot Vision
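For context, applying a pinhole projection matrix (the one camera-calibration input the abstract mentions) to a 3D point is a short computation; the intrinsic values used below are made-up, not those of the paper's system.

```python
def project(P, X):
    """Project a 3D point X through a 3x4 pinhole projection matrix P.

    P is a row-major nested list; the point is lifted to homogeneous
    coordinates, multiplied, and dehomogenized to pixel coordinates.
    """
    Xh = list(X) + [1.0]  # homogeneous coordinates
    x = [sum(P[r][c] * Xh[c] for c in range(4)) for r in range(3)]
    return (x[0] / x[2], x[1] / x[2])
```

Tracking inverts this mapping: given pixel observations of the pointer's band boundaries and their known 3D spacing, the pointer's pose is the one whose projection best matches the image.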

    Robot Localization for FIRST Robotics

    The goal of this project was to develop a camera-based system that can determine the coordinates of multiple robots during FIRST Robotics Competition game play and transmit this information to the robots. The intent of the system is to introduce an interesting new dynamic to the competition. To accomplish this, robots are fitted with custom matrix LED beacons. Six cameras capture images of the field while an FPGA embedded system at each camera performs image processing to identify the beacons. This information is then sent to a central PC, which combines the six views to reconstruct the robots' coordinates. This effort included implementation of the location algorithms, imaging simulation, design of the FPGA processor and algorithms, the beacon system, and custom hardware for prototype deployment.
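A minimal version of combining two camera observations into one robot coordinate is the intersection of two bearing rays. The six-camera FPGA pipeline described above is far more involved; this 2D sketch with hypothetical camera poses only illustrates the triangulation step.

```python
import math

def triangulate(c1, b1, c2, b2):
    """Intersect two 2D bearing rays: camera position c, bearing angle b.

    Solves c1 + t*d1 = c2 + s*d2 for t via Cramer's rule and returns the
    intersection point, or None if the bearings are (nearly) parallel.
    """
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    rx, ry = c2[0] - c1[0], c2[1] - c1[1]
    det = d2[0] * d1[1] - d1[0] * d2[1]
    if abs(det) < 1e-12:
        return None  # degenerate geometry: rays never cross
    t = (d2[0] * ry - d2[1] * rx) / det
    return (c1[0] + t * d1[0], c1[1] + t * d1[1])
```

With more than two cameras seeing the same beacon, a real system would instead solve a least-squares problem over all rays, which also averages out per-camera detection noise.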

    Fault-tolerant formation driving mechanism designed for heterogeneous MAVs-UGVs groups

    A fault-tolerant method for the stabilization and navigation of 3D heterogeneous formations is proposed in this paper. The presented Model Predictive Control (MPC) based approach enables the deployment of compact formations of closely cooperating autonomous aerial and ground robots in surveillance scenarios without the need for precise external localization. Instead, the proposed method relies on top-view visual relative localization provided by the micro aerial vehicles flying above the ground robots, and on a simple yet stable vision-based navigation using images from an onboard monocular camera. The MPC-based scheme, together with a fault detection and recovery mechanism, provides a robust solution applicable in complex environments with static and dynamic obstacles. The core of the proposed leader-follower formation driving method is a representation of the entire 3D formation as a convex hull projected along a desired path that the group has to follow. This approach yields collision-free solutions and respects the requirement of direct visibility between team members. Uninterrupted visibility is crucial for the employed top-view localization and therefore for the stabilization of the group. The proposed formation driving method and the fault recovery mechanisms are verified by simulations and hardware experiments presented in the paper.
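Since the method represents the formation as a convex hull projected along the desired path, a standard 2D convex hull routine conveys the basic geometric building block. The implementation below is Andrew's monotone chain, shown as a generic sketch rather than the paper's own code.

```python
def convex_hull(points):
    """Convex hull of 2D points via Andrew's monotone chain, CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:  # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):  # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each chain (duplicated endpoints).
    return lower[:-1] + upper[:-1]
```

Keeping the hull of all robot footprints inside the free corridor around the path is one way such a representation yields collision-free group motion; the actual coupling with the MPC formulation is described in the paper itself.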