
    Uncertainty Minimization in Robotic 3D Mapping Systems Operating in Dynamic Large-Scale Environments

    This dissertation research is motivated by the potential and promise of 3D sensing technologies in safety and security applications. With specific focus on unmanned robotic mapping to aid clean-up of hazardous environments, under-vehicle inspection, automatic runway/pavement inspection, and modeling of urban environments, we develop modular, multi-sensor, multi-modality robotic 3D imaging prototypes using localization/navigation hardware, laser range scanners, and video cameras. While deploying our multi-modality complementary approach to pose and structure recovery in dynamic real-world operating conditions, we observe several data fusion issues that state-of-the-art methodologies are not able to handle. Differing bounds on the noise models of heterogeneous sensors, the dynamism of the operating conditions, and the interaction of the sensing mechanisms with the environment create situations in which sensors can intermittently degrade below their design-specification accuracy. This observation necessitates methods that integrate multi-sensor data while accounting for sensor conflict, performance degradation, and potential failure during operation. This dissertation contributes to the data fusion literature a fault-diagnosis framework inspired by information complexity theory. We implement the framework as opportunistic sensing intelligence that evolves a belief policy over the sensors of a multi-agent 3D mapping system, allowing it to withstand failures in challenging operating conditions. In addition to eliminating failed or non-functional sensors and avoiding catastrophic fusion, the information-theoretic framework minimizes uncertainty during autonomous operation by adaptively deciding whether to fuse sensors or to rely only on the believable ones. We demonstrate the framework through experiments in multi-sensor robot state localization in large-scale dynamic environments and in vision-based 3D inference. Our modular hardware and software design of the robotic imaging prototypes, together with the opportunistic sensing intelligence, provides significant progress toward accurate, autonomous, photo-realistic 3D mapping and remote visualization of scenes for the motivating applications.
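    To make the fuse-or-select behavior concrete, here is a minimal Python sketch. It is not the dissertation's framework: the inverse-variance fusion rule, the `belief_floor` threshold, and the precomputed per-sensor beliefs (standing in for the evolved belief policy) are all assumptions for illustration.

```python
import numpy as np

def fuse_or_select(readings, variances, beliefs, belief_floor=0.2):
    """Fuse only believable sensors by belief-scaled inverse-variance weighting.

    Illustrative only: `beliefs` stands in for the belief policy described in
    the abstract; sensors whose belief falls below `belief_floor` are treated
    as failed and excluded, which avoids catastrophic fusion with bad data.
    """
    readings = np.asarray(readings, dtype=float)
    variances = np.asarray(variances, dtype=float)
    beliefs = np.asarray(beliefs, dtype=float)

    trusted = beliefs >= belief_floor
    if not trusted.any():
        raise RuntimeError("no believable sensors; abort fusion")

    # Inverse-variance weights, scaled by belief, over trusted sensors only.
    w = beliefs[trusted] / variances[trusted]
    w /= w.sum()
    estimate = float(w @ readings[trusted])
    # Variance of the fused estimate, assuming independent sensor noise.
    fused_var = float(1.0 / np.sum(1.0 / variances[trusted]))
    return estimate, fused_var

# Example: three range sensors, one of which has silently drifted.
est, var = fuse_or_select(
    readings=[10.2, 10.4, 14.9],
    variances=[0.05, 0.08, 0.05],
    beliefs=[0.9, 0.8, 0.1],   # the belief policy has downgraded sensor 3
)
print(f"fused estimate: {est:.2f} m (var {var:.3f})")
```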

    Progress toward multi‐robot reconnaissance and the MAGIC 2010 competition

    Tasks like search‐and‐rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed to reduce operator requirements to practical levels. Reducing the reliance of such systems on human operators presents a number of technical challenges, including automatic task allocation, global state and map estimation, robot perception, path planning, communications, and human‐robot interfaces. This paper describes our 14‐robot team, designed to perform urban reconnaissance missions, which won the MAGIC 2010 competition. We describe a variety of autonomous systems that require minimal human effort to control a large number of autonomously exploring robots. Maintaining a consistent global map, which is essential for autonomous planning and for giving humans situational awareness, required the development of fast loop‐closing, map optimization, and communications algorithms. Key to our approach was a decoupled centralized planning architecture that allowed individual robots to execute tasks myopically, but whose behavior was coordinated centrally. We describe technical contributions throughout our system that played a significant role in its performance, and we present results both from the competition and from subsequent quantitative evaluations, pointing out areas in which the system performed well and where interesting research problems remain. © 2012 Wiley Periodicals, Inc.
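    The decoupled centralized planning idea can be pictured with a toy sketch. Everything here is an assumption for illustration, not the team's planner: a central coordinator greedily assigns each robot the nearest unclaimed frontier goal, after which each robot executes its task myopically with no further coordination.

```python
import math

def assign_frontiers(robot_poses, frontiers):
    """Greedily assign each robot the nearest unclaimed frontier goal.

    Hypothetical central-planning step: coordination happens here, once;
    each robot then drives to its goal myopically on its own.
    """
    assignments = {}
    remaining = list(frontiers)
    for rid, (rx, ry) in robot_poses.items():
        if not remaining:
            break
        goal = min(remaining, key=lambda f: math.hypot(f[0] - rx, f[1] - ry))
        remaining.remove(goal)   # each frontier is claimed at most once
        assignments[rid] = goal
    return assignments

robots = {"r1": (0.0, 0.0), "r2": (5.0, 5.0)}
frontiers = [(1.0, 2.0), (6.0, 4.0), (9.0, 9.0)]
print(assign_frontiers(robots, frontiers))
# {'r1': (1.0, 2.0), 'r2': (6.0, 4.0)}
```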

    Viewfinder: final activity report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces, and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them; that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests, as well as low-level commands, through the interface to any node in the system. The human interface has to provide the human supervisor and human interveners with a reduced but relevant overview of the ground and of the robots and human rescue workers operating there.

    FLAT2D: Fast localization from approximate transformation into 2D

    Many autonomous vehicles require precise localization within a prior map in order to support planning and to leverage semantic information within those maps (e.g., that the right lane is a turn-only lane). A popular approach in automotive systems is to localize against infrared intensity maps of the ground surface, but these maps are susceptible to failure when the surface is obscured by snow or when the road is repainted. An emerging alternative is to localize based on the 3D structure around the vehicle; such methods are robust to these types of changes, but their maps are costly both in storage and in the computational cost of matching. In this paper, we propose a fast method for localizing against the 3D structure around the vehicle using a 2D representation. This representation retains many of the advantages of "full" matching in 3D, but comes with dramatically lower space and computational requirements. We also introduce a variation of Graph-SLAM tailored to support localization, allowing us to use graph-based error-recovery techniques in our localization estimate. Finally, we present real-world localization results for both an indoor mobile robotic platform and an autonomous golf cart, demonstrating that autonomous vehicles do not need full 3D matching to localize accurately in their environment.
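    The core reduction can be sketched as follows. This is an illustrative toy, not the FLAT2D representation itself: the max-height statistic, grid resolution, and extent are assumptions, chosen only to show how 3D structure collapses into a cheap 2D grid suitable for 2D matching.

```python
import numpy as np

def flatten_to_2d(points, resolution=0.5, extent=20.0):
    """Project an Nx3 point cloud into a 2D max-height grid.

    Illustrative stand-in for a flattened 3D representation: each cell
    stores the tallest return that fell into it, so vertical structure
    (walls, poles) remains distinctive for 2D matching.
    """
    n = int(2 * extent / resolution)
    grid = np.zeros((n, n), dtype=np.float32)
    ix = ((points[:, 0] + extent) / resolution).astype(int)
    iy = ((points[:, 1] + extent) / resolution).astype(int)
    ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    # np.maximum.at handles repeated cell indices correctly.
    np.maximum.at(grid, (ix[ok], iy[ok]), points[ok, 2].astype(np.float32))
    return grid

cloud = np.random.default_rng(0).uniform(-20, 20, size=(10_000, 3))
grid = flatten_to_2d(cloud)
print(grid.shape, grid.max())  # (80, 80) and roughly the tallest return
```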

    Cooperative localization for mobile agents: a recursive decentralized algorithm based on Kalman filter decoupling

    We consider cooperative localization for mobile agents with communication and computation capabilities. We start by providing an overview of the decentralization strategies in the literature, with special focus on how these algorithms account for the intrinsic correlations between the state estimates of team members. We then present a novel decentralized cooperative localization algorithm that is an exact decentralized implementation of a centralized Extended Kalman Filter (EKF) for cooperative localization. In this algorithm, instead of propagating cross-covariance terms, each agent propagates new intermediate local variables that can be used in the update stage to reconstruct the required propagated cross-covariance terms. Whenever there is a relative measurement in the network, the algorithm declares the agent making the measurement the interim master. By acquiring information from the interim landmark (the agent from which the relative measurement is taken), the interim master can calculate and broadcast a set of intermediate variables, which each robot then uses to update its estimates to match those of a centralized EKF for cooperative localization. Once an update is done, no further communication is needed until the next relative measurement.
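    The centralized EKF baseline that the decentralized algorithm reproduces can be illustrated with a 1-D position-only toy (an assumption for brevity; the paper's state and measurement models are richer). Note how a single relative measurement introduces exactly the cross-covariance terms between agents that the algorithm must keep track of.

```python
import numpy as np

def relative_update(x, P, a, b, z, r):
    """Joint EKF update for a relative measurement z = x[b] - x[a] + noise.

    Toy centralized update over the stacked team state: the measurement
    Jacobian is H = [..., -1 at a, ..., +1 at b, ...].
    """
    n = len(x)
    H = np.zeros((1, n))
    H[0, a], H[0, b] = -1.0, 1.0
    S = H @ P @ H.T + r                  # innovation covariance
    K = P @ H.T / S                      # Kalman gain (S is scalar here)
    x = x + (K * (z - (x[b] - x[a]))).ravel()
    P = (np.eye(n) - K @ H) @ P          # joint covariance update
    return x, P

# Two agents with independent priors; one relative measurement couples them.
x = np.array([0.0, 5.0])
P = np.diag([1.0, 4.0])
x, P = relative_update(x, P, a=0, b=1, z=4.2, r=0.1)
print(x)   # both estimates shift toward consistency with z
print(P)   # off-diagonal terms appear: the cross-covariances the paper tracks
```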

    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

    This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by the walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS, and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map in which the semantic information has been extended beyond the range of the robot's sensors, predicting where the mobile robot can find buildings and potentially drivable ground.
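    One simple way to picture this kind of fusion (an illustrative assumption, not the paper's method) is a per-cell log-odds combination: ground-level wall evidence is added only in cells the robot has actually observed, while aerial evidence covers the whole map, which is how the semantic map extends beyond the robot's sensor range.

```python
import numpy as np

def logodds(p):
    """Convert probability to log-odds, clipped to avoid infinities."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def fuse(ground_p, aerial_p, ground_seen):
    """Combine per-cell building evidence; ground evidence only where observed.

    In log-odds form, independent evidence simply adds; cells the robot
    never saw carry only the aerial-image evidence.
    """
    L = logodds(aerial_p) + np.where(ground_seen, logodds(ground_p), 0.0)
    return 1 / (1 + np.exp(-L))          # back to probability

ground_p = np.array([[0.9, 0.5], [0.5, 0.1]])   # wall probability from robot
seen     = np.array([[True, False], [False, True]])
aerial_p = np.array([[0.7, 0.8], [0.3, 0.4]])   # building cue from the image
print(fuse(ground_p, aerial_p, seen))
```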