156 research outputs found

    Simultaneous localization and map-building using active vision

    An active approach to sensing can provide the focused measurement capability, over a wide field of view, that allows a correctly formulated Simultaneous Localization and Map-Building (SLAM) system to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
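    The uncertainty-based measurement selection mentioned above can be illustrated with a small sketch: in an EKF-style SLAM filter, one natural policy is to fixate the visible feature whose predicted measurement has the largest innovation covariance, since measuring it removes the most uncertainty. The Python sketch below assumes that policy; the function and variable names are hypothetical, not taken from the paper.

        import numpy as np

        def select_feature(P, Hs, R):
            """Pick the visible feature whose predicted measurement is most uncertain.

            P  : (n, n) state covariance of the SLAM filter
            Hs : list of (m, n) measurement Jacobians, one per visible feature
            R  : (m, m) measurement noise covariance (assumed equal for all)
            """
            scores = [np.linalg.det(H @ P @ H.T + R) for H in Hs]  # innovation volumes
            return int(np.argmax(scores))   # fixate the most informative feature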

    Planetary rovers and data fusion

    This research investigates the problem of position estimation for planetary rovers. Diverse algorithmic filters are available for transforming raw sensor data into information useful for position estimation. The target terrain has sandy soil, which can cause the robot to slip, and small stones and pebbles that can perturb its trajectory. The Kalman filter, a state estimation algorithm, was used to fuse the sensor data and improve the position measurement of the rover; for the rover application, it compensates for the errors accumulated by the locomotion system. Moving a rover across rough terrain is challenging, especially with a limited sensor suite, so an initiative was taken to test-drive the rover during a field trial and expose the mobile platform to both hard ground and soft ground (sand). The laser speckle velocimetry (LSV) system produced speckle images and values that proved invaluable for further research and for the implementation of data fusion. During the field trial it was also discovered that on flat, hard ground the rover's steering problems were minimal; on soft sand, however, the rover tended to drift and struggled to navigate. This research introduced laser speckle velocimetry as an alternative to odometric measurement. LSV data gathered during the field trial was used for subsequent simulation of the rover trajectory in MATLAB. The wheel encoders introduced errors into the position measurement process, as had been observed in earlier field trials. It was also found that LSV could measure position accurately, but the sensitivity of the optics produced noise that had to be addressed as an additional error source. Although the motivating rough terrain is found on Mars, this work is equally applicable to terrestrial robots: Earth has rough regions, and regions where encoder-based measurement is difficult, especially icy places such as Antarctica and Greenland. The proposed implementation models the position estimation system through simulation and through data collected with the LSV. Two simulations are performed: one of the differential drive of a two-wheel robot, and one fusing the differential-drive data with LSV data collected from the rover testbed. The results have been positive. The expected contributions of this work include the design of an LSV system to aid the locomotion measurement system; simulation results show the effect of different sensors and robot velocities, and the Kalman filter improves the position estimation process.
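    The fusion described above can be sketched as a small Kalman filter whose state is position and velocity, with the wheel encoders and the LSV each contributing a velocity measurement. This is a minimal illustration under assumed noise values and names, not the filter from the thesis; in practice the encoder noise would be inflated on soft sand, where slip makes the encoders unreliable.

        import numpy as np

        dt = 0.1                          # sample period (assumed)
        F = np.array([[1.0, dt],          # constant-velocity motion model
                      [0.0, 1.0]])
        H = np.array([[0.0, 1.0]])        # both sensors observe velocity
        Q = np.diag([1e-4, 1e-2])         # process noise; wheel slip inflates this
        R_enc = np.array([[5e-2]])        # encoder noise, larger under slip (assumed)
        R_lsv = np.array([[1e-3]])        # LSV speckle noise (assumed)

        def update(x, P, z, R):
            """Standard Kalman measurement update."""
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            return x + K @ y, (np.eye(2) - K @ H) @ P

        def step(x, P, v_enc, v_lsv):
            """One predict-update cycle fusing encoder and LSV velocities."""
            x, P = F @ x, F @ P @ F.T + Q              # predict [position, velocity]
            x, P = update(x, P, np.array([v_enc]), R_enc)
            x, P = update(x, P, np.array([v_lsv]), R_lsv)
            return x, P

    Calling step repeatedly with synchronized encoder and LSV readings yields a position estimate that leans on whichever sensor is currently less noisy.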

    GICI-LIB: A GNSS/INS/Camera Integrated Navigation Library

    Accurate navigation is essential for autonomous robots and vehicles. In recent years, the integration of the Global Navigation Satellite System (GNSS), Inertial Navigation System (INS), and camera has garnered considerable attention due to its robustness and high accuracy in diverse environments. In such systems, fully utilizing the role of GNSS is cumbersome because of the diverse choices of formulations, error models, satellite constellations, signal frequencies, and service types, which lead to different precision, robustness, and usage dependencies. To clarify the capacity of GNSS algorithms and accelerate the development of GNSS-based multi-sensor fusion algorithms, we open-source the GNSS/INS/Camera Integration Library (GICI-LIB), together with detailed documentation and a comprehensive land vehicle dataset. A factor graph optimization-based multi-sensor fusion framework is established, which combines almost all GNSS measurement error sources by fully considering temporal and spatial correlations between measurements. The graph structure is designed for flexibility, making it easy to form any kind of integration algorithm. For illustration, four Real-Time Kinematic (RTK)-based algorithms from GICI-LIB are evaluated using our dataset. Results confirm the potential of the GICI system to provide continuous precise navigation solutions in a wide spectrum of urban environments. Comment: Open-source: https://github.com/chichengcn/gici-open. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
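    GICI-LIB itself is a C++ library, but the factor-graph formulation it builds on can be sketched in a few lines: poses are variables, each sensor contributes residual terms, and a nonlinear least-squares solver minimizes the total weighted error. The toy Python example below fuses 1-D odometry factors with absolute GNSS position factors via Gauss-Newton; all values and weights are assumptions for illustration and do not reflect GICI-LIB's API.

        import numpy as np

        # Toy 1-D factor graph: poses x0..x3 linked by odometry factors,
        # with absolute GNSS position factors anchoring x0 and x3.
        odom = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9)]  # (i, j, measured x_j - x_i)
        gnss = [(0, 0.0), (3, 3.2)]                     # (i, measured x_i)
        w_od, w_gn = 1.0 / 0.05**2, 1.0 / 0.5**2        # information weights (assumed)

        x = np.zeros(4)                                 # initial guess for the poses
        for _ in range(5):                              # Gauss-Newton iterations
            H, b = np.zeros((4, 4)), np.zeros(4)
            for i, j, z in odom:                        # residual r = (x_j - x_i) - z
                r = (x[j] - x[i]) - z
                for a, Ja in ((i, -1.0), (j, 1.0)):
                    b[a] += w_od * Ja * r
                    for c, Jc in ((i, -1.0), (j, 1.0)):
                        H[a, c] += w_od * Ja * Jc
            for i, z in gnss:                           # residual r = x_i - z
                r = x[i] - z
                b[i] += w_gn * r
                H[i, i] += w_gn
            x -= np.linalg.solve(H, b)                  # solve normal equations

    Because this toy problem is linear, a single iteration already converges; real GNSS/INS/camera factors are nonlinear and must be relinearized at each iteration.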

    Social Odometry: Imitation Based Odometry in Collective Robotics

    The improvement of odometry systems in collective robotics remains an important challenge for several applications. Social odometry is an online social dynamic that lets robots learn from one another: the robots share no movement constraints and have no access to centralized information. Each robot has an estimate of its own location and an associated confidence level that decreases with distance traveled. Social odometry guides a robot to its goal by imitating the estimated locations, confidence levels, and actual locations of its neighbors. This simple online social form of odometry is shown to produce a self-organized collective pattern which allows a group of robots both to increase the quality of individuals' estimates and to efficiently improve their collective performance.
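    One plausible reading of that update rule is sketched below: a robot blends its own estimate with those of its neighbors in proportion to the broadcast confidence levels, while its own confidence decays with distance traveled. The code is a hypothetical Python illustration, not the controller from the paper.

        import numpy as np

        def social_update(my_est, my_conf, neighbors):
            """Blend own location estimate with neighbors', weighted by confidence.

            my_est    : (2,) own estimated location
            my_conf   : confidence in (0, 1], decays with distance traveled
            neighbors : list of (estimate, confidence) pairs heard nearby
            """
            ests = [np.asarray(my_est)] + [np.asarray(e) for e, _ in neighbors]
            confs = np.array([my_conf] + [c for _, c in neighbors])
            w = confs / confs.sum()                  # confidence-weighted blend
            return sum(wi * e for wi, e in zip(w, ests)), confs.max()

        def decay(conf, dist, k=0.05):
            """Confidence falls off with distance traveled (decay rate assumed)."""
            return conf * np.exp(-k * dist)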

    Fast, Robust, Accurate, Multi-Body Motion Aware SLAM

    Simultaneous ego-localization and awareness of surrounding object motion are significant issues for the navigation capability of unmanned systems and for virtual-real interaction applications. Robust and accurate data association at the object and feature levels is one of the key factors in solving this problem. However, currently available solutions ignore the complementarity among different cues in front-end object association and the negative effects of poorly tracked features on back-end optimization, which makes them insufficiently robust in practical applications. Motivated by these observations, we treat the rigid environment as a unified whole and integrate high-level semantic information to assist state decoupling, ultimately enabling simultaneous multi-state estimation. A filter-based multi-cue fusion object tracker is proposed to establish more stable object-level data association. Combined with the objects' motion priors, a motion-aided feature tracking algorithm is proposed to improve feature-level data association performance. Furthermore, a novel state estimation factor graph is designed that integrates a specific feature observation uncertainty model and the intrinsic priors of the tracked objects, and is solved through sliding-window optimization. Our system is evaluated on the KITTI dataset and achieves performance comparable to state-of-the-art object pose estimation systems both quantitatively and qualitatively. We have also validated our system in a simulation environment and on a real-world dataset to confirm its potential application value in different practical scenarios.
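    The motion-aided feature tracking idea can be sketched as follows: before matching, the features observed on a tracked object in the last frame are warped by the object's motion prior, so the matcher only has to search a small window around the predicted pixels. The depth prior, names, and interfaces in this Python sketch are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def predict_feature_tracks(pts, obj_motion, K, depth=10.0):
            """Warp last frame's object features by the object's motion prior.

            pts        : (N, 2) pixel positions of features on the object
            obj_motion : (3, 4) rigid motion prior [R|t] in the camera frame
            K          : (3, 3) camera intrinsics
            depth      : coarse scene-depth prior in metres (assumed)
            Returns (N, 2) predicted pixels used to seed the feature matcher.
            """
            rays = np.linalg.inv(K) @ np.c_[pts, np.ones(len(pts))].T
            P = rays * depth                               # back-project to 3-D
            P = obj_motion[:, :3] @ P + obj_motion[:, 3:]  # apply the motion prior
            q = K @ P                                      # reproject into the image
            return (q[:2] / q[2]).T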

    Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks