
    Low-cost sensors based multi-sensor data fusion techniques for RPAS navigation and guidance

    In order for Remotely Piloted Aircraft Systems (RPAS) to coexist seamlessly with manned aircraft in non-segregated airspace, enhanced navigational capabilities are essential to meet the Required Navigation Performance (RNP) levels in all flight phases. A Multi-Sensor Data Fusion (MSDF) framework is adopted to improve the navigation capabilities of an integrated Navigation and Guidance System (NGS) designed for small-sized RPAS. The MSDF architecture includes low-cost, low-weight and low-volume navigation sensors suitable for various classes of RPAS. The selected sensors include Global Navigation Satellite Systems (GNSS), a Micro-Electro-Mechanical System (MEMS) based Inertial Measurement Unit (IMU) and Vision-Based Sensors (VBS). A loosely integrated navigation architecture is presented in which an Unscented Kalman Filter (UKF) combines the navigation sensor measurements. The presented UKF-based VBS-INS-GNSS-ADM (U-VIGA) architecture is an evolution of previous research on Extended Kalman Filter (EKF) based VBS-INS-GNSS (E-VIGA) systems. An Aircraft Dynamics Model (ADM) is adopted as a virtual sensor and acts as a knowledge-based module providing additional position and attitude information, which is pre-processed by an additional local UKF. The E-VIGA and U-VIGA performances are evaluated in a small RPAS integration scheme (i.e., the AEROSONDE RPAS platform) by exploring a representative cross-section of this RPAS operational flight envelope. The position and attitude accuracy comparison shows that the E-VIGA and U-VIGA systems fulfill the relevant RNP criteria, including CAT-II precision approach. A novel Human-Machine Interface (HMI) architecture is also presented, whose design takes into consideration the coordination tasks of multiple human operators. In addition, the interface scheme incorporates the human operator as an integral part of the control loop, providing a higher level of situational awareness.
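    The unscented transform at the heart of any UKF-based fusion can be sketched as follows. This is a generic illustration of the standard scaled sigma-point scheme, not the paper's U-VIGA implementation; all parameter defaults are the common textbook choices.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate the 2n+1 scaled sigma points and their weights.

    The points deterministically sample the state distribution so that
    their weighted mean and covariance reproduce (mean, cov) exactly;
    propagating them through a nonlinear model gives the UKF prediction.
    """
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    # Columns of the Cholesky factor L satisfy sum_i L_i L_i^T = (n+lam)*cov.
    L = np.linalg.cholesky((n + lam) * cov)
    pts = [mean]
    for i in range(n):
        pts.append(mean + L[:, i])
        pts.append(mean - L[:, i])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))  # mean weights
    wc = wm.copy()                                   # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return np.array(pts), wm, wc
```

    A quick sanity check of the construction: recombining the points with their weights recovers the original mean and covariance exactly, which is the property the UKF update relies on.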

    SkiMap: An Efficient Mapping Framework for Robot Navigation

    We present a novel mapping framework for robot navigation which features a multi-level querying system capable of rapidly obtaining representations as diverse as a 3D voxel grid, a 2.5D height map and a 2D occupancy grid. These are inherently embedded into a memory- and time-efficient core data structure organized as a Tree of SkipLists. Compared to the well-known Octree representation, our approach exhibits better time efficiency, thanks to its simple and highly parallelizable computational structure, and a similar memory footprint when mapping large workspaces. Uniquely within the realm of mapping for robot navigation, our framework supports real-time erosion and re-integration of measurements upon reception of optimized poses from the sensor tracker, so as to continuously improve the accuracy of the map. Comment: Accepted by the International Conference on Robotics and Automation (ICRA) 2017. This is the submitted version; the final published version may be slightly different.
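    The multi-level querying idea can be sketched with a toy sparse column map. A plain dict-of-columns stands in here for SkiMap's Tree of SkipLists (the real structure is what gives the time-efficiency claims); the point is that one core structure answers 3D, 2.5D and 2D queries. Class and method names are illustrative, not from the paper.

```python
class SparseVoxelMap:
    """Toy multi-level map: occupied voxels stored sparsely per (x, y)
    column, queried as a 3D grid, a 2.5D height map, or a 2D occupancy
    grid from the same underlying data."""

    def __init__(self, res=0.1):
        self.res = res
        self.grid = {}  # (ix, iy) -> {iz: hit count}

    def _key(self, x, y, z):
        r = self.res
        return (int(x // r), int(y // r)), int(z // r)

    def integrate(self, x, y, z):
        """Insert one measurement (a hit at a world-frame point)."""
        xy, iz = self._key(x, y, z)
        col = self.grid.setdefault(xy, {})
        col[iz] = col.get(iz, 0) + 1

    def occupied_3d(self, x, y, z):
        """3D voxel-grid query."""
        xy, iz = self._key(x, y, z)
        return iz in self.grid.get(xy, {})

    def height_25d(self, x, y):
        """2.5D height-map query: top of the tallest occupied voxel."""
        xy, _ = self._key(x, y, 0.0)
        col = self.grid.get(xy)
        return None if not col else (max(col) + 1) * self.res

    def occupancy_2d(self, x, y):
        """2D occupancy query: is anything in this vertical column?"""
        xy, _ = self._key(x, y, 0.0)
        return bool(self.grid.get(xy))
```

    Replacing the inner dicts with SkipLists ordered by index is what makes ordered traversal and concurrent insertion cheap in the real framework.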

    MISAT: Designing a Series of Powerful Small Satellites Based upon Micro Systems Technology

    MISAT is a research and development cluster which will create a small satellite platform based on Micro Systems Technology (MST) aiming at innovative space as well as terrestrial applications. MISAT is part of the Dutch MicroNed program which has established a microsystems infrastructure to fully exploit the MST knowledge chain involving public and industrial partners alike. The cluster covers MST-related developments for the spacecraft bus and payload, as well as the satellite architecture. Particular emphasis is given to distributed systems in space to fully exploit the potential of miniaturization for future mission concepts. Examples of current developments are wireless sensor and actuator networks with plug and play characteristics, autonomous digital Sun sensors, re-configurable radio front ends with minimum power consumption, or micro-machined electrostatic accelerometer and gradiometer system for scientific research in fundamental physics as well as geophysics. As a result of MISAT, a first nano-satellite will be launched in 2007 to demonstrate the next generation of Sun sensors, power subsystems and satellite architecture technology. Rapid access to in-orbit technology demonstration and verification will be provided by a series of small satellites. This will include a formation flying mission, which will increasingly rely on MISAT technology to improve functionality and reduce size, mass and power for advanced technology demonstration and novel scientific applications.

    Topomap: Topological Mapping and Navigation Based on Visual SLAM Maps

    Visual robot navigation within large-scale, semi-structured environments faces various challenges, such as computation-intensive path planning algorithms or insufficient knowledge about traversable spaces. Moreover, many state-of-the-art navigation approaches only operate locally instead of gaining a more conceptual understanding of the planning objective. This limits the complexity of tasks a robot can accomplish and makes it harder to deal with the uncertainties present in real-time robotics applications. In this work, we present Topomap, a framework which simplifies the navigation task by providing the robot with a map tailored for path planning. This novel approach transforms a sparse feature-based map from a visual Simultaneous Localization And Mapping (SLAM) system into a three-dimensional topological map. This is done in two steps. First, we extract occupancy information directly from the noisy sparse point cloud. Then, we create a set of convex free-space clusters, which become the vertices of the topological map. We show that this representation improves the efficiency of global planning, and we provide a complete derivation of our algorithm. Planning experiments on real-world datasets demonstrate that we achieve performance similar to RRT* with significantly lower computation times and storage requirements. Finally, we test our algorithm on a mobile robotic platform to prove its advantages. Comment: 8 pages.
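    The vertices-from-free-space-then-plan idea can be sketched in miniature. Here fixed fully-free patches of a 2D occupancy grid stand in for the paper's convex free-space clusters (a deliberate simplification), and a breadth-first search over the resulting graph plays the role of the cheap global planner; all names are illustrative.

```python
from collections import deque

def build_topomap(grid, block=2):
    """Partition a 2D occupancy grid (0 = free, 1 = occupied) into
    block x block patches. Fully free patches become graph vertices;
    4-adjacent free patches are joined by edges."""
    rows, cols = len(grid), len(grid[0])
    vertices = set()
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            patch = [grid[i][j]
                     for i in range(r, min(r + block, rows))
                     for j in range(c, min(c + block, cols))]
            if all(v == 0 for v in patch):
                vertices.add((r // block, c // block))
    edges = {v: [] for v in vertices}
    for (r, c) in vertices:
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in vertices:
                edges[(r, c)].append(nb)
    return edges

def plan(edges, start, goal):
    """BFS over the topological graph: fewest-vertex route or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

    Planning over a handful of cluster vertices instead of thousands of grid cells is what yields the claimed speed-up over sampling-based planners such as RRT*.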

    Towards Visual Ego-motion Learning in Robots

    Many model-based Visual Odometry (VO) algorithms have been proposed in the past decade, often restricted to a particular type of camera optics or to the underlying motion manifold observed. We envision robots that are able to learn and perform these tasks, in a minimally supervised setting, as they gain more experience. To this end, we propose a fully trainable solution to visual ego-motion estimation for varied camera optics. We propose a visual ego-motion learning architecture that maps observed optical flow vectors to an ego-motion density estimate via a Mixture Density Network (MDN). By modeling the architecture as a Conditional Variational Autoencoder (C-VAE), our model is able to provide introspective reasoning and prediction for ego-motion-induced scene flow. Additionally, our proposed model is especially amenable to bootstrapped ego-motion learning in robots, where the supervision in ego-motion estimation for a particular camera sensor can be obtained from standard navigation-based sensor fusion strategies (GPS/INS and wheel-odometry fusion). Through experiments, we show the utility of our proposed approach in enabling self-supervised learning for visual ego-motion estimation in autonomous robots. Comment: Conference paper; submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017, Vancouver, CA; 8 pages, 8 figures, 2 tables.
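    The MDN output layer that turns a network's raw head into a density estimate can be sketched as follows. This shows the standard 1-D Gaussian-mixture negative log-likelihood that MDNs are trained on, not the paper's specific architecture; the parameter layout is an assumption for illustration.

```python
import numpy as np

def logsumexp(x):
    """Numerically stable log(sum(exp(x)))."""
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

def mdn_nll(params, target):
    """Negative log-likelihood of a scalar target under a Gaussian
    mixture parameterized by a network head laid out as
    [mixture logits | component means | component log-sigmas]."""
    logits, means, log_sig = np.split(params, 3)
    log_pi = logits - logsumexp(logits)       # log mixture weights
    sig = np.exp(log_sig)                     # positive std devs
    # Per-component Gaussian log density of the target.
    log_comp = (-0.5 * np.log(2 * np.pi) - log_sig
                - 0.5 * ((target - means) / sig) ** 2)
    # Marginalize over components in log space.
    return -logsumexp(log_pi + log_comp)
```

    Minimizing this loss over (flow, ego-motion) pairs is what lets the network output a full density rather than a point estimate, which in turn supports the introspective uncertainty the abstract refers to.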

    An Underwater SLAM System using Sonar, Visual, Inertial, and Depth Sensor

    This paper presents a novel tightly-coupled keyframe-based Simultaneous Localization and Mapping (SLAM) system with loop-closing and relocalization capabilities targeted at the underwater domain. Our previous work, SVIn, augmented the state-of-the-art visual-inertial state estimation package OKVIS to accommodate acoustic data from sonar in a non-linear optimization-based framework. This paper addresses drift and loss of localization -- one of the main problems affecting other packages in the underwater domain -- by providing the following main contributions: a robust initialization method that refines scale using depth measurements, a fast preprocessing step to enhance image quality, and a real-time loop-closing and relocalization method using a bag of words (BoW). An additional contribution is the addition of depth measurements from a pressure sensor to the tightly-coupled optimization formulation. Experimental results on datasets collected with a custom-made underwater sensor suite and an autonomous underwater vehicle in challenging underwater environments with poor visibility demonstrate levels of accuracy and robustness not achieved before.
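    The bag-of-words loop-detection step can be sketched as follows: each keyframe is reduced to a histogram of quantized visual-word IDs, and a loop candidate is an earlier keyframe whose histogram is sufficiently similar to the current one. This is the generic BoW recipe, not the paper's exact pipeline; the threshold and function names are assumptions.

```python
import math
from collections import Counter

def bow_similarity(words_a, words_b):
    """Cosine similarity between two bag-of-visual-words histograms,
    each given as a list of visual-word IDs."""
    a, b = Counter(words_a), Counter(words_b)
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def detect_loop(db, query_words, threshold=0.8, skip_recent=1):
    """Return the index of the best-matching earlier keyframe in db
    (a list of word lists), or None. The most recent frames are
    skipped so a keyframe does not trivially match its neighbors."""
    best, best_sim = None, threshold
    for idx, words in enumerate(db[:len(db) - skip_recent]):
        sim = bow_similarity(words, query_words)
        if sim >= best_sim:
            best, best_sim = idx, sim
    return best
```

    A detected loop feeds a relative-pose constraint back into the tightly-coupled optimization, which is what corrects the accumulated drift the abstract describes.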