18 research outputs found

    GPS-aided Visual Wheel Odometry

    Full text link
This paper introduces a novel GPS-aided visual-wheel odometry (GPS-VWO) for ground robots. The state estimation algorithm tightly fuses visual, wheel-encoder, and GPS measurements within a Multi-State Constraint Kalman Filter (MSCKF) framework. To avoid accumulating calibration errors over time, the proposed algorithm estimates the extrinsic rotation between the GPS global coordinate frame and the VWO reference frame online, as part of the estimation process. The convergence of this extrinsic parameter is guaranteed by an observability analysis and verified using real-world visual and wheel-encoder measurements as well as simulated GPS measurements. Moreover, a novel theoretical finding is presented: the variance of an unobservable state can converge to zero for a specific class of Kalman filter systems. We evaluate the proposed system extensively in large-scale urban driving scenarios. The results demonstrate that the fusion of GPS and VWO achieves better accuracy than GPS alone. A comparison between runs with and without extrinsic parameter calibration shows a significant improvement in localization accuracy thanks to the online calibration. Comment: Accepted by IEEE ITSC 202
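    The role of the online-calibrated extrinsic rotation can be illustrated with a minimal sketch: a GPS fix expressed in the global frame is mapped into the VWO reference frame before forming a filter residual. The function and frame names below are hypothetical illustrations, not the paper's actual formulation.

    ```python
    import numpy as np

    def gps_residual(p_gps_global, p_vwo, R_global_to_vwo):
        """Residual between a GPS position fix (global frame) and the VWO
        position estimate, after mapping the fix through the extrinsic
        rotation between the two frames (illustrative helper)."""
        return p_vwo - R_global_to_vwo @ p_gps_global

    def rot_z(theta):
        """Rotation about the z-axis by angle theta (radians)."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    # If the online-estimated yaw offset is exact, the residual vanishes.
    R = rot_z(np.pi / 6)
    p_global = np.array([10.0, 5.0, 0.0])
    p_vwo = R @ p_global
    print(gps_residual(p_global, p_vwo, R))  # ~[0, 0, 0]
    ```

    A miscalibrated rotation would leave a systematic residual that grows with distance from the origin, which is why estimating it online, rather than fixing it once, avoids accumulating error.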

    PIEKF-VIWO: Visual-Inertial-Wheel Odometry using Partial Invariant Extended Kalman Filter

    Full text link
The Invariant Extended Kalman Filter (IEKF) has been successfully applied to Visual-Inertial Odometry (VIO) as an advanced variant of the Kalman filter, showing great potential in sensor fusion. In this paper, we propose the partial IEKF (PIEKF), which incorporates only the rotation-velocity state into the Lie group structure, and apply it to Visual-Inertial-Wheel Odometry (VIWO) to improve positioning accuracy and consistency. Specifically, we derive a rotation-velocity measurement model that combines wheel measurements with kinematic constraints. The model circumvents the wheel odometer's 3D integration and covariance propagation, which is essential for filter consistency. A plane constraint is also introduced to enhance position accuracy, and a dynamic outlier detection method is adopted that leverages the velocity state output. Through simulation and real-world tests, we validate the effectiveness of our approach, which outperforms the standard Multi-State Constraint Kalman Filter (MSCKF) based VIWO in consistency and accuracy.
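    The combination of wheel measurements with kinematic constraints can be sketched with a textbook differential-drive model: forward speed comes from the mean wheel rate, while the lateral and vertical velocity components are pinned to zero by the nonholonomic constraint. This is a generic illustration, not PIEKF's exact measurement model.

    ```python
    import numpy as np

    def wheel_velocity_measurement(omega_left, omega_right, wheel_radius):
        """Body-frame velocity pseudo-measurement from differential-drive
        wheel encoder rates (rad/s): forward speed from the mean wheel
        rate, with lateral and vertical components constrained to zero
        (nonholonomic kinematic constraint)."""
        v_forward = wheel_radius * (omega_left + omega_right) / 2.0
        return np.array([v_forward, 0.0, 0.0])

    # 0.3 m wheels spinning at 10 and 12 rad/s -> ~3.3 m/s forward.
    print(wheel_velocity_measurement(10.0, 12.0, 0.3))  # ~[3.3, 0, 0]
    ```

    Treating this as a direct velocity measurement is what lets the filter avoid integrating wheel odometry into 3D poses and propagating the associated covariance.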

    Precise and Robust Visual SLAM with Inertial Sensors and Deep Learning.

    Get PDF
    Endowing robots with the sense of perception stands out as the most important component for achieving fully autonomous machines. Once machines are able to perceive the world, they can interact with it. In this regard, Simultaneous Localization and Mapping (SLAM) comprises the techniques that allow robots to estimate their position and reconstruct a map of their environment at the same time, using only their on-board sensors. SLAM is the key element of machine perception, and it is already present in technologies and applications such as autonomous driving, virtual and augmented reality, and service robots. Increasing the robustness of SLAM would expand its use and applications, making machines safer and requiring less human intervention. In this thesis we have combined inertial (IMU) and visual sensors to increase the robustness of SLAM against fast motion, brief occlusions, and low-texture environments. We first proposed two fast techniques for inertial sensor initialization with low scale error, which allow the IMU to be used as soon as 2 seconds after launching the system. One of these initializations has been integrated into a new visual-inertial SLAM system, coined ORB-SLAM3, which represents the main contribution of this thesis. It is the most complete open-source visual-inertial SLAM system to date, working with monocular or stereo cameras, pinhole or fisheye lenses, and with multi-map capabilities. ORB-SLAM3 is based on a Maximum a Posteriori formulation, both in the initialization and in the visual-inertial refinement and bundle adjustment. It also exploits short-, mid-, and long-term data association.
All of this makes ORB-SLAM3 the most accurate visual-inertial SLAM system, as demonstrated by our results on public benchmarks. In addition, we have explored the application of deep learning techniques to improve the robustness of SLAM. In this regard, we first proposed DynaSLAM II, a stereo SLAM system for dynamic environments. Dynamic objects are segmented by a neural network, and their points and measurements are efficiently included in the bundle adjustment optimization. This makes it possible to estimate and track moving objects while also improving the estimate of the camera trajectory. Second, we have developed a direct monocular SLAM system based on depth predictions from neural networks. We jointly optimize both the depth-prediction residuals and the multi-view photometric residuals, which yields a monocular system capable of estimating scale. It does not suffer from scale drift, and it is more robust and several times more accurate than classical monocular systems.
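    The idea of recovering metric scale by aligning up-to-scale monocular depths with network predictions can be sketched with a closed-form least-squares fit. This is a simplified illustration of scale recovery, not the thesis' joint optimization of depth and photometric residuals.

    ```python
    import numpy as np

    def estimate_scale(slam_depths, net_depths):
        """Closed-form least-squares scale factor s minimizing
        ||s * slam_depths - net_depths||^2, aligning up-to-scale
        monocular SLAM depths with metric network depth predictions
        (illustrative, not the thesis' method)."""
        return slam_depths @ net_depths / (slam_depths @ slam_depths)

    # Synthetic check: network depths are exactly 2.5x the SLAM depths.
    slam = np.array([1.0, 2.0, 4.0])
    net = 2.5 * slam
    print(estimate_scale(slam, net))  # 2.5
    ```

    In a real system the residuals would be weighted by their uncertainties and optimized jointly with the camera poses, which is what removes scale drift over long trajectories.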

    NeBula: Team CoSTAR's robotic autonomy solution that won phase II of DARPA Subterranean Challenge

    Get PDF
    This paper presents and discusses the algorithms, hardware, and software architecture developed by Team CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tube) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims to enable resilient and modular autonomy by performing reasoning and decision making in the belief space (the space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments, along with the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition. The work is partially supported by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004), and the Defense Advanced Research Projects Agency (DARPA).

    Visual-Inertial State Estimation With Information Deficiency

    Get PDF
    State estimation is an essential part of intelligent navigation and mapping systems where tracking the location of a smartphone, car, robot, or human-worn device is required. For autonomous systems such as micro aerial vehicles and self-driving cars, it is a prerequisite for control and motion planning. For AR/VR applications, it is the first step in image rendering. Visual-inertial odometry (VIO) is the de facto standard algorithm for embedded platforms because it lends itself to lightweight sensors and processors and has matured in both research and industrial development. Various approaches have been proposed to achieve accurate real-time tracking, and numerous open-source software packages and datasets are available. However, errors and outliers are common due to the complexity of visual measurement processes and environmental changes, and in practice, estimation drift is inevitable. In this thesis, we introduce the concept of information deficiency in state estimation and show how to use it to develop and improve VIO systems. We look into the information deficiencies in visual-inertial state estimation, which are often present and ignored, causing system failures and drift. In particular, we investigate three critical cases of information deficiency in visual-inertial odometry: low-texture environments with limited computation, monocular visual odometry, and inertial odometry. We consider these systems under three specific application settings: a lightweight quadrotor platform in autonomous flight, driving scenarios, and an AR/VR headset for pedestrians. We address the challenges in each application setting and explore how the tight fusion of deep learning and model-based VIO can improve state-of-the-art system performance and compensate for the lack of information in real time. We identify deep learning as a key technology for tackling the information deficiencies in state estimation.
We argue that developing hybrid frameworks that leverage its advantages and enable supervision for performance guarantees provides the most accurate and robust solution to state estimation.
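    A standard way to handle the outliers mentioned above is chi-square gating of each measurement's residual against its innovation covariance. The sketch below shows the generic test as used in many VIO pipelines; it is not the specific method developed in this thesis.

    ```python
    import numpy as np

    def gate_measurement(residual, S, threshold=7.815):
        """Chi-square gating of a 3-DoF measurement residual against its
        innovation covariance S. The default threshold 7.815 is the 95%
        chi-square quantile for 3 degrees of freedom; residuals whose
        squared Mahalanobis distance exceeds it are rejected as outliers."""
        d2 = residual @ np.linalg.solve(S, residual)
        return bool(d2 <= threshold)

    S = np.eye(3) * 0.01  # innovation covariance (0.1 std per axis)
    print(gate_measurement(np.array([0.05, 0.0, 0.0]), S))  # True  (inlier)
    print(gate_measurement(np.array([1.00, 0.0, 0.0]), S))  # False (outlier)
    ```

    Gating like this is what keeps a single bad feature track or spurious fix from corrupting the filter state, at the cost of occasionally discarding informative measurements.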

    Advanced LIDAR-based techniques for autonomous navigation of spaceborne and airborne platforms

    Get PDF
    The main goal of this PhD thesis is the development and performance assessment of innovative techniques for the autonomous navigation of aerospace platforms by exploiting data acquired by electro-optical sensors. Specifically, the attention is focused on active LIDAR systems, since they globally provide a higher degree of autonomy than passive sensors. Two different areas of research are addressed, namely the autonomous relative navigation of multi-satellite systems and the autonomous navigation of Unmanned Aerial Vehicles. The overall aim is to provide solutions that improve estimation accuracy, computational load, and overall robustness and reliability with respect to the techniques available in the literature. In the space field, missions like on-orbit servicing and active debris removal require a chaser satellite to perform autonomous orbital maneuvers in close proximity to an uncooperative space target. In this context, a complete pose determination architecture is proposed, which relies exclusively on three-dimensional measurements (point clouds) provided by a LIDAR system as well as on knowledge of the target geometry. Customized solutions are envisaged at each step of the pose determination process (acquisition, tracking, refinement) to ensure an adequate accuracy level while simultaneously limiting the computational load with respect to other approaches available in the literature. Specific strategies are also foreseen to ensure process robustness by autonomously detecting algorithm failures. Performance analysis is carried out by means of a simulation environment conceived to realistically reproduce LIDAR operation, target geometry, and multi-satellite relative dynamics in close proximity. An innovative method to design trajectories for target monitoring, reliable for on-orbit servicing and active debris removal applications since they satisfy both safety and observation requirements, is also presented.
The problem of localization and mapping for Unmanned Aerial Vehicles is also tackled, since it is of utmost importance for providing safe autonomous navigation capabilities in mission scenarios that foresee flights in complex environments, such as GPS-denied or otherwise challenging settings. Specifically, original solutions are proposed for the localization and mapping steps based on the integration of LIDAR and inertial data. In this case too, particular attention is focused on computational load and robustness issues. Algorithm performance is evaluated through off-line simulations carried out on the basis of experimental data gathered with a purposely conceived setup in an indoor test scenario.
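    A core building block of LIDAR-based pose tracking from point clouds is the closed-form best-fit rigid transform between corresponded point sets (the Kabsch/SVD alignment used inside ICP-style pipelines). The sketch below shows this generic step, not the thesis' specific acquisition-tracking-refinement architecture.

    ```python
    import numpy as np

    def kabsch(src, dst):
        """Best-fit rigid transform (R, t) mapping point set src onto dst,
        assuming known one-to-one correspondences, via SVD of the
        cross-covariance (Kabsch algorithm)."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_d - R @ mu_s
        return R, t

    # Synthetic check: recover a known 90-degree yaw and translation.
    rng = np.random.default_rng(0)
    src = rng.normal(size=(50, 3))
    R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    t_true = np.array([1.0, 2.0, 3.0])
    dst = src @ R_true.T + t_true
    R, t = kabsch(src, dst)
    print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
    ```

    In a full tracking loop, correspondences are not known in advance: they are re-estimated (e.g., by nearest-neighbor search) and this alignment step is iterated, which is where most of the computational load arises.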

    Mobile Robots Navigation

    Get PDF
    Mobile robot navigation includes several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research of several authors around the world. Research cases are documented in 32 chapters, organized into the 7 categories described next.