94 research outputs found

    Tightly Coupled 3D Lidar Inertial Odometry and Mapping

    Ego-motion estimation is a fundamental requirement for most mobile robotic applications. Through sensor fusion, we can compensate for the deficiencies of stand-alone sensors and provide more reliable estimates. This paper introduces a tightly coupled lidar-IMU fusion method. By jointly minimizing the cost derived from lidar and IMU measurements, the lidar-IMU odometry (LIO) performs well with acceptable drift over long-term operation, even in challenging cases where the lidar measurements are degraded. In addition, to obtain more reliable estimates of the lidar poses, a rotation-constrained refinement algorithm (LIO-mapping) is proposed to further align the lidar poses with the global map. The experimental results demonstrate that the proposed method can estimate the poses of the sensor pair at the IMU update rate with high precision, even under fast motion conditions or with insufficient features. (Comment: accepted by ICRA 2019)
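
    The tight coupling amounts to stacking both sensors' residuals into one nonlinear least-squares problem. Below is a minimal sketch of that idea with a toy translation-only state and synthetic data; the residual forms and weights are illustrative assumptions, not the paper's actual formulation (which also estimates rotation, velocity, and IMU biases).

```python
# Minimal sketch of tightly coupled lidar-IMU optimization: one joint
# nonlinear least-squares problem whose residual vector stacks lidar
# point-to-plane terms and an IMU-derived motion prior. Toy translation-only
# example with synthetic data; NOT the paper's actual formulation.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t_true = np.array([0.3, -0.1, 0.05])          # ground-truth translation

# Lidar: points that should lie on known planes (n . (p + t) = d)
normals = rng.normal(size=(50, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
d = rng.uniform(1.0, 5.0, size=50)
points = normals * d[:, None] - t_true + rng.normal(0, 0.01, (50, 3))

# IMU: a (noisy) relative-motion prior on the same translation
t_imu = t_true + rng.normal(0, 0.05, 3)
w_lidar, w_imu = 1.0 / 0.01, 1.0 / 0.05       # inverse noise std devs

def residuals(t):
    r_lidar = w_lidar * (np.sum(normals * (points + t), axis=1) - d)
    r_imu = w_imu * (t - t_imu)               # couples IMU into the same cost
    return np.concatenate([r_lidar, r_imu])

sol = least_squares(residuals, x0=np.zeros(3))
print("estimated translation:", sol.x)        # ~ [0.3, -0.1, 0.05]
```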

    LIO-PPF: Fast LiDAR-Inertial Odometry via Incremental Plane Pre-Fitting and Skeleton Tracking

    As a crucial component of intelligent mobile robots, LiDAR-inertial odometry (LIO) provides the basic capability of state estimation by tracking LiDAR scans. High-accuracy tracking generally involves a kNN search, used when minimizing the point-to-plane distance. The cost of this, however, is maintaining a large local map and performing a kNN plane fit for each point. In this work, we reduce both the time and space complexity of LIO by eliminating these unnecessary costs. Technically, we design a plane pre-fitting (PPF) pipeline to track the basic skeleton of the 3D scene. In PPF, planes are not fitted individually for each scan, let alone for each point, but are updated incrementally as the scene 'flows'. Unlike kNN, PPF is more robust to noisy and non-strict planes thanks to our iterative Principal Component Analysis (iPCA) refinement. Moreover, a simple yet effective sandwich layer is introduced to eliminate false point-to-plane matches. Our method was extensively tested on 22 sequences across 5 open datasets and evaluated in 3 existing state-of-the-art LIO systems. LIO-PPF consumes only 36% of the original local map size while achieving up to 4x faster residual computation and a 1.92x overall FPS improvement, maintaining the same level of accuracy. We fully open-source our implementation at https://github.com/xingyuuchen/LIO-PPF. (Comment: accepted by IROS 2023)
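
    The iPCA refinement named above can be sketched as a PCA plane fit that iteratively discards points far from the current plane estimate. The following is a simplified stand-in for illustration, not the authors' implementation; the threshold and iteration count are arbitrary choices.

```python
# Toy sketch of PCA plane fitting with iterative outlier rejection, in the
# spirit of the iPCA refinement described above (my own simplification,
# not the authors' implementation).
import numpy as np

def fit_plane_ipca(points, iters=5, thresh=0.05):
    """Fit n . x = d to points, discarding far points each iteration."""
    mask = np.ones(len(points), dtype=bool)
    for _ in range(iters):
        pts = points[mask]
        centroid = pts.mean(axis=0)
        # Normal = direction of least variance (last right singular vector)
        _, _, vt = np.linalg.svd(pts - centroid)
        normal = vt[-1]
        dist = np.abs((points - centroid) @ normal)
        new_mask = dist < thresh              # keep only on-plane points
        if new_mask.sum() < 3 or np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return normal, centroid @ normal          # (n, d) with n . x = d

# Synthetic noisy plane z = 0 with a few outliers
rng = np.random.default_rng(1)
pts = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.01, 200)]
pts[:10, 2] += 1.0                            # outliers off the plane
n, d = fit_plane_ipca(pts)
print("normal ~ [0,0,1] up to sign:", np.round(n, 3), " d ~ 0:", round(d, 3))
```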

    Advancements in Radar Odometry

    Radar odometry estimation has emerged as a critical technique in the field of autonomous navigation, providing robust and reliable motion estimation under various environmental conditions. Despite its potential, the complex nature of radar signals and the inherent challenges associated with processing them have limited the widespread adoption of this technology. This paper addresses these challenges by proposing novel improvements to an existing method for radar odometry estimation, designed to enhance accuracy and reliability in diverse scenarios. Our pipeline consists of filtering, motion compensation, oriented-surface-point computation, smoothing, one-to-many radar scan registration, and pose refinement. The developed method enforces a local understanding of the scene by adding information through smoothing techniques and by aligning consecutive scans as a refinement step after the one-to-many registration. We present an in-depth investigation of the contribution of each improvement to localization accuracy, and we benchmark our system on the sequences of the main datasets for radar understanding, i.e., the Oxford Radar RobotCar, MulRan, and Boreas datasets. The proposed pipeline achieves superior results in all scenarios considered, even under harsh environmental constraints.
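
    The one-to-many registration step can be pictured as accumulating nearest-neighbor residuals against several past keyframes in a single optimization. Here is a toy 2D point-to-point version; the actual pipeline operates on oriented surface points with motion compensation, so treat this purely as a sketch of the one-to-many structure.

```python
# Toy sketch of one-to-many scan registration: residuals are accumulated
# against several past keyframes at once instead of a single previous scan.
# 2D point-to-point version; illustrative only.
import numpy as np
from scipy.spatial import cKDTree
from scipy.optimize import least_squares

def register_one_to_many(scan, keyframes):
    """Estimate 2D pose (x, y, yaw) aligning `scan` to all `keyframes`."""
    trees = [cKDTree(kf) for kf in keyframes]

    def residuals(pose):
        x, y, yaw = pose
        R = np.array([[np.cos(yaw), -np.sin(yaw)],
                      [np.sin(yaw),  np.cos(yaw)]])
        moved = scan @ R.T + [x, y]
        res = []
        for kf, tree in zip(keyframes, trees):
            _, idx = tree.query(moved)        # nearest neighbor per point
            res.append((moved - kf[idx]).ravel())
        return np.concatenate(res)

    return least_squares(residuals, x0=np.zeros(3)).x

rng = np.random.default_rng(2)
base = rng.uniform(-10, 10, (300, 2))
yaw = 0.1
R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
scan = (base - [0.5, 0.2]) @ R                # scan as seen from a moved pose
keyframes = [base, base + rng.normal(0, 0.01, base.shape)]
print(register_one_to_many(scan, keyframes)) # ~ [0.5, 0.2, 0.1]
```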

    Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis

    The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques since it can support more reliable and robust localization, planning, and control, meeting key criteria for autonomous driving. In this study the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative quality-analysis methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based, modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS, and that an online localization solution with four to five centimetre accuracy can be achieved based on this pre-generated map, using online Lidar scan matching tightly fused with an inertial system.

    Four years of multi-modal odometry and mapping on the rail vehicles

    Precise, seamless, and efficient train localization, together with long-term railway environment monitoring, is essential for reliability, availability, maintainability, and safety (RAMS) engineering in railroad systems. Simultaneous localization and mapping (SLAM) is at the core of solving both problems concurrently. To this end, we propose a high-performance and versatile multi-modal framework for the odometry and mapping task on various rail vehicles. Our system is built atop an inertial-centric state estimator that tightly couples light detection and ranging (LiDAR), visual, and optionally satellite-navigation and map-based localization information, while retaining the convenience and extendibility of loosely coupled methods. The inertial sensors (IMU and wheel encoder) are treated as the primary sensors, and the observations from the subsystems are used to constrain the accelerometer and gyroscope biases. Compared to point-only LiDAR-inertial methods, our approach leverages more geometric information by introducing both the track plane and electric power pillars into state estimation. The visual-inertial subsystem likewise exploits environmental structure by employing both lines and points. In addition, the method handles sensor failures through automatic reconfiguration that bypasses the failed modules. Our proposed method has been extensively tested in railway environments over four years, covering general-speed, high-speed, and metro lines, with both passenger and freight traffic investigated. Further, we aim to share openly the experience, problems, and successes of our group with the robotics community, so that those who work in such environments can avoid these errors. To this end, we open-source some of the datasets to benefit the research community.
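
    The two geometric constraints mentioned (track plane and power pillars) correspond to point-to-plane and point-to-line residuals. A minimal sketch of both residual types follows; it is a hypothetical simplification, detached from the IMU/wheel-encoder factors the actual estimator couples them with.

```python
# Toy sketch of the two geometric residual types described above:
# point-to-plane (track plane) and point-to-line (power pillar).
# Purely illustrative; not the paper's estimator.
import numpy as np

def point_to_plane_residual(p, n, d):
    """Signed distance of point p to the plane n . x = d (n unit-length)."""
    return np.dot(n, p) - d

def point_to_line_residual(p, a, u):
    """Distance vector from point p to the line a + s*u (u unit-length)."""
    v = p - a
    return v - np.dot(v, u) * u   # component of v orthogonal to the line

# Track plane z = 0 and a vertical pillar line through (2, 0)
print(point_to_plane_residual(np.array([1.0, 2.0, 0.03]),
                              np.array([0.0, 0.0, 1.0]), 0.0))
print(point_to_line_residual(np.array([2.1, 0.0, 5.0]),
                             np.array([2.0, 0.0, 0.0]),
                             np.array([0.0, 0.0, 1.0])))
```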

    Optical Flow and Expansion Based Deep Temporal Up-Sampling of LIDAR Point Clouds

    This paper proposes a framework that enables the online generation of virtual point clouds relying only on previous camera and point cloud measurements and the current camera measurement. Continuously running this pipeline to generate virtual LIDAR measurements makes the temporal up-sampling of point clouds possible. The only requirement of the system is a camera with a higher frame rate than the LIDAR mounted on the same vehicle, which is usually the case. The pipeline first computes optical flow estimates from the available camera frames. Next, optical expansion is used to upgrade the optical flow to 3D scene flow. Following that, a ground plane is fitted to the previous LIDAR point cloud. Finally, the estimated scene flow is applied to the previously measured object points to generate the new point cloud. The framework's efficiency is demonstrated by state-of-the-art performance on the KITTI dataset.
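
    The final synthesis step can be sketched as follows: project the previous LIDAR points into the image, shift them by the optical flow, rescale depth by the optical expansion, and back-project. This assumes a simple pinhole model with no distortion, and the function and variable names are illustrative, not taken from the paper's code.

```python
# Sketch of moving previous LIDAR points to a virtual position at the camera
# timestamp, given per-point optical flow and optical expansion. Pinhole
# model without distortion; names are illustrative assumptions.
import numpy as np

def upsample_points(pts_prev, K, flow, expansion):
    """pts_prev: (N,3) in camera frame; flow: (N,2) px; expansion: (N,)."""
    # Project previous 3D points into the previous image
    uv = pts_prev @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    # Optical expansion ~ ratio of apparent scales, so depth scales inversely
    z_new = pts_prev[:, 2] / expansion
    # Shift pixels by optical flow and back-project at the new depth
    uv_new = uv + flow
    rays = np.linalg.solve(K, np.c_[uv_new, np.ones(len(uv_new))].T).T
    return rays * z_new[:, None]

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
pts = np.array([[1.0, 0.5, 10.0], [-2.0, 0.1, 20.0]])
flow = np.array([[5.0, 0.0], [-3.0, 1.0]])
expansion = np.array([1.02, 0.99])   # >1 means the point approaches the camera
print(upsample_points(pts, K, flow, expansion))
```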

    Improving Scan Registration Methods Using Secondary Point Data Channels

    Autonomous vehicle technology has advanced significantly in recent years, and these vehicles are poised to make major strides into everyday use. Autonomous vehicles have already entered military and commercial use, performing the dirty, dull, and dangerous tasks that humans do not want to, or cannot, perform. Any complex autonomy task for a mobile robot requires a method to map the environment and to localize within it. When the mapping and localization stages are performed simultaneously in an unknown environment, this is known as Simultaneous Localization and Mapping (SLAM). One key technology used to solve the SLAM problem involves matching sensor data in the form of point clouds. Scan registration attempts to find the transformation between two point clouds, or scans, which results in the optimal overlap of the scan information. One of the major drawbacks of existing approaches is their over-reliance on geometric features and a well-structured environment. When insufficient geometric features are present to constrain the optimization, this is known as geometric degeneracy, and it can be a common problem in typical environments. The reliability of these methods is vitally important for improving the robustness of autonomous vehicles operating in uncontrolled environments.

    This thesis presents methods that improve upon existing scan registration methods by incorporating secondary information into the registration process. Three methods are presented: Ground Segmented Iterative Closest Point (GSICP), Color Clustered Normal Distribution Transform (CCNDT), and Multi Channel Generalized Iterative Closest Point (MCGICP). Each method provides a unique addition to the scan registration literature and has its own set of benefits, limitations, and uses.

    GSICP segments the ground plane from a 3D scan, then compresses the scan into a 2D plane. The points are classified as either ground-adjacent or non-ground-adjacent. Using this classification, a class-constrained ICP registration is performed in which only points of the same class can be corresponded; in effect, the method creates simulated edges for the registration to align (see the sketch after this abstract). GSICP improves accuracy and robustness in sparse unstructured environments such as forests or rolling hills. When compared to existing methods on the Ford Vision and Lidar Dataset, GSICP shows a tighter error variance as well as a significant improvement in overall error. The method is also highly computationally efficient, running registrations on a low-power system twice as fast as GICP, the next most accurate method. However, it requires the input scans to have specific characteristics, such as a defined ground plane and spatially separated objects. GSICP is ideally suited to sparse outdoor environments and was used with great success by the University of Waterloo's entry in the NASA Sample Return Robot Challenge.

    CCNDT is a more adaptable method that is widely applicable to many common environments. CCNDT uses point cloud data that has been colorized, either from an RGBD camera or from a joint LIDAR and camera system. The method begins by clustering the points in the scan based on color and then uses the clusters to generate colored Gaussian distributions. These distributions are used to calculate a color-weighted distribution-to-distribution cost between all pairs of distributions. Exhaustively matching all pairs of distributions creates a smooth, continuous cost function that can be optimized efficiently. Experimental validation on the Ford and Freiburg datasets has shown that CCNDT performs 3D scan registrations more efficiently, three times faster on average than existing methods, and accurately registers any scans with sufficient color variation to enable color clustering.

    MCGICP is a generalized approach capable of performing robustly in almost any situation. MCGICP uses secondary point information, such as color or intensity, to augment the GICP method. It calculates a spatial covariance at each point such that the covariance normal to the local surface is set to a small value, indicating high confidence in matching surfaces, while the covariance tangent to the surface is determined from the secondary information distribution. Representing the covariance in both the tangential and normal directions yields non-trivial cost terms in all directions. Additionally, the correspondence of points between scans is modified to use a higher-dimensional search space that incorporates the secondary descriptor channels as well as the covariance information at each point, allowing more robust point correspondences to be determined. The registration process can therefore converge more quickly due to the incorporation of additional information. MCGICP performs highly accurate scan registrations in almost any environment. The method is validated on a diverse set of data, including the Ford and Freiburg datasets as well as a challenging degenerate dataset, and is shown to improve accuracy and reliability on all three. MCGICP is robust to most common degeneracies, as it incorporates multiple channels of information in an integrated approach that remains reliable even in the most challenging cases.

    The results presented in this work demonstrate clear improvements over existing scan registration methods. By incorporating secondary information into the scan registration problem, more robust and accurate solutions can be obtained. Each method presented has its own unique benefits, which are valuable for a specific set of applications and environments.
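
    The class-constrained correspondence at the heart of GSICP can be sketched compactly: nearest-neighbor search is restricted to points sharing a class label. In this toy version the labels are given directly, whereas the thesis derives them from ground segmentation after compressing the scan to 2D.

```python
# Toy sketch of class-constrained correspondence in the spirit of GSICP:
# each point carries a class label, and nearest neighbors are searched only
# within the same class. My simplification, not the thesis's implementation.
import numpy as np
from scipy.spatial import cKDTree

def class_constrained_matches(src, src_lbl, dst, dst_lbl):
    """Return (src_idx, dst_idx) pairs matched within each class only."""
    pairs = []
    for c in np.unique(src_lbl):
        s_idx = np.flatnonzero(src_lbl == c)
        d_idx = np.flatnonzero(dst_lbl == c)
        if len(s_idx) == 0 or len(d_idx) == 0:
            continue                          # class absent from one scan
        tree = cKDTree(dst[d_idx])
        _, nn = tree.query(src[s_idx])
        pairs.extend(zip(s_idx, d_idx[nn]))
    return np.array(pairs)

rng = np.random.default_rng(3)
src = rng.uniform(0, 10, (100, 2))
dst = src + 0.05                              # slightly shifted copy
lbl = (src[:, 0] > 5).astype(int)             # stand-in for the class labels
print(class_constrained_matches(src, lbl, dst, lbl)[:5])
```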

    Application of mixed and virtual reality in geoscience and engineering geology

    Visual learning and efficient communication in mining and geotechnical practice are crucial, yet often challenging. With the advancement of Virtual Reality (VR) and Mixed Reality (MR), a new era of geovisualization has emerged. This thesis demonstrates the capabilities of a virtual continuum approach across varying scales of geoscience applications. An application that aids small-scale geological investigation was constructed using a 3D holographic drill core model. A virtual core logger was also developed to assist logging in the field and subsequent communication by visualizing the core in a complementary holographic environment. Enriched logging practices enhance interpretation, with potential economic and safety benefits for mining and geotechnical infrastructure projects. A mine-scale model of the LKAB mine in Sweden was developed to improve communication about mining-induced subsidence between geologists, engineers, and the public. GPS, InSAR, and micro-seismicity data were hosted in a single database and geovisualized through Virtual and Mixed Reality. The wide array of applications presented in this thesis illustrates the potential of Mixed and Virtual Reality and the improvements gained over current conventional geological and geotechnical data collection, interpretation, and communication at all scales, from the micro-scale (e.g., thin sections) to the macro-scale (e.g., an entire mine).

    Development of a power line detection and localization algorithm for a high-voltage power line inspection drone

    To improve the efficiency of maintenance on its network of high-voltage power lines (HVPL), Hydro-Québec is establishing new HVPL inspection methods. For several years, Hydro-Québec has been developing the LineDrone, a pilot-operated drone that can land on and roll along conductors in order to inspect them by direct contact. Recently, Hydro-Québec partnered with the company DroneVolt to industrialize and commercialize the LineDrone. With the goal of automating the operation of the LineDrone, a collaboration between the Université de Sherbrooke, Hydro-Québec, and DroneVolt was established. One of the many challenges of this automation project is localizing the drone in the environment of HVPL conductors. To that end, a power line detection and localization algorithm for an HVPL inspection drone was developed. This algorithm determines the drone's position in its environment near HVPL cables, enabling it to navigate and land on power conductors autonomously. This thesis describes the operation of the different parts of the power line detection and localization algorithm. In addition, flight-test results for the algorithm are presented; these tests demonstrate that the algorithm operates correctly in real-world usage scenarios. Finally, results on accuracy and computation-time performance are presented.
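
    A common building block for this kind of cable detection is fitting 3D lines to sensor points, for example with RANSAC. The sketch below shows that generic technique only; it is not necessarily the thesis's algorithm, and real conductors sag along catenary curves rather than straight lines.

```python
# Toy RANSAC fit of a 3D line to a point cloud, a generic building block for
# cable detection. Illustrative only; not necessarily the thesis's approach.
import numpy as np

def ransac_line(points, iters=200, thresh=0.05, seed=4):
    """Return (anchor, unit direction) of the best-supported 3D line."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        a, b = points[rng.choice(len(points), 2, replace=False)]
        u = b - a
        if np.linalg.norm(u) < 1e-9:
            continue                          # degenerate sample
        u = u / np.linalg.norm(u)
        v = points - a
        dist = np.linalg.norm(v - np.outer(v @ u, u), axis=1)
        n_in = int((dist < thresh).sum())     # inlier support of this line
        if n_in > best_inliers:
            best, best_inliers = (a, u), n_in
    return best

# Synthetic cable along x with noise, plus background clutter
rng = np.random.default_rng(5)
cable = np.c_[rng.uniform(0, 30, 200), np.full(200, 1.0), np.full(200, 8.0)]
cable += rng.normal(0, 0.02, cable.shape)
clutter = rng.uniform(0, 30, (100, 3))
a, u = ransac_line(np.vstack([cable, clutter]))
print("direction ~ [1,0,0] up to sign:", np.round(u, 2))
```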