401 research outputs found

    A combined three-dimensional digitisation and subsurface defect detection data using active infrared thermography

    No full text
    In recent years, Non-Destructive Testing (NDT) systems have been upgraded with three-dimensional information, since combining three-dimensional and thermal information allows a more meaningful analysis. In the literature, the data for NDT and for three-dimensional (3D) reconstruction are commonly acquired from independent systems. However, using two such systems introduces errors during data registration. To overcome this problem, we propose a single system based on an active thermography approach with heat point-source stimulation that provides both 3D digitisation and subsurface defect detection. Experiments are conducted on steel and aluminum objects, and the combined 3D/thermal information is presented.

    Constrained Gaussian mixture models based scan matching method

    Full text link
    © 2018 Australasian Robotics and Automation Association. All rights reserved. This paper presents a robust scan matching method based on Gaussian mixture models (GMMs), which represents 2D scan points with GMMs to improve matching accuracy. The proposed method first converts each new scan into a GMM, exploiting the covariance of every GMM component to represent the scan points. Compared with conventional GMM-based scan matching, our technique evaluates the overlap between scans through GMM similarity comparison. To avoid the poor convergence caused by an inaccurate initial value for the iteration process, we propose a geometry-constraint-based GMM similarity calculation method, which is one contribution of this paper. Another contribution is a dynamic scale factor that makes the cost function better adapted to different initial values. Experiments on simulated data indicate that our method enlarges the valid range of initial values and accumulates only small errors over sequential matchings.
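The core ideas above — representing each scan as a GMM and scoring candidate rigid transforms by GMM similarity — can be sketched in a minimal form. This is an illustrative simplification, not the authors' implementation: the grid-cell component fitting, the closed-form Gaussian overlap score, and the coarse grid search over poses are assumptions made for this example (the paper's geometry constraints and dynamic scale factor are omitted).

```python
import numpy as np

def scan_to_gmm(points, cell=1.0):
    """Bin 2D scan points into square cells; each cell with >= 3 points
    becomes one Gaussian component (mean, regularized covariance)."""
    bins = {}
    for key, p in zip(map(tuple, np.floor(points / cell).astype(int)), points):
        bins.setdefault(key, []).append(p)
    means, covs = [], []
    for pts in bins.values():
        pts = np.asarray(pts)
        if len(pts) >= 3:
            means.append(pts.mean(axis=0))
            covs.append(np.cov(pts.T) + 1e-3 * np.eye(2))  # avoid degeneracy
    return means, covs

def gmm_overlap(mu_a, cov_a, mu_b, cov_b):
    """Sum of pairwise Gaussian overlap integrals (the cross term of the
    L2 inner product between two equal-weight GMMs)."""
    total = 0.0
    for m1, c1 in zip(mu_a, cov_a):
        for m2, c2 in zip(mu_b, cov_b):
            c, d = c1 + c2, m1 - m2
            total += np.exp(-0.5 * d @ np.linalg.solve(c, d)) \
                     / np.sqrt(np.linalg.det(2 * np.pi * c))
    return total

def match_scans(src, dst, cell=1.0):
    """Coarse grid search for the rigid transform (theta, tx, ty) that
    maximizes GMM overlap between the transformed source and the target."""
    mu_d, cov_d = scan_to_gmm(dst, cell)
    best_score, best_pose = -np.inf, None
    for th in np.linspace(-0.15, 0.15, 7):
        R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
        for tx in np.linspace(-1.0, 1.0, 9):
            for ty in np.linspace(-1.0, 1.0, 9):
                mu_s, cov_s = scan_to_gmm(src @ R.T + [tx, ty], cell)
                score = gmm_overlap(mu_s, cov_s, mu_d, cov_d)
                if score > best_score:
                    best_score, best_pose = score, (th, tx, ty)
    return best_pose
```

A real implementation would optimize the pose iteratively rather than by grid search; the grid here only makes the overlap objective easy to inspect.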

    Joint 2D to 3D image registration workflow for comparing multiple slice photographs and CT scans of apple fruit with internal disorders

    Full text link
    A large percentage of apples are affected by internal disorders after long-term storage, which makes them unacceptable in the supply chain. CT imaging is a promising technique for in-line detection of these disorders. Therefore, it is crucial to understand how different disorders affect the image features that can be observed in CT scans. This paper presents a workflow for creating datasets of image pairs of photographs of apple slices and their corresponding CT slices. By having CT and photographic images of the same part of the apple, the complementary information in both images can be used to study the processes underlying internal disorders and how internal disorders can be measured in CT images. The workflow includes data acquisition, image segmentation, image registration, and validation methods. The image registration method aligns all available slices of an apple within a single optimization problem, assuming that the slices are parallel. This method outperformed optimizing the alignment separately for each slice. The workflow was applied to create a dataset of 1347 slice photographs and their corresponding CT slices. The dataset was acquired from 107 'Kanzi' apples that had been stored in controlled atmosphere (CA) storage for 8 months. In this dataset, the distance between annotations in the slice photograph and the matching CT slice was, on average, 1.47 ± 0.40 mm. Our workflow allows collecting large datasets of accurately aligned photo-CT image pairs, which can help distinguish internal disorders with a similar appearance on CT. With slight modifications, a similar workflow can be applied to other fruits or to MRI instead of CT scans. Comment: 20 pages, 9 figures. 13-Dec-2023 revision: the plan to make the paper part one of a two-part series was cancelled; therefore, the title of this paper and the title in the reference to the part-two paper (Wood et al., 2023) were changed.
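The registration step above solves one optimization over all parallel slices jointly rather than per slice. As a toy illustration of that joint idea (not the paper's actual workflow, which registers images rather than landmark lists), the sketch below stacks hypothetical annotated landmark pairs from several slices and solves a single 2D rigid Kabsch/Procrustes problem shared by all of them:

```python
import numpy as np

def joint_rigid_2d(photo_pts, ct_pts):
    """Estimate one 2D rotation + translation mapping photo landmarks to
    CT landmarks, using the stacked correspondences of ALL slices at once
    (Kabsch / orthogonal Procrustes). photo_pts and ct_pts are lists of
    (n_i, 2) arrays, one per slice, with matching row order."""
    P, Q = np.vstack(photo_pts), np.vstack(ct_pts)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)           # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T  # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t
```

Solving all slices in one problem (shared R, t) is the sense in which the joint formulation outperforms per-slice alignment; here that simply means stacking the per-slice correspondences before the SVD.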

    Improving Scan Registration Methods Using Secondary Point Data Channels

    Get PDF
    Autonomous vehicle technology has advanced significantly in recent years, and these vehicles are poised to make major strides into everyday use. Autonomous vehicles have already entered military and commercial use, performing the dirty, dull, and dangerous tasks that humans do not want to, or cannot, perform. Any complex autonomy task for a mobile robot requires a method to map the environment and to localize within that environment. In unknown environments, where the mapping and localization stages are performed simultaneously, this is known as Simultaneous Localization and Mapping (SLAM). One key technology used to solve the SLAM problem involves matching sensor data in the form of point clouds. Scan registration attempts to find the transformation between two point clouds, or scans, which results in the optimal overlap of the scan information. One of the major drawbacks of existing approaches is the over-reliance on geometric features and a well-structured environment in order to perform the registration. When insufficient geometric features are present to constrain the optimization, this is known as geometric degeneracy, and it is a common problem in typical environments. The reliability of these methods is of vital importance for improving the robustness of autonomous vehicles operating in uncontrolled environments. This thesis presents methods that improve upon existing scan registration methods by incorporating secondary information into the registration process. Three methods are presented: Ground Segmented Iterative Closest Point (GSICP), Color Clustered Normal Distribution Transform (CCNDT), and Multi Channel Generalized Iterative Closest Point (MCGICP). Each method provides a unique addition to the scan registration literature and has its own set of benefits, limitations, and uses. GSICP segments the ground plane from a 3D scan and then compresses the scan into a 2D plane.
The points are then classified as either ground-adjacent or non-ground-adjacent. Using this classification, a class-constrained ICP registration is performed in which only points of the same class can be corresponded. In effect, the method creates simulated edges for the registration to align. GSICP improves accuracy and robustness in sparse unstructured environments such as forests or rolling hills. When compared to existing methods on the Ford Vision and Lidar Dataset, GSICP shows a tighter variance in error values as well as a significant improvement in overall error. The method is also highly computationally efficient, running registrations on a low-power system twice as fast as GICP, the next most accurate method. However, it requires the input scans to have specific characteristics, such as a defined ground plane and spatially separated objects in the environment. This method is ideally suited to sparse outdoor environments and was used with great success by the University of Waterloo's entry in the NASA Sample Return Robot Challenge. CCNDT provides a more adaptable method that is widely applicable to many common environments. CCNDT uses point cloud data that has been colorized either by an RGBD camera or by a joint LIDAR and camera system. The method begins by clustering the points in the scan based on color and then uses the clusters to generate colored Gaussian distributions. These distributions are then used to calculate a color-weighted distribution-to-distribution cost between all pairs of distributions. Exhaustively matching all pairs of distributions creates a smooth, continuous cost function that can be optimized efficiently.
Experimental validation of the CCNDT method on the Ford and Freiburg datasets has shown that the method can perform 3D scan registrations more efficiently, three times faster on average than existing methods, and can accurately register any scans with sufficient color variation to enable color clustering. MCGICP is a generalized approach capable of performing robustly in almost any situation. MCGICP uses secondary point information, such as color or intensity, to augment the GICP method. MCGICP calculates a spatial covariance at each point such that the covariance normal to the local surface is set to a small value, indicating high confidence in matching surfaces, while the covariance tangent to the surface is determined from the secondary information distribution. Representing the covariance in both the tangential and normal directions ensures that non-trivial cost terms are present in all directions. Additionally, the correspondence of points between scans is modified to use a higher-dimensional search space that incorporates the secondary descriptor channels as well as the covariance information at each point, allowing more robust point correspondences to be determined. The registration process can therefore converge more quickly due to the incorporation of additional information. The MCGICP method is capable of performing highly accurate scan registrations in almost any environment. The method is validated using a diverse set of data, including the Ford and Freiburg datasets, as well as a challenging degenerate dataset. MCGICP is shown to improve accuracy and reliability on all three datasets. MCGICP is robust to most common degeneracies, as it incorporates multiple channels of information in an integrated approach that is reliable even in the most challenging cases. The results presented in this work demonstrate clear improvements over existing scan registration methods.
This work shows that by incorporating secondary information into the scan registration problem, more robust and accurate solutions can be obtained. Each method presented has its own unique benefits, which are valuable for a specific set of applications and environments.
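A minimal sketch of the class-constrained correspondence idea behind GSICP, in 2D: correspondences may only link points of the same class, then a standard rigid update is applied. The hypothetical point classes, the plain nearest-neighbor search, and the Kabsch update are assumptions of this example, not the thesis implementation.

```python
import numpy as np

def icp_class_constrained(src, dst, src_cls, dst_cls, iters=20):
    """2D ICP in which correspondences only link points of the same
    class (e.g. ground-adjacent vs non-ground-adjacent), following the
    GSICP idea of class-constrained matching."""
    R_tot, t_tot = np.eye(2), np.zeros(2)
    cur = src.astype(float).copy()
    for _ in range(iters):
        # nearest same-class target point for each source point
        Q = np.empty_like(cur)
        for i, (p, c) in enumerate(zip(cur, src_cls)):
            cand = dst[dst_cls == c]
            Q[i] = cand[np.argmin(((cand - p) ** 2).sum(axis=1))]
        # best rigid transform for these correspondences (Kabsch)
        cp, cq = cur.mean(axis=0), Q.mean(axis=0)
        H = (cur - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        t = cq - R @ cp
        cur = cur @ R.T + t
        # accumulate the total transform applied to the original source
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

Restricting candidate matches per class is what prevents ground points from being pulled toward object points (and vice versa) when geometry alone is ambiguous.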

    Evolutionary Optimization Techniques for 3D Simultaneous Localization and Mapping

    Get PDF
    International Mention in the doctoral degree. Mobile robots are increasingly used in applications that move through indoor and outdoor environments, progressing from teleoperated applications to autonomous ones such as exploration or navigation. For a robot to move through a particular location, it needs to gather information about the scenario using sensors. These sensors allow the robot to observe, depending on the sensor's data type. Cameras mostly give information in two dimensions, with colors and pixels representing an image. Range sensors give distances from the robot to obstacles. Depth cameras mix both technologies to expand their information into three dimensions. Light Detection and Ranging (LiDAR) sensors provide distance information and extend it to planes and three dimensions with high precision. Mobile robots therefore use these sensors to scan the scenario while moving. If the robot already has a map, the sensors measure and the robot finds features that correspond to features on the map in order to localize itself. Humans have used maps as a specialized form of representing the environment for more than 5000 years, and maps remain an important form of everyday information. Maps are used to navigate from one place to another, to localize something inside some boundaries, or as a form of documentation of essential features. Naturally, then, an intuitive way of making an autonomous mobile robot is to implement geometric maps to represent the environment. On the other hand, if the robot does not have a previous map, it must build one while moving around. The robot combines the range-sensor information with odometry information to achieve this task. However, sensors have their own flaws due to precision, calibration, or accuracy.
Furthermore, moving a robot has physical constraints, and faults may occur randomly, such as wheel drift or mechanical miscalibration, which can make the odometers fail in their measurements and cause misalignment during map building. A novel technique was presented in the mid-90s to solve this problem and overcome sensor uncertainty while the robot builds the map: the Simultaneous Localization and Mapping (SLAM) algorithm. Its goal is to build a map while the robot's position is corrected based on the information of two or more consecutive scans matched together, i.e., by finding the rigid registration vector between them. This algorithm has been broadly studied and developed for almost 25 years. Nonetheless, it remains highly relevant, with ongoing innovations, modifications, and adaptations driven by advances in new sensors and by the complexity of the scenarios in emerging mobile robotics applications. The scan matching algorithm aims to find a pose vector representing the transformation, or movement, between two robot observations by finding the best possible value of an equation that scores a transformation; in other words, it searches for a solution in an optimal way. Typically this optimization has been solved using classical algorithms, such as Newton's method or gradient and second-derivative formulations, yet these require an initial guess or initial state to point the algorithm in the right direction, most often obtained from odometers or inertial sensors. However, it is not always possible to have or to trust this information, as some scenarios are complex and sensors fail. To solve this problem, this research presents the use of evolutionary optimization algorithms: meta-heuristic methods based on iterative evolution that mimic natural optimization processes and need no prior information to search a bounded range of solutions for a fitness function.
The main goal of this dissertation is to study, develop, and prove the benefits of evolutionary optimization algorithms for simultaneous localization and mapping with mobile robots in six-degrees-of-freedom scenarios using LiDAR sensor information. This work introduces several evolutionary algorithms for scan matching, proposes a mixed fitness function for registration, solves simultaneous localization and mapping in different scenarios, implements loop closure and error relaxation, and proves its performance in indoor, outdoor, and underground mapping applications. Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática, Universidad Carlos III de Madrid. Committee: President, Gerardo Fernández López; Secretary, María Dolores Blanco Rojas; Member, David Álvarez Sánche
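As a toy version of the evolutionary idea (population-based search of a bounded pose range, with no initial guess), the sketch below runs a basic differential-evolution loop over a 2D pose (theta, tx, ty) with a nearest-neighbor fitness. The operators, bounds, and fitness function are assumptions of this example rather than the dissertation's algorithms, which work in six degrees of freedom.

```python
import numpy as np

def fitness(pose, src, dst):
    """Mean squared nearest-neighbor distance of the transformed source scan."""
    th, tx, ty = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    moved = src @ R.T + np.array([tx, ty])
    d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean()

def de_scan_match(src, dst, lo, hi, pop=40, gens=100, F=0.7, CR=0.9, seed=0):
    """Differential evolution over (theta, tx, ty): mutate with scaled
    differences of random individuals, crossover, keep improvements.
    Needs only the search bounds, not an initial pose estimate."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(pop, 3))
    f = np.array([fitness(x, src, dst) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            trial = np.where(rng.random(3) < CR,
                             np.clip(a + F * (b - c), lo, hi), X[i])
            ft = fitness(trial, src, dst)
            if ft < f[i]:
                X[i], f[i] = trial, ft
    return X[np.argmin(f)], f.min()
```

The point of the population is exactly what the abstract argues: no odometry prior is needed, only a bounded search range for the pose.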

    Efficient 3D Segmentation, Registration and Mapping for Mobile Robots

    Get PDF
    Sometimes simple is better! For certain situations and tasks, simple but robust methods can achieve the same or better results in the same or less time than related sophisticated approaches. In the context of robots operating in real-world environments, key challenges are perceiving objects of interest and obstacles, as well as building maps of the environment and localizing within them. The goal of this thesis is to carefully analyze such problem formulations, to deduce valid assumptions and simplifications, and to develop simple solutions that are both robust and fast. All approaches make use of sensors capturing 3D information, such as consumer RGBD cameras. Comparative evaluations show the performance of the developed approaches. For identifying objects and regions of interest in manipulation tasks, a real-time object segmentation pipeline is proposed. It exploits several common assumptions of manipulation tasks, such as objects being on horizontal support surfaces (and well separated). It achieves real-time performance by using particularly efficient approximations in the individual processing steps, subsampling the input data where possible, and processing only relevant subsets of the data. The resulting pipeline segments 3D input data at up to 30 Hz. In order to obtain complete segmentations of the 3D input data, a second pipeline is proposed that approximates the sampled surface, smooths the underlying data, and segments the smoothed surface into coherent regions belonging to the same geometric primitive. It uses different primitive models and can reliably segment input data into planes, cylinders, and spheres. A thorough comparative evaluation shows state-of-the-art performance while computing such segmentations in near real time. The second part of the thesis addresses the registration of 3D input data, i.e., consistently aligning input captured from different view poses. Several methods are presented for different types of input data.
For the particular application of mapping with micro aerial vehicles where the 3D input data is particularly sparse, a pipeline is proposed that uses the same approximate surface reconstruction to exploit the measurement topology and a surface-to-surface registration algorithm that robustly aligns the data. Optimization of the resulting graph of determined view poses then yields globally consistent 3D maps. For sequences of RGBD data this pipeline is extended to include additional subsampling steps and an initial alignment of the data in local windows in the pose graph. In both cases, comparative evaluations show a robust and fast alignment of the input data
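The plane/cylinder/sphere segmentation mentioned above rests on fitting geometric primitives to 3D data. A common baseline for the plane case is RANSAC; the sketch below is that generic baseline, not the thesis pipeline (which uses approximate surface reconstruction and smoothing before segmentation):

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.02, rng=None):
    """RANSAC plane fit: repeatedly fit a plane through 3 random points
    and keep the candidate with the most inliers within `thresh` of it.
    Returns a boolean inlier mask over `points` (an (n, 3) array)."""
    rng = rng or np.random.default_rng(1)
    best_inliers = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:           # degenerate (collinear) sample
            continue
        n = n / norm
        dist = np.abs((points - p0) @ n)  # point-to-plane distances
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

A full primitive segmentation would re-estimate the plane from its inliers, remove them, and repeat (and try cylinder and sphere models as well); this sketch shows only the single-plane core.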

    Registration Methods for the 3D Probabilistic Normal Distributions Transform for Odometry and Mapping

    Get PDF
    Ph.D. dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, February 2019 (advisor: Beom-Hee Lee). The robot is a self-operating device using its own intelligence, and autonomous navigation is a critical form of intelligence for a robot. This dissertation focuses on localization and mapping using a 3D range sensor for autonomous navigation. The robot can collect spatial information from the environment using a range sensor, and this information can be used to reconstruct the environment. Additionally, the robot can estimate pose variations by registering the source point set with the model. Given that the point set collected by the sensor has expanded to three dimensions and become dense, registration using the normal distributions transform (NDT) has emerged as an alternative to the most commonly used iterative closest point (ICP) method. NDT is a compact representation that describes space using a set of Gaussian components (GCs) converted from a point set. Because the number of GCs is much smaller than the number of points, NDT outperforms ICP in computation time. However, NDT has issues to be resolved, such as the discretization of the point set and the objective function. This dissertation is divided into two parts: representation and registration. For the representation part, first we present the probabilistic NDT (PNDT) to deal with the destruction and degeneration problems caused by small cell sizes and sparse point sets.
PNDT assigns an uncertainty to each point sample to convert a point set with fewer than four points into a distribution. As a result, PNDT allows for more precise registration using small cells. Second, we present lattice adjustment and cell insertion methods to overlap cells to overcome the discreteness problem of the NDT. In the lattice adjustment method, a lattice is expressed as the distance between the cells and the side length of each cell. In the cell insertion method, simple, face-centered-cubic, and body-centered-cubic lattices are compared. Third, we present a means of regenerating the NDT for the target lattice. A single robot updates its poses using simultaneous localization and mapping (SLAM) and fuses the NDT at each pose to update its NDT map. Moreover, multiple robots share NDT maps built with inconsistent lattices and fuse the maps. Because the simple fusion of the NDT maps can change the centers, shapes, and normal vectors of GCs, the regeneration method subdivides the NDT into truncated GCs using the target lattice and regenerates the NDT. For the registration part, first we present a hue-assisted NDT registration if the robot acquires color information corresponding to each point sample from a vision sensor. Each GC of the NDT has a distribution of the hue and uses the similarity of the hue distributions as the weight in the objective function. Second, we present a key-layered NDT registration (KL-NDT) method. The multi-layered NDT registration (ML-NDT) registers points to the NDT in multiple resolutions of lattices. However, the initial cell size and the number of layers are difficult to determine. KL-NDT determines the key layers in which the registration is performed based on the change of the number of activated points. Third, we present a method involving dynamic scaling factors of the covariance. This method scales the source NDT at zero initially to avoid a negative correlation between the likelihood and rotational alignment. 
It also scales the target NDT from the maximum scale to the minimum scale. Finally, we present a method of incremental registration of PNDTs which outperforms the state-of-the-art lidar odometry and mapping (LOAM) method. Contents: 1 Introduction (Background; Problem Statement; Literature Review: Point Set Registration, Incremental Registration for Odometry Estimation; Contributions; Organization). 2 Preliminaries (NDT Representation; NDT Registration; NDT Mapping; Transformation Matrix and the Parameter Vector; Cubic Cell and Lattice; Optimization; Implementation; Evaluation of Registration; Benchmark Dataset). 3 Probabilistic NDT Representation (Uncertainty of Point Based on Sensor Model; Probabilistic NDT; Generalization of NDT Registration Based on PNDT; Experiments). 4 Interpolation for NDT Using Overlapped Regular Cells (Lattice Adjustment; Crystalline NDT; Experiments). 5 Regeneration of Normal Distributions Transform (Trivariate and Truncated Trivariate Normal Distributions; Alignment; Subdivision and Fusion of Gaussian Components; Experiments, including Map Fusion). 6 Hue-Assisted Registration (the HSV Model; Colored Octree for Subdivision; HA-NDT; Experiments). 7 Key-Layered NDT Registration (Key-Layered NDT-P2D; Experiments against ML-NDT). 8 Scaled NDT and the Multi-Scale Registration (Scaled NDT Representation and L2 Distance; NDT-D2D with Dynamic Scaling Factors of Covariances; Range of Scaling Factors; Experiments). 9 Scan-to-Map Registration (Multi-Layered PNDT; NDT Incremental Registration: Initialization of the PNDT Map, Generation of the Source ML-PNDT, Reconstruction of the Target ML-PNDT, Pose Estimation Based on Multi-Layered Registration, Update of the PNDT Map; Experiments). 10 Conclusions. Bibliography. Abstract in Korean. Acknowledgements.
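The PNDT construction at the heart of the representation part can be sketched directly from the abstract's description: each point carries a sensor-model uncertainty, so even cells with fewer than four points still yield valid (non-degenerate) Gaussian components. The isotropic noise model and the cell hashing below are simplifying assumptions of this illustration, not the dissertation's sensor model.

```python
import numpy as np

def build_pndt(points, cell=1.0, sigma=0.05):
    """Probabilistic NDT sketch: every point carries an isotropic
    measurement uncertainty sigma^2 * I, so the expected covariance of a
    cell is its point scatter plus sigma^2 * I. This keeps the Gaussian
    positive definite even for cells containing a single point."""
    cells = {}
    for key, p in zip(map(tuple, np.floor(points / cell).astype(int)), points):
        cells.setdefault(key, []).append(p)
    ndt = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        mu = pts.mean(axis=0)
        # scatter of the samples (zero for a lone point), plus per-point noise
        scatter = np.zeros((3, 3)) if len(pts) == 1 else np.cov(pts.T, bias=True)
        ndt[key] = (mu, scatter + sigma ** 2 * np.eye(3))
    return ndt
```

In plain NDT, cells with fewer than about four points must be discarded or their covariances artificially repaired; the added per-point uncertainty is what lets PNDT use every point at high lattice resolutions.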