
    RANSAC for Robotic Applications: A Survey

    Random Sample Consensus, most commonly abbreviated as RANSAC, is a robust method for estimating the parameters of a model from data contaminated by a sizable percentage of outliers. In its simplest form, the process starts by sampling the minimum amount of data needed to perform an estimation, followed by an evaluation of its adequacy, and repeats these steps until some stopping criterion is met. Multiple variants have been proposed that modify this workflow, typically tweaking one or several of these steps to improve computing time or the quality of the estimated parameters. RANSAC is widely applied in robotics, for example for finding geometric shapes (planes, cylinders, spheres, etc.) in point clouds or for estimating the best transformation between different camera views. In this paper, we present a review of the current state of the art of RANSAC-family methods, with special interest in applications in robotics. This work has been partially funded by the Basque Government, Spain, under Research Teams Grant number IT1427-22 and under ELKARTEK LANVERSO Grant number KK-2022/00065; the Spanish Ministry of Science (MCIU), the State Research Agency (AEI), and the European Regional Development Fund (FEDER), under Grant number PID2021-122402OB-C21 (MCIU/AEI/FEDER, UE); and the Spanish Ministry of Science, Innovation and Universities, under Grant FPU18/04737.
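    The basic loop described above (minimal sample, model fit, consensus scoring, repeat until a stopping criterion) can be sketched in a few lines. The following is a minimal, illustrative NumPy implementation for 2D line fitting, not code from the survey itself; the sample size, inlier threshold, and iteration count are arbitrary assumptions.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.05, min_inliers=20, rng=None):
    """Basic RANSAC loop: sample a minimal set (2 points), fit a line,
    score by inlier count, and keep the best hypothesis."""
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, np.array([], dtype=int)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-12:
            continue  # degenerate sample, draw again
        n = np.array([-d[1], d[0]]) / norm        # unit normal of the line
        dist = np.abs((points - p) @ n)           # point-to-line distances
        inliers = np.flatnonzero(dist < inlier_tol)
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (p, n), inliers
            if len(inliers) >= min_inliers:       # simple stopping criterion
                break
    return best_model, best_inliers

# Noisy line with ~30% gross outliers.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
pts = np.c_[x, 0.5 * x + 0.1 + rng.normal(0, 0.01, 100)]
pts[rng.choice(100, 30, replace=False), 1] = rng.uniform(0, 1, 30)
model, inliers = ransac_line(pts)
print(len(inliers), "inliers")
```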

    Generation of Horizontally Curved Driving Lines for Autonomous Vehicles Using Mobile Laser Scanning Data

    The development of autonomous vehicles demands tremendous advances in three-dimensional (3D) high-definition roadmaps, which can provide 3D positioning information with 10 to 20 cm accuracy. With the assistance of 3D high-definition roadmaps, the intractable autonomous driving problem is transformed into a solvable localization issue. Mobile Laser Scanning (MLS) systems can collect accurate, high-density 3D point clouds in road environments for generating such roadmaps. However, few studies have concentrated on driving line generation from 3D MLS point clouds for highly autonomous driving, particularly for accident-prone horizontal curves with ambiguous traffic situations and unclear visual cues. This thesis develops a method for semi-automated generation of horizontally curved driving lines using MLS data. The proposed methodology consists of three steps: road surface extraction, road marking extraction, and driving line generation. First, the points covering the road surface are extracted using curb-based algorithms that depend on both elevation and slope differences. Then, road markings are identified and extracted by a sequence of algorithms consisting of geo-referenced intensity image generation, multi-threshold road marking extraction, and statistical outlier removal. Finally, conditional Euclidean clustering followed by nonlinear least-squares curve fitting is employed to generate the horizontally curved driving lines. A total of six test datasets obtained in Xiamen, China by a RIEGL VMX-450 system were used to evaluate the performance and efficiency of the proposed methodology. The experimental results demonstrate that the proposed road marking extraction algorithms achieve 90.89% recall, 93.04% precision, and 91.95% F1-score. Moreover, unmanned aerial vehicle (UAV) imagery with 4 cm resolution was used to validate the proposed driving line generation algorithms. The validation results demonstrate that horizontally curved driving lines can be effectively generated from MLS point clouds with 15 cm-level localization accuracy. Finally, a comparative study was conducted both visually and quantitatively to demonstrate the accuracy and reliability of the generated driving lines.
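    As an illustration of the final curve-fitting step, the sketch below fits a circular arc to extracted road-marking centers by nonlinear least squares, which is one common way to model a horizontal curve. It is a generic SciPy example, not the thesis's exact implementation; the synthetic data and parameters are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_circular_arc(xy):
    """Fit a circle (xc, yc, r) to 2D road-marking points: an algebraic
    (Kasa) fit gives the initial guess, refined by nonlinear least squares
    on the geometric (point-to-circle) residuals."""
    x, y = xy[:, 0], xy[:, 1]

    # Algebraic initialisation: x^2 + y^2 + D*x + E*y + F = 0.
    A = np.c_[x, y, np.ones_like(x)]
    D, E, F = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)[0]
    x0 = np.array([-D / 2, -E / 2, np.sqrt(D**2 / 4 + E**2 / 4 - F)])

    def residuals(p):
        xc, yc, r = p
        return np.hypot(x - xc, y - yc) - r

    return least_squares(residuals, x0).x

# Synthetic noisy arc standing in for clustered lane-marking points (radius 200 m).
rng = np.random.default_rng(1)
theta = np.linspace(0.1, 0.6, 80)
pts = np.c_[200 * np.cos(theta), 200 * np.sin(theta)] + rng.normal(0, 0.05, (80, 2))
xc, yc, r = fit_circular_arc(pts)
print(f"fitted radius: {r:.2f} m")
```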

    Path Planning Framework for Unmanned Ground Vehicles on Uneven Terrain

    In this thesis, I address the problem of long-range path planning on uneven terrain for non-holonomic wheeled mobile robots (WMR). Uneven-terrain path planning is essential for search-and-rescue, surveillance, military, humanitarian, agricultural, and construction missions, among others. These missions necessitate the generation of a feasible sequence of waypoints, or reference states, to navigate a WMR from its initial location to the final target location across uneven terrain. The feasibility of navigating a given path over uneven terrain can be undermined by various terrain features, such as loose soil, vegetation, boulders, steeply sloped terrain, or a combination of these elements. I propose a three-stage framework for rapid long-range path planning. In the first stage, RRT-Connect rapidly finds a feasible solution. In the second stage, Informed RRT* improves the feasible solution. Finally, a shortcut heuristic improves the solution locally. To improve the computational speed of the path planning algorithms, we developed an accelerated version of traversability estimation on point clouds based on Principal Component Analysis. The benchmarks demonstrate the efficacy of the path planning approach.
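    The sketch below illustrates the kind of PCA-based traversability estimation on point clouds mentioned above: local surface normals and roughness are derived from the eigen-decomposition of neighborhood covariances. It is a simplified, non-accelerated illustration; the neighborhood radius and thresholds are assumptions, not values from the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

def traversability(points, radius=0.5, max_slope_deg=25.0, max_roughness=0.02):
    """Per-point traversability from local PCA: the eigenvector of the
    smallest eigenvalue approximates the surface normal (-> slope), and
    the smallest eigenvalue itself serves as a roughness proxy."""
    tree = cKDTree(points)
    ok = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 5:
            continue  # too sparse to judge; treat as non-traversable
        cov = np.cov(points[idx].T)
        evals, evecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
        normal = evecs[:, 0]                     # direction of least variance
        slope = np.degrees(np.arccos(abs(normal[2])))
        roughness = evals[0]
        ok[i] = slope < max_slope_deg and roughness < max_roughness
    return ok
```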

    Development of a novel data acquisition and processing methodology applied to the boresight alignment of marine mobile LiDAR systems

    A Mobile LiDAR System (MLS) is a state-of-the-art data acquisition technology that maps real-world scenes in the form of 3D point clouds. The list of MLS applications is vast, from forestry to 3D city modeling and from road inventory assessment to port infrastructure mapping. The MLS can also be mounted on various platforms, such as aerial, terrestrial, and marine. Regardless of the application and the platform, to ensure that the MLS achieves its optimal performance and best accuracy, it is essential to adequately address the systematic errors of the system, especially the boresight error. The boresight error is the rotational misalignment of the two main parts of the MLS, the positioning and orientation system (POS) and the LiDAR scanner, introduced by three boresight angles. Minor variations in these angular parameters can cause important geometric accuracy issues in the final point cloud, so it is vital to employ an alignment method to cope with the boresight error of such systems. Most existing boresight alignment methods, which have been developed mainly for aerial and terrestrial MLS, take advantage of in-situ tie-features in the environment that are adequate for these methods, for example tie-lines and tie-planes extracted from building roofs and facades. However, in low-feature environments such as forests, rural areas, ports, and harbors, where access to suitable tie-features for boresight alignment is nearly impossible, the existing methods malfunction or do not function at all. Therefore, this research addresses the boresight alignment of a marine MLS in a low-feature maritime environment. We aim to introduce a data acquisition and processing procedure for suitable data preparation, which serves as input to the boresight alignment method of a marine MLS. First, we explore the various tie-features used in existing methods to identify the tie-feature offering the best potential for estimating the boresight angles of a marine MLS. Second, we study the best configuration for the data acquisition procedure, i.e., the tie-feature characteristics and the required scanning-line pattern; this study is carried out in a simulation environment to achieve the best visibility of the boresight errors on the selected tie-feature. Finally, we validate the proposed configuration in a real-world scenario, the port of Montreal case study. The validation results reveal that the proposed data acquisition and processing configuration leads to a rigorous boresight alignment method that is accurate, robust, and repeatable. We have also implemented a relative accuracy assessment to evaluate the obtained results, which demonstrates the improvement in point cloud accuracy after applying the boresight alignment procedure.
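    For context, the sketch below shows the standard direct-georeferencing model of a mobile LiDAR system in which the boresight rotation appears; small errors in the three boresight angles rotate every scan relative to the trajectory, which is the effect the alignment method corrects. The function names and frame conventions are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def georeference(scan_pts, nav_pos, nav_rot, boresight_rpy_deg, lever_arm):
    """Direct-georeferencing model for an MLS (one navigation epoch):
    X_map = R_nav * (R_boresight * x_scanner + lever_arm) + X_nav.
    Small errors in the three boresight angles rotate the whole scan
    relative to the trajectory."""
    R_bs = R.from_euler("xyz", boresight_rpy_deg, degrees=True).as_matrix()
    body_pts = scan_pts @ R_bs.T + lever_arm   # scanner frame -> vehicle/body frame
    return body_pts @ nav_rot.T + nav_pos      # body frame -> map frame (POS pose)
```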

    Using AI and Robotics for EV battery cable detection: Development and implementation of end-to-end model-free 3D instance segmentation for industrial purposes

    Master's thesis in Information and Communication Technology (IKT590). This thesis describes a novel method for capturing point clouds and segmenting instances of cabling found on electric vehicle battery packs. The use of cutting-edge perception architectures, such as graph-based and voxel-based convolution, is investigated for industrial autonomous lithium-ion battery pack disassembly. The thesis focuses on the challenge of obtaining a desirable representation of any battery pack using an ABB robot in conjunction with a high-end structured-light camera, with "end-to-end" and "model-free" as design constraints. The thesis employs self-captured datasets comprising several battery packs that were captured and labeled; these datasets are then used to create a perception system. The thesis recommends using HDR functionality in an industrial application to capture the full dynamic range of the battery packs. To adequately depict 3D features, a three-point-of-view capture sequence is deemed necessary. A general capture process for an entire battery pack is also presented, but a next-best-scan algorithm is likely required to ensure a "close to complete" representation. Graph-based deep-learning algorithms were shown to scale up to 50,000 inputs while still exhibiting strong performance in terms of accuracy and processing time. The results show that instance segmentation can be performed in less than two seconds. Using off-the-shelf hardware, the thesis demonstrates that a 3D perception system is industrially viable and competitive with a 2D perception system.
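    As a point of reference for the model-free instance segmentation task, the sketch below shows a simple clustering baseline (voxel downsampling followed by DBSCAN) that assigns one label per dense, connected blob such as a cable. This is not the graph- or voxel-based deep-learning approach developed in the thesis; all parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_instances(points, voxel=0.005, eps=0.01, min_points=30):
    """Baseline model-free instance segmentation: voxel-downsample the cloud,
    then cluster with DBSCAN so each dense, connected blob (e.g. one cable)
    becomes one instance label."""
    # Voxel grid downsampling: keep one representative point per occupied voxel.
    keys = np.floor(points / voxel).astype(np.int64)
    _, keep = np.unique(keys, axis=0, return_index=True)
    down = points[np.sort(keep)]

    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(down)
    return down, labels   # label -1 marks noise points
```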

    Perception of Unstructured Environments for Autonomous Off-Road Vehicles

    Autonomous vehicles require perception as a necessary prerequisite for controllable and safe interaction, in order to sense and understand their environment. Perception for structured indoor and outdoor environments covers economically lucrative areas such as autonomous passenger transport or industrial robotics, whereas the perception of unstructured environments is strongly underrepresented in environment perception research. The unstructured environments analyzed here pose a particular challenge, since the natural, grown geometries usually lack a homogeneous structure and are dominated by similar textures and objects that are hard to separate. This makes capturing and interpreting these environments difficult, so perception methods must be designed and optimized specifically for this application domain. In this dissertation, novel and optimized perception methods for unstructured environments are proposed and combined into a holistic, three-stage pipeline for autonomous off-road vehicles: low-level, mid-level, and high-level perception. The proposed classical and machine learning (ML) perception methods complement each other. In addition, the combination of perception and validation methods at each level enables reliable perception of the possibly unknown environment, where loosely and tightly coupled validation methods are combined to guarantee a sufficient yet flexible assessment of the proposed perception methods. All methods were developed as individual modules within the perception and validation pipeline proposed in this work, and their flexible combination allows different pipeline designs for a variety of off-road vehicles and use cases as required. Low-level perception ensures a tightly coupled confidence assessment for raw 2D and 3D sensor data in order to detect sensor failures and guarantee sufficient accuracy of the sensor data. Furthermore, novel calibration and registration approaches for multi-sensor perception systems are presented, which use only the structure of the environment to register the captured sensor data: a semi-automatic registration approach for registering multiple 3D Light Detection and Ranging (LiDAR) sensors, and a confidence-based framework that combines different registration methods and enables the registration of sensors with different measurement principles. The combination of several registration methods validates the registration results in a tightly coupled manner. Mid-level perception enables the 3D reconstruction of unstructured environments with two methods for estimating the disparity of stereo images: a classical, correlation-based method for hyperspectral images, which requires only a limited amount of test and validation data, and a second method that estimates disparity from grayscale images with convolutional neural networks (CNNs). Novel disparity error metrics and an evaluation toolbox for 3D reconstruction from stereo images complement the proposed disparity estimation methods and enable their loosely coupled validation.
    High-level perception focuses on the interpretation of individual 3D point clouds for traversability analysis, object detection, and obstacle avoidance. A domain transfer analysis of state-of-the-art 3D semantic segmentation methods provides recommendations for segmenting new target domains as accurately as possible without generating new training data. The presented training approach for CNN-based 3D segmentation methods can further reduce the amount of training data required. Explainable artificial intelligence methods applied before and after modeling enable a loosely coupled validation of the proposed high-level methods, with dataset assessment and model-agnostic explanations of CNN predictions. Contaminated site remediation and military logistics are the two main use cases in unstructured environments addressed in this work. These application scenarios also show how to close the gap between the development of individual methods and their integration into the processing chain of autonomous off-road vehicles, comprising localization, mapping, planning, and control. In summary, the proposed pipeline offers flexible perception solutions for autonomous off-road vehicles, and the accompanying validation guarantees accurate and trustworthy perception of unstructured environments.
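    As an illustration of the classical, correlation-based disparity estimation mentioned for mid-level perception, the sketch below uses OpenCV block matching and converts the disparity to depth by triangulation. It is a generic stereo example, not the hyperspectral or CNN-based methods from the dissertation; the matcher parameters are assumptions.

```python
import cv2
import numpy as np

def disparity_and_depth(left_gray, right_gray, focal_px, baseline_m,
                        num_disp=64, block=15):
    """Classical correlation/block-matching disparity (OpenCV StereoBM),
    followed by triangulation: depth = f * B / disparity.
    Inputs must be rectified 8-bit grayscale images."""
    bm = cv2.StereoBM_create(numDisparities=num_disp, blockSize=block)
    disp = bm.compute(left_gray, right_gray).astype(np.float32) / 16.0  # fixed-point -> px
    depth = np.full(disp.shape, np.nan, dtype=np.float32)
    valid = disp > 0
    depth[valid] = focal_px * baseline_m / disp[valid]
    return disp, depth
```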

    UAV-Enabled Surface and Subsurface Characterization for Post-Earthquake Geotechnical Reconnaissance

    Major earthquakes continue to cause significant damage to infrastructure systems and loss of life (e.g., 2016 Kaikoura, New Zealand; 2016 Muisne, Ecuador; 2015 Gorkha, Nepal). Following an earthquake, costly human-led reconnaissance studies are conducted to document structural or geotechnical damage and to collect perishable field data. Such efforts face many daunting challenges, including safety, resource limitations, and inaccessibility of sites. Unmanned Aerial Vehicles (UAVs) represent a transformative tool for mitigating these challenges and generating spatially distributed and overall higher-quality data than current manual approaches. UAVs enable multi-sensor data collection and offer a computational decision-making platform that could significantly influence post-earthquake reconnaissance approaches. As demonstrated in this research, UAVs can be used to document earthquake-affected geosystems by creating 3D geometric models of target sites, to generate 2D and 3D imagery outputs for geomechanical assessments of exposed rock masses, and to characterize subsurface field conditions using techniques such as in situ seismic surface wave testing. UAV-camera systems were used to collect images of geotechnical sites and model their 3D geometry using Structure-from-Motion (SfM). Key lessons learned from applying UAV-based SfM to reconnaissance of earthquake-affected sites are presented. The results of 3D modeling and the input imagery were used to assess the mechanical properties of landslides and rock masses. An automatic and semi-automatic 2D fracture detection method was developed and integrated with a 3D SfM imaging framework. A UAV was then integrated with seismic surface wave testing to estimate the shear wave velocity of the subsurface materials, a critical input parameter in the seismic response of geosystems. The UAV was outfitted with a payload release system to autonomously deliver an impulsive seismic source to the ground surface for multichannel analysis of surface waves (MASW) tests. The UAV was found to offer a mobile yet higher-energy source than conventional seismic surface wave techniques and is the foundational component of a framework for fully autonomous in situ shear wave velocity profiling. PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145793/1/wwgreen_1.pd
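    The multichannel analysis of surface waves (MASW) processing referenced above is commonly done with the phase-shift transform, sketched below: the normalized spectra of the geophone traces are phase-compensated for each trial velocity, and peaks of the resulting image trace the Rayleigh-wave dispersion curve used to invert for shear wave velocity. This is a textbook-style illustration, not the dissertation's processing code; array shapes and parameters are assumptions.

```python
import numpy as np

def masw_dispersion(traces, dt, offsets, freqs, velocities):
    """Phase-shift transform for MASW: for each frequency f and trial phase
    velocity v, sum the unit-amplitude spectra shifted by exp(+i*2*pi*f*x/v).
    traces: (n_receivers, n_samples); offsets: source-receiver distances [m]."""
    nt = traces.shape[1]
    spec = np.fft.rfft(traces, axis=1)                  # (n_receivers, n_freq)
    f_axis = np.fft.rfftfreq(nt, d=dt)
    spec = spec / (np.abs(spec) + 1e-12)                # keep phase only

    image = np.zeros((len(freqs), len(velocities)))
    for i, f in enumerate(freqs):
        k = np.argmin(np.abs(f_axis - f))               # nearest FFT bin
        for j, v in enumerate(velocities):
            phase = np.exp(1j * 2 * np.pi * f_axis[k] * offsets / v)
            image[i, j] = np.abs(np.sum(phase * spec[:, k]))
    return image / traces.shape[0]                      # normalised dispersion image
```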
    • 

    corecore