91 research outputs found

    A multisensor SLAM for dense maps of large scale environments under poor lighting conditions

    This thesis describes the development and implementation of a multisensor large-scale autonomous mapping system for surveying tasks in underground mines. The hazardous nature of the underground mining industry has driven a push towards autonomous solutions for the most dangerous operations, including surveying. Many existing autonomous mapping techniques rely on approaches to the Simultaneous Localization and Mapping (SLAM) problem that are not suited to the extreme characteristics of active underground mining environments. Our proposed multisensor system has been designed from the outset to address the unique challenges of underground SLAM; its robustness, self-containment and portability maximize its potential applications. The multisensor mapping solution proposed as a result of this work is based on a fusion of omnidirectional bearing-only vision-based localization and 3D laser point cloud registration. Combining these two SLAM techniques achieves some of the advantages of both approaches: the real-time attributes of vision-based SLAM and the dense, high-precision maps obtained with 3D lasers. The result is a viable autonomous mapping solution suitable for challenging underground mining environments. A further improvement to the robustness of the proposed multisensor SLAM system comes from incorporating colour information into vision-based localization. Underground mining environments are often dominated by dynamic sources of illumination, which can cause inconsistent feature motion during localization. Colour information is used to identify and remove features caused by illumination artefacts and to improve the monochrome feature matching between frames. Finally, the proposed multisensor mapping system is implemented and evaluated in both above-ground and underground scenarios. The resulting large-scale maps contained a maximum offset error of ±30 mm for mapping tasks over 100 m in length.
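
    The colour-based artefact rejection is described only at a high level above. As a rough illustration of the idea, the sketch below (hypothetical code, not taken from the thesis) discards features that lie on bright, desaturated pixels, which under artificial tunnel lighting are more likely to be illumination artefacts than stable surface features; the function name and thresholds are illustrative assumptions.

        import cv2

        def filter_illumination_features(frame_bgr, keypoints,
                                         sat_thresh=30, val_thresh=220):
            """Drop keypoints sitting on bright, colourless pixels, which in
            artificially lit tunnels are often specular spots or light cones
            rather than stable features (thresholds are illustrative)."""
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            kept = []
            for kp in keypoints:
                x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
                _, s, v = hsv[y, x]
                if v > val_thresh and s < sat_thresh:
                    continue  # likely an illumination artefact
                kept.append(kp)
            return kept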

    Real-Time Multi-Fisheye Camera Self-Localization and Egomotion Estimation in Complex Indoor Environments

    In this work, a real-time capable multi-fisheye camera self-localization and egomotion estimation framework is developed. The thesis covers all aspects, from omnidirectional camera calibration to the development of a complete multi-fisheye camera SLAM system based on a generic multi-camera bundle adjustment method.
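
    The abstract does not state which omnidirectional camera model the calibration uses. As one common possibility, the equidistant fisheye model makes the image radius proportional to the angle from the optical axis; the sketch below is a generic illustration of that model under assumed conventions, not the thesis's calibration.

        import numpy as np

        def project_equidistant(point_cam, fx, fy, cx, cy):
            """Equidistant fisheye projection r = f * theta, one widely used
            omnidirectional model (the thesis may use a different one)."""
            x, y, z = point_cam
            theta = np.arctan2(np.hypot(x, y), z)   # angle off the optical axis
            phi = np.arctan2(y, x)                  # azimuth around the axis
            return cx + fx * theta * np.cos(phi), cy + fy * theta * np.sin(phi)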

    Multi-camera simultaneous localization and mapping

    In this thesis, we study two aspects of simultaneous localization and mapping (SLAM) for multi-camera systems: minimal solution methods for the scaled motion of non-overlapping and partially overlapping two-camera systems, and online, real-time mapping of large areas using the parallelism inherent in the visual SLAM (VSLAM) problem. We present the only existing minimal solution method for six-degree-of-freedom structure and motion estimation using a non-overlapping, rigid two-camera system with known intrinsic and extrinsic calibration. One example application of our method is the three-dimensional reconstruction of urban scenes from video. Because our method does not require the cameras' fields of view to overlap, we are able to maximize coverage of the scene and avoid processing redundant, overlapping imagery. Additionally, we developed a minimal solution method for partially overlapping stereo camera systems, which overcomes degeneracies inherent to non-overlapping two-camera systems while still providing a wide total field of view. The method takes two stereo images as its input. It uses one feature visible in all four views and three features visible across two temporal view pairs to constrain the camera system's motion. We show in synthetic experiments that our method produces rotation and translation estimates that are more accurate than those of the perspective three-point method as the overlap in the stereo camera's fields of view is reduced. A final part of this thesis is the development of an online, real-time visual SLAM system that achieves real-time speed by exploiting the parallelism inherent in the VSLAM problem. We show that feature tracking, relative pose estimation, and global mapping operations such as loop detection and loop correction can be effectively parallelized. Additionally, we demonstrate that a combination of short-baseline, differentially tracked corner features, which can be tracked at high frame rates, and wide-baseline features such as the scale-invariant feature transform (SIFT), which are matchable across viewpoints but slower to compute, can facilitate high-speed visual odometry while supporting location recognition for loop detection and global geometric error correction.
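
    As an illustration of the parallel split this abstract describes, the hedged sketch below pairs cheap differential (KLT) corner tracking on every frame with slower wide-baseline SIFT description of keyframes in a background thread. The structure is an assumption for illustration, not the thesis's implementation.

        import queue
        import threading

        import cv2

        klt_params = dict(winSize=(21, 21), maxLevel=3)
        sift = cv2.SIFT_create()
        keyframe_queue = queue.Queue()

        def track_corners(prev_gray, gray, prev_pts):
            """Front end: differential (KLT) corner tracking, cheap enough
            to run at full frame rate for visual odometry.
            prev_pts: (N, 1, 2) float32 array of corner locations."""
            pts, status, _ = cv2.calcOpticalFlowPyrLK(
                prev_gray, gray, prev_pts, None, **klt_params)
            ok = status.ravel() == 1
            return prev_pts[ok], pts[ok]

        def loop_detection_worker():
            """Back-end thread: slower wide-baseline SIFT description of
            keyframes for place recognition, decoupled from the tracker."""
            while True:
                frame = keyframe_queue.get()
                kps, desc = sift.detectAndCompute(frame, None)
                # ... match desc against a keyframe database and trigger
                # loop correction on a hit (omitted in this sketch) ...

        threading.Thread(target=loop_detection_worker, daemon=True).start()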

    Visual SLAM for Autonomous Navigation of MAVs

    This thesis focuses on developing onboard visual simultaneous localization and mapping (SLAM) systems to enable autonomous navigation of micro aerial vehicles (MAVs), which remains a challenging topic given the limited payload and computational capability that an MAV normally has. In MAV applications, visual SLAM systems are required to be very efficient, especially when other visual tasks have to be done in parallel. Furthermore, robustness in pose tracking is highly desired in order to enable safe autonomous navigation of an MAV in three-dimensional (3D) space. These challenges motivate the work in this thesis in the following aspects. Firstly, the problem of visual pose estimation for MAVs using an artificial landmark is addressed. An artificial neural network (ANN) is used to robustly recognize this visual marker in cluttered environments. A projective-geometry method is then implemented for relative pose computation based on the retrieved geometry of the visual marker. The presented vision system can be used not only for pose control of MAVs, but also for providing accurate pose estimates to a monocular visual SLAM system, serving as an automatic initialization module for both indoor and outdoor environments. Secondly, autonomous landing on an arbitrarily textured landing site during autonomous navigation of an MAV is achieved. By integrating an efficient local-feature-based object detection algorithm within a monocular visual SLAM system, the MAV is able to search for the landing site autonomously along a predefined path and land on it once it has been found. The proposed monocular visual solution thus enables autonomous navigation of an MAV in parallel with landing site detection, relaxing the assumption made in conventional vision-guided landing systems that the landing site must be located inside the field of view (FOV) of the vision system before the landing task is initiated. The third problem addressed in this thesis is multi-camera visual SLAM for robust pose tracking of MAVs. Due to the limited FOV of a single camera, pose tracking using monocular visual SLAM may easily fail when the MAV navigates in unknown environments. Previous work addresses this problem mainly by fusing information from other sensors, such as an inertial measurement unit (IMU), to achieve robustness of the whole system, which does not improve the robustness of visual SLAM itself. This thesis investigates solutions for improving the pose tracking robustness of a visual SLAM system by utilizing multiple cameras. A mathematical analysis of how measurements from multiple cameras should be integrated into the optimization of visual SLAM is provided. The resulting theory allows those measurements to be used both for robust pose tracking and for map updating of the visual SLAM system. Furthermore, such a multi-camera visual SLAM system is modified into a robust, constant-time visual odometry.
By integrating this visual odometry with an efficient back-end consisting of loop-closure detection and pose-graph optimization, a near-constant-time multi-camera visual SLAM system is achieved for autonomous navigation of MAVs in large-scale environments.
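
    The multi-camera measurement integration is presented mathematically in the thesis itself. For concreteness, the sketch below shows the generic shape of a multi-camera reprojection cost, in which each camera contributes residuals through its fixed extrinsics relative to the vehicle body. This is a minimal illustration under assumed conventions, not the thesis's formulation.

        import numpy as np

        def rig_reprojection_cost(T_wb, T_bc_list, K_list, landmarks, observations):
            """Sum of squared reprojection errors over all cameras of a rig.
            T_wb: 4x4 body-to-world pose; T_bc_list: fixed 4x4 camera-to-body
            extrinsics; K_list: 3x3 intrinsics; observations[i] maps a landmark
            index to its observed pixel (2-vector) in camera i."""
            T_bw = np.linalg.inv(T_wb)                 # world -> body
            total = 0.0
            for T_bc, K, obs in zip(T_bc_list, K_list, observations):
                T_cw = np.linalg.inv(T_bc) @ T_bw      # world -> camera i
                for j, uv in obs.items():
                    p_c = (T_cw @ np.append(landmarks[j], 1.0))[:3]
                    proj = K @ p_c
                    total += np.sum((uv - proj[:2] / proj[2]) ** 2)
            return total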

    Autonomous navigation and mapping of mobile robots based on 2D/3D cameras combination

    Due to steadily increasing demand for systems that support everyday life, there is currently great interest in autonomous systems. Autonomous systems are used in homes, offices, museums and factories. They can carry out a variety of tasks, such as cleaning, household assistance, transportation, security, education and shop assistance, because they can be used to control processing time and to deliver precise, reliable results. One research field of autonomous systems is navigation and map generation: mobile robots should carry out their tasks autonomously while generating a map of the environment in which they navigate. The main issue is that the mobile robot has to explore an unknown environment and generate a three-dimensional map of it without any additional reference information. The robot has to estimate its position and pose within the map, which requires finding distinguishable objects; the selected sensors and the registration algorithm therefore play a significant role. Sensors that can provide both depth and image data are still scarce. A new 3D sensor, the Photonic Mixer Device (PMD), captures the surrounding scene volumetrically in real time at a high frame rate, delivering depth and grey-scale data. However, higher-quality three-dimensional exploration of the environment requires details and textures of surfaces that can only be obtained from a high-resolution CCD camera. This work therefore presents the exploration of a mobile robot using a combination of a CCD and a PMD camera in order to create a three-dimensional map of the environment. In addition, a high-performance algorithm for real-time 3D mapping and pose estimation using the Simultaneous Localization and Mapping (SLAM) technique is presented. The autonomously operating mobile robot should also handle tasks such as the recognition of objects in its environment in order to accomplish various practical missions. The visual data from the CCD camera not only provide high-resolution texture for the depth data, but are also used for object recognition. The Iterative Closest Point (ICP) algorithm uses two point clouds to determine the translation and rotation between two scans. Finally, the evaluation of the correspondences and the reconstruction of the map to represent the real environment are included in this thesis.
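
    The abstract summarises ICP in one sentence. For concreteness, a minimal sketch of a single ICP iteration, using nearest-neighbour matching and the closed-form SVD (Kabsch) solution for the rigid transform, is given below; it illustrates the standard algorithm, not the thesis's specific implementation.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(source, target):
            """One ICP iteration: match each source point to its nearest
            target point, then solve the rigid transform in closed form
            (Kabsch/SVD). source: (N, 3) array, target: (M, 3) array."""
            matched = target[cKDTree(target).query(source)[1]]
            src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
            H = (source - src_c).T @ (matched - tgt_c)   # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflections
            R = Vt.T @ D @ U.T
            t = tgt_c - R @ src_c
            return R, t  # iterate source <- source @ R.T + t until convergence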

    Quantitative Analysis of Non-Linear Probabilistic State Estimation Filters for Deployment on Dynamic Unmanned Systems

    The work conducted in this thesis is part of an EU Horizon 2020 research initiative known as DigiArt. This part of the DigiArt project presents and explores the design, formulation and implementation of probabilistically orientated state estimation algorithms, with a focus on unmanned system positioning and three-dimensional (3D) mapping. State estimation algorithms are an influential aspect of any dynamic system with autonomous capabilities: the ability to predictively estimate future conditions enables effective decision making and anticipation of possible changes in the environment. Initial experimental procedures utilised a wireless ultra-wide band (UWB) communication network, in which statically situated beacon nodes were used to localise a dynamically operating node. The simultaneous deployment of this UWB network, an unmanned system and a Robotic Total Station (RTS) with active and remote tracking features enabled the characterisation of the range measurement errors associated with the UWB network. These range error metrics were then integrated into a Range-based Extended Kalman Filter (R-EKF) state estimation algorithm with active outlier identification, which outperformed the native approach used by the UWB system for two-dimensional (2D) pose estimation. The study was then expanded to focus on state estimation in 3D, where a Six-Degree-of-Freedom EKF (6DOF-EKF) was designed using Light Detection and Ranging (LiDAR) as its primary observation source. A two-step method was proposed which extracts motion information from consecutive LiDAR scans. Firstly, motion estimation of the Cartesian states x and y and the unmanned system's heading (ψ) was achieved through a 2D feature matching process. Secondly, the extraction and alignment of ground planes from the LiDAR scans enabled motion extraction for the Cartesian position z and the attitude angles roll (θ) and pitch (φ). Results showed that the ground plane alignment failed when two scans were offset by 10.5°. To overcome this limitation, an Error State Kalman Filter (ES-KF) was formulated and deployed as a sub-system within the 6DOF-EKF, enabling successful tracking of roll and pitch and the calculation of z. The 6DOF-EKF outperformed the R-EKF and the native UWB approach: it was much more stable, produced less noise in its position estimates and provided 3D pose estimation.
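
    For reference, the generic EKF predict and update equations that filters such as the R-EKF and 6DOF-EKF build on can be written compactly as below. This is the textbook form only; the outlier screening and the LiDAR-specific measurement models described in the thesis are omitted.

        import numpy as np

        def ekf_predict(x, P, f, F, Q):
            """Propagate state x and covariance P through the non-linear
            motion model f with Jacobian F and process noise Q."""
            return f(x), F @ P @ F.T + Q

        def ekf_update(x, P, z, h, H, R):
            """Correct with measurement z given measurement model h,
            Jacobian H and noise R. (The thesis's R-EKF additionally
            screens UWB ranges for outliers before this step.)"""
            y = z - h(x)                       # innovation
            S = H @ P @ H.T + R                # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
            x = x + K @ y
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P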

    PORTABLE IMAGE-BASED HIGH PERFORMANCE MOBILE MAPPING SYSTEM IN UNDERGROUND ENVIRONMENTS – SYSTEM CONFIGURATION AND PERFORMANCE EVALUATION

    Progressing urbanization increases the need for various types of underground infrastructure; consequently, infrastructure and life-cycle management are rapidly gaining in importance. Mobile reality-capturing systems and cloud-based services exploiting georeferenced metric 3D imagery are already extensively used for infrastructure management in outdoor environments, minimising dangerous and expensive field visits and measurement campaigns. In this paper, we introduce the BIMAGE Backpack, a portable image-based mobile mapping system for 3D data acquisition in indoor environments. The system consists of a multi-head panorama camera, two multi-profile laser scanners and an inertial measurement unit. With this system, we carried out underground measurement campaigns in the Hagerbach Test Gallery, located in Flums Hochwiese, Switzerland. For our performance evaluations in two different tunnel sections, we employed LiDAR SLAM as well as advanced image-based georeferencing. The obtained absolute accuracies were in the range of 6.2 to 7.4 cm; the relative accuracy, which for many applications is even more important, was in the range of 2–6 mm. These figures demonstrate an accuracy improvement of the subsequent image-based georeferencing over LiDAR SLAM of about an order of magnitude. The investigations show the application potential of image-based portable mobile mapping systems for infrastructure inventory and management in large underground facilities.
