
    A Study on Monocular-Camera-Based Real-Time Driving Environment Recognition for Unmanned Autonomous Vehicles

    Doctoral dissertation, Department of Electrical Engineering, Seoul National University Graduate School, February 2014. Advisor: Seung-Woo Seo. Homo faber refers to humans as beings who control their environment through tools. From the earliest times, humans have created tools in pursuit of a more convenient life. The desire for rapid movement led humans to ride on horseback, to build the wagon and, finally, to build the vehicle. The vehicle made it possible for humans to travel long distances quickly and conveniently. However, since human beings are imperfect, a great many people have died in car accidents, and people are dying at this very moment. Research on autonomous vehicles has been conducted as the best alternative for satisfying the human desire for safety, and the dream of the autonomous vehicle will come true in the near future. Implementing an autonomous vehicle requires many kinds of techniques, among which recognition of the environment around the vehicle is one of the most fundamental and important problems. Many kinds of sensors can be used to recognize surrounding objects; however, the monocular camera collects the richest information among sensors, can be used for a wide variety of purposes, and can be adopted on various vehicle types because of its good price competitiveness. I expect research using the monocular camera for autonomous vehicles to be very practical and useful. In this dissertation, I cover four important recognition problems for autonomous driving using a monocular camera in the vehicular environment. First, to drive autonomously, the vehicle has to recognize lanes and keep to its lane. However, detecting lane markings under varying illumination is very difficult in image processing, and yet it must be solved for autonomous driving. The first research topic is therefore robust lane marking extraction under illumination variations for multilane detection. I propose a new lane marking extraction filter that can detect imperfect lane markings, together with a new false-positive cancelling algorithm that eliminates noise markings. This approach extracts lane markings successfully even under bad illumination conditions. Second, if there is no lane marking on the road, how can the autonomous vehicle recognize the road it should drive on? In addition, which lane of the road is the vehicle currently in? The latter question is important because the decision to change or keep a lane depends on the current lane position. The second research topic handles these two problems: I propose an approach that fuses road detection and lane position estimation. Third, to drive more safely, keeping a safe distance is very important, and much driving-safety equipment requires distance information. Measuring an accurate inter-vehicle distance using a monocular camera and a line laser is the third research topic. To measure the inter-vehicle distance, I project a line laser onto the front side of the vehicle and measure the length of the laser line and the lane width in the image. Based on the imaging geometry, the distance calculation problem can then be solved accurately. Many important problems still remain to be solved, and I propose several approaches using the monocular camera to handle them.
I expect very active research to continue and, based on that research, the era of the autonomous vehicle to come in the near future.

    Contents:
    1 Introduction
      1.1 Background and Motivations
      1.2 Contributions and Outline of the Dissertation
        1.2.1 Illumination-Tolerant Lane Marking Extraction for Multilane Detection
        1.2.2 Fusing Road Detection and Lane Position Estimation for the Robust Road Boundary Estimation
        1.2.3 Accurate Inter-Vehicle Distance Measurement based on Monocular Camera and Line Laser
    2 Illumination-Tolerant Lane Marking Extraction for Multilane Detection
      2.1 Introduction
      2.2 Lane Marking Candidate Extraction Filter
        2.2.1 Requirements of the Filter
        2.2.2 A Comparison of Filter Characteristics
        2.2.3 Cone Hat Filter
      2.3 Overview of the Proposed Algorithm
        2.3.1 Filter Width Estimation
        2.3.2 Top Hat (Cone Hat) Filtering
        2.3.3 Reiterated Extraction
        2.3.4 False Positive Cancelling
          2.3.4.1 Lane Marking Center Point Extraction
          2.3.4.2 Fast Center Point Segmentation
          2.3.4.3 Vanishing Point Detection
          2.3.4.4 Segment Extraction
          2.3.4.5 False Positive Filtering
      2.4 Experiments and Evaluation
        2.4.1 Experimental Set-up
        2.4.2 Conventional Algorithms for Evaluation
          2.4.2.1 Global Threshold
          2.4.2.2 Positive Negative Gradient
          2.4.2.3 Local Threshold
          2.4.2.4 Symmetry Local Threshold
          2.4.2.5 Double Extraction using Symmetry Local Threshold
          2.4.2.6 Gaussian Filter
        2.4.3 Experimental Results
        2.4.4 Summary
    3 Fusing Road Detection and Lane Position Estimation for the Robust Road Boundary Estimation
      3.1 Introduction
      3.2 Chromaticity-based Flood-fill Method
        3.2.1 Illuminant-Invariant Space
        3.2.2 Road Pixel Selection
        3.2.3 Flood-fill Algorithm
      3.3 Lane Position Estimation
        3.3.1 Lane Marking Extraction
        3.3.2 Proposed Lane Position Detection Algorithm
        3.3.3 Bird's-eye View Transformation using the Proposed Dynamic Homography Matrix Generation
        3.3.4 Next Lane Position Estimation based on the Cross-ratio
        3.3.5 Forward-looking View Transformation
      3.4 Information Fusion Between Road Detection and Lane Position Estimation
        3.4.1 The Case of Detection Failures
        3.4.2 The Benefit of Information Fusion
      3.5 Experiments and Evaluation
      3.6 Summary
    4 Accurate Inter-Vehicle Distance Measurement based on Monocular Camera and Line Laser
      4.1 Introduction
      4.2 Proposed Distance Measurement Algorithm
      4.3 Experiments and Evaluation
        4.3.1 Experimental System Set-up
        4.3.2 Experimental Results
      4.4 Summary
    5 Conclusion
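    To make the first topic concrete, here is a minimal sketch of a width-adaptive bright-band filter of the general kind the dissertation builds on (the function name and the fixed width are illustrative assumptions; the actual cone hat filter additionally estimates the expected marking width per image row):

```python
import numpy as np

def lane_marking_response(gray, width):
    """Score each pixel by how much brighter it is than the road
    surface `width` pixels to its left and right. Lane markings are
    bright bands of roughly known width, so both differences are
    large on a marking. This is a generic step filter, not the
    dissertation's exact cone hat filter.
    """
    g = gray.astype(np.float32)
    left = np.roll(g, width, axis=1)    # value `width` pixels to the left
    right = np.roll(g, -width, axis=1)  # value `width` pixels to the right
    resp = np.minimum(g - left, g - right)
    resp[:, :width] = 0   # columns affected by wrap-around are invalid
    resp[:, -width:] = 0
    return np.clip(resp, 0, None)
```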

    Real-time vehicle detection using low-cost sensors

    Improving road safety and reducing the number of accidents is one of the top priorities for the automotive industry. As human driving behaviour is one of the top causation factors in road accidents, research is working towards removing control from the human driver by automating functions and finally introducing a fully Autonomous Vehicle (AV). A Collision Avoidance System (CAS) is one of the key safety systems for an AV, as it ensures all potential threats ahead of the vehicle are identified and appropriate action is taken. This research focuses on the task of vehicle detection, which is the basis of a CAS, and attempts to produce an effective vehicle detector based on the data coming from a low-cost monocular camera. Developing a robust CAS based on low-cost sensors is crucial to bringing down the cost of safety systems and, in this way, increasing their adoption rate among end users. In this work, detectors are developed based on the two main approaches to vehicle detection using a monocular camera. The first is the traditional image processing approach, in which visual cues are used to generate potential vehicle locations and, at a second stage, verify the existence of vehicles in an image. The second approach is based on a Convolutional Neural Network (CNN), a computationally expensive method that unifies the detection process in a single pipeline. The goal is to determine which method is more appropriate for real-time applications. Following the first approach, a vehicle detector based on the combination of HOG features and SVM classification is developed. The detector attempts to optimise performance by modifying the detection pipeline to improve run-time performance. For the CNN-based approach, six different network models are developed and trained end to end using collected data, each with a different network structure and parameters, in an attempt to determine which combination produces the best results. The evaluation of the different vehicle detectors produced some interesting findings: the first approach did not manage to produce a working detector, while the CNN-based approach produced a high-performing vehicle detector with an 85.87% average precision and a very low miss rate. The detector performed well under different operational environments (motorway, urban and rural roads) and the results were validated using an external dataset. Additional testing indicated the vehicle detector is suitable as a base for safety applications such as a CAS, with a run-time performance of 12 FPS and potential for further improvements.
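    As a point of reference for the first approach, the following is a minimal sketch of a HOG-plus-SVM detection pipeline; the window size, stride, and training-data handling are illustrative assumptions rather than the thesis' configuration:

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC

# 64x64 window, 16x16 blocks, 8x8 block stride and cells, 9 orientation bins
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def hog_features(patch):
    return hog.compute(cv2.resize(patch, (64, 64))).ravel()

def train(pos_patches, neg_patches):
    # labelled vehicle / non-vehicle grayscale patches assumed available
    X = np.stack([hog_features(p) for p in pos_patches + neg_patches])
    y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
    return LinearSVC().fit(X, y)

def detect(gray, clf, win=64, stride=16):
    """Single-scale sliding window; a real detector would add an
    image pyramid and non-maximum suppression."""
    boxes = []
    for y in range(0, gray.shape[0] - win + 1, stride):
        for x in range(0, gray.shape[1] - win + 1, stride):
            feat = hog_features(gray[y:y + win, x:x + win])
            if clf.decision_function([feat])[0] > 0:
                boxes.append((x, y, win, win))
    return boxes
```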

    Rapid Inspection of Pavement Markings Using Mobile Laser Scanning Point Clouds

    Intelligent Transportation Systems (ITS) combine information technology, sensors and communications for more efficient, safer, more secure and more eco-friendly surface transport. One of the most viable forms of ITS is the driverless car, which exists mainly as prototypes. Several automobile manufacturers (e.g. Ford, GM, BMW, Toyota, Tesla, Honda) and non-automobile companies (e.g. Apple, Google, Nokia, Baidu, Huawei) have invested in this field, and wider commercialization of the driverless car is expected between 2025 and 2030. Currently, the key elements of the driverless car are its sensors and a prior 3D map. The sensors mounted on the vehicle are the "eyes" of the driverless car, capturing 3D data of its environment. By comparing its environment with a pre-prepared prior 3D map, the driverless car can distinguish moving targets (e.g. vehicles, pedestrians) from permanent surface features (e.g. buildings, trees, roads, traffic signs) and take the relevant actions. With a centimetre-accuracy prior map, the intractable perception problem is transformed into a solvable localization task. The most important technology for generating the prior map is Mobile Laser Scanning (MLS). MLS can safely and rapidly acquire highly dense and accurate georeferenced 3D point clouds together with measurements of surface reflectivity. The 3D point clouds with intensity data therefore not only offer a detailed 3D surface of the road but also contain the pavement marking information that is embedded in the prior map for automatic navigation. Relevant research has focused on pavement marking extraction from MLS data to collect, update and maintain 3D prior maps. However, the accuracy and efficiency of automatic extraction of pavement markings can be further improved by intensity correction and window-based enhancement. Thus, this study aims at building a robust method for semi-automated information extraction of pavement markings detected from MLS point clouds. The proposed workflow consists of three components: preprocessing, extraction, and classification. In preprocessing, the 3D MLS point clouds are converted into radiometrically corrected and enhanced 2D intensity imagery of the road surface. The pavement markings are then automatically extracted from the intensity imagery using a set of algorithms, including Otsu's thresholding, neighbour-counting filtering, and region growing. Finally, the extracted pavement markings are classified by their geometric parameters using a manually defined decision tree. Case studies are conducted using MLS datasets acquired in Kingston (Ontario, Canada) and Xiamen (Fujian, China), two significantly different road environments, by two RIEGL VMX-450 systems. The results demonstrate that the proposed workflow and method achieve 93% completeness, 95% correctness, and a 94% F-score on the Xiamen dataset, and 84%, 93%, and 89% respectively on the Kingston dataset.
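    A minimal sketch of the extraction stage described above, with Otsu's thresholding, a neighbour-counting filter, and connected components standing in for full region growing; window size and thresholds are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_otsu
from skimage.measure import label

def extract_markings(intensity_img, min_density=0.4, min_area=50):
    """Binarise the corrected intensity image with Otsu's threshold,
    drop isolated bright noise whose 5x5 neighbourhood contains too
    few bright pixels, then keep connected components above a
    minimum area. Parameters are illustrative, not the study's."""
    binary = intensity_img > threshold_otsu(intensity_img)
    density = uniform_filter(binary.astype(np.float32), size=5)
    binary &= density > min_density          # neighbour-counting filter
    regions = label(binary)
    counts = np.bincount(regions.ravel())
    keep = counts >= min_area                # area filter per component
    keep[0] = False                          # label 0 is background
    return keep[regions]
```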

    Geometric data understanding: deriving case-specific features

    There is a tradition of using precise geometric modeling, in which uncertainties in the data can be treated as noise. Another tradition relies on the statistical nature of vast quantities of data, in which geometric regularity is intrinsic to the data and statistical models usually grasp this level only indirectly. This work focuses on point cloud data of natural resources and on silhouette recognition from video input as two real-world examples of problems whose geometric content is intangible at the raw-data level. This content could be discovered and modeled to some degree by machine learning (ML) approaches such as deep learning, but either direct coverage of the geometry in the samples or the addition of a special geometry-invariant layer is necessary. Geometric content is central when direct observation of spatial variables is needed, or when one needs a mapping to a geometrically consistent data representation in which, for example, outliers or noise can be easily discerned. In this thesis we consider the transformation of original input data into a geometric feature space in two example problems. The first example is the curvature of surfaces, which has met renewed interest since the introduction of ubiquitous point cloud data and the maturation of discrete differential geometry. Curvature spectra can characterize a spatial sample rather well and provide useful features for ML purposes. The second example involves projective methods applied to stereo video-signal analysis in swimming analytics. The aim is to find meaningful local geometric representations for feature generation, which also facilitate additional analysis based on a geometric understanding of the model. The features are associated directly with some geometric quantity, which makes it easier to express geometric constraints in a natural way, as shown in the thesis. The approach also makes visualization and further feature generation much easier. Third, the approach provides sound baseline methods for more traditional ML approaches, e.g. neural network methods. Fourth, most ML methods can use the geometric features presented in this work as additional features.
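    As one concrete instance of the curvature-type features discussed in the first example, here is a minimal sketch of the PCA-based "surface variation" of each k-neighbourhood, a standard discrete curvature proxy; the choice of k and of this particular measure are assumptions, and the thesis works with richer curvature spectra:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=20):
    """For each 3D point, take its k nearest neighbours, compute the
    covariance eigenvalues, and return lambda_min / (sum of
    eigenvalues): near 0 on flat surfaces, larger in curved or
    noisy regions."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty(len(points))
    for i, nn in enumerate(idx):
        centred = points[nn] - points[nn].mean(axis=0)
        evals = np.linalg.eigvalsh(centred.T @ centred)  # ascending order
        feats[i] = evals[0] / max(evals.sum(), 1e-12)
    return feats
```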

    Automatic vehicle detection and tracking in aerial video

    This thesis is concerned with the challenging tasks of automatic, real-time vehicle detection and tracking from aerial video. The aim is to build an automatic system that can accurately localise any vehicles that appear in aerial video frames and track the target vehicles with trackers. Vehicle detection and tracking have many applications and this has been an active area of research in recent years; however, certain realistic environments remain a challenge. This thesis develops vehicle detection and tracking algorithms that enhance robustness beyond existing approaches. The vehicle detection system proposed in this thesis is built on several object categorisation approaches, with colour and texture features in both point and area template forms. The thesis also proposes a novel Self-Learning Tracking and Detection approach, which is an extension of the existing Tracking Learning Detection (TLD) algorithm. There are a number of challenges in vehicle detection and tracking. The most difficult challenge in detection is distinguishing and clustering the target vehicle from background objects and noise. Under certain conditions, the images captured from Unmanned Aerial Vehicles (UAVs) are also blurred; for example, turbulence may make the vehicle shake during flight. This thesis tackles these challenges by applying integrated multiple feature descriptors for real-time processing. Three vehicle detection approaches are proposed: the HSV-GLCM feature approach, the ISM-SIFT feature approach and the FAST-HoG approach. The general vehicle detection approaches use highly flexible implicit shape representations. They are based on training samples in both positive and negative sets and use updated classifiers to distinguish the targets. It was found that detection results obtained using HSV-GLCM texture features can be affected by blurring; the proposed detection algorithms can further segment the edges of the vehicles from the background. Using point descriptor features can solve the blurring problem; however, the large amount of information contained in point descriptors can lead to processing times that are too long for real-time applications. The FAST-HoG approach, combining the point feature and the shape feature, is therefore proposed. This new approach speeds up the process enough to attain real-time performance. Finally, a detection approach using HoG with the FAST feature is also proposed. The HoG approach is widely used in object recognition, as it has a strong ability to represent the shape vector of the object. However, the original HoG feature is sensitive to the orientation of the target; this method improves the algorithm by inserting the direction vectors of the targets. For the tracking process, a novel approach was proposed as an extension of the TLD algorithm in order to track multiple targets. The extended approach upgrades the original system, which can only track a single target that must be selected before the detection and tracking process. The greatest challenge in vehicle tracking is long-term tracking: the target object can change its appearance during the process, and illumination and scale changes can also occur. The original TLD framework assumed that the tracker can make errors during tracking and that the accumulation of these errors could cause tracking failure, so it introduced a learning step between tracking and detection, adding a pair of inspectors (positive and negative) to constantly estimate errors. This thesis extends the TLD approach with a new detection method in order to achieve multiple-target tracking. A Forward and Backward Tracking approach is proposed to eliminate tracking errors and other problems such as occlusion. The main purpose of the proposed tracking system is to learn the features of the targets during tracking and re-train the detection classifier for further processing. This thesis puts particular emphasis on vehicle detection and tracking in extreme scenarios such as crowded highways, blurred images and changes in the appearance of the targets. Compared with existing detection and tracking approaches, the proposed approaches demonstrate a robust increase in accuracy in each scenario.
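    The forward-backward idea can be sketched independently of the full system: track points forward, track the results back, and trust only points whose round trip returns near the start. Below is a minimal version using pyramidal Lucas-Kanade; the threshold is an illustrative assumption, and the thesis' Forward and Backward Tracking operates inside its extended TLD pipeline:

```python
import cv2
import numpy as np

def fb_reliable_points(prev_gray, curr_gray, pts, max_fb_err=1.0):
    """pts: float32 array of shape (N, 1, 2). Returns the subset of
    points (and their forward matches) that pass the
    forward-backward consistency check."""
    fwd, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    back, st2, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, fwd, None)
    fb_err = np.linalg.norm(pts - back, axis=2).ravel()  # round-trip error
    ok = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_err < max_fb_err)
    return pts[ok], fwd[ok]
```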

    Automated taxiing for unmanned aircraft systems

    Over the last few years, the concept of the civil Unmanned Aircraft System (UAS) has been realised, with small UASs commonly used in industries such as law enforcement, agriculture and mapping. With increased development in other areas, such as logistics and advertisement, the size and range of civil UAS is likely to grow. Taken to the logical conclusion, it is likely that large-scale UAS will be operating in civil airspace within the next decade. Although the airborne operations of civil UAS have already gathered much research attention, work is also required to determine how UAS will function on the ground. Motivated by the assumption that large UAS will share ground facilities with manned aircraft, this thesis describes the preliminary development of an Automated Taxiing System (ATS) for UAS operating at civil aerodromes. To allow the ATS to function on the majority of UAS without additional hardware, a visual sensing approach has been chosen, with the majority of work focusing on monocular image processing techniques. The purpose of the computer vision system is to provide direct sensor data which can be used to validate the vehicle's position, in addition to detecting potential collision risks. As aerospace regulations require the most robust and reliable algorithms for control, any methods which are not fully definable or explainable are not suitable for real-world use. Therefore, non-deterministic methods and algorithms with hidden components (such as Artificial Neural Networks (ANNs)) have not been used. Instead, visual sensing is achieved through semantic segmentation, with separate segmentation and classification stages. Segmentation is performed using superpixels and reachability clustering to divide the image into single-content clusters. Each cluster is then classified using multiple types of image data, probabilistically fused within a Bayesian network. The dataset for testing has been provided by BAE Systems, allowing the system to be trained and tested on real-world aerodrome data. The system has demonstrated good performance on this limited dataset, accurately detecting both collision risks and terrain features for use in navigation.
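    To make the segment-then-classify structure concrete, here is a minimal sketch using SLIC superpixels in place of the thesis' superpixel and reachability clustering, and a naive-Bayes combination in place of its full Bayesian network; the per-cluster cue likelihoods are abstracted into an input, which is an assumption of this sketch:

```python
import numpy as np
from skimage.segmentation import slic

def classify_clusters(image, cue_likelihoods, priors):
    """Segment `image` into superpixels, then fuse per-cluster cue
    likelihoods under an independence assumption:
    P(class | cues) ~ P(class) * prod_i P(cue_i | class).
    `cue_likelihoods`: list of arrays of shape (n_clusters, n_classes).
    `priors`: array of shape (n_classes,). Returns the label image
    and the most probable class per cluster."""
    labels = slic(image, n_segments=400, compactness=10)
    n_clusters = labels.max() + 1
    log_post = np.tile(np.log(priors), (n_clusters, 1))
    for lik in cue_likelihoods:
        log_post += np.log(np.clip(lik, 1e-9, None))
    return labels, log_post.argmax(axis=1)
```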

    Contribution to the localization of intelligent vehicles based on road markings

    Autonomous Vehicle (AV) applications and Advanced Driving Assistance Systems (ADAS) rely on scene-understanding processes that allow high-level systems to carry out decision making. For such systems, the localization of a vehicle evolving in a structured dynamic environment constitutes a complex problem of crucial importance. Our research addresses scene structure detection, localization and error modeling. Taking into account the large functional spectrum of vision systems, the accessibility of open Geographical Information Systems (GIS) and the wide presence of Global Positioning Systems (GPS) onboard vehicles, we study the performance and the reliability of a vehicle localization method combining these information sources. Monocular vision-based lane marking detection provides key information about the scene structure. Using an enhanced multi-kernel framework with hierarchical weights, the proposed parametric method performs detection and tracking of the ego-lane marking in real time. A self-assessment indicator quantifies the confidence of this information source. We conduct our investigations in a localization system which tightly couples GPS, GIS and lane markings in the probabilistic framework of a Particle Filter (PF). To this end, lane markings are used not only during the map-matching process but also to model the expected ego-vehicle motion. The reliability of the localization system in the presence of unusual errors from the different information sources is enhanced by taking into account different confidence indicators; this mechanism is later employed to identify error sources. This research concludes with an experimental validation of the proposed methods in real driving situations. They were tested, and their performance quantified, using an experimental vehicle and publicly available datasets.
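    A toy version of one predict/update cycle of a particle filter in the spirit of the tight GPS/GIS/lane-marking coupling described above; the map lookup is abstracted into `offset_fn`, and all noise levels are illustrative assumptions, not the thesis' values:

```python
import numpy as np

def pf_step(particles, weights, motion, gps_xy, measured_offset, offset_fn,
            sigma_gps=3.0, sigma_lat=0.2):
    """particles: (N, 2) positions; offset_fn(particles) returns each
    particle's predicted lateral offset to the mapped lane marking."""
    # predict: odometry-style motion with diffusion
    particles = particles + motion + np.random.normal(0.0, 0.5, particles.shape)
    # update 1: GPS position likelihood
    d = np.linalg.norm(particles - gps_xy, axis=1)
    weights = weights * np.exp(-0.5 * (d / sigma_gps) ** 2)
    # update 2: camera-measured lateral offset vs. map-predicted offset
    pred = offset_fn(particles)
    weights *= np.exp(-0.5 * ((pred - measured_offset) / sigma_lat) ** 2)
    weights /= weights.sum()
    # systematic resampling
    u = (np.arange(len(weights)) + np.random.rand()) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), u)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))
```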

    Development of artificial neural network-based object detection algorithms for low-cost hardware devices

    The human brain is the most complex, powerful and versatile learning machine ever known. Consequently, many scientists from various disciplines are fascinated by its structure and information processing methods. Due to the quality and quantity of the information extracted from the sense of sight, images are one of the main information channels used by humans. However, the massive amount of video footage generated nowadays makes it difficult to process that data fast enough manually. Computer vision systems therefore represent a fundamental tool in the extraction of information from digital images, as well as a major challenge for scientists and engineers. This thesis's primary objective is automatic foreground object detection and classification through digital image analysis, using artificial neural network-based techniques specifically designed and optimised to be deployed on low-cost hardware devices. This objective is complemented by the development of methods for estimating individuals' movement using unsupervised learning and artificial neural network-based models. These objectives have been addressed through research presented in the four publications supporting this thesis. The first was published in the "ICAE" journal in 2018 and consists of a neural network-based movement detection system for Pan-Tilt-Zoom (PTZ) cameras deployed on a Raspberry Pi board. The second was published at the "WCCI" conference in 2018 and consists of a deep learning-based automatic video surveillance system for PTZ cameras deployed on low-cost hardware. The third was published in the "ICAE" journal in 2020 and consists of an anomalous foreground object detection and classification system for panoramic cameras, based on deep learning and supported by low-cost hardware. Finally, the fourth was published at the "WCCI" conference in 2020 and consists of an algorithm for estimating individuals' positions in environments with forbidden regions, based on a novel neural network model named "Forbidden Regions Growing Neural Gas".
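    For scale, a movement detector of the kind that fits on a Raspberry Pi can be sketched with classical background subtraction; this is a generic baseline for illustration, not the neural network model of the first publication:

```python
import cv2

def movement_regions(stream, min_area=500):
    """Yield, per frame, the bounding boxes of moving regions found
    by MOG2 background subtraction followed by morphological
    cleaning and contour filtering."""
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    while True:
        ok, frame = stream.read()
        if not ok:
            break
        mask = bg.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        yield [cv2.boundingRect(c) for c in contours
               if cv2.contourArea(c) >= min_area]
```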

    Large-Scale Textured 3D Scene Reconstruction

    The creation of three-dimensional environment models is a fundamental task in computer vision. Reconstructions are useful for a range of applications, such as surveying, the preservation of cultural heritage, or the creation of virtual worlds in the entertainment industry. In the field of automated driving, they help address a variety of challenges, including localization, the annotation of large datasets, and the fully automatic generation of simulation scenarios. The challenge in 3D reconstruction is the joint estimation of sensor poses and an environment model. Redundant and potentially erroneous measurements from different sensors must be integrated into a common representation of the world in order to obtain a metrically and photometrically correct model. At the same time, the method must use resources efficiently to achieve runtimes that make practical use possible. In this work, we present a reconstruction method capable of creating photorealistic 3D reconstructions of large areas extending over several kilometres. Range measurements from laser scanners and stereo camera systems are fused using a volumetric reconstruction method. Loop closures are detected and introduced as additional constraints in order to obtain a globally consistent map. The resulting mesh is textured from camera images, with the individual observations weighted by their quality. For a seamless appearance, the unknown exposure times and parameters of the optical system are jointly estimated and the images corrected accordingly. We evaluate our method on synthetic data, on real sensor data from our experimental vehicle, and on publicly available datasets. We show qualitative results for large inner-city areas as well as quantitative evaluations of the vehicle trajectory and the reconstruction quality. Finally, we present several applications, demonstrating the usefulness of our method for applications in automated driving.
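    The core of the volumetric fusion step can be illustrated by the standard truncated signed distance function (TSDF) update, in which each new range measurement is clamped and averaged into the voxels along its ray; this is a minimal per-voxel sketch under that standard formulation, not the full system, which also handles projection, loop closure, texturing and meshing:

```python
import numpy as np

def tsdf_update(tsdf, weight, measured_depth, voxel_depth, trunc=0.1):
    """Fold one range measurement into a voxel (or array of voxels):
    clamp the signed distance between the measured surface and the
    voxel to [-trunc, trunc], then take the running weighted average
    with the stored value."""
    sdf = np.clip(measured_depth - voxel_depth, -trunc, trunc)
    new_weight = weight + 1.0
    new_tsdf = (tsdf * weight + sdf) / new_weight
    return new_tsdf, new_weight
```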