
    Vehicle localization using landmarks obtained by a lidar mobile mapping system

    Accurate and reliable localization in extensive outdoor environments will be a key ability of future driver assistance systems and autonomous vehicles. Relative localization, using sensors and a pre-mapped environment, will play a crucial role for such systems, because standard global navigation satellite system (GNSS) solutions cannot provide the required reliability. These environment maps will have to be quite detailed, which makes producing them fully automatically a necessity. In this paper, a relative localization approach is evaluated for an environment of substantial extent. The pre-mapped environment is obtained using a LiDAR mobile mapping van. From the raw data, landmarks are extracted fully automatically and inserted into a landmark map. Then, in a second campaign, a robotic vehicle is used to traverse the same scene. Landmarks are extracted from the sensor data of this vehicle as well. Using associated landmark pairs and an estimation approach, the positions of the robotic vehicle are obtained. The number of matches and the matching errors are analyzed, and it is shown that localization based on landmarks outperforms the vehicle's standard GNSS solution.
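
    The estimation approach itself is not spelled out in the abstract. As a minimal sketch of how vehicle positions can be recovered from associated landmark pairs, the following fits a 2D rigid transform by least squares (a Kabsch/Procrustes fit); the function and data are hypothetical, not the paper's implementation:

```python
import numpy as np

def estimate_pose_2d(map_landmarks, vehicle_landmarks):
    """Least-squares rigid transform (R, t) mapping landmarks observed
    in the vehicle frame onto their associated counterparts in the map."""
    P = np.asarray(vehicle_landmarks, dtype=float)  # (N, 2), vehicle frame
    Q = np.asarray(map_landmarks, dtype=float)      # (N, 2), map frame
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = q_mean - R @ p_mean                         # vehicle position in the map
    return R, t

# Three associated landmark pairs (map coordinates vs. on-board detections)
map_lms = [(10.0, 5.0), (12.0, 9.0), (15.0, 4.0)]
veh_lms = [(2.0, 1.0), (4.0, 5.0), (7.0, 0.0)]
R, t = estimate_pose_2d(map_lms, veh_lms)
print(R, t)   # here: identity rotation and translation (8, 4)
```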

    Evaluation of automatically extracted landmarks for future driver assistance systems

    In the future, vehicles will gather more and more spatial information about their environment, using on-board sensors such as cameras and laser scanners. Using this data, e.g. for localization, requires highly accurate maps with a higher level of detail than today's maps provide. Producing such maps can only be realized economically if the information is obtained fully automatically. Our goal is to investigate the creation of intermediate-level maps containing geo-referenced landmarks, which are suitable for the specific purpose of localization. To evaluate this approach, we acquired a dense laser scan of a 22 km scene, using a mobile mapping system. From this scan, we automatically extracted pole-like structures, such as street and traffic lights, which form our pole database. To assess the accuracy, ground truth was obtained for a selected inner-city junction by a terrestrial survey. In order to evaluate the usefulness of this database for localization purposes, we obtained a second scan, using a robotic vehicle equipped with an automotive-grade laser scanner. We extracted poles from this scan as well and employed a local pole matching algorithm to improve the vehicle's position estimate.
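
    The pole matching algorithm is not detailed in the abstract; a common baseline for this association step is gated nearest-neighbour matching, sketched here with made-up coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_poles(db_poles, scan_poles, gate=1.0):
    """Associate poles extracted from the on-line scan with poles in the
    pre-built database: nearest neighbour within a distance gate (metres)."""
    tree = cKDTree(np.asarray(db_poles, dtype=float))
    dists, idx = tree.query(np.asarray(scan_poles, dtype=float))
    return [(i, j) for i, (d, j) in enumerate(zip(dists, idx)) if d <= gate]

db = [(10.1, 5.0), (12.0, 9.2), (30.0, 2.0)]   # database pole positions
scan = [(10.3, 5.1), (25.0, 25.0)]             # second pole has no counterpart
print(match_poles(db, scan))                   # -> [(0, 0)]
```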

    Conditional Adversarial Networks for Multimodal Photo-Realistic Point Cloud Rendering

    We investigate whether conditional generative adversarial networks (C-GANs) are suitable for point cloud rendering. For this purpose, we created a dataset containing approximately 150,000 image pairs, each consisting of a point cloud rendering and the corresponding camera image. The dataset was recorded using our mobile mapping system, with capture dates spread across one year. Our model learns to predict realistic-looking images from point cloud data alone. We show that this approach can be used to colourize point clouds without using any camera images. Additionally, we show that by parameterizing the recording date, we are even able to predict realistic-looking views for different seasons from identical input point clouds.
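
    The network architecture is not given in the abstract. The following PyTorch fragment only illustrates the conditioning idea, i.e. feeding a normalised recording date as an extra input plane next to the point cloud rendering; all layer choices and names are placeholders, not the paper's model:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Toy pix2pix-style generator: maps a rendered point cloud image
    (e.g. depth/intensity channels) plus a scalar recording-date channel
    to an RGB image. Layer sizes are illustrative placeholders."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + 1, 64, 4, stride=2, padding=1),  # +1 date channel
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, pc_render, day_of_year):
        # Broadcast the normalised recording date over a full image plane,
        # so every pixel can be conditioned on the season.
        b, _, h, w = pc_render.shape
        date_plane = day_of_year.view(b, 1, 1, 1).expand(b, 1, h, w)
        return self.net(torch.cat([pc_render, date_plane], dim=1))

g = ConditionalGenerator()
fake = g(torch.randn(1, 2, 64, 64), torch.tensor([0.5]))  # mid-year input
print(fake.shape)  # torch.Size([1, 3, 64, 64])
```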

    Assessing temporal behavior in lidar point clouds of urban environments

    Self-driving cars and robots that run autonomously over long periods of time need high-precision and up-to-date models of the changing environment. The main challenge in creating long-term maps of dynamic environments is to identify changes and adapt the map continuously. Changes can occur abruptly, gradually, or even periodically. In this work, we investigate how dense mapping data from several epochs can be used to identify the temporal behavior of the environment. This approach anticipates possible future scenarios in which a large fleet of vehicles is equipped with sensors that continuously capture the environment. This data is then sent to a cloud-based infrastructure, which aligns all datasets geometrically and subsequently runs scene analyses on them, among them the analysis of temporal changes in the environment. Our experiments are based on a LiDAR mobile mapping dataset which consists of 150 scan strips (a total of about 1 billion points), obtained in multiple epochs. Parts of the scene are covered by up to 28 scan strips. The time difference between the first and last epoch is about one year. In order to process the data, the scan strips are aligned using an overall bundle adjustment, which estimates the surface (about one billion surface element unknowns) as well as 270,000 unknowns for the adjustment of the exterior orientation parameters. After this, the surface misalignment is usually below one centimeter. In the next step, we perform a segmentation of the point clouds using a region growing algorithm. The segmented objects and the aligned data are then used to compute an occupancy grid, which is filled by tracing each individual LiDAR ray from the scan head to every point of a segment. As a result, we can assess the behavior of each segment in the scene and remove voxels belonging to temporary objects from the global occupancy grid.
    DFG/GRK/215
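
    As an illustration of the ray tracing step that fills the occupancy grid, here is a minimal sketch using fixed-step sampling; an exact voxel traversal (e.g. Amanatides-Woo) would be used in practice, and all parameter values are assumptions:

```python
import numpy as np

def trace_ray(origin, end, voxel_size=0.2, grid=None):
    """Mark voxels traversed by a LiDAR ray as free space and the voxel
    containing the measured point as occupied. Fixed-step sampling is
    used for brevity; it can occasionally skip voxel corners."""
    if grid is None:
        grid = {}                                   # voxel index -> state
    origin, end = np.asarray(origin, float), np.asarray(end, float)
    direction = end - origin
    n_steps = max(int(np.linalg.norm(direction) / (0.5 * voxel_size)), 1)
    for s in np.linspace(0.0, 1.0, n_steps, endpoint=False):
        v = tuple(np.floor((origin + s * direction) / voxel_size).astype(int))
        grid.setdefault(v, 'free')                  # never overwrite 'occupied'
    grid[tuple(np.floor(end / voxel_size).astype(int))] = 'occupied'
    return grid

grid = trace_ray(origin=(0.0, 0.0, 2.0), end=(5.0, 1.0, 0.0))
print(sum(state == 'free' for state in grid.values()), 'free voxels traced')
```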

    Learning a Precipitation Indicator from Traffic Speed Variation Patterns

    It is common knowledge that traffic participants tend to drive slower under rain or snow conditions, which has been confirmed by many studies in transportation research. Analyses of the relation between precipitation events and traffic speed observations have shown that road speed prediction models can be improved by using extra weather information. Conversely, the traffic speed variation patterns of multiple roads may also provide an indirect indication of weather conditions. In this paper, we attempt to learn such a model, which detects the occurrence of precipitation events using only road speed observations, for the case of New York City. Using the seasonal-trend decomposition model Prophet, the residuals between the observations and the model serve as features representing the level of anomaly relative to the normal traffic situation. Based on the timestamps of weather records on sunny days versus rainy or snowy days, features were extracted from the traffic data and assigned to the corresponding labels. A binary classifier was then trained on six months of training data and achieved an accuracy of 91.74% when tested on the remaining two months of test data. We show that there is a significant correlation between precipitation events and the speed variation patterns of multiple roads, which can be used to train a binary indicator. This indicator detects those precipitation events which have a significant influence on city traffic. The method also has great potential to improve the emergency response of cities where massive real-time traffic speed observations are available.
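
    A minimal sketch of the described pipeline, run on synthetic stand-in data (the paper's exact features, classifier, and Prophet configuration are not given in the abstract):

```python
import numpy as np
import pandas as pd
from prophet import Prophet
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in: hourly speeds for 3 roads over 60 days, with a daily
# rhythm plus random "rain" dips. Values are illustrative only.
rng = np.random.default_rng(0)
idx = pd.date_range('2020-01-01', periods=24 * 60, freq='h')
rain = rng.random(len(idx)) < 0.1                       # 10% rainy hours
speeds = pd.DataFrame(
    {f'road_{k}': 50 + 10 * np.sin(2 * np.pi * idx.hour / 24)
                  - 8 * rain + rng.normal(0, 2, len(idx))
     for k in range(3)}, index=idx)

# Residuals w.r.t. a Prophet model of "normal" traffic act as anomaly features.
feats = {}
for road in speeds:
    hist = pd.DataFrame({'ds': speeds.index, 'y': speeds[road].values})
    m = Prophet(daily_seasonality=True)
    m.fit(hist)
    feats[road] = hist['y'].values - m.predict(hist[['ds']])['yhat'].values

X = pd.DataFrame(feats, index=idx)
split = len(X) * 3 // 4                                 # time-ordered split
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X.iloc[:split], rain[:split])
print('accuracy:', clf.score(X.iloc[split:], rain[split:]))
```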

    Gaussian Process Mapping of Uncertain Building Models with GMM as Prior

    Mapping with an uncertainty representation is required in many research domains, such as localization and sensor fusion. Although uncertainty in the pose estimation of an ego-robot given map information has been explored extensively, the quality of the reference maps is often neglected. To avoid the potential problems caused by map errors and a lack of uncertainty quantification, an adequate uncertainty measure for maps is required. In this paper, uncertain building models that abstract the map surface using Gaussian Processes (GP) are proposed to measure map uncertainty in a probabilistic way. To reduce redundant computation for simple planar objects, facets extracted from a Gaussian Mixture Model (GMM) are combined with the implicit GP map, and local GP-block techniques are used as well. The proposed method is evaluated on LiDAR point clouds of city buildings collected by a mobile mapping system. Compared to other methods such as Octomap, Gaussian Process Occupancy Maps (GPOM), and Bayesian Generalized Kernel inference (BGKOctomap), our method achieves a higher precision-recall AUC for the evaluated buildings.
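
    As a rough illustration of a map that carries its own uncertainty, the following sketch models a facade's deviation from a planar facet (standing in for the GMM prior) as a GP over in-plane coordinates; the data, kernel, and parameters are assumptions, not the paper's method:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic facade: residual depth relative to the fitted plane, as a
# function of the two in-plane coordinates (u, v).
rng = np.random.default_rng(1)
uv = rng.uniform(0, 10, size=(200, 2))                        # in-plane coords
depth = 0.05 * np.sin(uv[:, 0]) + rng.normal(0, 0.01, 200)    # residual depth

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4),
    normalize_y=True)
gp.fit(uv, depth)

# Query the map: mean offset from the plane plus a 1-sigma uncertainty bound.
query = np.array([[5.0, 5.0], [9.5, 0.2]])
mean, std = gp.predict(query, return_std=True)
print(mean, std)
```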

    Classification and Change Detection in Mobile Mapping LiDAR Point Clouds

    Creating 3D models of the static environment is an important task for the advancement of driver assistance systems and autonomous driving. In this work, a static reference map is created from a Mobile Mapping “light detection and ranging” (LiDAR) dataset. The data was obtained in 14 measurement runs from March to October 2017 in Hannover and consists of about 15 billion points in total. The point cloud data is first segmented by region growing and then processed by a random forest classification, which divides the segments into five static classes (“facade”, “pole”, “fence”, “traffic sign”, and “vegetation”) and three dynamic classes (“vehicle”, “bicycle”, “person”) with an overall accuracy of 94%. All static objects are entered into a voxel grid so that different measurement epochs can be compared directly. In the next step, the classified voxels are combined with the result of a visibility analysis. To this end, we use a ray tracing algorithm to detect traversed voxels and to differentiate between empty space and occlusion. Each voxel is classified as suitable for the static reference map or not, based on its object class and its occupancy state across the different epochs. Thereby, we avoid eliminating static voxels that were occluded in some of the measurement runs (e.g. parts of a building occluded by a tree). However, segments that are only temporarily present and connected to static objects, such as scaffolds or awnings on buildings, are not included in the reference map. Overall, the combination of the classification with the subsequent entry of the classes into a voxel grid provides good and useful results that can be updated by including new measurement data.
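
    A minimal sketch of the segment classification step, with illustrative hand-crafted features and synthetic segments (the paper's actual feature set is not given in the abstract):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment_features(points: np.ndarray) -> np.ndarray:
    """Per-segment features (illustrative, not the paper's set): bounding
    box extent plus eigenvalue-based linearity and planarity descriptors."""
    extent = points.max(axis=0) - points.min(axis=0)
    evals = np.sort(np.linalg.eigvalsh(np.cov(points.T)))[::-1] + 1e-12
    linearity = (evals[0] - evals[1]) / evals[0]
    planarity = (evals[1] - evals[2]) / evals[0]
    return np.hstack([extent, linearity, planarity])

# A tall thin segment (pole-like) vs. a flat wide one (facade-like):
pole = np.random.default_rng(2).normal(0, [0.05, 0.05, 2.0], (500, 3))
wall = np.random.default_rng(3).normal(0, [3.0, 0.05, 1.5], (500, 3))
X = np.vstack([segment_features(pole), segment_features(wall)])
y = np.array(['pole', 'facade'])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X))   # -> ['pole' 'facade']
```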

    Cooperative information augmentation in a geosensor network

    This paper presents a concept for the collaborative, distributed acquisition and refinement of geo-related information. The underlying idea is to start with a massive number of moving sensors which can observe and measure a spatial phenomenon with an unknown, possibly low accuracy. Linking these measurements with a limited number of measuring units of higher accuracy leads to an augmentation of information and quality in the mass sensor data. This is achieved by distributed information integration and processing within a local communication range. The approach is demonstrated with an example in which cars measure rainfall indirectly via their wiper frequencies. The a priori unknown relationship between wiper frequency and rainfall is incrementally determined and refined in the sensor network. For this, information from neighboring stationary rain gauges of higher accuracy and from neighboring cars, together with their associated measurement accuracies, is integrated. In this way, the quality of the measurement units can be enhanced. In the paper, the concept for the approach is presented, together with first experiments in a simulation environment. Each sensor is described as an individual agent with certain processing and communication capabilities. The movement of the cars is based on given traffic models. Experiments on the dependence of the achievable accuracy on car density and station density are presented. Finally, extensions of this approach to other applications are outlined.
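
    The core fusion rule of such a scheme can be sketched in a few lines; the inverse-variance weighting below is a standard choice, and the numbers are hypothetical, not taken from the paper's experiments:

```python
# Each agent keeps an estimate of the wiper-frequency-to-rainfall coefficient
# together with its variance, and merges a neighbour's estimate by
# inverse-variance weighting, so accurate units (rain gauges) dominate.
def fuse(est_a, var_a, est_b, var_b):
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return est, 1.0 / (w_a + w_b)

car = (0.8, 0.5)     # rough on-board estimate (hypothetical units and values)
gauge = (1.1, 0.01)  # high-accuracy stationary rain gauge within range
print(fuse(*car, *gauge))  # pulled strongly towards the gauge: ~(1.094, 0.0098)
```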

    Generalization of tiled models with curved surfaces using typification

    Especially for landmark buildings or in the context of cultural heritage documentation, highly detailed digital models are being created in many places. In some of these models, surfaces are represented by tiles which are individually modeled as solid shapes. In many applications, the high complexity of these models has to be reduced for more efficient visualization and analysis. In our paper, we introduce an approach to derive versions of such a model at different scales through the generalization method of typification, which works for curved underlying surfaces. Using the example of tiles placed on a curved roof, which occur very frequently in ancient Chinese architecture for instance, the original set of tiles is replaced by fewer but bigger tiles while keeping a similar appearance. In the first step, the distribution of the central points of the tiles is approximated by a spline surface. This is necessary because curved roof surfaces cannot be approximated by planes at large scales. After that, the new set of tiles with fewer rows and/or columns is distributed along a spline surface generated by morphing the original surface towards a plane. The degree of morphing depends on the desired target scale. If the surface can be represented as a plane at the given resolution, the tiles may be converted to a bump map or a simple texture for visualization. In the final part, a perception-based method using the CSF (contrast sensitivity function) is introduced to determine an appropriate LoD (level of detail) version of the model for a given viewing scenario (point of view and camera properties) at runtime.
    BMBF/GDI-Grid project; National Basic Research Program of China/2010CB731800; National High Technology Research and Development Program of China/2008AA12160
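
    A minimal sketch of the resampling idea behind the typification step, under the simplifying assumption that tile centres lie on a regular grid (the paper's actual spline fitting and morphing are more involved):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic roof: tile-centre heights on a 10 x 16 grid over a curved ridge.
rows, cols = np.arange(10), np.arange(16)
heights = 2.0 * np.sin(np.pi * rows / 9)[:, None] * np.ones(16)

# Fit a spline surface through the tile centres, then re-sample it with
# fewer rows/columns, so the bigger tiles still follow the curvature.
spline = RectBivariateSpline(rows, cols, heights)
new_rows = np.linspace(0, 9, 6)        # 10 -> 6 rows of (bigger) tiles
new_cols = np.linspace(0, 15, 10)      # 16 -> 10 columns
new_centres = spline(new_rows, new_cols)
print(new_centres.shape)               # (6, 10) typified tile centres
```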