1,509 research outputs found

    PolyMerge: A Novel Technique aimed at Dynamic HD Map Updates Leveraging Polylines

    Currently, High-Definition (HD) maps are a prerequisite for the stable operation of autonomous vehicles. Such maps contain information about all the static road objects the vehicle must consider during navigation, such as road edges, road lanes, and crosswalks. To generate such an HD map, current approaches need to process pre-recorded environment data obtained from onboard sensors. However, recording such a dataset often requires considerable time and effort. In addition, every time the actual road environment changes, a new dataset must be recorded to generate an up-to-date HD map. This paper presents a novel approach that continuously generates or updates the HD map using onboard sensor data. Since there is no need to pre-record a dataset, HD map updating can run in parallel with the main autonomous vehicle navigation pipeline. The proposed approach uses the VectorMapNet framework to generate vector road object instances from a sensor data scan. The PolyMerge technique then merges new instances into previous ones, mitigating detection errors and thereby generating or updating the HD map. The performance of the algorithm was confirmed by comparison with ground truth on the nuScenes dataset. Experimental results showed that the mean error for different levels of environment complexity was comparable to the single-instance error of VectorMapNet.
    Comment: 6 pages, 9 figures
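
    The merging step at the heart of the abstract can be pictured as fusing newly detected polylines into an accumulated map. Below is a minimal sketch in Python with NumPy; the distance threshold, the averaging weight, and the assumption that matched polylines share the same number of points are all hypothetical simplifications, not the authors' actual method:

    ```python
    import numpy as np

    def polyline_distance(a, b):
        """Mean distance from each point of polyline a to its nearest point on b."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return d.min(axis=1).mean()

    def merge_polylines(map_polylines, new_polylines, thresh=1.0, alpha=0.5):
        """Merge newly detected polylines into the accumulated map.

        A new instance lying within `thresh` of an existing one is fused by a
        weighted point average (assumes matched polylines have equal point
        counts); otherwise it is added as a new map element.
        """
        merged = [p.copy() for p in map_polylines]
        for new in new_polylines:
            dists = [polyline_distance(new, m) for m in merged]
            if dists and min(dists) < thresh:
                i = int(np.argmin(dists))
                merged[i] = alpha * merged[i] + (1 - alpha) * new
            else:
                merged.append(new)
        return merged
    ```

    Averaging matched instances over repeated scans is what dampens the per-scan detection noise, so the map error stays near the single-instance error.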

    VMA: Divide-and-Conquer Vectorized Map Annotation System for Large-Scale Driving Scene

    High-definition (HD) maps serve as the essential infrastructure of autonomous driving. In this work, we build a systematic vectorized map annotation framework (termed VMA) for efficiently generating HD maps of large-scale driving scenes. We design a divide-and-conquer annotation scheme to solve the spatial extensibility problem of HD map generation, and abstract map elements with a variety of geometric patterns as a unified point sequence representation, which can be extended to most map elements in the driving scene. VMA is highly efficient and extensible, requires negligible human effort, and is flexible in terms of spatial scale and element type. We quantitatively and qualitatively validate the annotation performance on real-world urban and highway scenes, as well as on the NYC Planimetric Database. VMA can significantly improve map generation efficiency while requiring little human effort. On average, VMA takes 160 min to annotate a scene spanning hundreds of meters and reduces human cost by 52.3%, showing great application value.
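
    The unified point-sequence abstraction can be illustrated with a small sketch (Python/NumPy). The fixed point count, the element-type names, and the arc-length resampling are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np

    def as_point_sequence(element_type, coords, num_points=20):
        """Represent a map element as a fixed-length point sequence.

        Closed elements (e.g. crosswalk polygons) are closed by repeating the
        first vertex; every element is then resampled to `num_points` points
        by arc length, yielding one unified representation for open polylines
        (lane dividers) and closed polygons alike.
        """
        pts = np.asarray(coords, dtype=float)
        if element_type == "polygon":
            pts = np.vstack([pts, pts[:1]])  # close the ring
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative arc length
        t = np.linspace(0.0, s[-1], num_points)
        x = np.interp(t, s, pts[:, 0])
        y = np.interp(t, s, pts[:, 1])
        return np.stack([x, y], axis=1)
    ```

    A single fixed-length representation like this is what lets one annotation pipeline cover many element types without per-type geometry handling.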

    Time-Related Quality Dimensions of Urban Remotely Sensed Big Data

    Abstract. Our rapidly changing world requires new sources of image-based information. The maintenance and management of quickly changing urban areas and smart cities cannot rely solely on traditional techniques based on remotely sensed data; new and progressive techniques must also be involved. Among these technologies, volunteer-based solutions are gaining importance, such as crowd-sourced image evaluation, mapping with satellite-based positioning techniques, or even observations made by untrained people. Location-based intelligence has become an everyday practice in our lives. Weather forecast and traffic monitoring applications are sufficient examples, where anybody can act as an observer and the acquired data, despite their heterogeneity in quality, provide great value. Such value intuitively increases when the data are of better quality. In the age of visualization, real-time imaging, big data, and crowd-sourced spatial data have fundamentally transformed our everyday applications. The most important factors in location-based decisions are the time-related quality parameters of the data used. In this paper, several time-related data quality dimensions and terms are defined. The paper analyses the time-sensitive characteristics of image-based crowd-sourced big data and presents quality challenges and user perspectives. The data quality analyses focus not only on the dimensions but also extend to quality-related elements and metrics. The paper discusses the connection between data acquisition and processing techniques, considering big data aspects as well. The paper contains not only theoretical sections; practice-oriented examples of detecting quality problems are also covered. One illustrative example is OpenStreetMap (OSM), where the development of urbanization and the increasing involvement of volunteers can be studied.
    This framework continues the previous activities of the Remote Sensing Data Quality Working Group (ICWGIII/IVb) of the ISPRS on the temporal variety of our urban environment.
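
    As a concrete illustration of one time-related quality dimension, the currency (up-to-dateness) of an observation can be modeled as exponential decay with data age. This is a hypothetical metric in Python; the half-life parameter and the decay model are assumptions for illustration, not values or formulas from the paper:

    ```python
    from datetime import datetime, timedelta, timezone

    def currency_score(acquisition_time, now=None, half_life_days=365.0):
        """Currency of an observation: 1.0 when just acquired,
        halving after each `half_life_days` of data age."""
        now = now or datetime.now(timezone.utc)
        age_days = (now - acquisition_time).total_seconds() / 86400.0
        return 0.5 ** (age_days / half_life_days)
    ```

    A score like this lets heterogeneous crowd-sourced observations be weighted by freshness when they feed a location-based decision.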

    LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior Learning

    We present a novel procedural framework for generating an arbitrary number of labeled crowd videos (LCrowdV). The resulting crowd video datasets are used to design accurate algorithms or training models for crowded scene understanding. Our overall approach is composed of two components: a procedural simulation framework for generating crowd movements and behaviors, and a procedural rendering framework for generating different videos or images. Each video or image is automatically labeled based on the environment, number of pedestrians, density, behavior, flow, lighting conditions, viewpoint, noise, etc. Furthermore, we can increase the realism by combining synthetically generated behaviors with real-world background videos. We demonstrate the benefits of LCrowdV over prior labeled crowd datasets by improving the accuracy of pedestrian detection and crowd behavior classification algorithms. LCrowdV will be released on the WWW.
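
    The automatic per-video labeling described above can be pictured as emitting one structured record per procedurally generated clip. The sketch below is hypothetical Python; the field names, value ranges, and scene area are illustrative assumptions, not LCrowdV's actual schema:

    ```python
    import random

    def generate_label(seed, scene_area_m2=2500.0):
        """Emit one hypothetical label record for a procedurally generated
        crowd video; all fields and ranges are illustrative."""
        rng = random.Random(seed)  # seed makes the record reproducible
        n = rng.randint(10, 500)
        return {
            "environment": rng.choice(["street", "plaza", "stadium"]),
            "num_pedestrians": n,
            "density_per_m2": n / scene_area_m2,
            "behavior": rng.choice(["walking", "queueing", "evacuating"]),
            "lighting": rng.choice(["day", "dusk", "night"]),
            "viewpoint_deg": rng.uniform(0.0, 360.0),
        }
    ```

    Because the simulator controls every scene parameter, such labels come for free and are exact, which is the key advantage over hand-annotated crowd footage.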