    A Novel Point Cloud Compression Algorithm for Vehicle Recognition Using Boundary Extraction

    Recently, hardware systems that generate point cloud data through 3D LiDAR scanning have improved considerably, with important applications in autonomous driving and 3D reconstruction. However, point cloud data may contain defects such as duplicate points, redundant points, and unordered masses of points, which place higher demands on the hardware that processes the data. Simplifying and compressing point cloud data can speed up subsequent recognition stages. This paper presents a novel algorithm for identifying vehicles in the environment from point cloud data acquired with 3D LiDAR. A point cloud compression method based on nearest-neighbor points and boundary extraction from octree voxel center points is applied to the data, followed by an image-mapping-based vehicle identification algorithm for recognition. The proposed algorithm is tested on the KITTI dataset, and the results show improved accuracy compared to other methods.
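
    As an illustration of the voxel-center compression idea the abstract describes, below is a minimal Python sketch that replaces all points falling in the same voxel with that voxel's center. It is a hedged approximation, not the authors' algorithm: the nearest-neighbor and boundary-extraction steps are omitted, and the function name and parameters are illustrative.

        import numpy as np

        def voxel_downsample(points, voxel_size=0.5):
            """Octree-style compression sketch: one voxel-center point
            replaces every raw point that falls inside that voxel."""
            # Integer voxel index for every point, shape (N, 3)
            idx = np.floor(points / voxel_size).astype(np.int64)
            # Keep one representative per occupied voxel
            _, keep = np.unique(idx, axis=0, return_index=True)
            # Return the centers of the occupied voxels
            return (idx[keep] + 0.5) * voxel_size

        # Toy usage: 100k random points collapse to far fewer centers
        cloud = np.random.randn(100_000, 3) * 10.0
        print(cloud.shape, "->", voxel_downsample(cloud).shape)

    In the paper's pipeline, boundary extraction would then retain only the centers lying on object contours before recognition.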

    Circular Accessible Depth: A Robust Traversability Representation for UGV Navigation

    In this paper, we present Circular Accessible Depth (CAD), a robust traversability representation that lets an unmanned ground vehicle (UGV) learn traversability in scenarios containing irregular obstacles. To predict CAD, we propose a neural network, CADNet, with an attention-based multi-frame point cloud fusion module, the Stability-Attention Module (SAM), to encode spatial features from LiDAR point clouds. CAD is designed in a polar coordinate system and focuses on predicting the border of the traversable area. Because CAD encodes the spatial information of the surrounding environment, it enables semi-supervised learning for CADNet and thus avoids annotating a large amount of data. Extensive experiments demonstrate that CAD outperforms baselines in robustness and precision. We also deploy our method on a real UGV and show that it performs well in real-world scenarios. (Comment: 13 pages, 8 figures)
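
    The polar border idea can be pictured with a short sketch: bin LiDAR points into angular sectors and take, per sector, the range to the nearest obstacle-height point. This is only a geometric approximation under stated assumptions (flat ground, a crude height threshold); CADNet itself predicts this border with a learned network.

        import numpy as np

        def circular_accessible_depth(points, n_sectors=360,
                                      max_range=50.0, min_obstacle_z=0.3):
            """Per angular sector, range to the nearest obstacle point."""
            x, y, z = points[:, 0], points[:, 1], points[:, 2]
            rng = np.hypot(x, y)
            ang = np.arctan2(y, x)                    # angles in [-pi, pi]
            sector = ((ang + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors

            cad = np.full(n_sectors, max_range)
            obst = z > min_obstacle_z                 # crude obstacle test (assumption)
            np.minimum.at(cad, sector[obst], np.minimum(rng[obst], max_range))
            return cad                                # accessible depth per sector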

    3D-SeqMOS: A Novel Sequential 3D Moving Object Segmentation in Autonomous Driving

    In a SLAM system for robotics and autonomous driving, the accuracy of front-end odometry and back-end loop-closure detection determines the performance of the whole intelligent system. However, LiDAR SLAM can be disturbed by moving objects in the current scene, causing drift errors and even loop-closure failure. The ability to detect and segment moving objects is therefore essential for high-precision positioning and for building a consistent map. In this paper, we address moving object segmentation (MOS) from 3D LiDAR scans to improve the odometry and loop-closure accuracy of SLAM. We propose a novel 3D Sequential Moving-Object-Segmentation (3D-SeqMOS) method that accurately segments the scene into moving and static objects, such as moving and static cars. Unlike existing projected-image methods, we process the raw 3D point cloud and build a 3D convolutional neural network for the MOS task. In addition, to make full use of the spatio-temporal information in the point cloud, we propose a point cloud residual mechanism that uses the spatial features of the current scan and the temporal features of previous residual scans. We also build a complete SLAM framework to verify the effectiveness and accuracy of 3D-SeqMOS. Experiments on the SemanticKITTI dataset show that 3D-SeqMOS effectively detects moving objects and improves the accuracy of LiDAR odometry and loop-closure detection, outperforming the state-of-the-art method by 12.4%. We further submit the proposed method to the SemanticKITTI Moving Object Segmentation competition and achieve 2nd place on the leaderboard, showing its effectiveness.
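
    To make the residual mechanism concrete, here is a hedged sketch of one plausible reading: voxelize the current and a previous scan, and treat voxels occupied now but free before as moving-object evidence. The actual 3D-SeqMOS residual operates on learned spatio-temporal features inside a 3D CNN; the grid size and occupancy test below are assumptions.

        import numpy as np

        def occupancy_residual(curr_pts, prev_pts, voxel=0.2, grid=(256, 256, 32)):
            """Voxels occupied in the current scan but free in the
            previous one are candidate moving-object evidence."""
            def occupancy(pts):
                # Shift indices so the sensor sits at the grid center
                idx = np.floor(pts / voxel).astype(int) + np.array(grid) // 2
                ok = np.all((idx >= 0) & (idx < grid), axis=1)
                vol = np.zeros(grid, dtype=np.float32)
                vol[tuple(idx[ok].T)] = 1.0
                return vol
            return np.clip(occupancy(curr_pts) - occupancy(prev_pts), 0.0, 1.0)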

    Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer

    Semantic annotations are vital for training models for object recognition, semantic segmentation, and scene understanding. Unfortunately, pixel-wise annotation of images at very large scale is labor-intensive, and little labeled data is available, particularly at the instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a model that transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset that we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while also producing more accurate and time-coherent labels. (Comment: 10 pages, Conference on Computer Vision and Pattern Recognition (CVPR), 201)
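
    The core projection step of 3D-to-2D label transfer can be sketched briefly: project labeled 3D points through a known camera into a 2D label image. This is a simplification under assumptions (known intrinsics K and pose T, no occlusion reasoning or z-buffering, per-point rather than bounding-primitive labels), not the paper's full transfer model.

        import numpy as np

        def transfer_labels(points, labels, K, T_cam_from_world, img_hw):
            """Pinhole projection of per-point 3D labels into a 2D image."""
            h, w = img_hw
            pts_h = np.c_[points, np.ones(len(points))]     # homogeneous (N, 4)
            cam = (T_cam_from_world @ pts_h.T).T[:, :3]     # world -> camera frame
            front = cam[:, 2] > 0.1                         # keep points ahead of camera
            uv = (K @ cam[front].T).T
            uv = (uv[:, :2] / uv[:, 2:3]).astype(int)       # perspective divide
            out = np.zeros((h, w), dtype=np.int32)          # 0 = unlabeled
            ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
            out[uv[ok, 1], uv[ok, 0]] = labels[front][ok]
            return out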

    LiDAR-BEVMTN: Real-Time LiDAR Bird's-Eye View Multi-Task Perception Network for Autonomous Driving

    LiDAR is crucial for robust 3D scene perception in autonomous driving, and LiDAR perception has the largest body of literature after camera perception. However, multi-task learning across tasks like detection, segmentation, and motion estimation using LiDAR remains relatively unexplored, especially on automotive-grade embedded platforms. We present a real-time multi-task convolutional neural network for LiDAR-based object detection, semantic segmentation, and motion segmentation. The unified architecture comprises a shared encoder and task-specific decoders, enabling joint representation learning. We propose a novel Semantic Weighting and Guidance (SWAG) module to selectively transfer semantic features for improved object detection. Our heterogeneous training scheme combines diverse datasets and exploits complementary cues between tasks. This work provides the first embedded implementation unifying these key perception tasks from LiDAR point clouds, achieving 3 ms latency on the embedded NVIDIA Xavier platform. We achieve state-of-the-art results for two tasks, semantic and motion segmentation, and close to state-of-the-art performance for 3D object detection. By maximizing hardware efficiency and leveraging multi-task synergies, our method delivers an accurate and efficient solution tailored for real-world automated driving deployment. Qualitative results can be seen at https://youtu.be/H-hWRzv2lIY
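
    The shared-encoder, task-specific-decoder layout can be shown as a skeleton. This is a hedged PyTorch sketch of the general pattern only: layer sizes, channel counts, and head shapes are placeholders, and the SWAG module and heterogeneous training scheme are not reproduced.

        import torch
        import torch.nn as nn

        class MultiTaskBEVNet(nn.Module):
            """Shared BEV encoder feeding three task decoders."""
            def __init__(self, in_ch=6, feat=64, n_classes=10):
                super().__init__()
                self.encoder = nn.Sequential(                  # shared representation
                    nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
                self.det_head = nn.Conv2d(feat, 7, 1)          # box parameters per cell
                self.sem_head = nn.Conv2d(feat, n_classes, 1)  # semantic logits
                self.mot_head = nn.Conv2d(feat, 2, 1)          # static/moving logits

            def forward(self, bev):
                f = self.encoder(bev)
                return self.det_head(f), self.sem_head(f), self.mot_head(f)

        # Toy usage on a rasterized BEV tensor (assumed input layout)
        det, sem, mot = MultiTaskBEVNet()(torch.randn(1, 6, 256, 256))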

    Safety check in critical safety scenario for self-driving vehicles

    Fully autonomous vehicles must guarantee safety. Most state-of-the-art driving behaviors can operate safely in scenarios without anomalies; however, dynamics of the operational design domain, such as weather conditions or sensor occlusions, can compromise safety. To enlarge the set of scenarios in which these systems can work safely, further automation requires additional safety functions for safe driving in any environment. This contribution demonstrates a safety validation for autonomous vehicles based on the conditions of the scenario. We propose a check of sensor visibility and limitations, together with a definition of a Region of Interest (RoI); merging both, we obtain a quantitative value for the system's environment awareness. The proposed idea is to modify the acting plan based on that value, improving reaction time in unforeseen situations and endowing the autonomous vehicle with a safety check. Results from simulation and from the physical implementation on the autonomous car show improved reaction times in situations outside the designated operational design domain, so fulfilling the derived safety assessment validation can help guarantee the safety of the AV. This capability is mandatory for self-driving vehicles aiming at Level 4 automation, as the human is no longer responsible for safety checks in unpredictable situations. (Universidad de Sevilla. Máster en Ingeniería Electrónica, Robótica y Automática)
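
    The quantitative awareness value lends itself to a small sketch: score the fraction of a Region of Interest the sensors can currently observe and scale the acting plan (here, speed) by it. The grid cells, score, and speed policy below are illustrative assumptions, not the thesis' implementation.

        def awareness_score(roi_cells, visible_cells):
            """Fraction of the Region of Interest currently observable."""
            return len(roi_cells & visible_cells) / max(len(roi_cells), 1)

        def adapt_speed(v_max, score, v_min=1.0):
            # Lower awareness -> lower speed -> more reaction margin (assumed policy)
            return max(v_min, v_max * score)

        roi = {(x, y) for x in range(10) for y in range(-3, 4)}  # cells ahead of the car
        visible = {(x, y) for (x, y) in roi if x < 6}            # occlusion beyond 6 m
        s = awareness_score(roi, visible)
        print(f"awareness={s:.2f}, speed={adapt_speed(15.0, s):.1f} m/s")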

    Augmenting CCAM Infrastructure for Creating Smart Roads and Enabling Autonomous Driving

    Autonomous vehicles and smart roads are not new concepts, and ongoing development to empower vehicles for higher levels of automation has reached initial milestones. However, the transportation industry and the relevant research communities still need to make considerable efforts to create smart, intelligent roads for autonomous driving. To achieve the results of such efforts, CCAM infrastructure is a game changer and plays a key role in reaching higher levels of autonomous driving. In this paper, we present a smart infrastructure and autonomous driving capabilities enhanced by CCAM infrastructure. Specifically, we lay down the technical requirements of the CCAM infrastructure: we identify the right set of sensory infrastructure, its interfacing, the integration platform, and the communication interfaces needed to interconnect it with upstream and downstream solution components. We then parameterize the road and network infrastructures (and the automated vehicles) to be advanced and evaluated during the research work under distinct scenarios and conditions. For validation, we demonstrate machine learning algorithms in mobility applications such as traffic flow and mobile communication demand prediction: we train multiple linear regression models and achieve over 94% accuracy in predicting these demands on a daily basis. This research therefore equips readers with the technical information required to enhance CCAM infrastructure, and it encourages and guides the relevant research communities in implementing CCAM infrastructure toward smart and intelligent roads for autonomous driving.
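
    As a toy analogue of the prediction step, the sketch below fits a multiple linear regression on hour-of-day and day-of-week features, as the paper does for traffic-flow and communication demand. The synthetic data and feature choice are assumptions; only the model family (multiple linear regression) comes from the abstract.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Synthetic stand-in for the daily demand data the paper uses
        rng = np.random.default_rng(0)
        hours = np.arange(24 * 14)                    # two weeks, hourly samples
        X = np.c_[hours % 24, (hours // 24) % 7]      # hour-of-day, day-of-week
        y = 100 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 5, len(hours))

        model = LinearRegression().fit(X, y)
        print("R^2 on toy data:", model.score(X, y))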