
    PointPillars Backbone Type Selection For Fast and Accurate LiDAR Object Detection

    3D object detection from LiDAR sensor data is an important topic in the context of autonomous cars and drones. In this paper, we present the results of experiments on the impact of backbone selection of a deep convolutional neural network on detection accuracy and computation speed. We chose the PointPillars network, which is characterised by a simple architecture, high speed, and modularity that allows for easy expansion. During the experiments, we paid particular attention to the change in detection efficiency (measured by the mAP metric) and the total number of multiply-add operations needed to process one point cloud. We tested 10 different convolutional neural network architectures that are widely used in image-based detection problems. For a MobileNetV1 backbone, we obtained an almost 4x speedup at the cost of a 1.13% decrease in mAP. For CSPDarknet, on the other hand, we obtained a speedup of more than 1.5x together with a 0.33% increase in mAP. We have thus demonstrated that a 3D object detector operating on LiDAR point clouds can be significantly sped up with only a small loss in detection accuracy. This result can be used when PointPillars or similar algorithms are implemented in embedded systems, including SoC FPGAs. The code is available at https://github.com/vision-agh/pointpillars_backbone. Comment: Accepted for the ICCVG 2022 conference.
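
    A minimal sketch of how the per-cloud multiply-add count of a candidate backbone might be measured, assuming PyTorch; the hook-based counter, the stand-in backbone, and the pseudo-image resolution are illustrative assumptions, not the authors' measurement code.

```python
import torch
import torch.nn as nn

def count_conv_macs(model: nn.Module, input_shape) -> int:
    """Count multiply-add (MAC) operations of all Conv2d layers for one
    forward pass. Illustrative only; a profiler gives fuller coverage."""
    macs = {"total": 0}
    hooks = []

    def hook(module, inputs, output):
        # MACs for a conv layer = kernel ops per output element
        #                         * number of output elements.
        out_elems = output.numel()  # N * C_out * H_out * W_out
        kernel_ops = (module.in_channels // module.groups
                      * module.kernel_size[0] * module.kernel_size[1])
        macs["total"] += out_elems * kernel_ops

    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            hooks.append(m.register_forward_hook(hook))

    with torch.no_grad():
        model(torch.zeros(*input_shape))
    for h in hooks:
        h.remove()
    return macs["total"]

# Stand-in 2D backbone over a pseudo-image, as produced by a
# PointPillars-style pillar encoder (channel count is hypothetical).
backbone = nn.Sequential(
    nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
)
print(f"{count_conv_macs(backbone, (1, 64, 496, 432)) / 1e9:.2f} GMACs")
```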

    ꡬ쑰 κ°μ‘ν˜• 데이터 증강 기법과 ν˜Όν•© 밀도 신경망을 μ΄μš©ν•œ 라이닀 기반 3차원 객체 κ²€μΆœ κ°œμ„ 

    Ph.D. dissertation -- Seoul National University, Graduate School of Convergence Science and Technology, Department of Convergence Science, February 2023. Advisor: Nojun Kwak.
    LiDAR (Light Detection and Ranging), which is widely used as a sensing device for autonomous vehicles and robots, emits laser pulses and measures their return time to sense the surrounding environment in the form of a point cloud. When perceiving the surrounding environment, the most important task is to recognize which objects are nearby and where they are located, and 3D object detection methods using point clouds have been actively studied for this purpose. A wide variety of backbone networks for point-cloud-based 3D object detection have been proposed, depending on how the point cloud data are preprocessed. Although advanced backbone networks have made great strides in detection performance, their structures differ greatly, so they are largely incompatible with one another.
The problem addressed in this dissertation is: how can the performance of 3D object detectors be improved regardless of their diverse backbone network structures? To this end, this dissertation proposes two general methods for improving point-cloud-based 3D object detectors. First, we propose a part-aware data augmentation (PA-AUG) method, which maximizes the utilization of the structural information of 3D bounding boxes. Since 3D bounding box labels fit the objects' boundaries and include an orientation value, they contain the structural information of the object inside the box. To fully utilize this intra-object structural information, we propose a novel part-aware partitioning method that divides 3D bounding boxes into characteristic sub-parts, and PA-AUG applies newly proposed data augmentation methods at the partition level. It makes various types of 3D object detectors more robust and brings an effect equivalent to increasing the training data by about 2.5x. Second, we propose mixture-density-based 3D object detection (MD3D). MD3D predicts the distribution of 3D bounding boxes using a Gaussian mixture model (GMM), reformulating conventional box regression as a density estimation problem. Thus, unlike conventional target assignment methods, it can be applied to any 3D object detector regardless of the point cloud preprocessing method. In addition, as it requires significantly fewer hyper-parameters than existing methods, its detection performance is easy to optimize, and its simple structure also increases detection speed. Both PA-AUG and MD3D can be applied to any 3D object detector and show an impressive increase in detection performance. The two proposed methods cover different stages of the object detection pipeline, so they can be used simultaneously, and the experimental results show that they have a synergistic effect when applied together.
    Contents:
    1 Introduction
      1.1 Problem Definition
      1.2 Challenges
      1.3 Contributions
        1.3.1 Part-Aware Data Augmentation (PA-AUG)
        1.3.2 Mixture-Density-based 3D Object Detection (MD3D)
        1.3.3 Combination of PA-AUG and MD3D
      1.4 Outline
    2 Related Works
      2.1 Data Augmentation for Object Detection
        2.1.1 2D Data Augmentation
        2.1.2 3D Data Augmentation
      2.2 LiDAR-based 3D Object Detection
      2.3 Mixture Density Networks in Computer Vision
      2.4 Datasets
        2.4.1 KITTI Dataset
        2.4.2 Waymo Open Dataset
      2.5 Evaluation Metric
        2.5.1 Average Precision (AP)
        2.5.2 Average Orientation Similarity (AOS)
        2.5.3 Average Precision weighted by Heading (APH)
    3 Part-Aware Data Augmentation (PA-AUG)
      3.1 Introduction
      3.2 Methods
        3.2.1 Part-Aware Partitioning
        3.2.2 Part-Aware Data Augmentation
      3.3 Experiments
        3.3.1 Results on the KITTI Dataset
        3.3.2 Robustness Test
        3.3.3 Data Efficiency Test
        3.3.4 Ablation Study
      3.4 Discussion
      3.5 Conclusion
    4 Mixture-Density-based 3D Object Detection (MD3D)
      4.1 Introduction
      4.2 Methods
        4.2.1 Modeling Point-cloud-based 3D Object Detection with Mixture Density Network
        4.2.2 Network Architecture
        4.2.3 Loss Function
      4.3 Experiments
        4.3.1 Datasets
        4.3.2 Experiment Settings
        4.3.3 Results on the KITTI Dataset
        4.3.4 Latency of Each Module
        4.3.5 Results on the Waymo Open Dataset
        4.3.6 Analyzing Recall by Object Size
        4.3.7 Ablation Study
        4.3.8 Discussion
      4.4 Conclusion
    5 Combination of PA-AUG and MD3D
      5.1 Methods
      5.2 Experiments
        5.2.1 Settings
        5.2.2 Results on the KITTI Dataset
      5.3 Discussion
    6 Conclusion
      6.1 Summary
      6.2 Limitations and Future Works
        6.2.1 Hyper-parameter-free PA-AUG
        6.2.2 Redefinition of Part-aware Partitioning
        6.2.3 Application to Other Tasks
    Abstract (In Korean)
    Acknowledgements
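
    The part-aware partitioning idea can be illustrated with a small sketch: split each 3D box into slices along its length axis and drop the points falling in one randomly chosen slice. The partition count, the single-axis split, and the dropout rule here are simplified stand-ins for PA-AUG's actual scheme, assuming NumPy and boxes expressed in their own local frame.

```python
import numpy as np

def part_aware_dropout(points, box, num_parts=4, rng=None):
    """Toy part-aware augmentation: divide a 3D box into `num_parts`
    slices along its length and remove all points in one random slice.
    `points` is (N, 3) in the box's local frame; `box` is (l, w, h)."""
    rng = rng or np.random.default_rng()
    length = box[0]
    # Partition index of each point along the local x (length) axis.
    edges = np.linspace(-length / 2, length / 2, num_parts + 1)
    part_idx = np.clip(np.searchsorted(edges, points[:, 0]) - 1,
                       0, num_parts - 1)
    drop = rng.integers(num_parts)  # partition to remove
    return points[part_idx != drop]

pts = np.random.default_rng(0).uniform(-2, 2, size=(100, 3))
kept = part_aware_dropout(pts, box=(4.0, 1.8, 1.6))
print(len(pts), "->", len(kept))
```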
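
    MD3D's reformulation of box regression as density estimation can be sketched as a Gaussian-mixture negative log-likelihood: the network predicts mixture weights, means, and variances over the box parameters, and training minimizes the NLL of the ground-truth boxes under that mixture. The tensor shapes, the 7-parameter box encoding, and the diagonal-covariance choice below are assumptions for illustration, not the dissertation's exact formulation.

```python
import math
import torch

def gmm_box_nll(pi_logits, mu, log_sigma, gt):
    """NLL of ground-truth 3D boxes under a predicted Gaussian mixture
    with diagonal covariance. Shapes (assumed):
      pi_logits: (N, K)     mixture weight logits, K components
      mu:        (N, K, 7)  predicted box params (x, y, z, l, w, h, yaw)
      log_sigma: (N, K, 7)  per-dimension log standard deviations
      gt:        (N, 7)     ground-truth boxes
    """
    diff = (gt.unsqueeze(1) - mu) / log_sigma.exp()        # (N, K, 7)
    # Per-component diagonal Gaussian log-density.
    log_prob = (-0.5 * (diff ** 2).sum(-1)
                - log_sigma.sum(-1)
                - 3.5 * math.log(2 * math.pi))  # (7/2) * log(2*pi)
    log_mix = torch.log_softmax(pi_logits, dim=-1)
    return -torch.logsumexp(log_mix + log_prob, dim=-1).mean()

N, K = 8, 3
loss = gmm_box_nll(torch.randn(N, K), torch.randn(N, K, 7),
                   torch.zeros(N, K, 7), torch.randn(N, 7))
print(loss.item())
```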

    MVFAN: Multi-View Feature Assisted Network for 4D Radar Object Detection

    4D radar is recognized for its resilience and cost-effectiveness under adverse weather conditions, and it therefore plays a pivotal role in autonomous driving. While cameras and LiDAR are typically the primary sensors in the perception modules of autonomous vehicles, radar serves as a valuable supplementary sensor. Unlike LiDAR and cameras, radar remains unimpaired by harsh weather conditions, offering a dependable alternative in challenging environments. Developing radar-based 3D object detection not only augments the competency of autonomous vehicles but also provides economic benefits. In response, we propose the Multi-View Feature Assisted Network (MVFAN), an end-to-end, anchor-free, single-stage framework for 4D-radar-based 3D object detection in autonomous vehicles. We tackle the issue of insufficient feature utilization by introducing a novel Position Map Generation module that enhances feature learning by reweighting foreground and background points, and their features, to account for the irregular distribution of radar point clouds. Additionally, we propose a pioneering backbone, the Radar Feature Assisted backbone, explicitly crafted to fully exploit the valuable Doppler velocity and reflectivity data provided by the 4D radar sensor. Comprehensive experiments and ablation studies carried out on the Astyx and VoD datasets attest to the efficacy of our framework. The incorporation of Doppler velocity and RCS reflectivity dramatically improves detection performance for small moving objects such as pedestrians and cyclists. Consequently, our approach culminates in a highly optimized 4D-radar-based 3D object detection capability for autonomous driving systems, setting a new standard in the field. Comment: 19 pages, 7 figures. Accepted by ICONIP 2023.
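
    A minimal sketch of the general idea of feeding Doppler velocity and RCS reflectivity alongside the coordinates into a per-point encoder, assuming PyTorch; the 5-feature layout and the PointNet-style MLP are illustrative assumptions, not MVFAN's actual Radar Feature Assisted backbone.

```python
import torch
import torch.nn as nn

class RadarPointEncoder(nn.Module):
    """Toy per-point encoder for 4D-radar points laid out as
    (x, y, z, doppler_velocity, rcs). A stand-in showing how the
    radar-specific channels enter the network, not MVFAN itself."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(5, 32), nn.ReLU(),
            nn.Linear(32, out_dim), nn.ReLU(),
        )

    def forward(self, pts):              # pts: (B, N, 5)
        feats = self.mlp(pts)            # (B, N, out_dim)
        # Max-pool over points for a global descriptor (PointNet-style).
        return feats.max(dim=1).values   # (B, out_dim)

enc = RadarPointEncoder()
print(enc(torch.randn(2, 256, 5)).shape)  # torch.Size([2, 64])
```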

    3M3D: Multi-view, Multi-path, Multi-representation for 3D Object Detection

    3D visual perception tasks based on multi-camera images are essential for autonomous driving systems. Recent work in this field performs 3D object detection by taking multi-view images as input and iteratively enhancing object queries (object proposals) by cross-attending to multi-view features. However, the individual backbone features are never updated with multi-view information and remain a mere collection of single-image backbone outputs. We therefore propose 3M3D, a multi-view, multi-path, multi-representation approach to 3D object detection in which both the multi-view features and the query features are updated to enhance the representation of the scene in both a fine panoramic view and a coarse global view. First, we update the multi-view features with self-attention along the multi-view axis, which incorporates panoramic information into the multi-view features and enhances understanding of the global scene. Second, we update the multi-view features with self-attention over ROI (region of interest) windows, which encodes finer local details into the features; this exchanges information not only along the multi-view axis but also along the other spatial dimensions. Lastly, we exploit multiple representations of the queries in different domains to further boost performance: sparse floating queries are used alongside dense BEV (bird's-eye-view) queries, which are later post-processed to filter duplicate detections. Moreover, we show performance improvements over our baselines on the nuScenes benchmark dataset.
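
    The multi-view axis self-attention step can be sketched by reshaping the per-camera feature maps so that the view dimension becomes the sequence axis: every spatial location attends across the V views at that location. The shapes, the single attention layer, and the residual update below are illustrative assumptions, not 3M3D's exact architecture.

```python
import torch
import torch.nn as nn

class MultiViewAxisAttention(nn.Module):
    """Self-attention along the camera-view axis of multi-view features:
    each (batch, pixel) position forms a length-V sequence over views."""
    def __init__(self, channels=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                        # x: (B, V, C, H, W)
        B, V, C, H, W = x.shape
        # Treat every (batch, pixel) as one sequence of length V.
        seq = x.permute(0, 3, 4, 1, 2).reshape(B * H * W, V, C)
        out, _ = self.attn(seq, seq, seq)        # attend across views
        out = out.reshape(B, H, W, V, C).permute(0, 3, 4, 1, 2)
        return x + out                           # residual update

mva = MultiViewAxisAttention(channels=64, heads=4)
print(mva(torch.randn(1, 6, 64, 8, 8)).shape)   # torch.Size([1, 6, 64, 8, 8])
```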