AFTR: A Robustness Multi-Sensor Fusion Model for 3D Object Detection Based on Adaptive Fusion Transformer.
Multi-modal sensors are key to the robust and accurate operation of autonomous driving systems, where LiDAR and cameras are important on-board sensors. However, current fusion methods face challenges due to inconsistent multi-sensor data representations and the misalignment of dynamic scenes. Specifically, they either explicitly correlate multi-sensor features through calibration parameters, ignoring the feature blurring caused by misalignment, or find correlated features via global attention, incurring rapidly escalating computational costs. To address these issues, we propose a transformer-based end-to-end multi-sensor fusion framework named the adaptive fusion transformer (AFTR). AFTR consists of an adaptive spatial cross-attention (ASCA) mechanism and a spatial-temporal self-attention (STSA) mechanism. ASCA adaptively associates and interacts with multi-sensor features in 3D space through learnable local attention, alleviating the misalignment of geometric information while reducing computational costs. STSA interacts with cross-temporal information using learnable offsets in deformable attention, mitigating displacements due to dynamic scenes. Extensive experiments show that AFTR achieves SOTA performance on the nuScenes 3D object detection task (74.9% NDS and 73.2% mAP) and demonstrates strong robustness to misalignment (only a 0.2% NDS drop under slight noise). We also demonstrate the effectiveness of the AFTR components through ablation studies. In summary, AFTR is an accurate, efficient, and robust multi-sensor data fusion framework.
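The core idea behind ASCA and STSA, sampling features only at a few learnable offsets around a reference point and blending them with learned attention weights, can be sketched as follows. This is a toy numpy illustration of learnable local (deformable-style) attention, not AFTR's actual implementation; all shapes, names, and the nearest-neighbour sampling are assumptions for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_deformable_attention(feat, ref_pts, offsets, attn_logits):
    """Sample a 2D feature map at ref_pts + offsets and blend samples with
    softmax attention weights.

    feat:        (H, W, C) feature map (e.g. an image/BEV feature plane)
    ref_pts:     (Q, 2)    reference locations, one per query
    offsets:     (Q, K, 2) learnable offsets, K sampling points per query
    attn_logits: (Q, K)    learnable attention logits per sampling point
    """
    H, W, C = feat.shape
    # Softmax over the K sampling points of each query.
    w = np.exp(attn_logits - attn_logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    out = np.zeros((ref_pts.shape[0], C))
    for q, (y, x) in enumerate(ref_pts):
        for k, (dy, dx) in enumerate(offsets[q]):
            # Nearest-neighbour sampling, clamped to the map; real
            # deformable attention uses bilinear interpolation.
            yy = int(np.clip(y + dy, 0, H - 1))
            xx = int(np.clip(x + dx, 0, W - 1))
            out[q] += w[q, k] * feat[yy, xx]
    return out

feat = rng.normal(size=(8, 8, 4))
ref = np.array([[2, 3], [5, 5]])
off = rng.normal(scale=1.5, size=(2, 4, 2))
logits = rng.normal(size=(2, 4))
print(local_deformable_attention(feat, ref, off, logits).shape)  # (2, 4)
```

Because each query attends to only K local samples rather than the full H×W map, the cost stays linear in the number of queries, which is the efficiency argument the abstract makes against global attention.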
Non-learning Stereo-aided Depth Completion under Mis-projection via Selective Stereo Matching
We propose a non-learning depth completion method for a sparse depth map captured using a light detection and ranging (LiDAR) sensor, guided by a pair of stereo images. Conventional stereo-aided depth completion methods have two limitations. (i) They assume the given sparse depth map is accurately aligned to the input image, whereas such alignment is difficult to achieve in practice. (ii) They have limited accuracy at long range because depth is estimated from pixel disparity. To overcome these limitations, we propose selective stereo matching (SSM), which searches for the most appropriate depth value for each image pixel among its neighboring projected LiDAR points based on an energy minimization framework. This depth selection approach can handle any type of mis-projection. Moreover, SSM has an advantage in long-range depth accuracy because it directly uses the LiDAR measurement rather than depth derived from stereo. SSM is a discrete process; thus, we apply variational smoothing with a binary anisotropic diffusion tensor (B-ADT) to generate a continuous depth map while preserving depth discontinuities across object boundaries. Experimentally, compared with the previous state-of-the-art stereo-aided depth completion, the proposed method reduced the mean absolute error (MAE) of the depth estimation to 0.65 times and achieved approximately twice more accurate estimation at long range. Moreover, under various LiDAR-camera calibration errors, the proposed method reduced the depth estimation MAE to 0.34-0.93 times that of previous depth completion methods.
Comment: 15 pages, 13 figures
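The selection step at the heart of SSM, picking, per pixel, the candidate LiDAR depth that minimises a combined matching-plus-smoothness energy, can be sketched like this. The energy terms, the `lam` weight, and the use of a single neighbour depth are all simplifications for illustration; the paper's actual energy and its optimisation are more involved.

```python
import numpy as np

def select_depth(candidates, photo_cost, neighbor_depth, lam=0.1):
    """Pick, per pixel, the candidate LiDAR depth minimising a toy
    energy: stereo matching cost + lam * smoothness w.r.t. a
    neighbouring pixel's depth.

    candidates:     (P, K) candidate depths per pixel, gathered from
                    nearby projected LiDAR points
    photo_cost:     (P, K) stereo matching cost of each candidate
    neighbor_depth: (P,)   depth already selected at a neighbour pixel
    """
    smooth = np.abs(candidates - neighbor_depth[:, None])
    energy = photo_cost + lam * smooth
    best = energy.argmin(axis=1)                       # discrete choice per pixel
    return candidates[np.arange(len(best)), best]

cands = np.array([[10.0, 30.0],    # pixel 0: two LiDAR candidates
                  [12.0, 11.0]])   # pixel 1
cost = np.array([[0.2, 0.1],
                 [0.5, 0.0]])
neighbor = np.array([11.0, 11.0])
print(select_depth(cands, cost, neighbor))  # [10. 11.]
```

Note that the output is always an actual LiDAR measurement, never a disparity-derived depth, which is why the selection retains LiDAR's long-range accuracy even when a candidate with slightly lower matching cost exists (pixel 0 rejects the mis-projected 30 m candidate because of the smoothness term).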
Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles
Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems.
In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles.
For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal.
The proposed methods serve as a first step towards full autonomy for agricultural vehicles.
The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain is further facilitated by the release of the multi-modal obstacle dataset, FieldSAFE.
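The global fusion step the thesis describes, occupancy grid mapping of detections, is conventionally done with log-odds updates, which can be sketched as below. The update values `l_occ` and `l_free` are illustrative defaults, not parameters taken from the thesis.

```python
import numpy as np

def update_grid(log_odds, cells, hits, l_occ=0.85, l_free=-0.4):
    """Fuse one frame of detections into a global occupancy grid.

    log_odds: (H, W) grid of log-odds occupancy (0 = unknown, p = 0.5)
    cells:    list of (row, col) grid cells observed this frame
    hits:     list of bools; True if an obstacle was detected in the cell
    """
    for (r, c), hit in zip(cells, hits):
        # Additive log-odds update: repeated consistent observations
        # drive the cell towards occupied or free.
        log_odds[r, c] += l_occ if hit else l_free
    return log_odds

def occupancy_prob(log_odds):
    """Convert log-odds back to occupancy probability via the sigmoid."""
    return 1.0 / (1.0 + np.exp(-log_odds))

grid = np.zeros((4, 4))                              # all cells unknown
grid = update_grid(grid, [(1, 1), (2, 3)], [True, False])
print(occupancy_prob(grid)[1, 1] > 0.5)              # True  (obstacle)
print(occupancy_prob(grid)[2, 3] < 0.5)              # True  (free space)
```

Because the update is additive, detections from lidar, camera, and radar frames accumulate in the same grid, which is what lets runtime obstacle detection then operate on the fused map along the vehicle path.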
O2SAT: Object-Oriented-Segmentation-Guided Spatial-Attention Network for 3D Object Detection in Autonomous Vehicles
Autonomous vehicles (AVs) strive to adapt to the specific characteristics of sustainable urban environments. Accurate 3D object detection with LiDAR is paramount for autonomous driving. However, existing research predominantly relies on the 3D object-based assumption, which overlooks the complexity of real-world road environments. Consequently, current methods suffer performance degradation when they target only local features and overlook the intersection of objects and road features, especially in uneven road conditions. This study proposes a 3D Object-Oriented-Segmentation Spatial-Attention (O2SAT) approach to distinguish object points from road points and enhance keypoint feature learning via a channel-wise spatial attention mechanism. O2SAT consists of three modules: Object-Oriented Segmentation (OOS), Spatial-Attention Feature Reweighting (SFR), and a Road-Aware 3D Detection Head (R3D). OOS distinguishes object and road points and performs object-aware downsampling to augment data by learning the hidden connection between landscape and objects; SFR performs weight augmentation to learn crucial neighboring relationships and dynamically adjusts feature weights through spatial attention, enhancing long-range interactions and contextual feature discrimination for noise suppression and improving overall detection performance; and R3D utilizes the refined object segmentation and optimized feature representations. Our system forecasts prediction confidence within existing point-based backbones. The method's effectiveness and robustness have been demonstrated through extensive experiments on the KITTI dataset. The proposed modules integrate seamlessly into existing point-based frameworks, following a plug-and-play approach.
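The channel-wise reweighting idea behind SFR can be sketched with a squeeze-and-excitation-style gate: pool a global context per channel, squash it to (0, 1), and rescale each point's features. This is a stand-in for illustration only; O2SAT's actual SFR module uses learned layers and a spatial attention mechanism, neither of which appears in this parameter-free sketch.

```python
import numpy as np

def channel_attention_reweight(point_feats):
    """Reweight per-point features with a channel-wise gate derived
    from global context (SE-style sketch, no learned parameters).

    point_feats: (N, C) feature vectors for N points
    """
    ctx = point_feats.mean(axis=0)          # (C,) global channel context
    gate = 1.0 / (1.0 + np.exp(-ctx))       # sigmoid gate in (0, 1)
    return point_feats * gate               # broadcast gate over all points

feats = np.array([[1.0, -2.0],
                  [3.0,  0.0]])
out = channel_attention_reweight(feats)
print(out.shape)  # (2, 2)
```

Channels whose global context is weak are suppressed for every point, which mirrors the noise-suppression role the abstract assigns to SFR before features reach the detection head.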