Radar-STDA: A High-Performance Spatial-Temporal Denoising Autoencoder for Interference Mitigation of FMCW Radars
Compared with other traffic sensors, millimeter-wave radar offers small size, low cost and all-weather operation, and can accurately measure the distance, azimuth and radial velocity of a target. In practice, however, millimeter-wave radars are plagued by various kinds of interference, which degrade target detection accuracy or even cause targets to be missed entirely. This is undesirable in
autonomous vehicles and traffic surveillance, as it is likely to threaten human
life and cause property damage. Therefore, interference mitigation is of great
significance for millimeter-wave radar-based target detection. Although deep learning is developing rapidly, existing deep learning-based interference mitigation models still have severe limitations in model size and inference speed. For these reasons, we propose Radar-STDA, a
Radar-Spatial Temporal Denoising Autoencoder. Radar-STDA is an efficient
nano-level denoising autoencoder that takes into account both spatial and
temporal information of range-Doppler maps. Compared with other methods, it achieves the highest SINR of 17.08 dB with only 140,000 parameters. It runs at 207.6 FPS on an RTX A4000 GPU and 56.8 FPS on an NVIDIA Jetson AGX Xavier when
denoising range-Doppler maps for three consecutive frames. Moreover, we release
a synthetic dataset called Ra-inf for the task, which comprises 384,769 range-Doppler maps with various clutter from objects of no interest and
receiver noise in realistic scenarios. To the best of our knowledge, Ra-inf is
the first synthetic dataset of radar interference. To support the community,
our research is open-source via the link
\url{https://github.com/GuanRunwei/rd_map_temporal_spatial_denoising_autoencoder}.
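As a loose illustration of the idea (hypothetical layer widths and frame count; not the Radar-STDA architecture itself), a spatial-temporal denoising autoencoder can stack consecutive range-Doppler frames along the channel axis and regress a clean map:

```python
# Minimal sketch of a spatial-temporal denoising autoencoder for
# range-Doppler (RD) maps. Layer widths, depths and kernel sizes are
# illustrative assumptions, not the Radar-STDA design from the paper.
import torch
import torch.nn as nn

class STDenoisingAE(nn.Module):
    def __init__(self, num_frames: int = 3):
        super().__init__()
        # Consecutive RD frames are stacked on the channel axis, so the
        # 2D convolutions mix spatial and temporal information jointly.
        self.encoder = nn.Sequential(
            nn.Conv2d(num_frames, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder upsamples back to the input resolution and predicts a
        # single denoised RD map for the current frame.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, rd_frames: torch.Tensor) -> torch.Tensor:
        # rd_frames: (batch, num_frames, H, W) -> (batch, 1, H, W)
        return self.decoder(self.encoder(rd_frames))

# Denoise three consecutive 128x128 RD maps.
model = STDenoisingAE(num_frames=3)
clean = model(torch.randn(2, 3, 128, 128))
print(clean.shape)  # torch.Size([2, 1, 128, 128])
```

Folding time into the channel axis keeps the parameter count small, in line with the nano-level budget the abstract emphasizes.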
Efficient-VRNet: An Exquisite Fusion Network for Riverway Panoptic Perception based on Asymmetric Fair Fusion of Vision and 4D mmWave Radar
Panoptic perception is essential to unmanned surface vehicles (USVs) for
autonomous navigation. The current panoptic perception scheme is mainly based
on vision only, that is, object detection and semantic segmentation are
performed simultaneously based on camera sensors. Nevertheless, the fusion of camera and radar sensors is regarded as a promising substitute for pure-vision methods, yet almost all existing works focus on object detection only. How to fully exploit and subtly fuse vision and radar features to improve both detection and segmentation therefore remains a challenge. In this paper,
we focus on riverway panoptic perception based on USVs, a largely unexplored field compared with road panoptic perception. We propose
Efficient-VRNet, a model based on Contextual Clustering (CoC) and the
asymmetric fusion of vision and 4D mmWave radar, which treats both vision and
radar modalities fairly. Efficient-VRNet can simultaneously perform detection and segmentation of riverway objects, as well as drivable-area segmentation.
Furthermore, we adopt an uncertainty-based panoptic perception training
strategy to train Efficient-VRNet. In the experiments, Efficient-VRNet achieves better performance on our collected dataset than uni-modal models, especially in adverse weather and poorly lit environments. Our code and models are available at
\url{https://github.com/GuanRunwei/Efficient-VRNet}.
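A minimal sketch of what an asymmetric fusion block might look like, assuming a gating design and channel widths invented here for exposition (the paper's Asymmetric Fair Fusion with Contextual Clustering differs in detail):

```python
# Toy asymmetric vision-radar fusion block. The gating design and channel
# widths are assumptions for exposition; the paper's Asymmetric Fair Fusion
# with Contextual Clustering (CoC) differs in detail.
import torch
import torch.nn as nn

class AsymmetricFusion(nn.Module):
    def __init__(self, vis_ch: int, rad_ch: int):
        super().__init__()
        # Project the sparser radar features to the vision channel width.
        self.rad_proj = nn.Conv2d(rad_ch, vis_ch, kernel_size=1)
        # Radar-conditioned gate that reweights vision features, so the two
        # modalities contribute through different (asymmetric) pathways.
        self.gate = nn.Sequential(nn.Conv2d(vis_ch, vis_ch, kernel_size=1),
                                  nn.Sigmoid())

    def forward(self, vis_feat: torch.Tensor, rad_feat: torch.Tensor):
        rad = self.rad_proj(rad_feat)
        return vis_feat * self.gate(rad) + rad  # gated vision + radar residual

fuse = AsymmetricFusion(vis_ch=64, rad_ch=8)
fused = fuse(torch.randn(1, 64, 40, 40), torch.randn(1, 8, 40, 40))
```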
Achelous: A Fast Unified Water-surface Panoptic Perception Framework based on Fusion of Monocular Camera and 4D mmWave Radar
Current perception models on Unmanned Surface Vehicles (USVs) usually exist as separate modules for different tasks, which infer extremely slowly in parallel on edge devices, causing asynchrony between perception results and the USV's position and leading to erroneous decisions in autonomous navigation. Compared with Unmanned Ground Vehicles (UGVs), robust perception for USVs has developed relatively slowly. Moreover, most current multi-task perception models have huge parameter counts, slow inference and poor scalability. Motivated by this, we propose
Achelous, a low-cost and fast unified panoptic perception framework for
water-surface perception based on the fusion of a monocular camera and 4D
mmWave radar. Achelous can simultaneously perform five tasks: detection and segmentation of visual targets, drivable-area segmentation, waterline segmentation, and radar point cloud segmentation. Moreover, models in the Achelous family, with fewer than about 5 million parameters, achieve about 18 FPS on an NVIDIA Jetson AGX Xavier, 11 FPS faster than HybridNets, and exceed YOLOX-Tiny and Segformer-B0 on our collected dataset by about 5 mAP and 0.7 mIoU, especially under adverse weather, dark environments and
camera failure. To our knowledge, Achelous is the first comprehensive panoptic
perception framework combining vision-level and point-cloud-level tasks for
water-surface perception. To promote the development of the intelligent
transportation community, we release our codes in
\url{https://github.com/GuanRunwei/Achelous}.
Comment: Accepted by ITSC 202
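To make the five-task layout concrete, here is a toy shared-backbone skeleton; the backbone, head shapes and per-point radar features are placeholders, not Achelous's actual architecture:

```python
# Toy shared-backbone, five-head multi-task skeleton in the spirit of
# Achelous. The backbone, head shapes and per-point radar features are
# placeholders, not the framework's actual design.
import torch
import torch.nn as nn

class MultiTaskPerception(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(             # shared image encoder (toy)
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.det_head = nn.Conv2d(32, 6, 1)        # visual target detection
        self.seg_head = nn.Conv2d(32, 4, 1)        # visual target segmentation
        self.drivable_head = nn.Conv2d(32, 1, 1)   # drivable-area mask
        self.waterline_head = nn.Conv2d(32, 1, 1)  # waterline mask
        self.pc_head = nn.Linear(5, 2)             # radar point segmentation

    def forward(self, image: torch.Tensor, radar_points: torch.Tensor) -> dict:
        feat = self.backbone(image)
        return {
            "detection": self.det_head(feat),
            "segmentation": self.seg_head(feat),
            "drivable": self.drivable_head(feat),
            "waterline": self.waterline_head(feat),
            "points": self.pc_head(radar_points),  # (N, 5) -> (N, 2)
        }

outs = MultiTaskPerception()(torch.randn(1, 3, 320, 320), torch.randn(100, 5))
```

A single shared backbone is what lets all five heads run in one forward pass, avoiding the modular asynchrony the abstract criticizes.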
FindVehicle and VehicleFinder: A NER dataset for natural language-based vehicle retrieval and a keyword-based cross-modal vehicle retrieval system
Natural language (NL) based vehicle retrieval is a task aiming to retrieve a
vehicle that is most consistent with a given NL query from among all candidate
vehicles. Because NL queries can be easily obtained, such a task has promising prospects in building interactive intelligent traffic systems (ITS). Current
solutions mainly focus on extracting both text and image features and mapping
them to the same latent space to compare the similarity. However, existing
methods usually use dependency analysis or semantic role-labelling techniques
to find keywords related to vehicle attributes. These techniques may require a
lot of pre-processing and post-processing work, and also suffer from extracting
the wrong keyword when the NL query is complex. To tackle these problems and simplify the pipeline, we borrow the idea of named entity recognition (NER) and construct
FindVehicle, a NER dataset in the traffic domain. It has 42.3k labelled NL
descriptions of vehicle tracks, containing information such as the location,
orientation, type and colour of the vehicle. FindVehicle also adopts both
overlapping entities and fine-grained entities to meet further requirements. To
verify its effectiveness, we propose a baseline NL-based vehicle retrieval
model called VehicleFinder. Our experiments show that, using text encoders pre-trained on FindVehicle, VehicleFinder achieves 87.7\% precision and 89.4\%
recall when retrieving a target vehicle by text command on our homemade dataset
based on UA-DETRAC. The time cost of VehicleFinder is 279.35 ms on one ARM v8.2
CPU and 93.72 ms on one RTX A4000 GPU, much faster than Transformer-based systems. The dataset is open-source via the link
https://github.com/GuanRunwei/FindVehicle, and the implementation can be found
via the link https://github.com/GuanRunwei/VehicleFinder-CTIM
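The retrieval idea can be illustrated with a toy sketch: entities extracted from the NL query (here hard-coded in place of a tagger pre-trained on FindVehicle) filter candidate vehicles by attribute agreement. The attribute schema and exact-match rule are assumptions for illustration:

```python
# Toy keyword-based cross-modal retrieval after NER. The entity extractor is
# stubbed (a tagger pre-trained on FindVehicle would produce these entities),
# and the attribute schema and exact-match rule are assumptions.

def extract_entities(query: str) -> dict:
    # Stand-in for a NER model; e.g. "the red SUV heading east" yields
    # colour, type and orientation entities.
    return {"colour": "red", "type": "suv", "orientation": "east"}

def retrieve(query: str, candidates: list) -> list:
    entities = extract_entities(query)
    # Keep candidates whose attributes agree with every extracted entity.
    return [c for c in candidates
            if all(c.get(k) == v for k, v in entities.items())]

candidates = [
    {"id": 1, "colour": "red", "type": "suv", "orientation": "east"},
    {"id": 2, "colour": "blue", "type": "sedan", "orientation": "west"},
]
print(retrieve("the red SUV heading east", candidates))  # -> [{'id': 1, ...}]
```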
WaterScenes: A Multi-Task 4D Radar-Camera Fusion Dataset and Benchmark for Autonomous Driving on Water Surfaces
Autonomous driving on water surfaces plays an essential role in executing
hazardous and time-consuming missions, such as maritime surveillance, survivor rescue, environmental monitoring, hydrography mapping and waste cleaning. This
work presents WaterScenes, the first multi-task 4D radar-camera fusion dataset
for autonomous driving on water surfaces. Equipped with a 4D radar and a
monocular camera, our Unmanned Surface Vehicle (USV) provides all-weather
solutions for discerning object-related information, including color, shape,
texture, range, velocity, azimuth, and elevation. Focusing on typical static
and dynamic objects on water surfaces, we label the camera images and radar
point clouds at pixel-level and point-level, respectively. In addition to basic
perception tasks, such as object detection, instance segmentation and semantic
segmentation, we also provide annotations for free-space segmentation and
waterline segmentation. Leveraging the multi-task and multi-modal data, we
conduct numerous experiments on the single modality of radar and camera, as
well as the fused modalities. Results demonstrate that 4D radar-camera fusion
can considerably enhance the robustness of perception on water surfaces,
especially in adverse lighting and weather conditions. The WaterScenes dataset is publicly available at https://waterscenes.github.io.
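As a hypothetical sketch of how one multi-modal sample might be structured (field names, shapes and the radar feature set are assumptions, not the dataset's actual schema):

```python
# Hypothetical layout of one multi-modal WaterScenes-style sample. Field
# names, shapes and the radar feature set are assumptions for illustration,
# not the dataset's actual schema.
from dataclasses import dataclass
import numpy as np

@dataclass
class WaterSample:
    image: np.ndarray         # (H, W, 3) camera frame with pixel-level labels
    seg_mask: np.ndarray      # (H, W) per-pixel class ids
    radar_points: np.ndarray  # (N, 7): x, y, z, range, velocity, azimuth, elevation
    point_labels: np.ndarray  # (N,) point-level class per radar return

sample = WaterSample(
    image=np.zeros((720, 1280, 3), dtype=np.uint8),
    seg_mask=np.zeros((720, 1280), dtype=np.int32),
    radar_points=np.zeros((256, 7), dtype=np.float32),
    point_labels=np.zeros((256,), dtype=np.int64),
)
```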
Highly Sensitive Pressure Sensor Based on Elastic Conductive Microspheres
Elastic pressure sensors play a crucial role in the digital economy, such as in health care systems and human–machine interfacing. However, the low sensitivity of these sensors restricts their further development and wider application prospects. Introducing microstructures into flexible pressure-sensitive materials is a common method to improve sensitivity, but complex fabrication processes limit such strategies. Herein, a cost-effective and simple process was developed for manufacturing surface microstructures of flexible pressure-sensitive films. The strategy involved combining MXene–single-walled carbon nanotube (SWCNT) composites with mass-produced polydimethylsiloxane (PDMS) microspheres to form advanced microstructures. Conductive silica gel films with pitted microstructures were then obtained through a 3D-printed mold, used as flexible electrodes, and assembled into flexible resistive pressure sensors. The sensor exhibited a sensitivity of 2.6 kPa⁻¹, a short response time of 56 ms and a detection limit of 5.1 Pa. The sensor also displayed good cyclic stability and time stability, offering promising features for human health monitoring applications.
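For context, the sensitivity figure can be read against the conventional definition for resistive pressure sensors (a standard textbook formula, not quoted from the paper):

```latex
% Conventional sensitivity definition for a resistive pressure sensor
% (standard formula, not quoted from the paper):
S = \frac{\delta\,(\Delta R / R_0)}{\delta P}
```

where R₀ is the baseline resistance and ΔR its change under applied pressure P; a sensitivity of 2.6 kPa⁻¹ thus means the relative resistance change varies by 2.6 per kilopascal within the sensor's linear range.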