Recent research has shown the effectiveness of mmWave radar sensing for
object detection in low-visibility environments, making it an ideal
sensing technique for autonomous navigation systems. In this paper, we introduce
Radar to Point Cloud (R2P), a deep learning model that generates a smooth, dense,
and highly accurate point cloud representation of a 3D object with fine geometry
details, based on rough and sparse point clouds with incorrect points obtained
from mmWave radar. These input point clouds are converted from the 2D depth
images generated from raw mmWave radar sensor data, and they are characterized
by inconsistency as well as orientation and shape errors. R2P utilizes an architecture
of two sequential deep learning encoder-decoder blocks to extract the essential
features of the radar-based input point clouds of an object observed from
multiple viewpoints, and to ensure both the internal consistency of the generated
output point cloud and an accurate, detailed reconstruction of the original
object's shape. We implement R2P to replace Stage 2 of our recently proposed
3DRIMR (3D Reconstruction and Imaging via mmWave Radar) system. Our experiments
demonstrate the significant performance improvement of R2P over popular
existing methods such as PointNet, PCN, and the original 3DRIMR design.
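
As context for the depth-image-to-point-cloud conversion mentioned above, here is a minimal, illustrative sketch of back-projecting a 2D depth image into a 3D point cloud. The pinhole-style intrinsics (fx, fy, cx, cy) and the helper name depth_image_to_point_cloud are assumptions for illustration, not the paper's actual conversion procedure.

    # Illustrative sketch (assumed, not from the paper): back-project an
    # (H, W) depth map to an (M, 3) point cloud via a pinhole camera model.
    import numpy as np

    def depth_image_to_point_cloud(depth: np.ndarray,
                                   fx: float, fy: float,
                                   cx: float, cy: float) -> np.ndarray:
        """Back-project an (H, W) depth map to an (M, 3) point cloud."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        # Drop pixels with no valid depth reading.
        return points[points[:, 2] > 0]

    if __name__ == "__main__":
        depth = np.random.rand(64, 64).astype(np.float32)  # fake depth map
        cloud = depth_image_to_point_cloud(depth, fx=50.0, fy=50.0,
                                           cx=32.0, cy=32.0)
        print(cloud.shape)  # (M, 3), one point per valid depth pixel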
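
The two sequential encoder-decoder blocks described above can be sketched as follows. This is a minimal PyTorch illustration under the assumption of a PointNet-style shared-MLP encoder with max pooling and a fully connected decoder; all layer sizes and class names (EncoderDecoderBlock, TwoBlockPointCloudNet) are chosen for illustration rather than taken from the R2P implementation.

    # Minimal sketch (not the authors' R2P code) of a model built from two
    # sequential encoder-decoder blocks that map a coarse input point cloud
    # to a refined, denser output point cloud. All sizes are assumptions.
    import torch
    import torch.nn as nn

    class EncoderDecoderBlock(nn.Module):
        """One block: encode a point cloud to a global feature, decode to points."""

        def __init__(self, num_out_points: int = 2048, feat_dim: int = 512):
            super().__init__()
            self.num_out_points = num_out_points
            # PointNet-style shared MLP over points: (B, 3, N) -> (B, feat_dim, N).
            self.encoder = nn.Sequential(
                nn.Conv1d(3, 128, 1), nn.ReLU(),
                nn.Conv1d(128, 256, 1), nn.ReLU(),
                nn.Conv1d(256, feat_dim, 1),
            )
            # Decode the pooled global feature into num_out_points 3D points.
            self.decoder = nn.Sequential(
                nn.Linear(feat_dim, 1024), nn.ReLU(),
                nn.Linear(1024, 1024), nn.ReLU(),
                nn.Linear(1024, num_out_points * 3),
            )

        def forward(self, points: torch.Tensor) -> torch.Tensor:
            # points: (B, N, 3) -> transpose for Conv1d, then max-pool over N.
            feat = self.encoder(points.transpose(1, 2))  # (B, feat_dim, N)
            global_feat = torch.max(feat, dim=2).values  # (B, feat_dim)
            out = self.decoder(global_feat)              # (B, M*3)
            return out.view(-1, self.num_out_points, 3)  # (B, M, 3)

    class TwoBlockPointCloudNet(nn.Module):
        """Two sequential encoder-decoder blocks: coarse -> refined point cloud."""

        def __init__(self, num_out_points: int = 2048):
            super().__init__()
            self.block1 = EncoderDecoderBlock(num_out_points)
            self.block2 = EncoderDecoderBlock(num_out_points)

        def forward(self, sparse_points: torch.Tensor) -> torch.Tensor:
            coarse = self.block1(sparse_points)  # first pass over radar points
            return self.block2(coarse)           # second pass refines the cloud

    if __name__ == "__main__":
        # Sparse, noisy radar-derived input: batch of 4 clouds, 512 points each.
        radar_points = torch.randn(4, 512, 3)
        model = TwoBlockPointCloudNet(num_out_points=2048)
        dense_points = model(radar_points)
        print(dense_points.shape)  # torch.Size([4, 2048, 3])

Chaining two such blocks lets the second pass operate on an already-regularized cloud, which is one plausible way to realize the consistency and refinement goals stated in the abstract.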