2.5D multi-view gait recognition based on point cloud registration
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features to identify the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map the 2.5D data onto a 2D space, enabling dimension reduction by the discrete cosine transform and 2D principal component analysis. Gait recognition is then achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on an in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
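The dimension-reduction step described in this abstract (a 2D gait image compressed with the discrete cosine transform, followed by 2D principal component analysis) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the image sizes, the number of retained coefficients, and the random placeholder gallery are all assumptions.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(image, keep=16):
    """Keep the top-left keep x keep block of 2D DCT coefficients."""
    coeffs = dctn(image, norm="ortho")
    return coeffs[:keep, :keep]

def two_d_pca(images, n_components=4):
    """2DPCA: project each image A as Y = A @ W, where the columns of W
    are the leading eigenvectors of the image covariance matrix
    G = mean((A - M).T @ (A - M)) over the training set."""
    mean = images.mean(axis=0)
    G = np.mean([(a - mean).T @ (a - mean) for a in images], axis=0)
    _, eigvecs = np.linalg.eigh(G)          # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :n_components]  # top n_components eigenvectors
    return np.stack([a @ W for a in images]), W

# Toy usage: ten random 64x64 "gait images" standing in for the
# Color Gait Curvature Images described above.
gallery = np.random.rand(10, 64, 64)
dct_gallery = np.stack([dct_features(g) for g in gallery])
features, W = two_d_pca(dct_gallery)
print(features.shape)  # (10, 16, 4)
```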
DeepICP: An End-to-End Deep Neural Network for 3D Point Cloud Registration
We present DeepICP, a novel end-to-end learning-based 3D point cloud registration framework that achieves registration accuracy comparable to prior state-of-the-art geometric methods. Unlike other keypoint-based methods, where a RANSAC procedure is usually needed, we use deep neural network structures to establish an end-to-end trainable network. Our keypoint detector is trained through this end-to-end structure; it enables the system to avoid the interference of dynamic objects and to leverage sufficiently salient features on stationary objects, and as a result achieves high robustness. Rather than searching for corresponding points among existing points, our key contribution is to generate them from learned matching probabilities over a group of candidates, which boosts registration accuracy. Our loss function incorporates both local similarity and global geometric constraints to ensure that all of the above network designs converge in the right direction. We comprehensively validate the effectiveness of our approach on both the KITTI and Apollo-SouthBay datasets. The results demonstrate that our method achieves comparable or better performance than state-of-the-art geometry-based methods. Detailed ablation and visualization analyses are included to further illustrate the behavior and insights of our network. The low registration error and high robustness of our method make it attractive for the many applications that rely on point cloud registration.
Comment: 10 pages, 6 figures, 3 tables; typos corrected, experimental results updated; accepted by ICCV 2019
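The "generate rather than search" correspondence idea in this abstract can be illustrated with a small sketch: a virtual corresponding point is synthesized as a probability-weighted average of candidate target points. The similarity function below (negative descriptor distance passed through a softmax) is a placeholder assumption; in DeepICP the matching probabilities come from the learned network.

```python
import numpy as np

def soft_correspondence(src_feat, cand_feats, cand_points, temperature=0.1):
    """Generate a virtual corresponding 3D point for one source keypoint.

    src_feat:    (d,) descriptor of the source keypoint
    cand_feats:  (k, d) descriptors of k candidate target points
    cand_points: (k, 3) 3D coordinates of the candidates
    """
    # Matching logits: higher for candidates with similar descriptors.
    logits = -np.linalg.norm(cand_feats - src_feat, axis=1) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()            # softmax over the candidate set
    return probs @ cand_points      # probability-weighted 3D point

# Toy usage with random descriptors and candidate coordinates.
rng = np.random.default_rng(0)
print(soft_correspondence(rng.normal(size=32),
                          rng.normal(size=(8, 32)),
                          rng.normal(size=(8, 3))))
```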
Robust Intrinsic and Extrinsic Calibration of RGB-D Cameras
Color-depth cameras (RGB-D cameras) have become the primary sensors in most robotics systems, from service robotics to industrial robotics applications. Typical consumer-grade RGB-D cameras ship with a coarse intrinsic and extrinsic calibration that generally does not meet the accuracy requirements of many robotics applications (e.g., highly accurate 3D environment reconstruction and mapping, high-precision object recognition and localization, etc.). In this paper, we propose a human-friendly, reliable, and accurate calibration framework that makes it easy to estimate both the intrinsic and extrinsic parameters of a general color-depth sensor pair. Our approach is based on a novel two-component error model that unifies the error sources of RGB-D pairs built on different technologies, such as structured-light 3D cameras and time-of-flight cameras. Our method provides several important advantages over other state-of-the-art systems: it is general (i.e., well suited to different types of sensors), it relies on an easy and stable calibration protocol, it provides greater calibration accuracy, and it has been implemented within the ROS robotics framework. We report detailed experimental validations and performance comparisons to support our statements.
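As a rough illustration of what a two-component depth error model can look like, the sketch below combines a local per-pixel correction with a global systematic term. The specific parametric form (a multiplicative per-pixel map and a linear global model) is an assumption made for clarity; the paper's actual model may differ.

```python
import numpy as np

def correct_depth(z_measured, u, v, local_map, global_coeffs):
    """Undistort a raw depth measurement with a two-component model.

    z_measured:    raw depth reading at pixel (u, v), in meters
    local_map:     (H, W) per-pixel multiplicative correction (local term)
    global_coeffs: (c0, c1) of a global linear model z' = c0 + c1 * z
    """
    z_local = z_measured * local_map[v, u]  # local, per-pixel distortion term
    c0, c1 = global_coeffs
    return c0 + c1 * z_local                # global systematic term

# Toy usage: identity local map plus a small global bias and scale.
local_map = np.ones((480, 640))
print(correct_depth(2.0, 320, 240, local_map, (0.01, 1.002)))  # ~2.014
```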
Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard
This paper presents a novel method for fully automatic and convenient extrinsic calibration of a 3D LiDAR and a panoramic camera using an ordinary printed chessboard. The proposed method is based on estimating the 3D corners of the chessboard from the sparse point cloud generated by a single LiDAR frame scan. To estimate the corners, we formulate a full-scale model of the chessboard and fit it to the segmented 3D points of the chessboard. The model is fitted by optimizing a cost function under constraints on the correlation between the laser reflectance intensity and the colors of the chessboard pattern. Powell's method is introduced to resolve the discontinuity problem in the optimization. The corners of the fitted model are taken as the 3D corners of the chessboard. Once the chessboard corners in the 3D point cloud are estimated, the extrinsic calibration of the two sensors is converted into a 3D-2D matching problem. The corresponding 3D-2D points are used to calculate the absolute pose of the two sensors with the Unified Perspective-n-Point (UPnP) algorithm. The calculated parameters are then treated as initial values and refined using the Levenberg-Marquardt method. The performance of the proposed corner-detection method on 3D point clouds is evaluated in simulation. Experiments conducted on a Velodyne HDL-32e LiDAR and a Ladybug3 camera, assessed with the proposed re-projection error metric, qualitatively and quantitatively demonstrate the accuracy and stability of the final extrinsic calibration parameters.
Comment: 20 pages, submitted to the journal Remote Sensing