Deep Projective 3D Semantic Segmentation
Semantic segmentation of 3D point clouds is a challenging problem with
numerous real-world applications. While deep learning has revolutionized the
field of image semantic segmentation, its impact on point cloud data has been
limited so far. Recent attempts, based on 3D deep learning approaches
(3D-CNNs), have achieved below-expected results. Such methods require
voxelizations of the underlying point cloud data, leading to decreased spatial
resolution and increased memory consumption. Additionally, 3D-CNNs greatly
suffer from the limited availability of annotated datasets.
In this paper, we propose an alternative framework that avoids the
limitations of 3D-CNNs. Instead of directly solving the problem in 3D, we first
project the point cloud onto a set of synthetic 2D-images. These images are
then used as input to a 2D-CNN, designed for semantic segmentation. Finally,
the obtained prediction scores are re-projected to the point cloud to obtain
the segmentation results. We further investigate the impact of multiple
modalities, such as color, depth and surface normals, in a multi-stream network
architecture. Experiments are performed on the recent Semantic3D dataset. Our
approach sets a new state-of-the-art, achieving a relative gain of 7.9%
over the previous best approach.
Comment: Submitted to CAIP 201
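The pipeline above (project the point cloud into synthetic 2D images, run a 2D-CNN, then re-project the per-pixel scores back onto the points) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pinhole intrinsics `K`, the nearest-pixel lookup, and the function names are assumptions for the sake of the example.

```python
import numpy as np

def project_points(points, K, img_size):
    """Project 3D points (N, 3) into a synthetic image via a pinhole camera.

    K is a 3x3 intrinsics matrix (assumed; the paper's projection details
    are not given here). Returns integer pixel coordinates (N, 2) and a
    mask of points that land inside the image in front of the camera.
    """
    z = points[:, 2]
    in_front = z > 1e-6                    # discard points behind the camera
    uvw = (K @ points.T).T                 # homogeneous image coordinates
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = img_size
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv, in_front & inside

def reproject_scores(score_map, uv, mask, n_classes):
    """Gather per-pixel class scores (H, W, C) back onto the point cloud.

    Points that did not project into the image get a zero score vector;
    in practice scores from several views would be accumulated per point.
    """
    scores = np.zeros((uv.shape[0], n_classes))
    scores[mask] = score_map[uv[mask, 1], uv[mask, 0]]
    return scores
```

In the multi-view, multi-stream setting the same projection would be repeated per view and per modality (color, depth, normals), summing the re-projected score vectors before the final per-point argmax.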
Extrinsic Calibration of a Camera-Arm System Through Rotation Identification
Determining extrinsic calibration parameters is a necessity in any robotic
system composed of actuators and cameras. Once a system is outside the lab
environment, parameters must be determined without relying on outside artifacts
such as calibration targets. We propose a method that relies on structured
motion of an observed arm to recover extrinsic calibration parameters. Our
method combines known arm kinematics with observations of conics in the image
plane to calculate maximum-likelihood estimates for calibration extrinsics.
This method is validated in simulation and tested against a real-world model,
yielding results consistent with ruler-based estimates. Our method shows
promise for estimating the pose of a camera relative to an articulated arm's
end effector without requiring tedious measurements or external artifacts.
Index Terms: robotics, hand-eye problem, self-calibration, structure from
motion
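One sub-step of the method above, recovering a conic from image observations of the rotating end effector, can be sketched with a standard least-squares conic fit. This is an illustrative sketch only: the design-matrix formulation and function name are assumptions, and the paper's maximum-likelihood estimator combining the conics with arm kinematics is not reproduced here.

```python
import numpy as np

def fit_conic(pts):
    """Least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.

    pts: (N, 2) image points traced by the rotating end effector, N >= 6.
    Returns the six conic coefficients up to scale, taken as the right
    singular vector of the design matrix with the smallest singular value.
    """
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)   # null-space direction minimizes ||D @ c||
    return vt[-1]
```

A rotation of the end effector about a fixed axis traces a circle in 3D, which projects to such a conic in the image plane; fitting one conic per joint rotation yields the observations that constrain the camera-to-arm extrinsics.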
- …