Smooth Key-framing using the Image Plane
This paper demonstrates the use of image-space constraints for key-frame interpolation. Interpolating in image space results in sequences with predictable and controllable image trajectories and projected sizes for selected objects, particularly in cases where the desired center of rotation is not fixed or when the key frames contain changes in perspective distortion. Additionally, we provide the user with direct image-space control over *how* the key frames are interpolated by allowing them to directly edit the object's projected size and trajectory. Image-space key-frame interpolation requires solving the inverse camera problem over a sequence of point constraints. This is a variation of the standard camera pose problem, with the additional constraint that the sequence be visually smooth. We use image-space camera interpolation to control the projection globally, and traditional camera interpolation locally to avoid smoothness problems. We compare and contrast three different constraint-solving systems in terms of accuracy, speed, and stability. The first approach was originally developed to solve this problem [Gleicher and Witkin 1992]; we extend it to include internal camera parameter changes. The second approach uses a standard single-frame solver. The third approach is based on a novel camera formulation, and we show that it is particularly well suited to solving this problem.
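The inverse camera problem of the abstract can be illustrated with a small sketch. This is not the paper's solver, just a generic nonlinear least-squares treatment under simplifying assumptions: a pinhole camera whose only free parameters are a yaw angle and an (x, z) translation, with a fixed focal length, and hypothetical function names.

```python
import numpy as np

def project(params, X):
    """Pinhole projection of 3-D points X (N, 3) for a camera with a
    single yaw angle and an (x, z) translation; focal length fixed."""
    yaw, tx, tz = params
    f = 500.0
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    t = np.array([tx, 0.0, tz])
    Xc = X @ R.T + t                       # world -> camera frame
    return f * Xc[:, :2] / Xc[:, 2:3]      # (N, 2) image-space points

def solve_inverse_camera(X, targets, params0, iters=50):
    """Gauss-Newton with a numerical Jacobian: find camera parameters
    whose projections of X match the given image-space targets."""
    p = np.asarray(params0, dtype=float)
    for _ in range(iters):
        r = (project(p, X) - targets).ravel()
        J = np.zeros((r.size, p.size))
        for j in range(p.size):
            dp = p.copy()
            dp[j] += 1e-6
            J[:, j] = ((project(dp, X) - targets).ravel() - r) / 1e-6
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
        if np.linalg.norm(step) < 1e-10:   # converged
            break
    return p

# recover a known camera from its own projections
X = np.array([[0, 0, 5], [1, 0, 6], [-1, 1, 7], [0.5, -0.5, 8]], dtype=float)
true_params = np.array([0.1, 0.2, -0.3])
targets = project(true_params, X)
est = solve_inverse_camera(X, targets, [0.0, 0.0, 0.0])
```

Solving one such problem per frame over a sequence of point constraints is what makes the interpolation problem harder than single-frame pose estimation: each solve must also stay visually smooth relative to its neighbours.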
Trajectory Representation and Landmark Projection for Continuous-Time Structure from Motion
This paper revisits the problem of continuous-time structure from motion, and
introduces a number of extensions that improve convergence and efficiency. The
formulation with a C^2-continuous spline for the trajectory
naturally incorporates inertial measurements, as derivatives of the sought
trajectory. We analyse the behaviour of split interpolation on SO(3)
and on R^3, and a joint interpolation on SE(3), and show
that the latter implicitly couples the direction of translation and rotation.
Such an assumption can make good sense for a camera mounted on a robot arm, but
not for hand-held or body-mounted cameras. Our experiments show that split
interpolation on SO(3) and on R^3 is preferable to SE(3)
interpolation in all tested cases. Finally, we investigate the
problem of landmark reprojection on rolling-shutter cameras, and show that the
tested reprojection methods give similar quality, while their computational
load varies by a factor of 2.
Comment: Submitted to IJR
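The coupling that the abstract attributes to joint interpolation can be seen directly in a toy example. The sketch below (a hedged illustration, not the paper's spline formulation) interpolates between two poses once independently on the rotation and translation components and once along the SE(3) geodesic; the latter makes the translation path depend on the rotation.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def so3_exp(w):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-10:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

def so3_log(R):
    """Rotation matrix -> axis-angle vector."""
    th = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if th < 1e-10:
        return np.zeros(3)
    return th / (2 * np.sin(th)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def left_jacobian(w):
    """V matrix relating the translation parts of se(3) and SE(3)."""
    th = np.linalg.norm(w)
    if th < 1e-10:
        return np.eye(3)
    K = hat(w)
    return (np.eye(3) + (1 - np.cos(th)) / th**2 * K
            + (th - np.sin(th)) / th**3 * K @ K)

def se3_exp(xi):
    rho, w = xi[:3], xi[3:]
    T = np.eye(4)
    T[:3, :3] = so3_exp(w)
    T[:3, 3] = left_jacobian(w) @ rho
    return T

def se3_log(T):
    w = so3_log(T[:3, :3])
    return np.concatenate([np.linalg.inv(left_jacobian(w)) @ T[:3, 3], w])

def interp_joint(T0, T1, u):
    """Geodesic on SE(3): rotation and translation are coupled."""
    return T0 @ se3_exp(u * se3_log(np.linalg.inv(T0) @ T1))

def interp_split(T0, T1, u):
    """Split interpolation on SO(3) and R^3: components independent."""
    T = np.eye(4)
    T[:3, :3] = T0[:3, :3] @ so3_exp(u * so3_log(T0[:3, :3].T @ T1[:3, :3]))
    T[:3, 3] = (1 - u) * T0[:3, 3] + u * T1[:3, 3]
    return T

# 90-degree yaw plus a 1 m translation along x
T0 = np.eye(4)
T1 = np.eye(4)
T1[:3, :3] = so3_exp(np.array([0.0, 0.0, np.pi / 2]))
T1[:3, 3] = [1.0, 0.0, 0.0]
mid_split = interp_split(T0, T1, 0.5)
mid_joint = interp_joint(T0, T1, 0.5)
```

The split midpoint translation is the straight-line point (0.5, 0, 0), while the SE(3) midpoint follows a screw-motion arc whose sideways offset is induced purely by the rotation, which is the coupling the experiments argue against for hand-held cameras.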
The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM
New vision sensors, such as the Dynamic and Active-pixel Vision sensor
(DAVIS), incorporate a conventional global-shutter camera and an event-based
sensor in the same pixel array. These sensors have great potential for
high-speed robotics and computer vision because they allow us to combine the
benefits of conventional cameras with those of event-based sensors: low
latency, high temporal resolution, and very high dynamic range. However, new
algorithms are required to exploit the sensor characteristics and cope with its
unconventional output, which consists of a stream of asynchronous brightness
changes (called "events") and synchronous grayscale frames. For this purpose,
we present and release a collection of datasets captured with a DAVIS in a
variety of synthetic and real environments, which we hope will motivate
research on new algorithms for high-speed and high-dynamic-range robotics and
computer-vision applications. In addition to global-shutter intensity images
and asynchronous events, we provide inertial measurements and ground-truth
camera poses from a motion-capture system. The latter allows comparing the pose
accuracy of ego-motion estimation algorithms quantitatively. All the data are
released both as standard text files and binary files (i.e., rosbag). This
paper provides an overview of the available data and describes a simulator that
we release open-source to create synthetic event-camera data.
Comment: 7 pages, 4 figures, 3 tables
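As an illustration of working with such data, asynchronous events from a plain-text release could be parsed and accumulated into a signed event image as below. The four-column `timestamp x y polarity` layout and the helper names are assumptions for this sketch, not a specification of the dataset's files.

```python
import io
import numpy as np

def load_events(fileobj):
    """Parse plain-text events, one per line:
    timestamp[s]  x  y  polarity(0/1)   (assumed layout)."""
    return np.loadtxt(fileobj).reshape(-1, 4)

def event_image(events, height, width, t0, t1):
    """Accumulate events with t in [t0, t1) into a signed count image:
    +1 per ON event and -1 per OFF event at each pixel."""
    img = np.zeros((height, width), dtype=np.int32)
    sel = (events[:, 0] >= t0) & (events[:, 0] < t1)
    for t, x, y, p in events[sel]:
        img[int(y), int(x)] += 1 if p > 0 else -1
    return img

# simulate a tiny event file: two opposite events at (3, 2), one ON
# event at (1, 0) inside the window, one event outside the window
txt = "0.01 3 2 1\n0.02 3 2 0\n0.03 1 0 1\n0.10 0 0 1\n"
ev = load_events(io.StringIO(txt))
img = event_image(ev, 4, 5, 0.0, 0.05)
```

Accumulating a short time window like this is a common first visualization step for event streams, since opposite-polarity events at the same pixel cancel and moving edges stand out.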
Physical Interaction of Autonomous Robots in Complex Environments
Recent breakthroughs in the fields of computer vision and robotics are firmly changing people's perception of robots. The idea of robots that substitute for humans is now turning into robots that collaborate with them. Service robotics considers robots as personal assistants: it safely places robots in domestic environments in order to facilitate humans' daily life. Industrial robotics is now reconsidering its basic idea of the robot as a worker. Currently, the primary method to guarantee personnel safety in industrial environments is the installation of physical barriers around the working area of robots. The development of new technologies and new algorithms in the sensor field and in robotics has led to a new generation of lightweight and collaborative robots. Industrial robotics has therefore leveraged the intrinsic properties of this kind of robot to create a robot co-worker that is able to safely coexist, collaborate and interact inside its workspace with both personnel and objects. This Ph.D. dissertation focuses on the generation of a pipeline for fast object pose estimation and distance computation of moving objects, in both structured and unstructured environments, using RGB-D images. This pipeline outputs the command actions which let the robot complete its main task while fulfilling the safe human-robot coexistence behaviour. The proposed pipeline is divided into an object segmentation part, a 6-D.o.F. object pose estimation part, and a real-time collision avoidance part for safe human-robot coexistence. Firstly, the segmentation module finds candidate object clusters in RGB-D images of cluttered scenes using a graph-based image segmentation technique. This technique generates a cluster of pixels for each object found in the image. The candidate object clusters are then fed as input to the 6-D.o.F. object pose estimation module.
The latter is in charge of estimating both the translation and the orientation in 3D space of each candidate object cluster. The object pose is then employed by the robotic arm to compute a suitable grasping policy. The last module generates a force vector field of the environment surrounding the robot, the objects and the humans. This force vector field drives the robot toward its goal while safely avoiding any potential collision with objects and/or humans. This work has been carried out at Politecnico di Torino, in collaboration with Telecom Italia S.p.A.
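The force-vector-field idea in the last module resembles a classical artificial potential field: an attractive pull toward the goal plus repulsive pushes away from nearby obstacles. The sketch below is a minimal reading of that idea; the function name, gains, and force law are illustrative assumptions, not the dissertation's actual formulation.

```python
import numpy as np

def command_force(robot, goal, obstacles, k_att=1.0, k_rep=1.0, d0=1.0):
    """Artificial-potential-field sketch: the returned vector drives
    the robot toward the goal while pushing it away from any obstacle
    closer than the influence distance d0."""
    robot = np.asarray(robot, dtype=float)
    goal = np.asarray(goal, dtype=float)
    F = k_att * (goal - robot)                    # attractive term
    for obs in obstacles:
        diff = robot - np.asarray(obs, dtype=float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:                          # repulsion active
            F += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return F

# far obstacle: pure attraction; near obstacle on the path: net push back
F_free = command_force([0, 0], [2, 0], [[5, 5]])
F_block = command_force([0, 0], [2, 0], [[0.5, 0]])
```

A command action derived from such a field lets the robot pursue its main task and satisfy the safety behaviour at once, which matches the role the abstract assigns to the last module.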
LO-Net: Deep Real-time Lidar Odometry
We present a novel deep convolutional network pipeline, LO-Net, for real-time
lidar odometry estimation. Unlike most existing lidar odometry (LO) methods,
which go through an individually designed pipeline of feature selection,
feature matching, and pose estimation, LO-Net can be trained in an end-to-end
manner. With a
new mask-weighted geometric constraint loss, LO-Net can effectively learn
feature representation for LO estimation, and can implicitly exploit the
sequential dependencies and dynamics in the data. We also design a scan-to-map
module, which uses the geometric and semantic information learned in LO-Net, to
improve the estimation accuracy. Experiments on benchmark datasets demonstrate
that LO-Net outperforms existing learning-based approaches and achieves
accuracy similar to that of the state-of-the-art geometry-based approach, LOAM.
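One way to read a mask-weighted geometric constraint loss is as a per-point geometric error down-weighted by a learned reliability mask, plus a regularizer that forbids the trivial all-zero mask. The numpy sketch below is an assumption-laden illustration of that reading, not LO-Net's exact loss.

```python
import numpy as np

def mask_weighted_geometric_loss(pred_pts, target_pts, mask, lam=0.1):
    """Per-point geometric error weighted by a mask in (0, 1]
    (low weight on dynamic/unreliable points), plus a -log(mask)
    regularizer that penalizes collapsing the mask to zero."""
    err = np.linalg.norm(pred_pts - target_pts, axis=1)    # per-point error
    data_term = np.mean(mask * err)
    reg_term = -lam * np.mean(np.log(np.clip(mask, 1e-6, 1.0)))
    return data_term + reg_term

# a scene with one outlier point (e.g. a moving object)
pts = np.zeros((4, 3))
tgt = pts.copy()
tgt_outlier = tgt.copy()
tgt_outlier[0] = [3.0, 0.0, 0.0]
loss_clean = mask_weighted_geometric_loss(pts, tgt, np.ones(4))
loss_full = mask_weighted_geometric_loss(pts, tgt_outlier, np.ones(4))
mask = np.ones(4)
mask[0] = 0.1                      # network has learned to distrust point 0
loss_masked = mask_weighted_geometric_loss(pts, tgt_outlier, mask)
```

The regularizer makes down-weighting costly, so the mask only suppresses points whose geometric error is large, which is how such a loss can implicitly account for dynamic objects in the data.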
Tightly Coupled 3D Lidar Inertial Odometry and Mapping
Ego-motion estimation is a fundamental requirement for most mobile robotic
applications. By sensor fusion, we can compensate for the deficiencies of
stand-alone sensors and provide more reliable estimations. We introduce a
tightly coupled lidar-IMU fusion method in this paper. By jointly minimizing
the cost derived from lidar and IMU measurements, the lidar-IMU odometry (LIO)
can perform well with acceptable drift over long-term experiments, even in
challenging cases where the lidar measurements can be degraded. In addition, to
obtain more reliable estimations of the lidar poses, a rotation-constrained
refinement algorithm (LIO-mapping) is proposed to further align the lidar poses
with the global map. The experiment results demonstrate that the proposed
method can estimate the poses of the sensor pair at the IMU update rate with
high precision, even under fast motion conditions or with insufficient
features.
Comment: Accepted by ICRA 201
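The idea of jointly minimizing a cost derived from lidar and IMU measurements can be sketched on a toy 1-D problem: absolute position factors from the lidar and relative motion factors from the IMU enter a single least-squares problem. This is a didactic simplification with made-up names, not the paper's LIO formulation.

```python
import numpy as np

def fuse_lidar_imu(lidar_pos, imu_delta, w_lidar=1.0, w_imu=1.0):
    """Jointly minimize, over 1-D poses x_0..x_{n-1}:
        w_lidar * sum_i (x_i - lidar_pos_i)^2
      + w_imu   * sum_i (x_{i+1} - x_i - imu_delta_i)^2
    by stacking both factor types into one linear system."""
    n = len(lidar_pos)
    rows, rhs = [], []
    for i in range(n):                       # absolute lidar factors
        r = np.zeros(n)
        r[i] = np.sqrt(w_lidar)
        rows.append(r)
        rhs.append(np.sqrt(w_lidar) * lidar_pos[i])
    for i in range(n - 1):                   # relative IMU factors
        r = np.zeros(n)
        r[i + 1], r[i] = np.sqrt(w_imu), -np.sqrt(w_imu)
        rows.append(r)
        rhs.append(np.sqrt(w_imu) * imu_delta[i])
    A, b = np.vstack(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# consistent measurements are recovered exactly
x_clean = fuse_lidar_imu([0.0, 1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
# a degraded lidar reading (5 instead of 2) is corrected when the
# IMU factors are weighted strongly
x_robust = fuse_lidar_imu([0.0, 1.0, 5.0, 3.0], [1.0, 1.0, 1.0],
                          w_imu=100.0)
```

Weighting the relative factors heavily mirrors the abstract's claim that the joint cost keeps the estimate reliable even when the lidar measurements are degraded; in a real system the weights would come from the sensors' noise models and the states would be full 6-D.o.F. poses with IMU preintegration.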