Camera motion estimation through planar deformation determination
In this paper, we propose a global method for estimating the motion of a
camera filming a static scene. Our approach is direct, fast, and robust, and
operates on adjacent frames of a sequence. It is based on a quadratic
approximation of the deformation between two images for a scene of constant
depth in the camera coordinate system. This condition is very restrictive,
but we show that, provided the translation and the variations of the inverse
depth are small enough, the error in the optical flow induced by
approximating the depth by a constant is small. In this context, we propose a
new model of camera motion that separates the image deformation into a
similarity and a "purely" projective map due to the change of optical-axis
direction. This model leads to a quadratic approximation of the image
deformation, which we estimate with an M-estimator; the camera motion
parameters then follow immediately.
Comment: 21 pages, revised version accepted 20 March 200
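
The abstract leaves the model implicit; for orientation, the classical
motion-field equations for a scene at constant depth Z (unit focal length,
translation T, rotation omega) are quadratic in the image coordinates and
split along exactly these lines. This is the standard constant-depth motion
field, not necessarily the authors' exact parameterization:

    % Instantaneous motion field at constant depth Z (unit focal length).
    % The T/Z and \omega_z terms form an instantaneous similarity; the
    % \omega_x, \omega_y terms are the "purely" projective part due to the
    % change of optical-axis direction. Signs depend on conventions.
    \begin{aligned}
    u(x,y) &= \frac{T_z x - T_x}{Z} + \omega_z y - \omega_y (1 + x^2) + \omega_x x y, \\
    v(x,y) &= \frac{T_z y - T_y}{Z} - \omega_z x + \omega_x (1 + y^2) - \omega_y x y.
    \end{aligned}
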
Correlation Flow: Robust Optical Flow Using Kernel Cross-Correlators
Robust velocity and position estimation is crucial for autonomous robot
navigation. Optical-flow-based methods for autonomous navigation have been
receiving increasing attention in tandem with the development of micro
unmanned aerial vehicles. This paper proposes a kernel cross-correlator (KCC)
based algorithm to determine optical flow using a monocular camera, named
correlation flow (CF). Correlation flow provides reliable and accurate
velocity estimation and is robust to motion blur. In addition, it can also
estimate the altitude velocity and yaw rate, which are not available from
traditional methods. Autonomous flight tests on a quadcopter show that
correlation flow can provide robust trajectory estimation with very low
processing power. The source code, built on the ROS framework, is released.
Comment: 2018 International Conference on Robotics and Automation (ICRA 2018)
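
As a rough illustration of the correlation idea, here is a minimal
phase-correlation sketch in Python; it is the linear special case that the
paper's kernel cross-correlator generalizes, and the function name and
constants are ours, not from the paper:

    import numpy as np

    def image_shift(prev, curr):
        # Phase correlation between two grayscale frames: the peak of the
        # normalized cross-power spectrum gives the integer pixel shift.
        R = np.conj(np.fft.fft2(prev)) * np.fft.fft2(curr)
        R /= np.abs(R) + 1e-9                 # keep phase only
        corr = np.real(np.fft.ifft2(R))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = prev.shape
        if dy > h // 2: dy -= h               # wrap to signed shifts
        if dx > w // 2: dx -= w
        return dx, dy

Dividing the shift by the frame interval, and scaling by the ratio of scene
depth to focal length, then yields a metric velocity estimate.
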
Exploring Convolutional Networks for End-to-End Visual Servoing
Present image-based visual servoing approaches rely on extracting
hand-crafted visual features from an image. Choosing the right set of
features is important, as it directly affects the performance of any
approach. Motivated by recent breakthroughs in the performance of data-driven
methods on recognition and localization tasks, we aim to learn visual feature
representations suitable for servoing tasks in unstructured and unknown
environments. In this paper, we present an end-to-end learning-based approach
for visual servoing in diverse scenes where knowledge of the camera
parameters and scene geometry is not available a priori. This is achieved by
training a convolutional neural network on color images with synchronised
camera poses. Through experiments performed in simulation and on a quadrotor,
we demonstrate the efficacy and robustness of our approach for a wide range
of camera poses in both indoor and outdoor environments.
Comment: IEEE ICRA 201
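
The abstract does not detail the architecture; a minimal sketch of the
general idea, with every layer size an assumption of ours, could look like
the following, regressing a 6-DoF motion command from the current and target
frames:

    import torch
    import torch.nn as nn

    class ServoNet(nn.Module):
        # Hypothetical network: current and target RGB frames are stacked
        # along the channel axis and regressed to a 6-DoF command.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(6, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, 6)  # (vx, vy, vz, wx, wy, wz)

        def forward(self, current, target):
            x = torch.cat([current, target], dim=1)
            return self.head(self.features(x).flatten(1))
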
Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter
There are many geometric calibration methods for "standard" cameras. These
methods, however, cannot be used to calibrate telescopes with large focal
lengths and complex off-axis optics. Moreover, specialized calibration
methods for telescopes are scarce in the literature. We describe the
calibration method that we developed for the Colour and Stereo Surface
Imaging System (CaSSIS) telescope on board the ExoMars Trace Gas Orbiter
(TGO). Although our method is described in the context of CaSSIS, with
camera-specific experiments, it is general and can be applied to other
telescopes. We further encourage re-use of the proposed method by making our
calibration code and data available online.
Comment: Submitted to Advances in Space Research
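
For contrast, the "standard"-camera calibration that the abstract says does
not transfer to such telescopes is typically a checkerboard-plus-
reprojection-error routine, e.g. with OpenCV. This sketch shows that
baseline, not the CaSSIS method; the images list is assumed given:

    import cv2
    import numpy as np

    pattern = (9, 6)                           # inner checkerboard corners
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, img_pts = [], []
    for img in images:                         # grayscale views, assumed given
        found, corners = cv2.findChessboardCorners(img, pattern)
        if found:
            obj_pts.append(obj)
            img_pts.append(corners)

    # Pinhole-model fit; long-focal-length, off-axis optics violate its
    # assumptions, which is the problem the paper addresses.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, images[0].shape[::-1], None, None)
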
The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM
New vision sensors, such as the Dynamic and Active-pixel Vision sensor
(DAVIS), incorporate a conventional global-shutter camera and an event-based
sensor in the same pixel array. These sensors have great potential for
high-speed robotics and computer vision because they allow us to combine the
benefits of conventional cameras with those of event-based sensors: low
latency, high temporal resolution, and very high dynamic range. However, new
algorithms are required to exploit the sensor's characteristics and cope with
its unconventional output, which consists of a stream of asynchronous
brightness changes (called "events") and synchronous grayscale frames. For
this purpose,
we present and release a collection of datasets captured with a DAVIS in a
variety of synthetic and real environments, which we hope will motivate
research on new algorithms for high-speed and high-dynamic-range robotics and
computer-vision applications. In addition to global-shutter intensity images
and asynchronous events, we provide inertial measurements and ground-truth
camera poses from a motion-capture system. The latter allows a quantitative
comparison of the pose accuracy of ego-motion estimation algorithms. All the
data are released both as standard text files and as binary files (i.e.,
rosbag). This paper provides an overview of the available data and describes
a simulator that we release as open source to create synthetic event-camera
data.
Comment: 7 pages, 4 figures, 3 tables
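
As a usage note, here is a minimal reader sketch for the text release,
assuming the common one-event-per-line layout (timestamp, x, y, polarity);
the exact column layout should be checked against the dataset documentation:

    import numpy as np

    def load_events(path):
        # Each row: timestamp [s], x [px], y [px], polarity {0, 1}.
        t, x, y, p = np.loadtxt(path).T
        return t, x.astype(int), y.astype(int), p.astype(int)

    t, x, y, p = load_events("events.txt")  # file name assumed
    print(f"{t.size} events spanning {t[-1] - t[0]:.3f} s")
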