RPNet: an End-to-End Network for Relative Camera Pose Estimation
This paper addresses the task of relative camera pose estimation from raw
image pixels, by means of deep neural networks. The proposed RPNet network
takes pairs of images as input and directly infers the relative poses, without
the need for camera intrinsics or extrinsics. While state-of-the-art systems based on
SIFT + RANSAC are able to recover the translation vector only up to scale,
RPNet is trained end-to-end to produce the full translation vector.
Experiments on the Cambridge Landmarks dataset show very promising
results for the recovery of the full translation vector. They also show
that RPNet produces more accurate and more stable results than traditional
approaches, especially on hard images (repetitive textures, textureless
images, etc.). To the best of our knowledge, RPNet is the first attempt to
recover full translation vectors in relative pose estimation.
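To make "full translation vector" concrete: given two absolute camera poses, the relative pose follows from a standard geometric relation, and an epipolar pipeline (SIFT + RANSAC) can recover only the direction of that translation, not its norm. The sketch below shows this relation with hypothetical pose values; it is the classical geometry, not RPNet's learned mapping.

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Relative pose taking camera-1 coordinates to camera-2 coordinates.

    Poses are world-to-camera: x_cam = R @ x_world + t.
    """
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    return R_rel, t_rel

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Two synthetic camera poses (hypothetical values, for illustration only).
R1, t1 = rot_z(0.1), np.array([0.0, 0.0, 1.0])
R2, t2 = rot_z(0.3), np.array([0.5, 0.0, 1.2])

R_rel, t_rel = relative_pose(R1, t1, R2, t2)

# A SIFT + RANSAC epipolar pipeline recovers only this unit direction;
# a metric method must additionally recover the scale (the norm).
direction = t_rel / np.linalg.norm(t_rel)
scale = np.linalg.norm(t_rel)
```

Composing pose 1 with the relative pose reproduces pose 2, which is the defining property of the relative transform.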
Learning how to be robust: Deep polynomial regression
Polynomial regression is a recurring problem with a large number of
applications. In computer vision it often appears in motion analysis. Whatever
the application, standard methods for regression of polynomial models tend to
deliver biased results when the input data is heavily contaminated by outliers.
Moreover, the problem is even harder when outliers have strong structure.
Departing from problem-tailored heuristics for robust estimation of parametric
models, we explore deep convolutional neural networks. Our work aims to find a
generic approach for training deep regression models without the explicit need
for supervised annotation. We bypass the need for a tailored loss function on
the regression parameters by attaching to our model a differentiable hard-wired
decoder corresponding to the polynomial operation at hand. We demonstrate the
value of our findings by comparing with standard robust regression methods.
Furthermore, we demonstrate how to use such models for a real computer vision
problem, i.e., video stabilization. The qualitative and quantitative
experiments show that neural networks are able to learn robustness for general
polynomial regression, with results that clearly surpass those of traditional
robust estimation methods.
Real-time model-based video stabilization for microaerial vehicles
The emerging branch of micro aerial vehicles (MAVs) has attracted great interest for its indoor navigation capabilities, but these vehicles require high-quality video for tele-operated or autonomous tasks. A common cause of poor on-board video quality is undesired movement, which different approaches address with either mechanical stabilizers or video-stabilization software. Very few video-stabilization algorithms in the literature can run in real time, and those that do fail to discriminate between the intentional movements of the tele-operator and undesired ones. In this paper, a novel technique is introduced for real-time video stabilization with low computational cost, without generating false movements or degrading the stabilized video sequence. Our proposal combines geometric transformations and outlier rejection to obtain a robust inter-frame motion estimation, together with a Kalman filter, based on an ANN-learned model of the MAV that includes the control action, for motion-intention estimation.
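The motion-intention idea can be sketched with a generic constant-velocity Kalman filter that smooths a noisy per-frame camera trajectory: the smoothed path is treated as intentional motion, and the residual is the jitter to be warped away. The paper's filter is based on an ANN-learned MAV model including the control action, which is not reproduced here; the filter below is a textbook baseline and all values are hypothetical.

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over a 1-D trajectory.

    z: noisy per-frame camera displacement (e.g. accumulated x-translation).
    q: process-noise scale (trust in the intentional-motion model).
    r: measurement-noise variance (magnitude of the jitter to remove).
    """
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state: [position, velocity]
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.zeros(2)
    P = np.eye(2)
    out = []
    for zk in z:
        # Predict with the constant-velocity model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the measured displacement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([zk]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# Hypothetical trajectory: a slow intentional pan plus per-frame jitter.
rng = np.random.default_rng(1)
t = np.arange(300)
intent = 0.05 * t                              # tele-operator's pan
measured = intent + 0.5 * rng.normal(size=t.size)
smooth = kalman_smooth(measured)

# The stabilizing warp per frame is the estimated jitter.
correction = measured - smooth
```

Tuning `q` versus `r` sets the split: a small `q` treats more of the motion as jitter, while a large `q` lets the filter follow fast intentional maneuvers.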