Milli-RIO: Ego-Motion Estimation with Low-Cost Millimetre-Wave Radar
Robust indoor ego-motion estimation has attracted significant interest in
recent decades due to the fast-growing demand for location-based services in
indoor environments. Among various solutions, frequency-modulated
continuous-wave (FMCW) radar sensors in millimeter-wave (MMWave) spectrum are
gaining more prominence due to their intrinsic advantages such as penetration
capability and high accuracy. Single-chip low-cost MMWave radar as an emerging
technology provides an alternative and complementary solution for robust
ego-motion estimation, making it feasible in resource-constrained platforms
thanks to low-power consumption and easy system integration. In this paper, we
introduce Milli-RIO, an MMWave radar-based solution making use of a single-chip
low-cost radar and inertial measurement unit sensor to estimate
six-degrees-of-freedom ego-motion of a moving radar. Detailed quantitative and
qualitative evaluations show that the proposed method achieves a precision on
the order of a few centimeters for indoor localization tasks.
Comment: Submitted to IEEE Sensors, 9 pages
Accurate Stereo Visual Odometry with Gamma Distributions
Point-based stereo visual odometry systems
typically estimate the camera motion by minimizing a cost function of the projection residuals between consecutive frames. Under some mild assumptions, such minimization is equivalent to maximizing the probability of the measured residuals given a certain pose change, for which a suitable model of the error distribution (sensor model) is of central importance for obtaining accurate results. This paper proposes a robust probabilistic model for projection errors, based on real-world data. We argue that projection distances follow Gamma distributions, and hence the introduction of these models in a probabilistic formulation of the motion estimation process increases both precision and accuracy. Our approach has been validated through a series of experiments with both synthetic and real data, revealing an improvement in accuracy without increasing the computational burden.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Project "PROMOVE: Advances in mobile robotics for promoting independent life of elders", funded by the Spanish Government and the European Regional Development Fund (ERDF) under contract DPI2014-55826-R
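The paper's central claim (that projection residual magnitudes are Gamma-distributed, so a Gamma likelihood makes a better cost than least squares) can be illustrated with a short, self-contained sketch. This is not the authors' code: the method-of-moments fit and the synthetic residuals are purely illustrative.

```python
import numpy as np
from math import lgamma

def fit_gamma_moments(residuals):
    """Fit Gamma(k, theta) to non-negative residual magnitudes via the
    method of moments: k = mean^2 / var, theta = var / mean."""
    r = np.asarray(residuals, dtype=float)
    mean, var = r.mean(), r.var()
    return mean**2 / var, var / mean          # shape k, scale theta

def gamma_neg_log_lik(r, k, theta):
    """Negative log-likelihood under Gamma(k, theta); minimizing this over
    candidate poses replaces the usual sum-of-squares cost."""
    r = np.asarray(r, dtype=float)
    return float(np.sum(k * np.log(theta) + lgamma(k)
                        - (k - 1) * np.log(r) + r / theta))

# synthetic "projection residuals" drawn from a known Gamma distribution
rng = np.random.default_rng(0)
residuals = rng.gamma(shape=2.0, scale=0.5, size=20000)
k, theta = fit_gamma_moments(residuals)       # should recover roughly (2.0, 0.5)
```

Minimizing `gamma_neg_log_lik` over candidate pose changes, instead of the sum of squared residuals, is the kind of probabilistic reweighting the abstract describes.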
Dynamic Rigid Motion Estimation From Weak Perspective
“Weak perspective” is a simplified projection model that approximates the imaging process when the scene is viewed under a small viewing angle and its depth relief is small relative to its distance from the viewer. We study how to generate dynamic models for estimating rigid 3D motion from weak perspective. A crucial feature in dynamic visual motion estimation is to decouple structure from motion in the estimation model. The reasons are both geometric (to achieve global observability of the model) and practical, since a structure-independent motion estimator allows us to deal with occlusions and the appearance of new features in a principled way. It is also possible to push the decoupling even further and isolate the motion parameters that are affected by the so-called “bas-relief ambiguity” from those that are not. We present a novel method for reducing the order of the estimator by decoupling portions of the state space from the time evolution of the measurement constraint. We use this method to construct an estimator of full rigid motion (modulo a scaling factor) on a six-dimensional state space, an approximate estimator for a four-dimensional subset of the motion space, and a reduced filter with only two states. The latter two are immune to the bas-relief ambiguity. We compare the strengths and weaknesses of each scheme on real and synthetic image sequences.
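As a concrete illustration of the projection model discussed above: weak perspective replaces each point's individual depth with the scene's average depth, reducing projection to a single scale times an orthographic map. The following sketch (illustrative, not from the paper) compares it with full perspective on a shallow, distant scene:

```python
import numpy as np

def weak_perspective_project(X, R, t, f):
    """Project 3-D points under weak perspective: every point shares the
    average depth, so projection is one scale factor times orthographic."""
    Xc = X @ R.T + t                  # points in the camera frame
    s = f / Xc[:, 2].mean()           # single common scale factor
    return s * Xc[:, :2]

# a shallow scene far from the camera: weak perspective ~ full perspective
rng = np.random.default_rng(1)
X = rng.normal(loc=[0.0, 0.0, 50.0], scale=[1.0, 1.0, 0.1], size=(100, 3))
R, t, f = np.eye(3), np.zeros(3), 1.0
weak = weak_perspective_project(X, R, t, f)
persp = f * X[:, :2] / X[:, 2:3]      # full perspective projection
err = np.abs(weak - persp).max()      # small because depth relief << distance
```

The approximation error shrinks as the depth relief (0.1 here) becomes small relative to the viewing distance (50), which is exactly the regime the abstract assumes.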
Towards Visual Ego-motion Learning in Robots
Many model-based Visual Odometry (VO) algorithms have been proposed in the
past decade, often restricted to the type of camera optics, or the underlying
motion manifold observed. We envision robots to be able to learn and perform
these tasks, in a minimally supervised setting, as they gain more experience.
To this end, we propose a fully trainable solution to visual ego-motion
estimation for varied camera optics. We propose a visual ego-motion learning
architecture that maps observed optical flow vectors to an ego-motion density
estimate via a Mixture Density Network (MDN). By modeling the architecture as a
Conditional Variational Autoencoder (C-VAE), our model is able to provide
introspective reasoning and prediction for ego-motion induced scene-flow.
Additionally, our proposed model is especially amenable to bootstrapped
ego-motion learning in robots where the supervision in ego-motion estimation
for a particular camera sensor can be obtained from standard navigation-based
sensor fusion strategies (GPS/INS and wheel-odometry fusion). Through
experiments, we show the utility of our proposed approach in enabling the
concept of self-supervised learning for visual ego-motion estimation in
autonomous robots.
Comment: Conference paper; Submitted to IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS) 2017, Vancouver CA; 8 pages, 8 figures,
2 tables
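The MDN at the heart of the architecture outputs the parameters of a Gaussian mixture over ego-motion rather than a point estimate. A minimal numpy sketch of such an output layer and its training loss (1-D motion, random untrained weights, all names illustrative and not the paper's architecture) might look like:

```python
import numpy as np

def mdn_params(z, W_pi, W_mu, W_sigma):
    """Map a feature vector z (a stand-in for pooled optical flow) to the
    parameters of a K-component Gaussian mixture over a 1-D ego-motion."""
    logits = z @ W_pi
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                      # mixture weights (softmax)
    mu = z @ W_mu                       # component means
    sigma = np.exp(z @ W_sigma)         # positive standard deviations
    return pi, mu, sigma

def mdn_nll(y, pi, mu, sigma):
    """Negative log-likelihood of a target ego-motion y under the mixture;
    this is the loss an MDN is trained to minimize."""
    comp = pi * np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return -np.log(comp.sum())

rng = np.random.default_rng(0)
z = rng.normal(size=8)                  # illustrative 8-dim flow feature
W = lambda: rng.normal(size=(8, 3))     # K = 3 untrained weight matrices
pi, mu, sigma = mdn_params(z, W(), W(), 0.1 * W())
nll = mdn_nll(0.0, pi, mu, sigma)       # loss for a target ego-motion of 0.0
```

Returning a full density rather than a single value is what enables the introspective uncertainty reasoning the abstract mentions.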
Vehicle Trajectory from an Uncalibrated Stereo-Rig with Super-Homography
We present in this article an original way to estimate the trajectory of a vehicle running in urban-like areas. The method consists of extracting and then tracking features (points, lines) with an uncalibrated stereo-rig from the road, assumed to be a plane, in order to compute homographies relative to the camera(s) motions. The proposed method copes with dense traffic conditions: the free space required (the first ten meters in front of the vehicle) is roughly equivalent to the safety distance between two vehicles. Experimental results from real data are presented and discussed.
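The homography computation underlying the method can be sketched with the standard direct linear transform (DLT). This is a generic illustration for intuition, not the paper's "super-homography" formulation:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: estimate the 3x3 homography H with
    dst ~ H @ src from N >= 4 point correspondences on a plane."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null vector = flattened H
    return H / H[2, 2]                # fix the projective scale

# points on a road plane mapped through a known ground-truth homography
H_true = np.array([[1.1, 0.02, 5.0],
                   [0.0, 0.90, -2.0],
                   [1e-4, 0.0, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 50], [0, 50], [40, 20]], dtype=float)
h = np.column_stack([src, np.ones(len(src))]) @ H_true.T
dst = h[:, :2] / h[:, 2:3]
H_est = homography_dlt(src, dst)      # recovers H_true up to numerical noise
```

Tracking how such a road-plane homography evolves between frames is what lets a planar-road method recover the camera motion without calibration.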
Real Time Monocular Visual Odometry Using Hybrid Features and Distance Ratio for Scale Estimation
Real time dead-reckoning navigation is important for supplying the current position of an autonomous mobile robot so it can complete its task, especially in areas that are hazardous or GPS-denied. Monocular visual odometry is a good choice, as it is a dead-reckoning navigation method that uses only a single camera. For real-time operation, visual odometry requires fast feature extraction without sacrificing accuracy. We therefore propose a hybrid feature, i.e. the CenSurE feature detector combined with the upright SURF feature descriptor, for feature extraction. However, scale ambiguity remains a challenging problem for monocular visual odometry: without additional information from other sensors, the scale must be estimated directly. In our proposed work, a distance ratio is employed to tackle this problem. Experimental results show the performance of the designed algorithm. A real example of running the proposed algorithm on an embedded device is also provided to demonstrate its real-time capability.
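One common distance-ratio idea (sketched here for intuition; the paper's exact formulation may differ) is that the ratio of distances between the same pair of reconstructed 3-D points in two frames cancels the unknown per-frame scale:

```python
import numpy as np

def relative_scale(prev_pts, curr_pts):
    """Relative scale between two up-to-scale reconstructions of the same
    3-D points: ratios of matching pairwise distances cancel the unknown
    per-frame scale, and the median makes the estimate robust to outliers."""
    i, j = np.triu_indices(len(prev_pts), k=1)
    d_prev = np.linalg.norm(prev_pts[i] - prev_pts[j], axis=1)
    d_curr = np.linalg.norm(curr_pts[i] - curr_pts[j], axis=1)
    return float(np.median(d_prev / d_curr))

rng = np.random.default_rng(2)
P = rng.normal(size=(30, 3))          # points from the previous frame
scale = relative_scale(P, 0.5 * P)    # current frame reconstructed at half scale
```

Multiplying the current frame's translation by this factor keeps consecutive monocular reconstructions in a consistent scale.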
Accurate and Robust Scale Recovery for Monocular Visual Odometry Based on Plane Geometry
Scale ambiguity is a fundamental problem in monocular visual odometry.
Typical solutions include loop closure detection and environment information
mining. For applications like self-driving cars, loop closure is not always
available, hence mining prior knowledge from the environment becomes a more
promising approach. In this paper, with the assumption of a constant height of
the camera above the ground, we develop a light-weight scale recovery framework
leveraging an accurate and robust estimation of the ground plane. The framework
includes a ground point extraction algorithm for selecting high-quality points
on the ground plane, and a ground point aggregation algorithm for joining the
extracted ground points in a local sliding window. Based on the aggregated
data, the scale is finally recovered by solving a least-squares problem using a
RANSAC-based optimizer. Sufficient data and a robust optimizer enable a highly
accurate scale recovery. Experiments on the KITTI dataset show that the
proposed framework can achieve state-of-the-art accuracy in terms of
translation errors, while maintaining competitive performance on the rotation
error. Due to the light-weight design, our framework also demonstrates a high
frequency of 20 Hz on the dataset.
Comment: Submitting to IEEE International Conference on Robotics and
Automation 202
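The pipeline described above (select ground points, aggregate them in a window, fit the plane with a RANSAC-based least-squares step, then recover scale from the known camera height) can be sketched as follows. This is a simplified illustration with hypothetical parameter names, not the authors' implementation:

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane n.x + d = 0 through pts (SVD of centered points)."""
    c = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - c)
    n = Vt[-1]                       # unit normal
    return n, -n @ c                 # (normal, offset)

def scale_from_ground(ground_pts, camera_height, iters=100, thresh=0.01, seed=0):
    """RANSAC-style scale recovery from aggregated ground points."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):           # minimal 3-point plane hypotheses
        sample = ground_pts[rng.choice(len(ground_pts), 3, replace=False)]
        n, d = fit_plane(sample)
        inliers = np.abs(ground_pts @ n + d) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    n, d = fit_plane(ground_pts[best_inliers])   # least-squares refit on inliers
    est_height = abs(d)              # camera sits at the origin of the VO frame
    return camera_height / est_height

# synthetic up-to-scale VO frame: ground plane y = -0.5, true camera height 1.7 m
rng = np.random.default_rng(3)
pts = np.column_stack([rng.uniform(-5, 5, 200),
                       np.full(200, -0.5),
                       rng.uniform(2, 20, 200)])
scale = scale_from_ground(pts, camera_height=1.7)   # expect 1.7 / 0.5 = 3.4
```

Multiplying the estimated trajectory by the returned factor restores metric scale, which is the role this module plays in the framework.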
A Variational Framework for Structure from Motion in Omnidirectional Image Sequences
We address the problem of depth and ego-motion estimation from omnidirectional images. We formulate a correspondence-free structure-from-motion problem for sequences of images mapped on the 2-sphere. A novel graph-based variational framework is first proposed for depth estimation between pairs of images. The estimation is cast as a TV-L1 optimization problem that is solved by a fast graph-based algorithm. The ego-motion is then estimated directly from the depth information without explicit computation of the optical flow. Both problems are finally addressed together in an iterative algorithm that alternates between depth and ego-motion estimation for fast computation of 3D information from motion in image sequences. Experimental results demonstrate the effective performance of the proposed algorithm for 3D reconstruction from synthetic and natural omnidirectional images.
Stereo Vision-based Semantic 3D Object and Ego-motion Tracking for Autonomous Driving
We propose a stereo vision-based approach for tracking the camera ego-motion
and 3D semantic objects in dynamic autonomous driving scenarios. Instead of
directly regressing the 3D bounding box using end-to-end approaches, we propose
to use easy-to-label 2D detection and discrete viewpoint classification
together with a light-weight semantic inference method to obtain rough 3D
object measurements. Based on object-aware camera pose tracking, which is
robust in dynamic environments, combined with our novel dynamic object
bundle adjustment (BA) approach that fuses temporal sparse feature
correspondences and the semantic 3D measurement model, we obtain 3D object
pose, velocity, and anchored dynamic point-cloud estimates with instance
accuracy and temporal consistency.
consistency. The performance of our proposed method is demonstrated in diverse
scenarios. Both the ego-motion estimation and object localization are compared
with state-of-the-art solutions.
Comment: 14 pages, 9 figures, eccv201