Vision and Learning for Deliberative Monocular Cluttered Flight
Cameras provide a rich source of information while being passive, cheap and
lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work
we present the first implementation of receding horizon control, which is
widely used in ground vehicles, with monocular vision as the only sensing mode
for autonomous UAV flight in dense clutter. We make it feasible on UAVs via a
number of contributions: a novel coupling of perception and control via
multiple relevant and diverse interpretations of the scene around the robot, leveraging
recent advances in machine learning to showcase anytime budgeted cost-sensitive
feature selection, and fast non-linear regression for monocular depth
prediction. We empirically demonstrate the efficacy of our novel pipeline via
real-world experiments covering more than 2 km of flight through dense trees with a
quadrotor built from off-the-shelf parts. Moreover, our pipeline is designed to
combine information from other modalities, such as stereo and lidar, when available.
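
As a hedged illustration of the pipeline's perceive-plan-act structure, the following minimal Python sketch shows one receding-horizon planning cycle; predict_depth, trajectory_cost, and the trajectory library are hypothetical stand-ins, not the paper's actual components.

import numpy as np

def predict_depth(image):
    # Stand-in for the paper's fast non-linear depth regressor (assumed
    # interface): one RGB frame in, a per-pixel depth estimate out.
    return np.full(image.shape[:2], 10.0)  # dummy constant depth of 10 m

def trajectory_cost(pixels, depth_map):
    # Score a candidate path by its worst predicted clearance; the real
    # system uses budgeted cost-sensitive features, not this toy proxy.
    return -min(depth_map[y, x] for x, y in pixels)

def receding_horizon_step(image, trajectory_library):
    # One cycle: perceive, score all candidates, commit to the best for a
    # short horizon, then re-plan on the next frame.
    depth_map = predict_depth(image)
    return min(trajectory_library, key=lambda t: trajectory_cost(t, depth_map))

# Toy usage: three straight-line candidates projected into the image plane.
frame = np.zeros((480, 640, 3))
library = [[(320 + dx, y) for y in range(470, 240, -10)] for dx in (-100, 0, 100)]
best = receding_horizon_step(frame, library)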
Towards Visual Ego-motion Learning in Robots
Many model-based Visual Odometry (VO) algorithms have been proposed in the
past decade, often restricted to a specific type of camera optics or to the underlying
motion manifold observed. We envision robots to be able to learn and perform
these tasks, in a minimally supervised setting, as they gain more experience.
To this end, we propose a fully trainable solution to visual ego-motion
estimation for varied camera optics. We propose a visual ego-motion learning
architecture that maps observed optical flow vectors to an ego-motion density
estimate via a Mixture Density Network (MDN). By modeling the architecture as a
Conditional Variational Autoencoder (C-VAE), our model is able to provide
introspective reasoning and prediction for ego-motion induced scene-flow.
Additionally, our proposed model is especially amenable to bootstrapped
ego-motion learning in robots where the supervision in ego-motion estimation
for a particular camera sensor can be obtained from standard navigation-based
sensor fusion strategies (GPS/INS and wheel-odometry fusion). Through
experiments, we show the utility of our proposed approach in enabling the
concept of self-supervised learning for visual ego-motion estimation in
autonomous robots.
Comment: Conference paper; submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017, Vancouver, Canada; 8 pages, 8 figures, 2 tables.
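
As a rough sketch of the core idea (not the paper's architecture), a Mixture Density Network mapping flattened optical-flow vectors to a Gaussian-mixture density over 6-DoF ego-motion might look as follows in PyTorch; layer sizes and the number of mixture components are assumptions.

import torch
import torch.nn as nn

class EgoMotionMDN(nn.Module):
    # Maps a flattened optical-flow vector to a K-component diagonal-Gaussian
    # mixture over 6-DoF ego-motion. Sizes and K here are illustrative only.
    def __init__(self, flow_dim, n_components=5, pose_dim=6):
        super().__init__()
        self.K, self.D = n_components, pose_dim
        self.backbone = nn.Sequential(
            nn.Linear(flow_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU())
        self.pi = nn.Linear(128, n_components)                    # mixture weights
        self.mu = nn.Linear(128, n_components * pose_dim)         # component means
        self.log_sigma = nn.Linear(128, n_components * pose_dim)  # diagonal stds

    def forward(self, flow):
        h = self.backbone(flow)
        log_pi = torch.log_softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(-1, self.K, self.D)
        sigma = torch.exp(self.log_sigma(h)).view(-1, self.K, self.D)
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, target):
    # Negative log-likelihood of the true ego-motion under the mixture;
    # minimizing this trains the density estimate end to end.
    comp_log_prob = torch.distributions.Normal(mu, sigma).log_prob(
        target.unsqueeze(1)).sum(-1)                    # shape (batch, K)
    return -torch.logsumexp(log_pi + comp_log_prob, dim=-1).mean()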
Real-time kinematics for accurate geolocalization of images in telerobotic applications
The paper discusses a real-time kinematic system for accurate geolocalization of images acquired through stereoscopic cameras mounted on a robot, in particular a teleoperated machine. A teleoperated vehicle may be used to explore an unsafe environment and to acquire stereoscopic images in real time through two cameras mounted on top of it. Each camera has a visible image sensor. For night operation, or when temperature is an important parameter, each camera can be equipped with both visible and infrared image sensors. One of the main issues in telerobotics is the real-time and accurate geolocalization of the images, where an accuracy of a few centimeters is required. This is much better than the accuracy provided by GPS (Global Positioning System), which is on the order of a few meters. To this aim, a real-time kinematic system is proposed which acquires the GPS signal of the vehicle plus, through an RF channel, the GPS signal of a reference base station geolocalized with centimeter accuracy. To improve the robustness of the differential GPS system, data from an Inertial Measurement Unit are also used. Another issue addressed in this paper is the real-time implementation of a stereoscopic image-processing algorithm to recover the 3D structure of the scene. The focus is on the 3D reconstruction of the scene to obtain the reference trajectory for actuation by a robotic arm with a suitable end-effector.
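
The differential principle behind this setup can be illustrated with a toy position-domain correction. Real RTK operates on carrier-phase observables with integer ambiguity resolution, so the snippet below, with made-up coordinates, is only a conceptual sketch.

import numpy as np

# The base station's surveyed position is known to cm accuracy, so its GPS
# fix reveals the error shared with nearby receivers (atmospheric, clock).
# Broadcasting that error over RF lets the rover subtract it out.
base_surveyed = np.array([4517590.87, 832293.42, 4524856.11])  # ECEF, toy values
base_gps_fix = base_surveyed + np.array([1.8, -2.3, 0.9])      # meter-level error
rover_gps_fix = np.array([4517610.12, 832301.77, 4524840.55])  # toy rover fix

common_error = base_gps_fix - base_surveyed     # error observed at the base
rover_corrected = rover_gps_fix - common_error  # improved rover estimate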
PROBE-GK: Predictive Robust Estimation using Generalized Kernels
Many algorithms in computer vision and robotics make strong assumptions about
uncertainty, and rely on the validity of these assumptions to produce accurate
and consistent state estimates. In practice, dynamic environments may degrade
sensor performance in predictable ways that cannot be captured with static
uncertainty parameters. In this paper, we employ fast nonparametric Bayesian
inference techniques to more accurately model sensor uncertainty. By setting a
prior on observation uncertainty, we derive a predictive robust estimator, and
show how our model can be learned from sample images, both with and without
knowledge of the motion used to generate the data. We validate our approach
through Monte Carlo simulations, and report significant improvements in
localization accuracy relative to a fixed noise model in several settings,
including on synthetic data, the KITTI dataset, and our own experimental
platform.
Comment: In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'16), Stockholm, Sweden, May 16-21, 2016.
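
To convey the flavor of predicting observation noise from image data (not PROBE-GK's actual generalized-kernel machinery), one could regress a per-frame noise variance from a scalar image predictor with a Gaussian process; everything below, including the blur-score predictor, is an assumption for illustration.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic training data: a scalar predictor per frame (e.g. a blur score)
# and the observation-noise variance measured against ground truth.
rng = np.random.default_rng(0)
blur = rng.uniform(0, 1, size=(200, 1))
true_var = 0.02 + 0.3 * blur[:, 0] ** 2
observed_var = true_var + 0.01 * rng.standard_normal(200)

# Regress log-variance so the predicted noise stays positive.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(blur, np.log(np.clip(observed_var, 1e-4, None)))

def measurement_variance(blur_score):
    # Predictive noise for a new frame, usable as the estimator's R term
    # in place of a single static uncertainty parameter.
    return float(np.exp(gp.predict(np.array([[blur_score]]))[0]))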
Real-time smart and standalone vision/IMU navigation sensor
In this paper, we present a smart, standalone, multi-platform stereo vision/IMU-based navigation system providing ego-motion estimation. The real-time visual odometry algorithm runs on a nano-ITX single-board computer (SBC) with a 1.9 GHz CPU and a 16-core GPU. High-resolution 1.2-megapixel stereo images provide high-quality data. Tracking of up to 750 features at 5 fps is made possible by a minimal but efficient feature detection, stereo matching, and feature tracking scheme running on the GPU. Furthermore, the feature tracking algorithm benefits from the assistance of a 100 Hz IMU, whose accelerometer and gyroscope data provide inertial feature prediction, enhancing execution speed and tracking efficiency. In a space mission context, we demonstrate the robustness and accuracy of the 6-degree-of-freedom trajectories generated in real time by our visual odometry algorithm. Performance evaluations show close agreement with ground-truth measurements from an external motion capture system.
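
A minimal sketch of the gyro-assisted feature prediction idea follows; the camera intrinsics, sign conventions, and first-order integration are assumptions, not the system's implementation.

import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # assumed pinhole intrinsics

def skew(w):
    # Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v).
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def predict_feature(pixel, gyro_rate, dt):
    # First-order exponential map of the gyro rate (rad/s) over dt gives an
    # approximate inter-frame rotation; adequate for a 100 Hz IMU between
    # 5 fps frames. The sign convention depends on the gyro/camera mounting.
    R = np.eye(3) + skew(gyro_rate * dt)
    bearing = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    rotated = R @ bearing
    projected = K @ (rotated / rotated[2])
    return projected[:2]  # predicted pixel, seeds the tracker's search window

# Example: a feature at (400, 250) under a 0.1 rad/s rotation over 0.2 s.
print(predict_feature((400, 250), np.array([0.0, 0.1, 0.0]), 0.2))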