On the Calibration of Active Binocular and RGBD Vision Systems for Dual-Arm Robots
This paper describes a camera and hand-eye
calibration methodology for integrating an active binocular
robot head within a dual-arm robot. For this purpose, we
derive the forward kinematic model of our active robot head
and describe our methodology for calibrating and integrating it. This rigid calibration provides a closed-form hand-to-eye solution. We then present an approach for dynamically updating the cameras' extrinsic parameters for optimal 3D reconstruction, which is the foundation for robotic tasks such as grasping and manipulating rigid and deformable objects. Experimental results show that our robot head achieves overall sub-millimetre accuracy (below 0.3 mm) while recovering the 3D structure of a scene. In addition, we
report a comparative study between current RGBD cameras
and our active stereo head within two dual-arm robotic testbeds
that demonstrates the accuracy and portability of our proposed
methodology.
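The closed-form hand-to-eye step can be illustrated with the standard AX = XB formulation that solvers of this kind address. Below is a minimal, self-contained sketch using OpenCV's `cv2.calibrateHandEye` with Tsai's closed-form method on synthetic poses; it is a generic illustration under assumed pose conventions, not the authors' exact calibration pipeline.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

def rand_pose(rot_scale=0.5, t_scale=0.3):
    """Random rotation (via Rodrigues) and translation, for synthetic data."""
    R, _ = cv2.Rodrigues(rng.normal(scale=rot_scale, size=3))
    return R, rng.normal(scale=t_scale, size=(3, 1))

# Ground-truth hand-eye transform X (camera pose in the gripper frame).
R_x, _ = cv2.Rodrigues(np.array([0.1, -0.2, 0.3]))
t_x = np.array([[0.05], [0.02], [0.10]])

# Fixed (unknown) calibration-target pose in the robot base frame.
R_bt, t_bt = rand_pose()

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):
    R_gb, t_gb = rand_pose()                 # recorded gripper pose in base frame
    R_cb = (R_gb @ R_x).T                    # base -> camera rotation
    t_cb = -R_cb @ (R_gb @ t_x + t_gb)       # base -> camera translation
    R_g2b.append(R_gb); t_g2b.append(t_gb)
    R_t2c.append(R_cb @ R_bt)                # target pose as seen by the camera
    t_t2c.append(R_cb @ t_bt + t_cb)

# Closed-form solve of AX = XB for the camera-to-gripper transform (Tsai).
R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print(np.allclose(R_est, R_x, atol=1e-5), t_est.ravel())
```

In practice the gripper poses would come from the arm's forward kinematics and the target poses from detecting a calibration pattern in the camera images.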
Hybrid Focal Stereo Networks for Pattern Analysis in Homogeneous Scenes
In this paper we address the problem of multiple camera calibration in the
presence of a homogeneous scene, and without the possibility of employing
calibration-object-based methods. The proposed solution exploits salient
features present in a larger field of view, but instead of employing active
vision we replace the cameras with stereo rigs featuring a long-focal-length analysis camera as well as a short-focal-length registration camera. Thus, we are able to
propose an accurate solution which does not require intrinsic variation models
as in the case of zooming cameras. Moreover, the availability of the two views
simultaneously in each rig allows for pose re-estimation between rigs as often
as necessary. The algorithm has been successfully validated in an indoor
setting, as well as on a difficult scene featuring a highly dense pilgrim crowd
in Makkah.
Comment: 13 pages, 6 figures; submitted to Machine Vision and Applications.
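For a rough picture of how relative pose between rigs might be re-estimated from salient scene features rather than a calibration object, the sketch below matches keypoints between two wide-view registration images and recovers pose from the essential matrix. The OpenCV calls are standard, but the file names and intrinsics are placeholder assumptions, and this is not the paper's specific algorithm.

```python
import cv2
import numpy as np

# Hypothetical intrinsics of the short-focal-length registration cameras.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

# Placeholder image paths (hypothetical recordings from two rigs).
img1 = cv2.imread("rig_a_registration.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("rig_b_registration.png", cv2.IMREAD_GRAYSCALE)

# Detect and match salient features visible in the wide field of view.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC, then recover relative rotation and
# (unit-scale) translation between the two rigs.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
```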
Flexible Stereo: Constrained, Non-rigid, Wide-baseline Stereo Vision for Fixed-wing Aerial Platforms
This paper proposes a computationally efficient method to estimate the
time-varying relative pose between two visual-inertial sensor rigs mounted on
the flexible wings of a fixed-wing unmanned aerial vehicle (UAV). The estimated
relative poses are used to generate highly accurate depth maps in real-time and
can be employed for obstacle avoidance in low-altitude flights or landing
maneuvers. The approach is structured as follows: Initially, a wing model is
identified by fitting a probability density function to measured deviations
from the nominal relative baseline transformation. At run-time, the prior
knowledge about the wing model is fused in an Extended Kalman filter (EKF)
together with relative pose measurements obtained from solving a relative
Perspective-n-Point (PnP) problem, and the linear accelerations and angular velocities measured by the two inertial measurement units (IMUs), which are
rigidly attached to the cameras. Results obtained from extensive synthetic
experiments demonstrate that our proposed framework is able to estimate highly
accurate baseline transformations and depth maps.
Comment: Accepted for publication in the IEEE International Conference on Robotics and Automation (ICRA), 2018, Brisbane.
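The fusion step can be sketched as a small Kalman filter whose state is the relative baseline translation: the identified wing model supplies the prior (nominal baseline plus deviation covariance), and each relative-PnP solution acts as a direct measurement. The sketch below is a deliberately simplified linear version of such a filter with made-up noise values; the actual paper fuses full 6-DoF poses and IMU terms in an EKF.

```python
import numpy as np

class BaselineKF:
    """Toy Kalman filter over the 3-D relative baseline translation.

    Prior: nominal baseline with covariance from the fitted wing-deviation
    model. Measurement: relative translation from a PnP solution.
    All noise values below are illustrative assumptions.
    """

    def __init__(self, nominal_baseline, wing_dev_cov):
        self.x = np.asarray(nominal_baseline, float)  # state: baseline [m]
        self.P = np.asarray(wing_dev_cov, float)      # state covariance
        self.nominal = self.x.copy()
        self.Q = np.eye(3) * 1e-6                     # process noise per step

    def predict(self, alpha=0.05):
        # Wing flex decays toward the nominal baseline (the model prior).
        self.x = self.x + alpha * (self.nominal - self.x)
        self.P = self.P + self.Q

    def update(self, pnp_translation, R_meas):
        # Direct measurement of the baseline from the relative PnP solution.
        y = np.asarray(pnp_translation, float) - self.x  # innovation
        S = self.P + R_meas                              # innovation covariance
        K = self.P @ np.linalg.inv(S)                    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K) @ self.P

kf = BaselineKF([3.0, 0.0, 0.0], np.eye(3) * 1e-4)  # assumed 3 m wing baseline
kf.predict()
kf.update([3.002, 0.001, -0.0005], np.eye(3) * 1e-5)
print(kf.x)
```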
CED: Color Event Camera Dataset
Event cameras are novel, bio-inspired visual sensors, whose pixels output
asynchronous and independent timestamped spikes at local intensity changes,
called 'events'. Event cameras offer advantages over conventional frame-based
cameras in terms of latency, high dynamic range (HDR) and temporal resolution.
Until recently, event cameras have been limited to outputting events in the intensity channel; however, recent advances have resulted in the development of
color event cameras, such as the Color-DAVIS346. In this work, we present and
release the first Color Event Camera Dataset (CED), containing 50 minutes of
footage with both color frames and events. CED features a wide variety of
indoor and outdoor scenes, which we hope will help drive forward event-based
vision research. We also present an extension of the event camera simulator
ESIM that enables simulation of color events. Finally, we present an evaluation
of three state-of-the-art image reconstruction methods that can be used to
convert the Color-DAVIS346 into a continuous-time, HDR, color video camera to
visualise the event stream, and for use in downstream vision applications.
Comment: Conference on Computer Vision and Pattern Recognition Workshop.
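Because each event is just a timestamped per-pixel spike, a common first step when working with a recording like CED is to accumulate events over a short time window into an image. The sketch below does this on synthetic events laid out as (t, x, y, polarity) rows; that layout is an assumption for illustration, not CED's actual file format.

```python
import numpy as np

# Synthetic stand-in for an event recording: one event per row as
# (t [s], x, y, polarity in {-1, +1}). The DAVIS346 sensor is 346x260 pixels.
rng = np.random.default_rng(0)
n = 100_000
events = np.column_stack([
    np.sort(rng.uniform(0.0, 1.0, n)),   # timestamps
    rng.integers(0, 346, n),             # x coordinate
    rng.integers(0, 260, n),             # y coordinate
    rng.choice([-1.0, 1.0], n),          # polarity of the intensity change
])

def accumulate(events, t0, dt, w=346, h=260):
    """Sum event polarities per pixel over the window [t0, t0 + dt)."""
    t = events[:, 0]
    x, y = events[:, 1].astype(int), events[:, 2].astype(int)
    p = events[:, 3]
    sel = (t >= t0) & (t < t0 + dt)
    frame = np.zeros((h, w))
    np.add.at(frame, (y[sel], x[sel]), p[sel])  # correct for repeated pixels
    return frame

frame = accumulate(events, t0=0.5, dt=0.01)  # a 10 ms slice of the stream
```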
Efficient Autonomous Navigation for Planetary Rovers with Limited Resources
Rovers operating on Mars increasingly need autonomous features to fulfill their
challenging mission requirements. However, the inherent constraints of space systems make
the implementation of complex algorithms an expensive and difficult task. In this paper
we propose a control architecture for autonomous navigation. Efficient implementations of
autonomous features are built on top of the current ExoMars navigation method, enhancing
the safety and traversing capabilities of the rover. These features allow the rover to detect
and avoid hazards and perform long traverses by following a roughly safe path planned by
operators on ground. The control architecture implementing the proposed navigation mode
has been tested during a field test campaign on a planetary analogue terrain. The experiments
evaluated the proposed approach, with the rover autonomously completing two long traverses while avoiding hazards. The approach relies only on the optical Localization Cameras stereo bench,
a sensor that is found in all rovers launched so far, and potentially allows for computationally
inexpensive long-range autonomous navigation in terrains of medium difficulty.
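The navigation mode described in this abstract, following a coarse ground-planned path while locally detecting and avoiding hazards, has a simple control-loop structure. The skeleton below is purely illustrative: every function and interface in it is hypothetical and stands in for the rover's actual perception and motion layers.

```python
def follow_path(waypoints, get_local_hazard_map, drive_to, replan_detour):
    """Illustrative control loop: traverse coarse ground-planned waypoints,
    stopping or detouring when the local hazard map flags an obstacle.

    All four arguments are hypothetical callbacks: the hazard map would be
    built from the stereo Localization Cameras, and drive_to / replan_detour
    stand in for the rover's motion and local-planning layers.
    """
    for wp in waypoints:
        hazard_map = get_local_hazard_map()        # traversability from stereo
        if hazard_map.is_traversable(wp):
            drive_to(wp)                           # nominal path following
        else:
            detour = replan_detour(wp, hazard_map) # cheap local avoidance
            if detour is None:
                return False                       # no safe detour: stop, wait for ground
            for d in detour:
                drive_to(d)
    return True
```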