Extrinsic Calibration of a Camera-Arm System Through Rotation Identification
Determining extrinsic calibration parameters is a necessity in any robotic
system composed of actuators and cameras. Once a system is outside the lab
environment, parameters must be determined without relying on outside artifacts
such as calibration targets. We propose a method that relies on structured
motion of an observed arm to recover extrinsic calibration parameters. Our
method combines known arm kinematics with observations of conics in the image
plane to calculate maximum-likelihood estimates for calibration extrinsics.
This method is validated in simulation and tested against a real-world model,
yielding results consistent with ruler-based estimates. Our method shows
promise for estimating the pose of a camera relative to an articulated arm's
end effector without requiring tedious measurements or external artifacts.
Index Terms: robotics, hand-eye problem, self-calibration, structure from motion
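The conic observations at the heart of this method can be illustrated with a simpler special case: a point on the end effector rotating about a fixed axis traces a circle, and for a fronto-parallel camera its image is again a circle. The sketch below fits that circle with the algebraic Kåsa least-squares method, not the paper's full maximum-likelihood estimator; all coordinates are hypothetical.

```python
# Sketch: fit a circle x^2 + y^2 + D*x + E*y + F = 0 to image points of a
# rotating end-effector marker, via the algebraic (Kasa) least-squares fit.
# This stands in for the general conic fit used in the actual method.

def fit_circle(points):
    """Return (cx, cy, r) of the least-squares circle through `points`."""
    # Normal equations A^T A p = A^T b for p = (D, E, F), where each row
    # of A is (x, y, 1) and b = -(x^2 + y^2).
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                ata[i][j] += row[i] * row[j]
            atb[i] += row[i] * rhs
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    m = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            fac = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= fac * m[col][c]
    p = [0.0] * 3
    for i in (2, 1, 0):
        p[i] = (m[i][3] - sum(m[i][j] * p[j] for j in range(i + 1, 3))) / m[i][i]
    d, e, f = p
    cx, cy = -d / 2.0, -e / 2.0
    r = (cx * cx + cy * cy - f) ** 0.5
    return cx, cy, r
```

With points on a known circle of center (2, 3) and radius 5, the fit recovers those parameters exactly; with noisy detections it returns the least-squares estimate.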
Calibration and Sensitivity Analysis of a Stereo Vision-Based Driver Assistance System
Under the "Books" tab at http://intechweb.org/, search for the title "Stereo Vision" and see Chapter 1.
3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection
Cameras are a crucial exteroceptive sensor for self-driving cars as they are
low-cost and small, provide appearance information about the environment, and
work in various weather conditions. They can be used for multiple purposes such
as visual navigation and obstacle detection. We can use a surround multi-camera
system to cover the full 360-degree field-of-view around the car. In this way,
we avoid blind spots which can otherwise lead to accidents. To minimize the
number of cameras needed for surround perception, we utilize fisheye cameras.
Consequently, standard vision pipelines for 3D mapping, visual localization,
obstacle detection, etc. need to be adapted to take full advantage of the
availability of multiple cameras rather than treat each camera individually. In
addition, processing of fisheye images has to be supported. In this paper, we
describe the camera calibration and subsequent processing pipeline for
multi-fisheye-camera systems developed as part of the V-Charge project. This
project seeks to enable automated valet parking for self-driving cars. Our
pipeline is able to precisely calibrate multi-camera systems, build sparse 3D
maps for visual navigation, visually localize the car with respect to these
maps, generate accurate dense maps, as well as detect obstacles based on
real-time depth map extraction.
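Adapting standard pipelines to fisheye images starts with the lens model. Below is a minimal sketch of the common equidistant model r = f·θ; the V-Charge calibration may well use a different or more general model, and the intrinsics `f`, `cx`, `cy` here are made-up values.

```python
import math

# Sketch of the equidistant fisheye model r = f * theta: image radius is
# proportional to the angle from the optical axis, which is what lets a
# fisheye lens cover a very wide field of view.

def fisheye_project(point, f=300.0, cx=640.0, cy=400.0):
    """Project a 3D point (X, Y, Z), Z > 0, to pixel (u, v)."""
    x, y, z = point
    theta = math.atan2(math.hypot(x, y), z)   # angle from optical axis
    phi = math.atan2(y, x)                    # azimuth around the axis
    r = f * theta                             # equidistant: radius ~ angle
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

def fisheye_unproject(pixel, f=300.0, cx=640.0, cy=400.0):
    """Invert the projection: pixel (u, v) -> unit viewing ray (X, Y, Z)."""
    dx, dy = pixel[0] - cx, pixel[1] - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return 0.0, 0.0, 1.0
    theta = r / f
    s = math.sin(theta)
    return s * dx / r, s * dy / r, math.cos(theta)
```

Round-tripping a point through project/unproject returns the unit ray toward it, which is the basic operation mapping, localization, and obstacle detection all build on.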
Hybrid Focal Stereo Networks for Pattern Analysis in Homogeneous Scenes
In this paper we address the problem of multiple camera calibration in the
presence of a homogeneous scene, and without the possibility of employing
calibration object based methods. The proposed solution exploits salient
features present in a larger field of view, but instead of employing active
vision we replace the cameras with stereo rigs that pair a long-focal-length
analysis camera with a short-focal-length registration camera. Thus, we are able to
propose an accurate solution which does not require intrinsic variation models
as in the case of zooming cameras. Moreover, the availability of the two views
simultaneously in each rig allows for pose re-estimation between rigs as often
as necessary. The algorithm has been successfully validated in an indoor
setting, as well as on a difficult scene featuring a highly dense pilgrim crowd
in Makkah.
Comment: 13 pages, 6 figures, submitted to Machine Vision and Applications
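The long/short focal split in each rig can be made concrete: with an assumed pixel pitch and sensor width (hypothetical numbers, not from the paper), a longer focal length narrows the field of view but resolves finer angular detail per pixel.

```python
import math

# Sketch of the trade-off behind the hybrid rigs: the short-focal camera
# sees a wide field for registration, the long-focal camera resolves fine
# angular detail for analysis. Sensor width and pixel pitch are assumed.

def horizontal_fov_deg(focal_mm, sensor_w_mm=6.4):
    """Full horizontal field of view in degrees."""
    return math.degrees(2.0 * math.atan(sensor_w_mm / (2.0 * focal_mm)))

def angular_res_mrad(focal_mm, pixel_mm=0.005):
    """Approximate angle subtended by one pixel, in milliradians."""
    # Small-angle approximation: one pixel ~ pixel / focal radians.
    return 1000.0 * pixel_mm / focal_mm

wide = horizontal_fov_deg(4.0)     # registration camera: wide view
narrow = horizontal_fov_deg(50.0)  # analysis camera: narrow view
```

With these assumed values, the 4 mm lens covers roughly ten times the horizontal field of the 50 mm lens, while the 50 mm lens subtends about a tenth of the angle per pixel.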
Calibration Wizard: A Guidance System for Camera Calibration Based on Modelling Geometric and Corner Uncertainty
It is well known that the accuracy of a calibration depends strongly on the
choice of camera poses from which images of a calibration object are acquired.
We present a system -- Calibration Wizard -- that interactively guides a user
towards taking optimal calibration images. For each new image to be taken, the
system computes, from all previously acquired images, the pose that leads to
the globally maximum reduction of expected uncertainty on intrinsic parameters
and then guides the user towards that pose. We also show how to incorporate
uncertainty in corner point position in a novel, principled manner for both
calibration and computation of the next best pose. Synthetic and real-world
experiments are performed to demonstrate the effectiveness of Calibration
Wizard.
Comment: Oral presentation at ICCV 2019
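The pose-selection criterion can be sketched in a minimal scalar form: assume a pinhole with a single unknown focal length f and measurements u = f·x + noise, with x = X/Z. Each observed point then contributes x² to the Fisher information, so the expected variance of the focal estimate is σ²/Σx², and a wizard-style rule picks the candidate view that shrinks this variance the most. The candidate geometries below are invented for illustration.

```python
# Minimal "next best pose" sketch: one unknown intrinsic f, measurements
# u = f * x + noise (x = X/Z). Expected variance of the estimate is
# sigma^2 / sum(x^2), so the best next view is the one whose points add
# the most Fisher information.

def expected_variance(xs, sigma=1.0):
    info = sum(x * x for x in xs)     # Fisher information for f
    return sigma * sigma / info

def next_best_view(acquired_xs, candidates):
    """Index of the candidate view minimizing post-acquisition variance."""
    best_i, best_v = None, float("inf")
    for i, cand in enumerate(candidates):
        v = expected_variance(acquired_xs + cand)
        if v < best_v:
            best_i, best_v = i, v
    return best_i

candidates = [
    [0.05, -0.05, 0.1],   # target near the image center: little information
    [0.8, -0.7, 0.9],     # target near the image border: much more
]
```

Even in this toy form, views placing the target far from the optical axis win, echoing the familiar advice to cover the image borders when calibrating.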
Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern
Line scanning cameras, which capture only a single line of pixels, have been
increasingly used in ground based mobile or robotic platforms. In applications
where it is advantageous to directly georeference the camera data to world
coordinates, an accurate estimate of the camera's 6D pose is required. This
paper focuses on the common case where a mobile platform is equipped with a
rigidly mounted line scanning camera, whose pose is unknown, and a navigation
system providing vehicle body pose estimates. We propose a novel method that
estimates the camera's pose relative to the navigation system. The approach
involves imaging and manually labelling a calibration pattern with distinctly
identifiable points, triangulating these points from camera and navigation
system data and reprojecting them in order to compute a likelihood, which is
maximised to estimate the 6D camera pose. Additionally, a Markov Chain Monte
Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset.
Tested on two different platforms, the method was able to estimate the pose to
within 0.06 m / 1.05° and 0.18 m / 2.39°. We also propose
several approaches to displaying and interpreting the 6D results in a human
readable way.
Comment: Published in MDPI Sensors, 30 October 2017
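The MCMC uncertainty step can be sketched with a scalar stand-in for the 6D offset: Metropolis sampling of one offset parameter under a Gaussian likelihood on residuals. The residual model, noise level, and data below are illustrative, not taken from the paper.

```python
import math
import random

# Sketch of Metropolis sampling for the uncertainty of a scalar mounting
# offset, given a Gaussian likelihood over measurement residuals. The real
# method samples a full 6D camera-to-navigation pose.

def log_post(theta, residuals_at, sigma=0.1):
    # residuals_at(theta) returns the residual list for a trial offset.
    return -sum(r * r for r in residuals_at(theta)) / (2.0 * sigma ** 2)

def metropolis(residuals_at, n=6000, step=0.1, start=0.0):
    random.seed(0)                     # fixed seed for reproducibility
    theta = start
    lp = log_post(theta, residuals_at)
    samples = []
    for _ in range(n):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_post(prop, residuals_at)
        # Metropolis acceptance rule
        if lp_prop >= lp or random.random() < math.exp(lp_prop - lp):
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples[n // 2:]            # discard first half as burn-in

obs = [0.93, 1.08, 1.01, 0.95, 1.04]   # noisy readings of a true offset ~1.0
samples = metropolis(lambda t: [y - t for y in obs])
offset_mean = sum(samples) / len(samples)
```

The retained samples approximate the posterior over the offset; their spread is the uncertainty estimate the abstract refers to, here in one dimension instead of six.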
- …