Vision-Based Navigation III: Pose and Motion from Omnidirectional Optical Flow and a Digital Terrain Map
An algorithm for pose and motion estimation using corresponding features in
omnidirectional images and a digital terrain map is proposed. In a previous
paper, such an algorithm was considered for a regular camera. Using a Digital
Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables
recovering the absolute position and orientation of the camera. In order to do
this, the DTM is used to formulate a constraint between corresponding features
in two consecutive frames. In this paper, these constraints are extended to
handle non-central projection, as is the case with many omnidirectional
systems. The utilization of omnidirectional data is shown to improve the
robustness and accuracy of the navigation algorithm. The feasibility of this
algorithm is established through lab experimentation with two kinds of
omnidirectional acquisition systems: the first is a polydioptric camera,
while the second is a catadioptric camera.
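The core of the paper's DTM constraint is relating image features to terrain points, which requires intersecting a viewing ray with the terrain surface. Below is a minimal ray-marching sketch of that building block; the function name, grid convention, step size and midpoint refinement are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def intersect_ray_with_dtm(origin, direction, dtm, cell_size,
                           t_max=5000.0, step=1.0):
    """March along a viewing ray until it drops below the terrain.

    origin, direction: 3D ray in the DTM's world frame (z up).
    dtm: 2D elevation array; dtm[i, j] is the height at world
         (x, y) = (j * cell_size, i * cell_size).
    Returns an approximate 3D intersection point, or None on a miss.
    """
    direction = direction / np.linalg.norm(direction)
    prev = None
    for t in np.arange(0.0, t_max, step):
        p = origin + t * direction
        j, i = int(p[0] / cell_size), int(p[1] / cell_size)
        if not (0 <= i < dtm.shape[0] and 0 <= j < dtm.shape[1]):
            return None                      # ray left the map
        if p[2] <= dtm[i, j]:                # crossed below the surface
            return p if prev is None else 0.5 * (p + prev)
        prev = p
    return None
```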
3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection
Cameras are a crucial exteroceptive sensor for self-driving cars as they are
low-cost and small, provide appearance information about the environment, and
work in various weather conditions. They can be used for multiple purposes such
as visual navigation and obstacle detection. We can use a surround multi-camera
system to cover the full 360-degree field-of-view around the car. In this way,
we avoid blind spots which can otherwise lead to accidents. To minimize the
number of cameras needed for surround perception, we utilize fisheye cameras.
Consequently, standard vision pipelines for 3D mapping, visual localization,
obstacle detection, etc. need to be adapted to take full advantage of the
availability of multiple cameras rather than treat each camera individually. In
addition, processing of fisheye images has to be supported. In this paper, we
describe the camera calibration and subsequent processing pipeline for
multi-fisheye-camera systems developed as part of the V-Charge project. This
project seeks to enable automated valet parking for self-driving cars. Our
pipeline is able to precisely calibrate multi-camera systems, build sparse 3D
maps for visual navigation, visually localize the car with respect to these
maps, generate accurate dense maps, as well as detect obstacles based on
real-time depth map extraction.
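A recurring primitive in such a fisheye pipeline is back-projecting a pixel to a viewing ray. The sketch below assumes the common equidistant projection model r = f·θ purely for illustration; the V-Charge calibration uses its own camera models.

```python
import numpy as np

def unproject_equidistant(u, v, fx, fy, cx, cy):
    """Back-project a fisheye pixel to a unit viewing ray, assuming
    the equidistant model r = f * theta (an assumption, not the
    V-Charge model)."""
    mx, my = (u - cx) / fx, (v - cy) / fy
    theta = np.hypot(mx, my)        # equidistant: angle grows linearly
    if theta < 1e-12:
        return np.array([0.0, 0.0, 1.0])
    s = np.sin(theta) / theta
    return np.array([s * mx, s * my, np.cos(theta)])
```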
Monocular navigation for long-term autonomy
We present a reliable and robust monocular navigation system for an autonomous vehicle.
The proposed method is computationally efficient, needs off-the-shelf equipment only and does not require any additional infrastructure like radio beacons or GPS.
In contrast to traditional localization algorithms, which use advanced mathematical methods to determine the vehicle's position, our method takes a more practical approach.
In our case, an image-feature-based monocular vision technique determines only the heading of the vehicle while the vehicle's odometry is used to estimate the distance traveled.
We present a mathematical proof and experimental evidence indicating that the localization error of a robot guided by this principle is bounded.
The experiments demonstrate that the method can cope with variable illumination, lighting deficiency and both short- and long-term environment changes.
This makes the method especially suitable for deployment in scenarios which require long-term autonomous operation.
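The scheme the abstract describes, vision for heading and odometry for distance, amounts to a dead-reckoning update per step. A minimal sketch, with variable names that are ours rather than the authors':

```python
import numpy as np

def dead_reckon(position, heading, travelled, heading_correction):
    """One navigation step: the image-feature matcher supplies a
    bearing correction relative to the taught path, odometry supplies
    the distance travelled since the last update."""
    heading += heading_correction            # steer back toward the path
    position = position + travelled * np.array([np.cos(heading),
                                                np.sin(heading)])
    return position, heading
```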
Performance prediction of point-based three-dimensional volumetric measurement systems
Point-based three-dimensional volumetric measurement systems are defined as multi-view vision systems which reconstruct a three-dimensional scene by first identifying key points on the views and then performing the reconstruction. Examples of these are defocusing digital particle image velocimetry (DDPIV) (Pereira et al 2000 Exp. Fluids 29 S78–84) and 3D particle tracking velocimetry (3DPTV) (Papantoniou and Maas 1990 5th Int. Symp. on the Application of Laser Techniques in Fluid Mechanics), which reconstruct clouds of flow tracers in order to estimate flow velocities. The reconstruction algorithms in these systems are variations of an epipolar line search. This paper presents a generalized error analysis of such methods, both in reconstruction precision (error in the reconstructed scene) and reconstruction quality (number of ambiguities or 'ghosts' produced).
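Per reconstructed point, an epipolar line search amounts to collecting every candidate in the second view that lies within a tolerance of the epipolar line; each extra candidate is a potential ghost. A minimal sketch of this primitive (tolerance value and data layout are illustrative):

```python
import numpy as np

def epipolar_candidates(x1, points2, F, tol=1.5):
    """Indices of all points in view 2 within `tol` pixels of the
    epipolar line of x1 under fundamental matrix F.
    x1: homogeneous 3-vector; points2: N x 3 homogeneous rows.
    More than one index signals a potential 'ghost' match."""
    l = F @ x1                          # epipolar line in view 2
    l = l / np.hypot(l[0], l[1])        # normalize: distances in pixels
    return np.nonzero(np.abs(points2 @ l) < tol)[0]
```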
Hybrid Focal Stereo Networks for Pattern Analysis in Homogeneous Scenes
In this paper we address the problem of multiple camera calibration in the
presence of a homogeneous scene, and without the possibility of employing
calibration object based methods. The proposed solution exploits salient
features present in a larger field of view, but instead of employing active
vision, we replace the cameras with stereo rigs featuring a long-focal-length
analysis camera as well as a short-focal-length registration camera. Thus, we are able to
propose an accurate solution which does not require intrinsic variation models
as in the case of zooming cameras. Moreover, the availability of the two views
simultaneously in each rig allows for pose re-estimation between rigs as often
as necessary. The algorithm has been successfully validated in an indoor
setting, as well as on a difficult scene featuring a highly dense pilgrim crowd
in Makkah.
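The pose re-estimation between rigs is, at its core, a two-view relative-pose problem on the registration cameras. A generic OpenCV sketch of that step, not the authors' exact pipeline:

```python
import cv2

def relative_pose(pts1, pts2, K):
    """Rotation and unit-scale translation between two registration-camera
    views from matched pixel coordinates (N x 2 arrays), intrinsics K."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t          # translation is recoverable only up to scale
```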
Estimating Epipolar Geometry With The Use of a Camera Mounted Orientation Sensor
Context: Image processing and computer vision are rapidly becoming more commonplace, and the amount of information about a scene, such as its 3D geometry, that can be obtained from one or more images is steadily increasing, owing to rising sensor resolutions, wider availability of imaging sensors and an active research community. In parallel, advances in hardware design and manufacturing allow devices such as gyroscopes, accelerometers, magnetometers and GPS receivers to be included alongside imaging devices at a consumer level.
Aims: This work investigates the use of orientation sensors in computer vision as sources of data to aid image processing and the determination of a scene's geometry, in particular the epipolar geometry of a pair of images, and devises a hybrid methodology from two sets of previous works in order to exploit the information available from orientation sensors alongside data gathered from image processing techniques.
Method: A readily available consumer-level orientation sensor was used alongside a digital camera to capture images of a set of scenes and record the orientation of the camera. The fundamental matrix of each image pair was calculated using a variety of techniques, both incorporating data from the orientation sensor and excluding its use.
Results: Some methodologies could not produce an acceptable result for the fundamental matrix on certain image pairs. A method described in the literature that used an orientation sensor always produced a result; however, in cases where the hybrid or purely computer-vision methods also produced a result, the sensor-based method was found to be the least accurate.
Conclusion: Results from this work show that using an orientation sensor alongside an imaging device can improve both the accuracy and reliability of calculations of the scene's geometry. However, noise from the orientation sensor can limit this accuracy, and further research would be needed to determine the magnitude of this problem and methods of mitigating it.
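To illustrate why a sensor-supplied rotation helps: with the rotation R between the two views known, the essential matrix E = [t]_x R is linear in the unknown translation, so each correspondence contributes one linear constraint ((R x1') × x2') · t = 0 and two matches already determine t up to scale. A sketch of this formulation (one possible approach, not necessarily the exact method examined in this work):

```python
import numpy as np

def translation_from_known_rotation(x1, x2, R, K):
    """Recover the translation direction when the rotation R between two
    views is supplied by an orientation sensor.
    x1, x2: N x 2 matched pixel coordinates (N >= 2); K: 3 x 3 intrinsics.
    Returns a unit translation vector (sign ambiguous)."""
    Kinv = np.linalg.inv(K)
    h1 = Kinv @ np.column_stack([x1, np.ones(len(x1))]).T  # normalized rays
    h2 = Kinv @ np.column_stack([x2, np.ones(len(x2))]).T
    A = np.cross((R @ h1).T, h2.T)   # one constraint row per match
    return np.linalg.svd(A)[2][-1]   # null vector of the stacked system

# The fundamental matrix then follows as F = K^-T [t]_x R K^-1.
```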
Nonrigid reconstruction of 3D breast surfaces with a low-cost RGBD camera for surgical planning and aesthetic evaluation
Accounting for 26% of all new cancer cases worldwide, breast cancer remains
the most common form of cancer in women. Although early breast cancer has a
favourable long-term prognosis, roughly a third of patients suffer from a
suboptimal aesthetic outcome despite breast conserving cancer treatment.
Clinical-quality 3D modelling of the breast surface therefore assumes an
increasingly important role in advancing treatment planning, prediction and
evaluation of breast cosmesis. Yet, existing 3D torso scanners are expensive
and either infrastructure-heavy or subject to motion artefacts. In this paper
we employ a single consumer-grade RGBD camera with an ICP-based registration
approach to jointly align all points from a sequence of depth images
non-rigidly. Subtle body deformation due to postural sway and respiration is
successfully mitigated leading to a higher geometric accuracy through
regularised locally affine transformations. We present results from 6 clinical
cases where our method compares well with the gold standard and outperforms a
previous approach. We show that our method produces better reconstructions
qualitatively by visual assessment and quantitatively by consistently obtaining
lower landmark error scores and yielding more accurate breast volume estimates.
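The non-rigid, regularised locally affine alignment used here generalises the classic rigid ICP step, which can be sketched as follows (a single simplified rigid iteration, not the authors' non-rigid method):

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_icp_step(src, dst):
    """One rigid ICP iteration: closest-point matching followed by the
    Kabsch/Procrustes solution for the best-fit rotation and translation.
    src, dst: N x 3 and M x 3 point clouds."""
    q = dst[cKDTree(dst).query(src)[1]]      # closest-point correspondences
    mp, mq = src.mean(0), q.mean(0)
    H = (src - mp).T @ (q - mq)              # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    if np.linalg.det(Vt.T @ U.T) < 0:        # avoid reflections
        Vt[-1] *= -1
    R = Vt.T @ U.T
    t = mq - R @ mp
    return src @ R.T + t, R, t
```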
- …