Estimating Epipolar Geometry With The Use of a Camera Mounted Orientation Sensor
Context: Image processing and computer vision are rapidly becoming commonplace, and the amount of information about a scene, such as its 3D geometry, that can be obtained from one or more images is steadily increasing, driven by rising sensor resolutions, the wide availability of imaging devices, and an active research community. In parallel, advances in hardware design and manufacturing allow devices such as gyroscopes, accelerometers, magnetometers, and GPS receivers to be included alongside imaging sensors in consumer products.
Aims: This work investigates the use of orientation sensors in computer vision as a source of data to aid image processing and the determination of a scene's geometry, in particular the epipolar geometry of a pair of images. It devises a hybrid methodology from two sets of previous works in order to exploit the information available from an orientation sensor alongside data gathered through image processing techniques.
Method: A readily available consumer-level orientation sensor was used alongside a digital camera to capture images of a set of scenes and to record the orientation of the camera. The fundamental matrix of each image pair was then calculated using a variety of techniques, both incorporating and excluding the orientation-sensor data.
Results: Some methodologies could not produce an acceptable fundamental matrix for certain image pairs, whereas a method described in the literature that used an orientation sensor always produced a result. However, in cases where the hybrid or purely computer-vision methods also produced a result, the sensor-based method's result was found to be the least accurate.
Conclusion: Results from this work show that capturing orientation-sensor data alongside images can improve both the accuracy and the reliability of scene-geometry calculations. However, noise from the orientation sensor can limit this accuracy, and further research is needed to determine the magnitude of this problem and methods of mitigation.
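To make the hybrid idea concrete, here is a minimal sketch of how a sensor-supplied rotation simplifies epipolar-geometry estimation: with the inter-camera rotation R known, the essential matrix reduces to E = [t]_x R, so only the translation direction t (2 degrees of freedom) must be recovered from point correspondences. The function names and the least-squares formulation below are illustrative assumptions, not the thesis's exact method.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that [t]_x @ v = t x v."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def fundamental_from_rotation(pts1, pts2, K1, K2, R):
    """Estimate F given a sensor-supplied rotation R (hypothetical helper).

    pts1, pts2 : (N, 2) matched pixel coordinates in images 1 and 2.
    K1, K2     : 3x3 camera intrinsic matrices.
    R          : 3x3 rotation from camera 1 to camera 2 (from the sensor).
    """
    # Normalize pixels to camera rays: x = K^-1 [u, v, 1]^T
    x1 = (np.linalg.inv(K1) @ np.column_stack([pts1, np.ones(len(pts1))]).T).T
    x2 = (np.linalg.inv(K2) @ np.column_stack([pts2, np.ones(len(pts2))]).T).T
    # The epipolar constraint x2^T [t]_x R x1 = 0 is linear in t:
    # (x2 x (R x1))^T t = 0, so t is the null vector of the stacked
    # cross products, found via SVD.
    A = np.cross(x2, (R @ x1.T).T)
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]                    # translation direction, up to scale/sign
    E = skew(t) @ R               # essential matrix
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)  # F = K2^-T E K1^-1
```

With the rotation fixed by the sensor, only two parameters are estimated from image data, which is what makes the hybrid approach attractive when few or noisy correspondences are available.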
Low-rank SIFT: An Affine Invariant Feature for Place Recognition
In this paper, we present a novel affine-invariant feature based on SIFT that leverages the regular appearance of man-made objects. The feature achieves full affine invariance without needing to simulate over the affine parameter space. Low-rank SIFT, as we name the feature, is based on our observation that local tilts, which are caused by changes of camera axis orientation, can be normalized by converting local patches to standard low-rank forms. Rotation, translation and scaling invariance can be achieved in ways similar to SIFT. As an extension of SIFT, our method adds a prior to resolve the otherwise ill-posed affine parameter estimation problem and normalizes the affine distortion directly, and it is applicable to objects with regular structures. Furthermore, owing to recent breakthroughs in convex optimization, such parameters can be computed efficiently. We demonstrate the feature's effectiveness in place recognition as our major application. As additional contributions, we also describe our pipeline for constructing a geotagged building database from the ground up, as well as an efficient scheme for automatic feature selection.
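As a rough illustration of the low-rank normalization idea (not the paper's convex program, which is closer in spirit to TILT-style nuclear-norm minimization), the sketch below brute-forces a small grid of shear parameters and keeps the warp whose rectified patch has the lowest nuclear norm, the standard convex surrogate of matrix rank. All names and parameter ranges are hypothetical.

```python
import numpy as np
import cv2

def nuclear_norm(patch):
    """Sum of singular values: convex surrogate for matrix rank."""
    return np.linalg.svd(patch, compute_uv=False).sum()

def normalize_patch_low_rank(patch, tilts=np.linspace(-0.4, 0.4, 9)):
    """Return the affine-rectified patch with minimal nuclear norm.

    patch : 2D grayscale array around a detected keypoint.
    tilts : candidate shear magnitudes (illustrative grid, not the
            paper's convex-optimization solver).
    """
    h, w = patch.shape
    best, best_norm = patch, nuclear_norm(patch.astype(np.float64))
    for tx in tilts:            # horizontal shear candidates
        for ty in tilts:        # vertical shear candidates
            A = np.float32([[1, tx, 0], [ty, 1, 0]])
            warped = cv2.warpAffine(patch, A, (w, h))
            n = nuclear_norm(warped.astype(np.float64))
            if n < best_norm:
                best, best_norm = warped, n
    return best
```

The intuition is that a fronto-parallel view of a regular structure (a facade grid, a window lattice) is closer to low rank than any tilted view of it, so minimizing rank over warps recovers the normalized patch.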
Enabling Depth-driven Visual Attention on the iCub Humanoid Robot: Instructions for Use and New Perspectives
The importance of depth perception in the interactions that humans have within their nearby space is a well-established fact. Consequently, it is also well known that the ability to exploit good stereo information would ease and, in many cases, enable a large variety of attentional and interactive behaviors on humanoid robotic platforms. However, the difficulty of computing real-time and robust binocular disparity maps from moving stereo cameras often prevents robots from relying on this kind of cue to visually guide their attention and actions in real-world scenarios. The contribution of this paper is two-fold: first, we show that the Efficient Large-scale Stereo Matching (ELAS) algorithm by A. Geiger et al. (2010) for computing the disparity map is well suited for use on a humanoid robotic platform such as the iCub robot; second, we show how, provided with a fast and reliable stereo system, implementing relatively challenging visual behaviors in natural settings can require much less effort. As a case study we consider the common situation in which the robot is asked to focus its attention on a close object in the scene, showing how a simple but effective disparity-based segmentation solves the problem in this case. Indeed, this example paves the way to a variety of other similar applications.
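A minimal sketch of such a disparity-based attention step follows. ELAS itself is not bundled with OpenCV, so cv2.StereoSGBM stands in for it here; the margin threshold and function names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
import cv2

def attend_to_closest(left_gray, right_gray, margin=8):
    """Segment the closest object as the blob of largest disparity."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=7)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disp > 0
    if not valid.any():
        return None, None
    # Closest object = pixels within `margin` disparities of the maximum
    # (larger disparity means smaller depth).
    d_max = disp[valid].max()
    mask = ((disp > d_max - margin) & valid).astype(np.uint8) * 255
    # Keep the largest connected blob as the attention target.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None, mask
    target = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    gaze_point = tuple(centroids[target])   # (x, y) to fixate on
    return gaze_point, (labels == target).astype(np.uint8) * 255
```

The returned centroid would then drive the robot's gaze controller, which is exactly the kind of behavior the paper argues becomes straightforward once a fast, reliable disparity map is available.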
On the confidence of stereo matching in a deep-learning era: a quantitative evaluation
Stereo matching is one of the most popular techniques for estimating dense depth maps by finding the disparity between matching pixels in two synchronized and rectified images. Alongside the development of more accurate algorithms, the research community has focused on finding good strategies to estimate the reliability, i.e. the confidence, of estimated disparity maps. This information proves to be a powerful cue for identifying wrong matches as well as for improving the overall effectiveness of a variety of stereo algorithms according to different strategies. In this paper, we review more than ten years of developments in the field of confidence estimation for stereo matching. We extensively discuss and evaluate existing confidence measures and their variants, from hand-crafted ones to the most recent state-of-the-art learning-based methods. We study the different behaviors of each measure when applied to a pool of different stereo algorithms and, for the first time in the literature, when paired with a state-of-the-art deep stereo network. Our experiments, carried out on five standard datasets, provide a comprehensive overview of the field, highlighting in particular both the strengths and the limitations of learning-based strategies.
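As an example of the hand-crafted measures this survey covers, the sketch below computes the Peak Ratio Naive (PKRN) confidence, the ratio between the second-best and best matching costs at each pixel, from a generic cost volume; the array layout is an assumption made for illustration.

```python
import numpy as np

def pkrn_confidence(cost_volume):
    """Peak Ratio Naive confidence from an (H, W, D) matching-cost volume.

    cost_volume[y, x, d] is the matching cost at disparity d (lower is
    better). Returns an (H, W) map where higher values mean the winning
    disparity is more distinct, hence more likely to be reliable.
    """
    sorted_costs = np.sort(cost_volume, axis=2)
    c1 = sorted_costs[:, :, 0]        # best (minimum) cost
    c2 = sorted_costs[:, :, 1]        # second-best cost
    return c2 / (c1 + 1e-6)           # >> 1 means an unambiguous match
```

Learning-based measures replace this fixed formula with a network trained to predict per-pixel correctness, which is the trade-off the survey quantifies.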
Measurement of Micro-bathymetry with a GOPRO Underwater Stereo Camera Pair
A GoPro underwater stereo camera kit has been used to measure the 3D topography (bathymetry) of a patch of seafloor, producing a point cloud with a spatial data density of 15 measurements per 3 mm grid square and a standard deviation of less than 1 cm. A GoPro camera is a fixed-focus, 11-megapixel still-frame (or 1080p high-definition video) camera whose small form factor and waterproof housing have made it popular with sports enthusiasts. A stereo camera kit is available that provides a waterproof housing (to 61 m / 200 ft) for a pair of cameras. Measures of seafloor micro-bathymetry capable of resolving seafloor features less than 1 cm in amplitude were possible from the stereo reconstruction. Bathymetric measurements at this scale provide important ground-truth data and boundary-condition information for modeling larger-scale processes whose details depend on small-scale variations. Examples include modeling of turbulent water layers, seafloor sediment transfer, and acoustic backscatter from bathymetric echo sounders.
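A minimal sketch of the stereo-to-bathymetry step, assuming a calibrated and rectified pair: a disparity map is reprojected to a metric point cloud via the pinhole model, with depth Z = f·B/d. The parameter names are placeholders, not the kit's actual calibration values.

```python
import numpy as np

def disparity_to_points(disp, f_px, baseline_m, cx, cy):
    """Convert a rectified disparity map (pixels) to 3D points (metres).

    disp       : (H, W) disparity map; zeros mark invalid pixels.
    f_px       : focal length in pixels.
    baseline_m : stereo baseline in metres.
    cx, cy     : principal point in pixels.
    """
    h, w = disp.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disp > 0
    Z = f_px * baseline_m / disp[valid]     # depth from Z = f * B / d
    X = (u[valid] - cx) * Z / f_px          # back-project with the
    Y = (v[valid] - cy) * Z / f_px          # pinhole camera model
    return np.column_stack([X, Y, Z])       # (N, 3) seafloor point cloud
```

Gridding such a cloud into 3 mm cells and averaging per cell is one straightforward way to arrive at density and spread statistics like those reported above.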