Calibration and Sensitivity Analysis of a Stereo Vision-Based Driver Assistance System
Under the "Books" tab at http://intechweb.org/, search for the title "Stereo Vision" and Chapter 1.
Building with Drones: Accurate 3D Facade Reconstruction using MAVs
Automatic reconstruction of 3D models from images using multi-view
Structure-from-Motion methods has been one of the most fruitful outcomes of
computer vision. These advances, combined with the growing popularity of Micro
Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools
ubiquitous for a large number of Architecture, Engineering and Construction
applications, among audiences mostly unskilled in computer vision. However, to
obtain a high-resolution and accurate reconstruction of a large-scale object
using SfM, there are many critical constraints on the quality of the image
data, which often become sources of inaccuracy because current 3D
reconstruction pipelines do not help users assess the fidelity of the input
data during image acquisition. In this paper, we present and advocate a
closed-loop interactive approach that performs incremental reconstruction in
real time and gives users online feedback on quality parameters such as
Ground Sampling Distance (GSD) and image redundancy on a surface mesh. We
also propose a novel multi-scale camera network design to prevent scene drift
caused by incremental map building, and release the first multi-scale image
sequence dataset as a benchmark. Further, we evaluate our system on real
outdoor scenes, and show that our interactive pipeline combined with a
multi-scale camera network approach provides compelling accuracy in multi-view
reconstruction tasks when compared against state-of-the-art methods.
Comment: 8 pages; 2015 IEEE International Conference on Robotics and
Automation (ICRA '15), Seattle, WA, US
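The Ground Sampling Distance feedback described above follows from standard camera geometry; a minimal sketch (the sensor, lens, and distance values below are illustrative assumptions, not parameters from the paper):

```python
def ground_sampling_distance(sensor_width_mm, image_width_px,
                             focal_length_mm, distance_m):
    """Ground Sampling Distance: the ground extent covered by one pixel
    (metres per pixel), from the similar-triangles pinhole model."""
    pixel_pitch_mm = sensor_width_mm / image_width_px    # physical pixel size
    return (pixel_pitch_mm / focal_length_mm) * distance_m

# Illustrative: 6.17 mm sensor, 4000 px wide, 4.5 mm lens, 10 m from the facade
gsd = ground_sampling_distance(6.17, 4000, 4.5, 10.0)
print(f"GSD ~ {gsd * 1000:.1f} mm/pixel")
```

Flying closer or using a longer focal length lowers the GSD (finer detail), which is exactly the kind of quantity an online feedback loop can overlay on the incremental mesh.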
Fast and Accurate Algorithm for Eye Localization for Gaze Tracking in Low Resolution Images
Iris centre localization in low-resolution visible-spectrum images is a
challenging problem in the computer vision community due to noise, shadows,
occlusions, pose variations, eye blinks, etc. This paper proposes an efficient
method for determining the iris centre in low-resolution images in the visible
spectrum, so that even low-cost consumer-grade webcams can be used for gaze
tracking without any additional hardware. A two-stage algorithm, based on the
geometrical characteristics of the eye, is proposed for iris centre
localization. In the first stage, a fast convolution-based approach is used to
obtain a coarse location of the iris centre (IC). The IC location is further
refined in the second stage using boundary tracing and ellipse fitting. The
algorithm has been evaluated on public databases such as BioID and Gi4E and is
found to outperform state-of-the-art methods.
Comment: 12 pages, 10 figures, IET Computer Vision, 201
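A hedged sketch of such a two-stage coarse-then-refine scheme (not the authors' exact filters: a synthetic eye image, a dark-disk correlation template, and a thresholded-centroid refinement stand in for the paper's boundary tracing and ellipse fitting):

```python
import numpy as np

def coarse_iris_centre(gray, radius=6):
    """Stage 1: correlate the inverted image with a disk template; the peak
    response marks the darkest disk-shaped region, a coarse iris-centre guess."""
    size = 2 * radius + 1
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (xx**2 + yy**2 <= radius**2).astype(float)
    inv = gray.max() - gray                      # iris is dark -> make it bright
    h, w = gray.shape
    best, centre = -np.inf, (0, 0)
    for y in range(h - size):                    # brute-force valid correlation
        for x in range(w - size):
            score = np.sum(inv[y:y + size, x:x + size] * disk)
            if score > best:
                best, centre = score, (y + radius, x + radius)
    return centre

def refine_centre(gray, coarse, win=8, thresh=80):
    """Stage 2 (simplified): centroid of dark pixels in a window around the
    coarse estimate, a stand-in for boundary tracing + ellipse fitting."""
    cy, cx = coarse
    patch = gray[cy - win:cy + win, cx - win:cx + win]
    ys, xs = np.nonzero(patch < thresh)
    return (cy - win + ys.mean(), cx - win + xs.mean())

# Synthetic 40x40 "eye": bright sclera (200) with a dark iris disk at (20, 24)
img = np.full((40, 40), 200.0)
yy, xx = np.mgrid[:40, :40]
img[(yy - 20)**2 + (xx - 24)**2 <= 36] = 30.0
coarse = coarse_iris_centre(img)
fine = refine_centre(img, coarse)
```

The coarse stage tolerates noise because it integrates over the whole disk; the refinement stage then recovers sub-pixel accuracy from the local boundary shape.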
NICMOS Imaging of a Damped Lyman-alpha Absorber at z=1.89 toward LBQS 1210+1731 : Constraints on Size and Star Formation Rate
We report results of a high-resolution imaging search (in rest-frame H-alpha
and optical continuum) for the galaxy associated with the damped Lyman-alpha
(DLA) absorber at z=1.89 toward the quasar LBQS 1210+1731, using HST/NICMOS.
After PSF subtraction, a feature is seen in both the broad-band and
narrow-band images, at a projected separation of 0.25 arcsec from the quasar.
If associated with the DLA, the object would be kpc in size with a flux of
Jy in the F160W filter, implying a luminosity at Å in the rest frame of L at ,
for . However, no significant H-alpha emission is seen, suggesting a low star
formation rate (SFR) (3-sigma upper limit of 4.0 M_sun/yr), or very high dust
obscuration. Alternatively, the object may be associated with the host galaxy
of the quasar. H-band images obtained with the NICMOS camera 2 coronagraph
show a much fainter structure kpc in size, containing four knots of continuum
emission, located 0.7 arcsec away from the quasar. We have probed regions far
closer to the quasar sight line than most previous studies of high-redshift
intervening DLAs. The two objects we report mark the closest detected
high-redshift DLA candidates yet to any quasar sight line. If the features in
our images are associated with the DLA, they suggest faint, compact, somewhat
clumpy objects rather than large, well-formed proto-galactic disks or
spheroids.
Comment: 52 pages of text, 19 figures. To be published in the Astrophysical
Journal (accepted Dec. 8, 1999)
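The quoted SFR upper limit comes from converting an H-alpha luminosity limit into a star formation rate; a common choice is the Kennicutt (1998) calibration, sketched below (the luminosity value here is an illustrative assumption, not the paper's measurement):

```python
def sfr_from_halpha(L_halpha_erg_s):
    """Kennicutt (1998) calibration:
    SFR [M_sun/yr] = 7.9e-42 * L(H-alpha) [erg/s]."""
    return 7.9e-42 * L_halpha_erg_s

# Illustrative: an H-alpha luminosity of 5e41 erg/s
print(sfr_from_halpha(5e41))  # ~ 4 M_sun/yr
```

A non-detection in the narrow-band (H-alpha) image caps the line luminosity, which this linear relation turns directly into an SFR limit, assuming no dust obscuration.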
A Novel Framework for Highlight Reflectance Transformation Imaging
We propose a novel pipeline and related software tools for processing multi-light image collections (MLICs) acquired in different application contexts, in order to obtain shape and appearance information of the captured surfaces and to derive compact relightable representations of them. Our pipeline extends the popular Highlight Reflectance Transformation Imaging (H-RTI) framework, which is widely used in the Cultural Heritage domain. In particular, we support perspective camera modeling, per-pixel interpolated light direction estimation, and light normalization that corrects vignetting and uneven non-directional illumination. Furthermore, we propose two novel easy-to-use software tools to simplify all processing steps. The tools, in addition to supporting easy processing and encoding of pixel data, implement a variety of visualizations as well as multiple reflectance-model-fitting options. Experimental tests on synthetic and real-world MLICs demonstrate the usefulness of the novel algorithmic framework and the potential benefits of the proposed tools for end-user applications.
Funding: European Union (EU) Horizon 2020, Action H2020-EU.3.6.3 (Reflective societies - cultural heritage and European identity), project Scan4Reco, grant number 665091; DSURF project (PRIN 2015) funded by the Italian Ministry of University and Research; Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa
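Reflectance-model fitting in RTI-style pipelines is typically a per-pixel least-squares fit to the observed intensities across light directions; a minimal sketch using the classic six-term Polynomial Texture Map (PTM) basis (one of several model options such tools commonly offer; the data below is synthetic, and this is not the paper's exact fitting code):

```python
import numpy as np

def fit_ptm(light_dirs, intensities):
    """Fit per-pixel PTM coefficients a0..a5 so that
    I ~ a0*lx^2 + a1*ly^2 + a2*lx*ly + a3*lx + a4*ly + a5,
    where (lx, ly) are the projected light-direction components."""
    lx, ly = light_dirs[:, 0], light_dirs[:, 1]
    A = np.stack([lx**2, ly**2, lx * ly, lx, ly, np.ones_like(lx)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def relight(coeffs, lx, ly):
    """Evaluate the fitted model under a new light direction."""
    basis = np.array([lx**2, ly**2, lx * ly, lx, ly, 1.0])
    return basis @ coeffs

# Synthetic pixel: generate intensities from known coefficients, recover them
rng = np.random.default_rng(0)
true = np.array([0.1, -0.2, 0.05, 0.4, 0.3, 0.6])
dirs = rng.uniform(-1, 1, size=(30, 2))
obs = np.stack([dirs[:, 0]**2, dirs[:, 1]**2, dirs[:, 0] * dirs[:, 1],
                dirs[:, 0], dirs[:, 1], np.ones(30)], axis=1) @ true
est = fit_ptm(dirs, obs)
```

Six coefficients per pixel make the representation compact and relightable: evaluating the polynomial at any (lx, ly) re-renders the surface under a new light.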
Learning to Find Eye Region Landmarks for Remote Gaze Estimation in Unconstrained Settings
Conventional feature-based and model-based gaze estimation methods have
proven to perform well in settings with controlled illumination and specialized
cameras. In unconstrained real-world settings, however, such methods are
surpassed by recent appearance-based methods due to difficulties in modeling
factors such as illumination changes and other visual artifacts. We present a
novel learning-based method for eye region landmark localization that enables
conventional methods to be competitive with the latest appearance-based methods.
Despite having been trained exclusively on synthetic data, our method exceeds
the state of the art for iris localization and eye shape registration on
real-world imagery. We then use the detected landmarks as input to iterative
model-fitting and lightweight learning-based gaze estimation methods. Our
approach outperforms existing model-fitting and appearance-based methods in the
context of person-independent and personalized gaze estimation.
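As a hedged illustration of how detected eye-region landmarks can feed a lightweight gaze model (a toy 2D geometric mapping, not the paper's method): the iris-centre offset from the eye centre, normalized by the inter-corner distance, gives a crude scale-invariant gaze feature.

```python
import numpy as np

def gaze_from_landmarks(eye_corner_left, eye_corner_right, iris_centre):
    """Toy gaze feature: iris-centre displacement from the eye centre,
    normalized by eye width (larger offset -> larger gaze angle)."""
    left = np.asarray(eye_corner_left, float)
    right = np.asarray(eye_corner_right, float)
    iris = np.asarray(iris_centre, float)
    eye_centre = (left + right) / 2.0
    eye_width = np.linalg.norm(right - left)
    return (iris - eye_centre) / eye_width   # (horizontal, vertical) offset

# Iris shifted toward the right corner -> positive horizontal component
offset = gaze_from_landmarks((0, 0), (30, 0), (20, 1))
```

Real model-fitting methods go further, fitting a 3D eyeball model to the landmarks, but the normalization step above is what makes landmark-based features usable across head poses and camera distances.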