Hierarchical structure-and-motion recovery from uncalibrated images
This paper addresses the structure-and-motion problem, which requires
recovering camera motion and 3D structure from point matches. A new pipeline,
dubbed Samantha, is presented that departs from the prevailing sequential
paradigm and instead embraces a hierarchical approach. This method has several
advantages, such as a provably lower computational complexity, which is
necessary to achieve true scalability, and better error containment, leading
to more stability and less drift. Moreover, a practical autocalibration
procedure makes it possible to process images without ancillary information.
Experiments with real data assess the accuracy and the computational
efficiency of the method.
Comment: Accepted for publication in CVI
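The hierarchical strategy that the abstract contrasts with sequential reconstruction can be sketched as pairwise merging of partial reconstructions in a balanced tree, so the merge depth grows as O(log n) rather than O(n). The sketch below is purely illustrative (the `merge` stub just concatenates lists); it is not the actual Samantha implementation, where merging involves alignment and bundle adjustment.

```python
def merge(a, b):
    """Illustrative stand-in for aligning and fusing two partial reconstructions."""
    return a + b  # a real pipeline would estimate a similarity transform, then refine

def hierarchical_reconstruct(clusters):
    """Merge partial reconstructions pairwise; tree depth is O(log n), not O(n)."""
    while len(clusters) > 1:
        merged = []
        for i in range(0, len(clusters) - 1, 2):
            merged.append(merge(clusters[i], clusters[i + 1]))
        if len(clusters) % 2:  # carry an unpaired cluster up to the next level
            merged.append(clusters[-1])
        clusters = merged
    return clusters[0]

print(hierarchical_reconstruct([[1], [2], [3]]))  # → [1, 2, 3]
```

Because each image participates in only O(log n) merges instead of being appended to one ever-growing model, errors are contained within subtrees, which is the intuition behind the reduced drift claimed above.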
Autocalibration with the Minimum Number of Cameras with Known Pixel Shape
In 3D reconstruction, the recovery of the calibration parameters of the
cameras is paramount since it provides metric information about the observed
scene, e.g., measures of angles and ratios of distances. Autocalibration
enables the estimation of the camera parameters without using a calibration
device, but by enforcing simple constraints on the camera parameters. In the
absence of information about the internal camera parameters such as the focal
length and the principal point, the knowledge of the camera pixel shape is
usually the only available constraint. Given a projective reconstruction of a
rigid scene, we address the problem of the autocalibration of a minimal set of
cameras with known pixel shape and otherwise arbitrarily varying intrinsic and
extrinsic parameters. We propose an algorithm that only requires 5 cameras (the
theoretical minimum), thus halving the number of cameras required by previous
algorithms based on the same constraint. To this end, we introduce as our
basic geometric tool the six-line conic variety (SLCV), consisting of the set
of planes intersecting six given lines of 3D space in points of a conic. We
show that the set of solutions of the Euclidean upgrading problem for three
cameras with known pixel shape can be parameterized in a computationally
efficient way. This parameterization is then used to solve autocalibration from
five or more cameras, reducing the three-dimensional search space to a
two-dimensional one. We provide experiments with real images showing the good
performance of the technique.
Comment: 19 pages, 14 figures, 7 tables, J. Math. Imaging Vi
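For background, the known-pixel-shape constraint used above can be written on the dual image of the absolute conic. For square pixels (zero skew, unit aspect ratio) each camera contributes two standard equations; this is textbook material, not the paper's SLCV formulation itself:

```latex
% Square-pixel calibration matrix and dual image of the absolute conic
K = \begin{pmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{pmatrix},
\qquad
\omega^{*} = K K^{\top} =
\begin{pmatrix}
f^{2} + u_0^{2} & u_0 v_0 & u_0 \\
u_0 v_0 & f^{2} + v_0^{2} & v_0 \\
u_0 & v_0 & 1
\end{pmatrix}
% Two constraints per camera (with \omega^{*}_{33} normalised to 1):
% \omega^{*}_{12} = \omega^{*}_{13}\,\omega^{*}_{23}, \qquad
% \omega^{*}_{11} - \omega^{*}_{22} = (\omega^{*}_{13})^{2} - (\omega^{*}_{23})^{2}
```

With two such equations per view and a three-parameter space of Euclidean upgrades, this is consistent with the abstract's count that five cameras are the theoretical minimum.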
Method for 3D modelling based on structure from motion processing of sparse 2D images
A method based on Structure from Motion for processing a plurality of sparse images, acquired by one or more acquisition devices, to generate a sparse 3D point cloud and a plurality of internal and external parameters of the acquisition devices. The method includes the steps of: collecting the images; extracting keypoints from them and generating keypoint descriptors; organizing the images in a proximity graph; matching images pairwise and generating tracks that connect keypoints according to maximum proximity; performing an autocalibration between image clusters to extract the internal and external parameters of the acquisition devices, wherein calibration groups are defined that contain a plurality of image clusters, and wherein a clustering algorithm iteratively merges the clusters into a model expressed in a common local reference system, starting from clusters belonging to the same calibration group; and performing a Euclidean reconstruction of the object as a sparse 3D point cloud based on the extracted parameters.
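One step of the pipeline above, pairwise image matching from keypoint descriptors, is commonly done with a mutual-nearest-neighbour check. The sketch below uses toy 2-D descriptors purely for illustration (real pipelines use SIFT-like 128-D vectors); it shows the matching idea, not the patented implementation.

```python
def nearest(desc, candidates):
    """Index of the candidate descriptor closest in squared Euclidean distance."""
    return min(range(len(candidates)),
               key=lambda j: sum((a - b) ** 2 for a, b in zip(desc, candidates[j])))

def mutual_matches(desc_a, desc_b):
    """Keep only matches where a's best match in b also picks a back."""
    return [(i, nearest(d, desc_b)) for i, d in enumerate(desc_a)
            if nearest(desc_b[nearest(d, desc_b)], desc_a) == i]

a = [(0.0, 0.0), (1.0, 1.0)]   # toy descriptors from image A
b = [(1.1, 0.9), (0.1, -0.1)]  # toy descriptors from image B
print(mutual_matches(a, b))    # → [(0, 1), (1, 0)]
```

Chaining such pairwise matches across the proximity graph is what turns individual correspondences into the multi-image tracks the method feeds to autocalibration.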
Astrometry with the Wide-Field InfraRed Space Telescope
The Wide-Field InfraRed Space Telescope (WFIRST) will be capable of
delivering precise astrometry for faint sources over the enormous field of view
of its main camera, the Wide-Field Imager (WFI). This unprecedented combination
will be transformative for the many scientific questions that require precise
positions, distances, and velocities of stars. We describe the expectations for
the astrometric precision of the WFIRST WFI in different scenarios, illustrate
how a broad range of science cases will see significant advances with such
data, and identify aspects of WFIRST's design where small adjustments could
greatly improve its power as an astrometric instrument.
Comment: version accepted to JATI
A Study on UWB-Aided Localization for Multi-UAV Systems in GNSS-Denied Environments
Unmanned Aerial Vehicles (UAVs) have seen increased penetration in industrial applications in recent years. Some of those applications have to be carried out in GNSS-denied environments. For this reason, several localization systems have emerged as alternatives to GNSS-based systems, such as Lidar and Visual Odometry, Inertial Measurement Units (IMUs), and, over the past years, also UWB-based systems. UWB technology has gained popularity in the robotics field due to its high-accuracy distance estimation from ranging measurements of wireless signals, even in non-line-of-sight conditions. However, the applicability of most UWB-based localization systems is limited because they rely on a fixed set of nodes, named anchors, which require prior calibration. In this thesis, we present a localization system based on UWB technology with a built-in collaborative algorithm for the online autocalibration of the anchors. This autocalibration method enables the anchors to be movable and thus to be used in ad hoc and dynamic deployments. The system is based on Decawave's DWM1001 UWB transceivers. Compared to Decawave's autopositioning algorithm, we drastically reduce the calibration time while increasing accuracy. We provide both experimental measurements and simulation results to demonstrate the usability of this algorithm. We also present a comparison between our UWB-based system and other non-GNSS localization systems for UAV positioning in indoor environments.
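The core computation behind ranging-based anchor autocalibration is estimating an unknown position from distance measurements to points at known coordinates. The sketch below shows standard linearised 2-D multilateration from three noise-free ranges; it is a generic textbook technique, not Decawave's autopositioning algorithm or the thesis's collaborative method.

```python
import math

def locate(points, ranges):
    """Linearised 2-D multilateration from exactly three reference points."""
    (x1, y1), r1 = points[0], ranges[0]
    rows = []
    # Subtract the first range equation to remove the quadratic terms in x, y.
    for (xi, yi), ri in zip(points[1:], ranges[1:]):
        a = (2 * (xi - x1), 2 * (yi - y1))
        b = r1**2 - ri**2 + xi**2 + yi**2 - x1**2 - y1**2
        rows.append((a, b))
    (a11, a12), b1 = rows[0]
    (a21, a22), b2 = rows[1]
    det = a11 * a22 - a12 * a21          # solve the 2x2 system by Cramer's rule
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

refs = [(0.0, 0.0), (4.0, 0.0), (0.0, 6.0)]
ranges = [math.dist((2.0, 3.0), p) for p in refs]  # noise-free ranges to (2, 3)
print(locate(refs, ranges))  # → (2.0, 3.0)
```

With noisy ranges and more than three references, the same linear system is typically solved in a least-squares sense; running such solves collaboratively across moving anchors is what removes the fixed-anchor calibration step described above.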
The Time-SIFT method : detecting 3-D changes from archival photogrammetric analysis with almost exclusively image information
Archival aerial imagery is a source of worldwide very-high-resolution data for
documenting past 3-D changes. However, external information is normally
required so that accurate 3-D models can be computed from archival aerial
imagery. In this research, we propose and test a new method, termed Time-SIFT
(Scale Invariant Feature Transform), which allows coherent multi-temporal
Digital Elevation Models (DEMs) to be computed with almost exclusively image
information. This method is based on the invariance properties of SIFT-like
methods, which are at the root of Structure from Motion (SfM) algorithms. On a
test site of 170 km2, we applied SfM algorithms to a single image block
containing all the images of four different dates covering forty years. We
compared this method to more classical methods based on affordable additional
data, such as ground control points collected in recent orthophotos. We
carried out extensive tests to determine which processing choices had the
greatest impact on the final result. With these tests, we aimed to evaluate
the potential of the proposed Time-SIFT method for the detection and mapping
of 3-D changes. Our study showed that the Time-SIFT method was the key factor
that allowed informative DEMs of difference to be computed with almost
exclusively image information and limited photogrammetric expertise and human
intervention. Because the proposed Time-SIFT method can be applied
automatically with exclusively image information, our results pave the way to
a systematic processing of archival aerial imagery over very large
spatio-temporal windows, and should hence greatly help unlock archival aerial
imagery for documenting past 3-D changes
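The final Time-SIFT product, a DEM of difference (DoD), is simply the cell-by-cell subtraction of co-registered multi-temporal DEMs. The toy 2x2 grids below only illustrate that arithmetic; in the study the DEMs come from SfM on the combined multi-date image block, and the elevations here are invented.

```python
# Two toy co-registered DEMs (elevations in metres), oldest and newest epoch.
dem_1970 = [[102.0, 101.5], [100.0, 99.5]]
dem_2010 = [[101.0, 101.5], [100.5, 98.0]]

# DEM of difference: positive cells gained elevation, negative cells lost it.
dod = [[new - old for old, new in zip(row_old, row_new)]
       for row_old, row_new in zip(dem_1970, dem_2010)]
print(dod)  # → [[-1.0, 0.0], [0.5, -1.5]]
```

The point of Time-SIFT is that both input DEMs live in one coherent reference frame because they were bundle-adjusted together, so such differences reflect real 3-D change rather than registration error.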
Affine Approximation for Direct Batch Recovery of Euclidean Motion From Sparse Data
We present a batch method for recovering Euclidean camera motion from sparse image data. The main purpose of the algorithm is to recover the motion parameters using as much of the available information and as few computational steps as possible. The algorithm thus places itself in the gap between factorisation schemes, which make use of all available information in the initial recovery step, and sequential approaches, which are able to handle sparseness in the image data. Euclidean camera matrices are approximated via the affine camera model, thus making the recovery direct in the sense that no intermediate projective reconstruction is made. Using a little-known closure constraint, the FA-closure, we are able to formulate the camera coefficients linearly in the entries of the affine fundamental matrices. The novelty of the presented work is twofold: firstly, the presented formulation allows for particularly good conditioning of the estimation of the initial motion parameters, and also for an unprecedented diversity in the choice of possible regularisation terms; secondly, the new autocalibration scheme presented here is in practice guaranteed to yield a least-squares estimate of the calibration parameters. As a by-product, the affine camera model is rehabilitated as a useful model for most cameras and scene configurations, e.g. wide-angle lenses observing a scene at close range. Experiments on real and synthetic data demonstrate the ability to reconstruct scenes which are very problematic for previous structure-from-motion techniques due to local ambiguities and error accumulation.
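For readers unfamiliar with the affine model used above, the standard textbook forms of the affine camera and the affine fundamental matrix are as follows; this is background notation, not the paper's FA-closure derivation:

```latex
% Affine camera: linear projection plus translation
\mathbf{x} = M \mathbf{X} + \mathbf{t}, \qquad M \in \mathbb{R}^{2 \times 3},
\; \mathbf{t} \in \mathbb{R}^{2}
% Affine fundamental matrix between two affine views: only five non-zero
% entries (four degrees of freedom up to scale), enforcing
% \mathbf{x}'^{\top} F_A \, \mathbf{x} = 0 in homogeneous coordinates:
F_A = \begin{pmatrix} 0 & 0 & a \\ 0 & 0 & b \\ c & d & e \end{pmatrix}
```

The sparsity of $F_A$ is what makes it possible to express camera coefficients linearly in its entries, as the closure-constraint formulation above exploits.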