Three-dimensional measurements with a novel technique combination of confocal and focus variation with a simultaneous scan
The most common optical measurement technologies used today for the three-dimensional measurement of technical surfaces are Coherence Scanning Interferometry (CSI), Imaging Confocal Microscopy (IC), and Focus Variation (FV). Each has its benefits and drawbacks: FV is the ideal technology for measuring regions with high slopes and very rough surfaces, while CSI and IC provide better results for smoother and flatter surface regions. In this work we investigate the benefits and drawbacks of combining interferometry, confocal microscopy, and focus variation to obtain better measurements of technical surfaces. We investigate a way of using a microdisplay-scanning confocal microscope to acquire confocal and focus-variation information in a single simultaneous scan and to reconstruct a three-dimensional measurement from it. Several methods are presented to fuse the optical sectioning properties of both techniques as well as the topographical information. This work shows the benefit of this combined technique on several industrial samples where neither confocal microscopy nor focus variation alone provides optimal results.
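The per-pixel fusion of two topographies can be sketched as follows: for each pixel, keep the height from whichever technique reports the stronger axial-response confidence. This is a minimal illustration, not the paper's actual fusion methods; the function name and the confidence maps are hypothetical.

```python
import numpy as np

def fuse_topographies(z_confocal, z_fv, conf_confocal, conf_fv):
    """Per-pixel fusion of two height maps: take the height from
    whichever technique has the stronger axial-response confidence."""
    use_confocal = conf_confocal >= conf_fv
    return np.where(use_confocal, z_confocal, z_fv)

# toy example: FV wins on the right column (e.g., a steep, rough region)
z_c = np.array([[1.0, 2.0], [3.0, 4.0]])
z_f = np.array([[9.0, 8.0], [7.0, 6.0]])
c_c = np.array([[0.9, 0.2], [0.8, 0.1]])
c_f = np.array([[0.1, 0.7], [0.3, 0.9]])
fused = fuse_topographies(z_c, z_f, c_c, c_f)  # [[1. 8.] [3. 6.]]
```

A real implementation would derive the confidence maps from the width and peak of each technique's axial response at every pixel.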
High-resolution optical and SAR image fusion for building database updating
This paper addresses the issue of cartographic database (DB) creation or updating using high-resolution synthetic aperture radar and optical images. In cartographic applications, objects of interest are mainly buildings and roads. This paper proposes a processing chain to create or update building DBs. The approach is composed of two steps. First, if a DB is available, the presence of each DB object is checked in the images. Then, we verify if objects coming from an image segmentation should be included in the DB. To do those two steps, relevant features are extracted from images in the neighborhood of the considered object. The object removal/inclusion in the DB is based on a score obtained by the fusion of features in the framework of Dempster-Shafer evidence theory.
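The Dempster-Shafer fusion step referred to above can be sketched with Dempster's rule of combination. The sketch below is a generic implementation over a {present, absent} frame of discernment; the example mass values and the source names (`m_optical`, `m_sar`) are illustrative, not the paper's.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets; mass on empty intersections (conflict)
    is renormalized away."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

P, A = frozenset({"present"}), frozenset({"absent"})
THETA = P | A  # full frame: total ignorance

# hypothetical evidence from each sensor about one building object
m_optical = {P: 0.6, A: 0.1, THETA: 0.3}
m_sar     = {P: 0.5, A: 0.2, THETA: 0.3}
m = dempster_combine(m_optical, m_sar)  # belief in "present" rises to ~0.76
```

Both sensors weakly agreeing on "present" yields a fused belief stronger than either alone, which is what makes the rule attractive for the removal/inclusion score.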
Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction
The ultimate goal of many image-based modeling systems is to render
photo-realistic novel views of a scene without visible artifacts. Existing
evaluation metrics and benchmarks focus mainly on the geometric accuracy of the
reconstructed model, which is, however, a poor predictor of visual accuracy.
Furthermore, using only geometric accuracy by itself does not allow evaluating
systems that either lack a geometric scene representation or utilize coarse
proxy geometry. Examples include light field or image-based rendering systems.
We propose a unified evaluation approach based on novel view prediction error
that is able to analyze the visual quality of any method that can render novel
views from input images. One of the key advantages of this approach is that it
does not require ground truth geometry. This dramatically simplifies the
creation of test datasets and benchmarks. It also allows us to evaluate the
quality of an unknown scene during the acquisition and reconstruction process,
which is useful for acquisition planning. We evaluate our approach on a range
of methods including standard geometry-plus-texture pipelines as well as
image-based rendering techniques, compare it to existing geometry-based
benchmarks, and demonstrate its utility for a range of use cases.
Comment: 10 pages, 12 figures; paper was submitted to ACM Transactions on Graphics for review.
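The core idea, comparing a rendered novel view against a real photograph held out from the reconstruction, can be sketched with a simple per-pixel error. The metric choice here (mean absolute error with an optional validity mask) is a hypothetical stand-in; the paper's actual error measures may differ.

```python
import numpy as np

def novel_view_prediction_error(rendered, held_out, mask=None):
    """Mean absolute per-pixel error between a rendered novel view and
    a real photograph that was held out from the reconstruction.
    `mask` optionally restricts the comparison to valid pixels."""
    diff = np.abs(rendered.astype(np.float64) - held_out.astype(np.float64))
    if mask is not None:
        diff = diff[mask]
    return float(diff.mean())

# a perfect prediction scores 0; error grows with visual discrepancy
photo = np.random.default_rng(0).uniform(0, 255, (8, 8, 3))
assert novel_view_prediction_error(photo, photo) == 0.0
```

Because the error is computed directly in image space, no ground-truth geometry is needed, which is the key advantage the abstract highlights.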
SurfelMeshing: Online Surfel-Based Mesh Reconstruction
We address the problem of mesh reconstruction from live RGB-D video, assuming
a calibrated camera and poses provided externally (e.g., by a SLAM system). In
contrast to most existing approaches, we do not fuse depth measurements in a
volume but in a dense surfel cloud. We asynchronously (re)triangulate the
smoothed surfels to reconstruct a surface mesh. This novel approach makes it
possible to maintain a dense surface representation of the scene during SLAM which can
quickly adapt to loop closures. This is possible by deforming the surfel cloud
and asynchronously remeshing the surface where necessary. The surfel-based
representation also naturally supports strongly varying scan resolution. In
particular, it reconstructs colors at the input camera's resolution. Moreover,
in contrast to many volumetric approaches, ours can reconstruct thin objects
since objects do not need to enclose a volume. We demonstrate our approach in a
number of experiments, showing that it produces reconstructions that are
competitive with the state-of-the-art, and we discuss its advantages and
limitations. The algorithm (excluding loop closure functionality) is available
as open source at https://github.com/puzzlepaint/surfelmeshing .
Comment: Version accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence.
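The "smoothed surfels" mentioned above are typically maintained with a weighted running average over incoming depth measurements. The sketch below shows that generic surfel-fusion update; the function, its parameters, and the weight cap are illustrative assumptions, not SurfelMeshing's exact scheme.

```python
import numpy as np

def integrate_measurement(surfel_pos, surfel_weight, meas_pos,
                          meas_weight=1.0, max_weight=50.0):
    """Fuse a new depth measurement into a surfel by weighted averaging
    of positions; the weight is capped so the surfel can still adapt
    (e.g., after a loop-closure deformation)."""
    w = surfel_weight + meas_weight
    pos = (surfel_pos * surfel_weight + meas_pos * meas_weight) / w
    return pos, min(w, max_weight)

# a surfel seen once at depth 1 m, then measured at depth 2 m
pos, w = integrate_measurement(np.array([0.0, 0.0, 1.0]), 1.0,
                               np.array([0.0, 0.0, 2.0]))
# pos -> [0. 0. 1.5], w -> 2.0
```

Keeping measurements in such a surfel cloud rather than a voxel volume is what lets the method remesh asynchronously and follow loop closures.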
3D Modeling of a Complex Building: From Multi-View Image Fusion to Google Earth Publication
This paper presents a pipeline illustrating the procedure to build a 3D model of a complex building by integrating UAV and terrestrial images and adapting the model for interactive publication on Google Earth, so as to provide better models for visualization and use. The main steps of the procedure are the optimization of the UAV flight, the integration of the UAV and ground-level images, and the optimization of the model to be published to Google Earth (GE). The case study is a building, the Eremo di Santa Rosalia convent in Sicily, which has several staggered elevations and is located in the hills of the hinterland; for it, the online platforms only indicate the position on Google Maps (GM) and GE with a photo from above and a non-urban road whose GM path does not correspond with the GE photo. The process highlights the integration of the models and showcases a workflow for publishing the combined 3D model on the GE platform.
Towards Automatic SAR-Optical Stereogrammetry over Urban Areas using Very High Resolution Imagery
In this paper we discuss the potential and challenges regarding SAR-optical
stereogrammetry for urban areas, using very-high-resolution (VHR) remote
sensing imagery. Since we do this mainly from a geometrical point of view, we
first analyze the height reconstruction accuracy to be expected for different
stereogrammetric configurations. Then, we propose a strategy for simultaneous
tie point matching and 3D reconstruction, which exploits an epipolar-like
search window constraint. To drive the matching and ensure some robustness, we
combine different established handcrafted similarity measures. For the
experiments, we use real test data acquired by the Worldview-2, TerraSAR-X and
MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR
imagery is generally feasible with 3D positioning accuracies in the
meter-domain, although the matching of these strongly heterogeneous
multi-sensor data remains very challenging.
Keywords: Synthetic Aperture Radar (SAR), optical images, remote sensing, data fusion, stereogrammetry
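Combining "different established handcrafted similarity measures" to drive the matching can be sketched as a weighted sum of patch scores. The particular pair below (zero-mean NCC on intensities and on gradient magnitudes) and the weights are assumptions for illustration; the paper's chosen measures may differ.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def combined_similarity(patch_opt, patch_sar, weights=(0.5, 0.5)):
    """Hypothetical fusion of two handcrafted measures: ZNCC on raw
    intensities plus ZNCC on gradient magnitudes, combined linearly."""
    g_opt = np.abs(np.gradient(patch_opt.astype(float))[0])
    g_sar = np.abs(np.gradient(patch_sar.astype(float))[0])
    return (weights[0] * zncc(patch_opt, patch_sar)
            + weights[1] * zncc(g_opt, g_sar))
```

Within the epipolar-like search window, the candidate maximizing this combined score would be taken as the tie point, with the weighted sum adding robustness against the very different SAR and optical appearance.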
Extended Object Tracking: Introduction, Overview and Applications
This article provides an elaborate overview of current research in extended
object tracking. We provide a clear definition of the extended object tracking
problem and discuss its delimitation to other types of object tracking. Next,
different aspects of extended object modelling are extensively discussed.
Subsequently, we give a tutorial introduction to two basic and widely used
extended object tracking approaches - the random matrix approach and the Kalman
filter-based approach for star-convex shapes. The next part treats the tracking
of multiple extended objects and elaborates how the large number of feasible
association hypotheses can be tackled using both Random Finite Set (RFS) and
Non-RFS multi-object trackers. The article concludes with a summary of current
applications, where four example applications involving camera, X-band radar,
light detection and ranging (lidar), red-green-blue-depth (RGB-D) sensors are
highlighted.
Comment: 30 pages, 19 figures.
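The random matrix approach named in the tutorial part tracks a kinematic state together with an extent matrix that absorbs the scatter of the multiple detections an extended object generates. Below is a heavily simplified single-step measurement update in that spirit; the weighting constants and update form are illustrative, not a faithful reproduction of the published filter equations.

```python
import numpy as np

def random_matrix_update(x, X, alpha, measurements):
    """Simplified random-matrix-style update: the kinematic mean x moves
    toward the measurement centroid, and the extent matrix X absorbs the
    measurement scatter; alpha counts accumulated evidence."""
    Z = np.asarray(measurements, dtype=float)
    n = len(Z)
    z_bar = Z.mean(axis=0)
    scatter = (Z - z_bar).T @ (Z - z_bar)
    innov = z_bar - x
    x_post = x + innov * (n / (n + alpha))
    X_post = (alpha * X + scatter + np.outer(innov, innov)) / (alpha + n)
    return x_post, X_post, alpha + n

# three coincident detections at (1, 0) pull the state toward them
x1, X1, a1 = random_matrix_update(np.zeros(2), np.eye(2), 2.0,
                                  [[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
```

The key contrast with point-object tracking is visible even in this sketch: all detections in one scan update a single object, and their spread informs the shape estimate rather than spawning association hypotheses.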