MISR stereoscopic image matchers: techniques and results
The Multi-angle Imaging SpectroRadiometer (MISR) instrument, launched in December 1999 on the NASA EOS Terra satellite, produces images in the red band at 275-m resolution over a swath width of 360 km for nine camera angles: 70.5°, 60°, 45.6°, and 26.1° forward, nadir, and 26.1°, 45.6°, 60°, and 70.5° aft. A set of accurate and fast algorithms was developed for automated stereo matching of cloud features to obtain cloud-top height and motion over the nominal six-year lifetime of the mission. Accuracy and speed requirements necessitated a combination of area-based and feature-based stereo matchers with only pixel-level acuity. Feature-based techniques are used to retrieve cloud motion from the off-nadir MISR camera views; the motion then provides a correction to the disparities from which cloud-top heights are derived using the innermost three cameras. Intercomparison with a previously developed "superstereo" matcher shows that the results are very comparable in accuracy, with much greater coverage and at ten times the speed. Intercomparison of feature-based and area-based techniques shows that the feature-based techniques are comparable in accuracy at eight times the speed. An assessment of the area-based matcher on cloud-free scenes demonstrates the accuracy and completeness of the stereo matcher. This trade-off has resulted in the loss of a reliable quality metric to predict accuracy and a slightly high blunder rate. Examples of the MISR stereo matchers applied to several difficult scenes demonstrate the efficacy of the matching approach.
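As an illustration of the area-based component described above, a minimal sum-of-absolute-differences (SAD) block matcher with pixel-level acuity can be sketched as follows. This is a generic textbook matcher, not the operational MISR algorithm; the window size and disparity range are arbitrary illustrative choices.

```python
import numpy as np

def sad_block_match(left, right, max_disp=16, block=5):
    """Pixel-level disparity by area-based (SAD) block matching.

    Illustrative sketch only -- not the operational MISR matcher.
    Assumes rectified single-band images; disparity is searched
    along rows, left relative to right.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):
                # Candidate window in the right image, shifted by disparity d.
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch.astype(np.float32) - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

A feature-based matcher, by contrast, would first extract distinctive points and match only those, which is what buys the reported speed-up at comparable accuracy.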
Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts
This tutorial summarises our uses of reflectance transformation imaging in archaeological contexts. It introduces the UK AHRC-funded project Reflectance Transformation Imaging for Ancient Documentary Artefacts and demonstrates imaging methodologies.
Guided Stereo Matching
Stereo is a prominent technique to infer dense depth maps from images, and deep learning has further pushed forward the state of the art, making end-to-end architectures unrivaled when enough data is available for training. However, deep networks suffer from significant drops in accuracy when dealing with new environments. Therefore, in this paper we introduce Guided Stereo Matching, a novel paradigm that leverages a small amount of sparse yet reliable depth measurements retrieved from an external source to mitigate this weakness. The additional sparse cues required by our method can be obtained with any strategy (e.g., a LiDAR) and are used to enhance the features linked to the corresponding disparity hypotheses. Our formulation is general and fully differentiable, enabling exploitation of the additional sparse inputs both in pre-trained deep stereo networks and when training a new instance from scratch. Extensive experiments on three standard datasets and two state-of-the-art deep architectures show that, even with a small set of sparse input cues, i) the proposed paradigm enables significant improvements to pre-trained networks, ii) training from scratch notably increases accuracy and robustness to domain shifts, and iii) it is suited and effective even with traditional stereo algorithms such as SGM.
Comment: CVPR 201
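The enhancement idea, boosting disparity hypotheses that agree with a sparse external measurement, can be sketched on a generic (H, W, D) disparity volume. The Gaussian form follows the description above, but the volume layout and the constants k and c here are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def guide_volume(volume, hints, valid, k=10.0, c=1.0):
    """Modulate a (H, W, D) disparity volume with sparse depth hints.

    Sketch of the guided-stereo idea: wherever a reliable external
    measurement exists (valid[y, x] is True, hinted disparity
    hints[y, x]), multiply the values along the disparity axis by a
    Gaussian peaked at the hint, boosting hypotheses that agree with
    it. k and c are illustrative constants.
    """
    h, w, dmax = volume.shape
    d = np.arange(dmax, dtype=np.float32)  # disparity hypotheses 0..D-1
    gauss = 1.0 + k * np.exp(-((d[None, None, :] - hints[..., None]) ** 2)
                             / (2.0 * c ** 2))
    # Apply the modulation only at pixels with a hint; leave others as-is.
    weight = np.where(valid[..., None], gauss, 1.0)
    return volume * weight
```

Because the modulation is a smooth multiplicative factor, it is fully differentiable and can sit inside a network that is trained end to end, or be applied to a pre-trained model's volume at inference time.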
Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction
The ultimate goal of many image-based modeling systems is to render photo-realistic novel views of a scene without visible artifacts. Existing evaluation metrics and benchmarks focus mainly on the geometric accuracy of the reconstructed model, which is, however, a poor predictor of visual accuracy. Furthermore, geometric accuracy by itself does not allow evaluating systems that either lack a geometric scene representation or utilize coarse proxy geometry, such as light field or image-based rendering systems. We propose a unified evaluation approach based on novel view prediction error that can analyze the visual quality of any method able to render novel views from input images. A key advantage of this approach is that it does not require ground-truth geometry, which dramatically simplifies the creation of test datasets and benchmarks. It also allows us to evaluate the quality of an unknown scene during the acquisition and reconstruction process, which is useful for acquisition planning. We evaluate our approach on a range of methods, including standard geometry-plus-texture pipelines as well as image-based rendering techniques, compare it to existing geometry-based benchmarks, and demonstrate its utility for a range of use cases.
Comment: 10 pages, 12 figures, paper was submitted to ACM Transactions on Graphics for review
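The evaluation idea can be sketched as follows: withhold one input photograph, re-render the scene from that photograph's camera pose, and score the render against the photo. Plain RMSE over valid pixels is used here for brevity; this is an assumed stand-in, as the actual image metric may differ.

```python
import numpy as np

def view_prediction_error(rendered, held_out, valid=None):
    """Novel view prediction error between a render and a held-out photo.

    Sketch of the evaluation idea: the scene is re-rendered from the
    pose of an input image that was withheld from reconstruction, and
    the render is scored against that photo. RMSE over valid pixels is
    used here for simplicity; a perceptual metric could be substituted.
    No ground-truth geometry is needed, only the held-out image.
    """
    diff = rendered.astype(np.float64) - held_out.astype(np.float64)
    if valid is None:
        # Default: score every pixel (valid covers the spatial dimensions).
        valid = np.ones(diff.shape[:2], dtype=bool)
    return float(np.sqrt(np.mean(diff[valid] ** 2)))
```

The optional mask lets the metric skip pixels the renderer cannot produce (e.g., outside the reconstructed region), which matters when comparing methods with different coverage.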
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.