
    DeMoN: Depth and Motion Network for Learning Monocular Stereo

    In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images, and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure-from-motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and thus generalizes better to structures not seen during training.
    Comment: Camera-ready version for CVPR 2017. Supplementary material included. Project page: http://lmb.informatik.uni-freiburg.de/people/ummenhof/depthmotionnet
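
    The abstract does not spell out the loss; purely as an illustration, here is a minimal PyTorch sketch of a loss on spatial relative differences, comparing discrete depth gradients at several pixel spacings instead of absolute per-pixel values. The function name, tensor shapes, and spacings are assumptions, not the paper's exact formulation:

```python
import torch

def relative_difference_loss(pred, gt, spacings=(1, 2, 4, 8, 16)):
    """Hedged sketch: penalize differences between discrete depth
    gradients of prediction and ground truth at several pixel
    spacings, rather than absolute per-pixel values. `pred` and `gt`
    are (B, 1, H, W) depth maps; names and spacings are assumptions.
    """
    loss = 0.0
    for h in spacings:
        # Horizontal and vertical finite differences at spacing h.
        dx_p = pred[..., :, h:] - pred[..., :, :-h]
        dx_g = gt[..., :, h:] - gt[..., :, :-h]
        dy_p = pred[..., h:, :] - pred[..., :-h, :]
        dy_g = gt[..., h:, :] - gt[..., :-h, :]
        loss = loss + (dx_p - dx_g).abs().mean() + (dy_p - dy_g).abs().mean()
    return loss / len(spacings)
```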

    Using Self-Contradiction to Learn Confidence Measures in Stereo Vision

    Learned confidence measures gain increasing importance for outlier removal and quality improvement in stereo vision. However, acquiring the necessary training data is typically a tedious and time-consuming task that involves manual interaction, active sensing devices, and/or synthetic scenes. To overcome this problem, we propose a new, flexible, and scalable way of generating training data that requires only a set of stereo images as input. The key idea of our approach is to use different viewpoints for reasoning about contradictions and consistencies between multiple depth maps generated with the same stereo algorithm. This enables us to generate a huge amount of training data in a fully automated manner. Among other experiments, we demonstrate the potential of our approach by boosting the performance of three learned confidence measures on the KITTI2012 dataset simply by training them on a vast amount of automatically generated training data rather than a limited amount of laser ground-truth data.
    Comment: This paper was accepted to the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. The copyright was transferred to IEEE (https://www.ieee.org). The official version of the paper will be made available on IEEE Xplore (R) (http://ieeexplore.ieee.org). This version of the paper also contains the supplementary material, which will not appear on IEEE Xplore (R).
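
    As an illustrative sketch only: one way to turn cross-view reasoning into labels is to warp the depth map computed from a second viewpoint into the reference view and compare it against the reference depth map. The label encoding, threshold, and simplified contradiction rule below are assumptions; the paper's actual reasoning is more involved:

```python
import numpy as np

def contradiction_labels(d_ref, d_warped, eps=1.0):
    """Illustrative labeling from two depth maps produced by the same
    stereo algorithm: `d_ref` is the reference-view depth, `d_warped`
    the second view's depth warped into the reference view. Agreement
    yields a positive label; the second view "seeing through" the
    reference surface is a contradiction; a closer warped surface may
    just be an occlusion and stays unlabeled. Encoding 1/0/-1 and
    `eps` are assumptions.
    """
    labels = np.full(d_ref.shape, -1, dtype=np.int8)          # -1: unlabeled
    valid = np.isfinite(d_ref) & np.isfinite(d_warped)
    labels[valid & (np.abs(d_ref - d_warped) <= eps)] = 1     # consistent
    labels[valid & (d_warped > d_ref + eps)] = 0              # contradiction
    return labels
```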

    Optical Coherence Tomography Angiography Vessel Density in Healthy, Glaucoma Suspect, and Glaucoma Eyes.

    Purpose: To compare retinal nerve fiber layer (RNFL) thickness and optical coherence tomography angiography (OCT-A) retinal vasculature measurements in healthy, glaucoma suspect, and glaucoma patients. Methods: Two hundred sixty-one eyes of 164 healthy, glaucoma suspect, and open-angle glaucoma (OAG) participants from the Diagnostic Innovations in Glaucoma Study with good-quality OCT-A images were included. Retinal vasculature information was summarized as a vessel density map and as vessel density (%), which is the proportion of flowing vessel area over the total area evaluated. Two vessel density measurements extracted from the RNFL were analyzed: (1) circumpapillary vessel density (cpVD), measured in a 750-μm-wide elliptical annulus around the disc, and (2) whole image vessel density (wiVD), measured over the entire image. Areas under the receiver operating characteristic curves (AUROC) were used to evaluate diagnostic accuracy. Results: Age-adjusted mean vessel density was significantly lower in OAG eyes compared with glaucoma suspect and healthy eyes (cpVD: 55.1 ± 7%, 60.3 ± 5%, and 64.2 ± 3%, respectively, P < 0.001; wiVD: 46.2 ± 6%, 51.3 ± 5%, and 56.6 ± 3%, respectively, P < 0.001). For differentiating between glaucoma and healthy eyes, the age-adjusted AUROC was highest for wiVD (0.94), followed by RNFL thickness (0.92) and cpVD (0.83). The AUROCs for differentiating between healthy and glaucoma suspect eyes were highest for wiVD (0.70), followed by cpVD (0.65) and RNFL thickness (0.65). Conclusions: Optical coherence tomography angiography vessel density had diagnostic accuracy similar to that of RNFL thickness measurements for differentiating between healthy and glaucoma eyes. These results suggest that OCT-A measurements reflect damage to tissues relevant to the pathophysiology of OAG.
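
    Vessel density as defined above is straightforward to compute from a binarized flow map; a minimal sketch follows (array names, the binarization, and the scikit-learn AUROC call are illustrative assumptions, not the study's pipeline):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def vessel_density(flow_map, roi_mask):
    """Vessel density (%) as defined in the abstract: the proportion of
    flowing-vessel pixels within the evaluated region. `flow_map` is a
    binary map of detected flow; `roi_mask` selects the region (e.g.,
    the 750-um-wide annulus for cpVD, or the whole image for wiVD).
    """
    roi = roi_mask.astype(bool)
    return 100.0 * flow_map.astype(float)[roi].mean()

# Sketch of the diagnostic-accuracy step: AUROC for glaucoma (1) vs.
# healthy (0) eyes. Density is lower in glaucoma, so negate it so that
# a higher score indicates the positive class:
# auroc = roc_auc_score(labels, -densities)
```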

    Subjective Annotation for a Frame Interpolation Benchmark using Artefact Amplification

    Current benchmarks for optical flow algorithms evaluate the estimation either directly, by comparing the predicted flow fields with the ground truth, or indirectly, by using the predicted flow fields for frame interpolation and then comparing the interpolated frames with the actual frames. In the latter case, objective quality measures such as the mean squared error are typically employed. However, it is well known that for image quality assessment, the actual quality experienced by the user cannot be fully deduced from such simple measures. Hence, we conducted a subjective quality assessment crowdsourcing study for the interpolated frames provided by one of the optical flow benchmarks, the Middlebury benchmark. We collected forced-choice paired comparisons between interpolated images and the corresponding ground truth. To increase the sensitivity of observers when judging minute differences in paired comparisons, we introduced a new method to the field of full-reference quality assessment, called artefact amplification. From the crowdsourcing data, we reconstructed absolute quality scale values according to Thurstone's model. As a result, we obtained a re-ranking of the 155 participating algorithms with respect to the visual quality of the interpolated frames. This re-ranking not only shows the necessity of visual quality assessment as another evaluation metric for optical flow and frame interpolation benchmarks; the results also provide ground truth for designing novel image quality assessment (IQA) methods dedicated to the perceptual quality of interpolated images. As a first step, we proposed such a new full-reference method, called WAE-IQA. By weighting the local differences between an interpolated image and its ground truth, WAE-IQA performed slightly better than the best FR-IQA approach currently in the literature.
    Comment: arXiv admin note: text overlap with arXiv:1901.0536
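
    For the scale-reconstruction step: under Thurstone's Case V model, scale values can be recovered by probit-transforming the pairwise choice proportions and averaging per item. A minimal sketch, assuming a complete win-count matrix (the paper's reconstruction, and its handling of the amplified-artefact comparisons, may differ):

```python
import numpy as np
from scipy.stats import norm

def thurstone_case_v(wins):
    """Hedged sketch of Thurstone Case V scaling from forced-choice
    paired comparisons. `wins[i, j]` counts how often item i was
    preferred over item j; a complete matrix is assumed.
    """
    trials = wins + wins.T
    # Choice proportions, clipped away from 0/1 so the probit is finite.
    p = np.clip(wins / np.maximum(trials, 1), 0.01, 0.99)
    z = norm.ppf(p)                 # probit-transform each proportion
    np.fill_diagonal(z, 0.0)
    scores = z.mean(axis=1)         # Case V: scale value = row mean of z
    return scores - scores.min()    # anchor the scale at zero
```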