
    Fast and Accurate Depth Estimation from Sparse Light Fields

    We present a fast and accurate method for dense depth reconstruction from sparsely sampled light fields obtained with a synchronized camera array. In our method, the source images are over-segmented into non-overlapping compact superpixels that are used as the basic data units for depth estimation and refinement. The superpixel representation provides a desirable reduction in computational cost while preserving the image geometry with respect to object contours. Each superpixel is modeled as a plane in image space, allowing depth values to vary smoothly within the superpixel area. Initial depth maps, obtained by plane sweeping, are iteratively refined by propagating good correspondences within an image. To ensure fast convergence of the iterative optimization process, we employ a highly parallel propagation scheme that operates on all superpixels of all images at once, making full use of the parallel graphics hardware. A few optimization iterations of an energy function incorporating superpixel-wise smoothness and geometric-consistency constraints allow depth to be recovered with high accuracy in textured and textureless regions, as well as in areas with occlusions, producing dense, globally consistent depth maps. We demonstrate that while depth reconstruction takes about a second per full-high-definition view, the accuracy of the obtained depth maps is comparable with state-of-the-art results. (Comment: 15 pages, 15 figures)
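
    The plane-sweeping step that produces the initial depth maps can be pictured with a short sketch. This is only a minimal per-pixel illustration under assumed inputs (a reference view, one neighbouring view, calibrated intrinsics and relative pose); the paper's method additionally fits a plane per superpixel and refines the result with GPU-parallel propagation.

```python
# Minimal plane-sweep sketch (illustrative, not the paper's implementation).
# ref_img, nbr_img: grayscale float32 arrays; K: 3x3 intrinsics;
# R, t: rotation and translation of the neighbour camera w.r.t. the reference.
import numpy as np
import cv2

def plane_sweep_depth(ref_img, nbr_img, K, R, t, depths):
    h, w = ref_img.shape
    K_inv = np.linalg.inv(K)
    n = np.array([0.0, 0.0, 1.0])                     # fronto-parallel plane normal
    cost = np.empty((len(depths), h, w), np.float32)
    for i, d in enumerate(depths):
        # Homography induced by the plane at depth d, mapping reference pixels
        # into the neighbouring view.
        H = K @ (R + np.outer(t, n) / d) @ K_inv
        warped = cv2.warpPerspective(nbr_img, H, (w, h),
                                     flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        cost[i] = np.abs(ref_img - warped)            # photo-consistency cost
    best = np.argmin(cost, axis=0)                    # winner-take-all per pixel
    return np.asarray(depths)[best]
```

    Averaging the cost over each superpixel and fitting a plane to the winning depths would recover the superpixel-wise parameterization described in the abstract.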

    An Incremental-Based Approach for 3D Multi-Angle Point Cloud Stitching Using ICP and KNN

    The basic principle of stitching is the joining or merging of two materials or objects; 3D point cloud stitching accordingly merges two 3D point clouds into one. It is an emerging topic, and there are multiple ways to achieve it, with the various existing methods having evolved over time. These methods still have shortcomings and have largely ignored multi-angle stitching of the same model or object, a shortfall that limits a stitching algorithm's ability to maintain accuracy as more views are added. In this work, I introduce an incremental, iteratively refined approach for 3D multi-angle point cloud stitching using ICP (iterative closest point) and KNN (k-nearest neighbours). This is a novel approach for stitching multiple 3D point clouds captured from multiple angles of a single bust. The framework is evaluated on the stitching results it produces, i.e., the algorithm's ability to merge multiple point clouds into a solid model.
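
    A minimal sketch of the incremental pairwise strategy is shown below. It substitutes Open3D's point-to-point ICP (which finds nearest-neighbour correspondences with a KD-tree internally) for the custom ICP/KNN stages, and assumes the scans are stored as scan_0.ply ... scan_7.ply and are already roughly pre-aligned; it is an illustration of the idea, not the thesis implementation.

```python
# Incremental multi-angle stitching sketch using Open3D's ICP as a stand-in
# for the custom ICP/KNN pipeline. File names, voxel size and the ICP distance
# threshold are assumptions for illustration only.
import numpy as np
import open3d as o3d

def stitch_incrementally(paths, voxel=2.0, max_corr_dist=10.0):
    merged = o3d.io.read_point_cloud(paths[0])
    for path in paths[1:]:
        scan = o3d.io.read_point_cloud(path)
        # Nearest-neighbour (KNN) correspondences are found inside ICP via a KD-tree.
        reg = o3d.pipelines.registration.registration_icp(
            scan, merged, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        scan.transform(reg.transformation)        # move the new angle into the model frame
        merged += scan                            # grow the stitched model
        merged = merged.voxel_down_sample(voxel)  # keep the growing cloud compact
    return merged

model = stitch_incrementally([f"scan_{i}.ply" for i in range(8)])
o3d.io.write_point_cloud("bust_stitched.ply", model)
```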

    Development of an SGM-based multi-view reconstruction framework for aerial imagery

    Advances in the technology of digital airborne camera systems allow for the observation of surfaces with sampling rates in the range of a few centimeters. In combination with novel matching approaches, which estimate depth information for virtually every pixel, surface reconstructions of impressive density and precision can be generated. Image-based surface generation is therefore meanwhile a serious alternative to LiDAR-based data collection for many applications. Surface models serve as the primary basis for geographic products such as map creation, production of true-orthophotos, or visualization within the framework of virtual globes. The goal of this thesis is the development of a framework for the fully automatic generation of 3D surface models from aerial images, covering both standard nadir and oblique views. This comprises several challenges. On the one hand, the dimensions of aerial imagery are considerable and the extent of the areas to be reconstructed can encompass whole countries; besides the scalability of the methods, this also requires reasonable processing times and efficient handling of the available hardware resources. Moreover, besides high precision requirements, a high degree of automation has to be guaranteed to limit manual interaction as much as possible. Due to its scalability advantages, a stereo method is utilized in this thesis. The approach for dense stereo is based on an adapted version of the semi-global matching (SGM) algorithm. Following a hierarchical approach, corresponding image regions and meaningful disparity search ranges are identified. It is verified that, depending on the undulation of the scene, time and memory demands can be reduced significantly, by up to 90% in some of the conducted tests. This enables the processing of aerial datasets on standard desktop machines in reasonable time, even for large depth ranges. Stereo approaches generate disparity or depth maps in which redundant depth information is available. To exploit this redundancy, a method for the refinement of stereo correspondences is proposed: redundant observations across stereo models are identified, checked for geometric consistency, and their reprojection error is minimized. In this way outliers are removed and the precision of the depth estimates is improved. In order to generate consistent surfaces, two algorithms for depth map fusion were developed. The first fusion strategy aims at the generation of 2.5D height models, also known as digital surface models (DSMs). The proposed method improves on existing methods with respect to quality at depth discontinuities, for example at roof edges. Utilizing benchmarks designed for the evaluation of image-based DSM generation, we show that the developed approach compares favorably to state-of-the-art algorithms and that height precisions of a few GSD can be achieved. Furthermore, methods for the derivation of meshes from DSM data are discussed. The fusion of depth maps for full 3D scenes, as frequently required when evaluating high-resolution oblique aerial images of complex urban environments, demands a different approach, since such scenes in general cannot be represented as height fields. Moreover, depths across depth maps possess varying precision and sampling rates due to variations in image scale, orientation errors, and other effects. Within this thesis, a median-based fusion methodology is proposed: using a geometry-adaptive triangulation of the depth maps, per-point normals are extracted and, along with the point coordinates, filtered and fused using tree structures. The output of this method is a set of oriented points, which can then be used to generate meshes. The precision and density of the method are evaluated using established multi-view benchmarks. Besides the capability to process close-range datasets, results for large oblique airborne datasets are presented. The thesis closes with a summary, a discussion of limitations, and perspectives regarding improvements and enhancements. The implemented algorithms are core elements of the commercial software package SURE, which is freely available for scientific purposes.
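
    The 2.5D fusion idea, binning the redundant, already-triangulated height observations from several stereo models into a common ground grid and reducing each cell robustly, can be sketched as follows. This is only an illustration of the principle, not the SURE implementation; the input array of georeferenced XYZ points and the grid resolution (GSD) are assumptions.

```python
# Illustrative 2.5D depth-map fusion: redundant height observations from several
# stereo models are binned into a ground grid and reduced with a per-cell median,
# which suppresses outlier depths. 'points' is assumed to be an (N, 3) array of
# georeferenced XYZ points triangulated from the individual depth maps.
import numpy as np

def fuse_to_dsm(points, gsd=0.2):
    xy = points[:, :2]
    origin = xy.min(axis=0)
    cols, rows = np.ceil((xy.max(axis=0) - origin) / gsd).astype(int) + 1
    cell = np.floor((xy - origin) / gsd).astype(int)
    flat = cell[:, 1] * cols + cell[:, 0]             # flat cell index per observation
    dsm = np.full(rows * cols, np.nan, np.float32)
    order = np.argsort(flat)
    flat, z = flat[order], points[order, 2]
    starts = np.flatnonzero(np.r_[True, np.diff(flat) > 0])
    for s, e in zip(starts, np.r_[starts[1:], len(flat)]):
        dsm[flat[s]] = np.median(z[s:e])              # robust per-cell height
    return dsm.reshape(rows, cols), origin
```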

    From Calibration to Large-Scale Structure from Motion with Light Fields

    Classic pinhole cameras project the multi-dimensional information of the light flowing through a scene onto a single 2D snapshot. This projection limits the information that can be reconstructed from the 2D acquisition. Plenoptic (or light field) cameras, on the other hand, capture a 4D slice of the plenoptic function, termed the “light field”. These cameras provide both spatial and angular information on the light flowing through a scene; multiple views are captured in a single photographic exposure, facilitating various applications. This thesis is concerned with the modelling of light field (or plenoptic) cameras and the development of structure-from-motion pipelines using such cameras. Specifically, we develop a geometric model for a multi-focus plenoptic camera, followed by a complete pipeline for the calibration of the suggested model. Given a calibrated light field camera, we then remap the captured light field to a grid of pinhole images. We use these images to obtain a metric 3D reconstruction through a novel framework for structure from motion with light fields. Finally, we suggest a linear and efficient approach for absolute pose estimation for light fields.
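
    The remapping step can be pictured with a short sketch: a decoded 4D light field is sliced into its grid of sub-aperture, pinhole-like views, and the pose of any single view can then be estimated from 2D-3D correspondences. The array shape, the generic PnP solver, and the variable names below are assumptions for illustration; the thesis uses its calibrated remapping and its own linear light-field pose solver rather than per-view PnP.

```python
# Slicing a decoded 4D light field into sub-aperture views and estimating the
# pose of one view with a generic PnP solver. Illustration only: 'lf' is assumed
# to have shape (S, T, H, W, 3) (angular grid x spatial resolution x colour).
import numpy as np
import cv2

def subaperture_views(lf):
    S, T, H, W, _ = lf.shape
    return [lf[s, t] for s in range(S) for t in range(T)]   # one pinhole-like image per (s, t)

def view_pose(points_3d, points_2d, K):
    # points_3d: (N, 3) scene points; points_2d: (N, 2) their pixel positions in one view.
    ok, rvec, tvec = cv2.solvePnP(points_3d.astype(np.float64),
                                  points_2d.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)                               # rotation matrix from Rodrigues vector
    return R, tvec
```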

    Spatial and Temporal Patterns of Sympatric Bobcats (Lynx rufus) and Coyotes (Canis latrans) in an Agricultural Landscape

    Bobcat (Lynx rufus) populations experienced declines in the Midwest during the 20th century due to land conversion for agriculture and overexploitation, and were practically nonexistent in some areas by the 1970s-80s. Populations have been recovering following changes in land-use practices and habitat improvement. Eastern South Dakota was closed to bobcat harvest in 1977, but harvest was reopened in select counties in 2012. Bobcats are elusive, have large home ranges, and occur at low densities, making monitoring their populations difficult. Camera trapping is an effective tool for monitoring elusive carnivores but can be burdened by low detection rates. Researchers often employ attractants to increase detection, but attractants can unequally influence detection of species among different trophic levels. We ran a pilot season in 2019 to evaluate the efficacy of an olfactory lure, a non-consumable attractant, as a means of increasing detection of bobcats. We expanded our species of interest to include additional species (coyote [Canis latrans], raccoon [Procyon lotor], and eastern cottontail [Sylvilagus floridanus]) that represent a range of foraging guilds. We evaluated the influence of the lure at three temporal scales (i.e., daily probability of detection, sequences per detection, and triggers per sequence). The influence of the lure varied between the two most carnivorous species, bobcat and coyote: the lure positively influenced detection of coyote and of raccoon, an intermediate omnivore, and negatively influenced detection of bobcat and of eastern cottontail, an herbivorous prey species. Bobcats are of management interest in South Dakota, are potentially vulnerable to land conversion, and may be influenced by coyotes. We used occupancy modeling to evaluate the influence of landscape features on space use of bobcats and coyotes, and generated activity curves to quantify temporal overlap between the species, using remote camera data collected in the summers of 2019 and 2020. Coyote space use was positively associated with slope, small-scale percent agriculture, and edge density. Bobcat space use was limited and positively associated with coyote activity, distance to roads, and large-scale percent woodland/shrubland. Our results indicate that bobcats are using smaller, less-disturbed woodland/shrubland patches, which are associated with higher coyote activity levels. Bobcat and coyote temporal activity had high overlap.
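
    The temporal-overlap analysis can be illustrated with a small sketch: kernel density estimates of each species' detection times over the 24-hour cycle, and the overlap coefficient taken as the area under the minimum of the two activity curves. The bandwidth and the toy detection times below are assumptions; the study's own estimator may differ.

```python
# Sketch of a diel activity-overlap calculation: circular kernel density curves
# for two species' detection times and the overlap coefficient
# Delta = integral of min(f_bobcat, f_coyote). Toy data, illustrative bandwidth.
import numpy as np

def activity_density(hours, grid, bw=1.0):
    d = np.abs(grid[:, None] - hours[None, :])
    d = np.minimum(d, 24.0 - d)                       # wrap distances around midnight
    f = np.exp(-0.5 * (d / bw) ** 2).sum(axis=1)      # Gaussian kernel on circular time
    return f / np.trapz(f, grid)                      # normalise to a density

grid = np.linspace(0.0, 24.0, 241)                    # time of day in hours
bobcat = activity_density(np.array([2.5, 3.0, 4.2, 21.8, 23.1]), grid)   # toy detections
coyote = activity_density(np.array([1.0, 3.5, 5.0, 20.0, 22.5]), grid)   # toy detections
delta = np.trapz(np.minimum(bobcat, coyote), grid)    # overlap coefficient in [0, 1]
```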

    Image-Based Rendering Of Real Environments For Virtual Reality


    Event-based neuromorphic stereo vision


    VIINTER: View Interpolation with Implicit Neural Representations of Images

    We present VIINTER, a method for view interpolation that interpolates the implicit neural representation (INR) of the captured images. We leverage the learned code vector associated with each image and interpolate between these codes to achieve viewpoint transitions. We propose several techniques that significantly enhance the interpolation quality. VIINTER signifies a new way to achieve view interpolation without constructing 3D structure, estimating camera poses, or computing pixel correspondences. We validate the effectiveness of VIINTER on several multi-view scenes with different types of camera layouts and scene compositions. As the development of INRs of images (as opposed to surfaces or volumes) has centered around tasks like image fitting and super-resolution, with VIINTER we show their capability for view interpolation and offer a promising outlook on using INRs for image manipulation tasks. (Comment: SIGGRAPH Asia 2022)
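
    The core mechanism, a coordinate MLP conditioned on a per-image latent code with intermediate viewpoints rendered by blending two learned codes, can be sketched as follows. Layer sizes, the code dimension, the simple linear blend, and the omission of any positional encoding are assumptions for illustration, not the VIINTER architecture or training procedure.

```python
# Minimal code-conditioned INR sketch: an MLP maps (pixel coordinate, image code)
# to RGB; a new viewpoint is rendered by blending the codes of two trained images.
# Architecture and blending details are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionedINR(nn.Module):
    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())       # (x, y, code) -> RGB

    def forward(self, coords, code):
        code = code.expand(coords.shape[0], -1)       # same code for every pixel
        return self.net(torch.cat([coords, code], dim=-1))

def interpolate_view(model, code_a, code_b, alpha, h, w):
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    code = (1.0 - alpha) * code_a + alpha * code_b    # blend the two learned codes
    with torch.no_grad():
        return model(coords, code).reshape(h, w, 3)   # image at the in-between viewpoint
```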

    Effective Features for No-Reference Image Quality Assessment on Mobile Devices

    The goal of this thesis is the analysis and development of a no-reference image quality assessment algorithm. Algorithms of this kind are increasingly employed in multimedia applications with the aim of delivering a higher quality of service. To achieve this goal, a state-of-the-art no-reference algorithm was used as the groundwork to improve upon. The proposed model is intended to be deployed on low-resource mobile devices such as smartphones and tablets.
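
    As a flavour of the kind of lightweight features such models rely on, the sketch below computes mean-subtracted contrast-normalised (MSCN) coefficients, the natural-scene-statistics signal used by BRISQUE-style no-reference metrics, and a few summary statistics that a small regressor could map to a quality score on-device. The constants, window size, and chosen statistics are assumptions and are not the thesis's feature set.

```python
# MSCN-based natural-scene-statistics features of the BRISQUE family (illustrative
# stand-in, not the thesis's features). The input image path is hypothetical.
import numpy as np
import cv2

def mscn_features(gray, sigma=7 / 6, eps=1.0):
    gray = gray.astype(np.float32)
    mu = cv2.GaussianBlur(gray, (7, 7), sigma)                       # local mean
    var = cv2.GaussianBlur(gray * gray, (7, 7), sigma) - mu * mu
    sigma_local = np.sqrt(np.abs(var))                               # local contrast
    mscn = (gray - mu) / (sigma_local + eps)                         # normalised coefficients
    return np.array([mscn.mean(), mscn.var(),
                     np.mean(np.abs(mscn)), np.mean(mscn ** 4)])     # crude shape descriptors

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)                  # hypothetical input
features = mscn_features(img)                                        # feed to a trained regressor
```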