2,479 research outputs found

    Understanding camera trade-offs through a Bayesian analysis of light field projections

    Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the tradeoffs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a new prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the tradeoffs of each camera type and analyze their limitations.
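    The measurement model described above has a convenient closed form when the prior is Gaussian: each sensor element contributes one row of a projection matrix applied to the (flattened) light field, and the estimate is the posterior mean. A minimal numerical sketch, in which the tiny dimensions, the random projection rows, and the exponential covariance are illustrative assumptions rather than the paper's actual choices:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1D/2D stand-in for a 4D light field, flattened into a vector.
    n = 64   # light field dimension (assumed tiny toy size)
    m = 16   # number of sensor measurements

    # Each sensor element is an inner product with the light field:
    # one row of A per sensor element (random rows stand in for a real camera design).
    A = rng.standard_normal((m, n)) / np.sqrt(n)

    # Gaussian prior x ~ N(0, Sigma). A real light field prior would encode more
    # structure; here Sigma is a simple smoothness-like exponential covariance.
    Sigma = np.array([[np.exp(-abs(i - j) / 8.0) for j in range(n)]
                      for i in range(n)])

    x_true = rng.multivariate_normal(np.zeros(n), Sigma)
    sigma_noise = 0.05
    y = A @ x_true + sigma_noise * rng.standard_normal(m)

    # Posterior-mean (MAP) estimate under the Gaussian model:
    #   x_hat = Sigma A^T (A Sigma A^T + sigma^2 I)^{-1} y
    G = A @ Sigma @ A.T + sigma_noise**2 * np.eye(m)
    x_hat = Sigma @ A.T @ np.linalg.solve(G, y)

    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```

    Different camera designs correspond to different choices of the rows of A, which is what makes a comparison across designs possible within one framework.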

    Understanding camera trade-offs through a Bayesian analysis of light field projections - A revision

    Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the tradeoffs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a new prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the tradeoffs of each camera type and analyze their limitations.

    Compressive light field photography using overcomplete dictionaries and optimized projections

    Light field photography has gained a significant research interest in the last two decades; today, commercial light field cameras are widely available. Nevertheless, most existing acquisition approaches either multiplex a low-resolution light field into a single 2D sensor image or require multiple photographs to be taken for acquiring a high-resolution light field. We propose a compressive light field camera architecture that allows for higher-resolution light fields to be recovered than previously possible from a single image. The proposed architecture comprises three key components: light field atoms as a sparse representation of natural light fields, an optical design that allows for capturing optimized 2D light field projections, and robust sparse reconstruction methods to recover a 4D light field from a single coded 2D projection. In addition, we demonstrate a variety of other applications for light field atoms and sparse coding, including 4D light field compression and denoising.
    Natural Sciences and Engineering Research Council of Canada (NSERC postdoctoral fellowship)
    United States. Defense Advanced Research Projects Agency (DARPA SCENICC program)
    Alfred P. Sloan Foundation (Sloan Research Fellowship)
    United States. Defense Advanced Research Projects Agency (DARPA Young Faculty Award)
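    The three components map naturally onto a sparse-recovery problem: the light field is sparse in an overcomplete dictionary, the optics apply a linear projection, and reconstruction solves for the sparse code. A toy sketch using a generic orthogonal matching pursuit solver; the random dictionary and random projection are placeholders for learned light field atoms and an optimized optical design, and the paper's own solver may differ:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed toy sizes: signal dim, dictionary atoms, measurements, sparsity.
    n, k, m, s = 100, 200, 40, 3

    D = rng.standard_normal((n, k))                 # overcomplete dictionary (stand-in for light field atoms)
    D /= np.linalg.norm(D, axis=0)
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # optical projection (coded-capture stand-in)

    alpha_true = np.zeros(k)
    support = rng.choice(k, s, replace=False)
    alpha_true[support] = rng.standard_normal(s)
    y = Phi @ D @ alpha_true                        # single coded projection (noise-free here)

    # Orthogonal matching pursuit on the effective dictionary Phi @ D:
    # greedily pick the atom most correlated with the residual, then refit.
    B = Phi @ D
    residual, idx = y.copy(), []
    for _ in range(s):
        idx.append(int(np.argmax(np.abs(B.T @ residual))))
        coef, *_ = np.linalg.lstsq(B[:, idx], y, rcond=None)
        residual = y - B[:, idx] @ coef

    alpha_hat = np.zeros(k)
    alpha_hat[idx] = coef
    x_hat = D @ alpha_hat                           # reconstructed signal
    print("reconstruction error:", np.linalg.norm(x_hat - D @ alpha_true))
    ```

    The same decomposition into dictionary, projection, and solver is what lets the atoms be reused for the compression and denoising applications mentioned above.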

    Learning Light Field Angular Super-Resolution via a Geometry-Aware Network

    The acquisition of light field images with high angular resolution is costly. Although many methods have been proposed to improve the angular resolution of a sparsely-sampled light field, they always focus on light fields with a small baseline, as captured by consumer light field cameras. By making full use of the intrinsic geometry information of light fields, in this paper we propose an end-to-end learning-based approach aimed at angularly super-resolving a sparsely-sampled light field with a large baseline. Our model consists of two learnable modules and a physically-based module. Specifically, it includes a depth estimation module for explicitly modeling the scene geometry, a physically-based warping module for novel view synthesis, and a light field blending module specifically designed for light field reconstruction. Moreover, we introduce a novel loss function to promote the preservation of the light field parallax structure. Experimental results over various light field datasets, including large-baseline light field images, demonstrate the significant superiority of our method over state-of-the-art ones: our method improves the PSNR of the second-best method by up to 2 dB on average, while reducing execution time by a factor of 48. In addition, our method preserves the light field parallax structure better.
    Comment: This paper was accepted by AAAI 2020
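    PSNR, the metric behind the 2 dB comparison above, measures reconstruction fidelity between a ground-truth view and a synthesized one. A minimal helper (the toy images and noise level are illustrative, not the paper's data):

    ```python
    import numpy as np

    def psnr(ref, est, peak=1.0):
        """Peak signal-to-noise ratio in dB between a reference view and an estimate."""
        mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

    # Toy example: a "ground-truth" novel view and a slightly noisy synthesized one.
    rng = np.random.default_rng(2)
    ref = rng.random((32, 32))
    est = np.clip(ref + 0.01 * rng.standard_normal((32, 32)), 0.0, 1.0)
    print(f"PSNR: {psnr(ref, est):.1f} dB")
    ```

    A 2 dB gain corresponds to roughly a 37% reduction in mean squared error, which is why PSNR differences of this size are considered significant in view-synthesis benchmarks.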

    Improving ecological forecasts using model and data constraints

    Terrestrial ecosystems are essential to human well-being, but their future remains highly uncertain, as evidenced by the huge disparities in model projections of the land carbon sink. The existence of these disparities despite the recent explosion of novel data streams, including the TRY plant traits database, the Landsat archive, and global eddy covariance tower networks, suggests that these data streams are not being utilized to their full potential by the terrestrial ecosystem modeling community. Therefore, the overarching objective of my dissertation is to identify how these various data streams can be used to improve the precision of model predictions by constraining model parameters. In chapter 1, I use a hierarchical multivariate meta-analysis of the TRY database to assess the dependence of trait correlations on ecological scale and evaluate the utility of these correlations for constraining ecosystem model parameters. I find that global trait correlations are generally consistent within plant functional types, and leveraging the multivariate trait space is an effective way to constrain trait estimates for data-limited traits and plant functional types. My next two chapters assess the ability to measure traits using remote sensing by exploring the links between leaf traits and reflectance spectra. In chapter 2, I introduce a method for estimating traits from spectra via radiative transfer model inversion. I then use this approach to show that although the precise location, width, and quantity of spectral bands significantly affects trait retrieval accuracy, a wide range of sensor configurations are capable of providing trait information. In chapter 3, I apply this approach to a large database of leaf spectra to show that traits vary as much within as across species, and much more across species within a functional type than across functional types. 
Finally, in chapter 4, I synthesize the findings of the previous chapters to calibrate a vegetation model's representation of canopy radiative transfer against observed remotely-sensed surface reflectance. Although the calibration successfully constrained canopy structural parameters, I identify issues with model representations of wood and soil reflectance that inhibit its ability to accurately reproduce remote sensing observations.
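    The inversion approach in chapters 2 and 3 amounts to finding the model parameter whose simulated spectrum best matches an observed one. A schematic sketch, with a made-up one-parameter forward model standing in for a real leaf radiative transfer model and a simple grid search standing in for the actual optimizer or Bayesian sampler:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical stand-in for a radiative transfer model: reflectance as a
    # decaying function of wavelength controlled by one "trait" parameter c.
    wavelengths = np.linspace(400, 2500, 50)          # nm
    def forward_model(c):
        return np.exp(-c * wavelengths / 1000.0)      # NOT a real RTM, just a toy

    c_true = 1.3
    observed = forward_model(c_true) + 0.005 * rng.standard_normal(wavelengths.size)

    # Inversion: pick the parameter whose simulated spectrum best matches the
    # observation in a least-squares sense.
    grid = np.linspace(0.1, 3.0, 291)
    errors = [np.sum((forward_model(c) - observed) ** 2) for c in grid]
    c_hat = grid[int(np.argmin(errors))]
    print("estimated trait parameter:", round(c_hat, 2))
    ```

    The sensitivity of this fit to which wavelengths are sampled is exactly why the location, width, and number of spectral bands affect trait retrieval accuracy.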