3,900 research outputs found

    Fast Local Tone Mapping, Summed-Area Tables and Mesopic Vision Simulation

    Doctoral dissertation for the degree of Doctor of Engineering (博士(工学)), 広島大学 (Hiroshima University).

    Quantitative assessment of emergent biomass and species composition in tidal wetlands using remote sensing

    There are no author-identified significant results in this report.

    Testing HDR image rendering algorithms

    Eight high-dynamic-range image rendering algorithms were tested using ten high-dynamic-range pictorial images. A large-scale paired-comparison psychophysical experiment was developed, containing two sections that compared overall rendering performance and grayscale tone-mapping performance, respectively. An interval scale of preference was created to evaluate the rendering results. The results showed that tone-mapping performance was consistent with the overall rendering results, and that Durand and Dorsey’s bilateral fast-filtering technique and Reinhard’s photographic tone reproduction have the best overall rendering performance. The goal of this experiment was to establish a sound testing and evaluation methodology, based on psychophysical experiment results, for future research on the accuracy of rendering algorithms.
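
    The abstract does not say how the interval scale of preference was derived from the paired-comparison data; a common choice for this kind of psychophysical experiment is Thurstone's law of comparative judgment (Case V). The sketch below only illustrates that idea with made-up win counts; the function name and the clipping threshold are assumptions, not details from the paper.

    ```python
    # Hypothetical sketch: turning paired-comparison counts into an interval
    # preference scale with Thurstone's Case V model. Illustrative only.
    import numpy as np
    from scipy.stats import norm

    def interval_scale(win_counts):
        """win_counts[i, j] = number of observers preferring algorithm i over j."""
        n = win_counts + win_counts.T                       # trials per pair
        p = np.where(n > 0, win_counts / np.maximum(n, 1), 0.5)
        p = np.clip(p, 0.01, 0.99)                          # avoid infinite z-scores
        z = norm.ppf(p)                                     # proportions -> normal deviates
        scale = z.mean(axis=1)                              # Case V: row means give the scale
        return scale - scale.min()                          # anchor the scale at zero

    # Example: 3 rendering algorithms, 20 observers per pair (made-up numbers).
    wins = np.array([[0, 14, 17],
                     [6,  0, 12],
                     [3,  8,  0]])
    print(interval_scale(wins))
    ```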

    Pilot Tests of Satellite Snowcover/Runoff Forecasting Systems

    Major snow zones of the western U.S. were selected to test the capability of satellite systems for mapping snowcover in various snow, cloud, climatic, and vegetation regimes. The different satellite snowcover analysis methods used in each area are described, along with results.

    Inverse tone mapping

    The introduction of High Dynamic Range Imaging in computer graphics has produced a novelty in imaging comparable to, or even greater than, the introduction of colour photography. Light can now be captured, stored, processed, and finally visualised without losing information. Moreover, new applications that exploit the physical values of light have been introduced, such as re-lighting of synthetic/real objects or enhanced visualisation of scenes. However, these new processing and visualisation techniques cannot be applied to the movies and pictures that photography and cinematography have produced over more than one hundred years. This thesis introduces a general framework for expanding legacy content into High Dynamic Range content. The expansion avoids artefacts and produces images suitable for visualisation and for re-lighting of synthetic/real objects. Moreover, a methodology based on psychophysical experiments and computational metrics is presented for measuring the performance of expansion algorithms. Finally, a compression scheme for High Dynamic Range textures, inspired by the framework, is proposed and evaluated.
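
    As a rough illustration of what expanding legacy content into HDR means in practice (this is not the thesis's framework, which is considerably more elaborate), the sketch below linearises an 8-bit frame and applies the analytic inverse of Reinhard's simple global operator L_d = L_w / (1 + L_w). The gamma value, the clamp, and the function name are assumptions made for the sketch.

    ```python
    # Minimal LDR -> HDR expansion sketch, not the thesis's method.
    import numpy as np

    def expand_ldr(img_8bit, gamma=2.2, max_gain=1e4):
        ldr = (img_8bit.astype(np.float32) / 255.0) ** gamma   # undo display gamma
        ldr = np.clip(ldr, 0.0, 1.0 - 1e-4)                    # keep the inverse finite
        hdr = ldr / (1.0 - ldr)                                # inverse Reinhard curve
        return np.clip(hdr, 0.0, max_gain)

    frame = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)  # stand-in legacy frame
    print(expand_ldr(frame).max())
    ```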

    Selected bibliography of remote sensing

    Bibliography of remote sensing techniques for the analysis and assimilation of geographic data.

    Deep Bilateral Learning for Real-Time Image Enhancement

    Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher. Comment: 12 pages, 14 figures, SIGGRAPH 2017.
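
    A rough sketch of the slice-then-apply step the abstract describes: a coarse bilateral grid of per-cell 3x4 affine colour transforms is sampled at full resolution with a scalar guidance map, then applied pixel-wise. The grid shape, the nearest-neighbour sampling (the paper uses a learned, edge-preserving slicing node and a learned guidance map), and all names are illustrative assumptions, not the authors' implementation.

    ```python
    # Illustrative slice-and-apply over a bilateral grid of affine coefficients.
    import numpy as np

    def slice_and_apply(grid, guide, image):
        """grid:  (GH, GW, GD, 3, 4) affine coefficients per grid cell
           guide: (H, W) guidance map in [0, 1]
           image: (H, W, 3) full-resolution input in [0, 1]"""
        H, W, _ = image.shape
        gh, gw, gd, _, _ = grid.shape
        ys = np.clip(np.arange(H) * gh // H, 0, gh - 1)
        xs = np.clip(np.arange(W) * gw // W, 0, gw - 1)
        zs = np.clip((guide * gd).astype(int), 0, gd - 1)
        A = grid[ys[:, None], xs[None, :], zs]                        # (H, W, 3, 4)
        homog = np.concatenate([image, np.ones((H, W, 1))], axis=-1)  # (H, W, 4)
        return np.einsum('hwij,hwj->hwi', A, homog)                   # per-pixel affine

    # Identity grid as a smoke test: output should equal input.
    grid = np.zeros((16, 16, 8, 3, 4)); grid[..., :3, :3] = np.eye(3)
    img = np.random.rand(64, 64, 3)
    print(np.allclose(slice_and_apply(grid, img.mean(axis=-1), img), img))
    ```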