Robust estimation of exposure ratios in multi-exposure image stacks
Merging multi-exposure image stacks into a high dynamic range (HDR) image
requires knowledge of accurate exposure times. When exposure times are
inaccurate, for example, when they are extracted from a camera's EXIF metadata,
the reconstructed HDR images reveal banding artifacts at smooth gradients. To
remedy this, we propose to estimate exposure ratios directly from the input
images. We derive the exposure time estimation as an optimization problem, in
which pixels are selected from pairs of exposures to minimize estimation error
caused by camera noise. When pixel values are represented in the logarithmic
domain, the problem can be solved efficiently using a linear solver. We
demonstrate that the estimation can be easily made robust to pixel misalignment
caused by camera or object motion by collecting pixels from multiple spatial
tiles. The proposed automatic exposure estimation and alignment eliminates
banding artifacts in popular datasets and is essential for applications that
require physically accurate reconstructions, such as measuring the modulation
transfer function of a display. The code for the method is available.
Comment: 11 pages, 11 figures, journal
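In the logarithmic domain, the exposure-ratio model log(p_long) = log(p_short) + log(r) is linear in log(r), so for a single pair of exposures the least-squares estimate is simply the mean of the per-pixel log differences. A minimal sketch of this idea (the function name and the fixed under/over-exposure thresholds are illustrative assumptions; the paper's noise-optimal pixel selection and multi-tile robustness are omitted):

```python
import numpy as np

def estimate_exposure_ratio(img_short, img_long, lo=0.05, hi=0.95):
    """Estimate the exposure ratio between two aligned exposures in [0, 1].

    Pixels near the noise floor or saturation in either exposure are
    excluded (a crude stand-in for the paper's noise-aware selection);
    the log-domain least-squares solution is the mean log difference.
    """
    p, q = img_short.ravel(), img_long.ravel()
    mask = (p > lo) & (p < hi) & (q > lo) & (q < hi)
    log_ratio = np.log(q[mask]) - np.log(p[mask])
    return float(np.exp(np.mean(log_ratio)))

# synthetic check: the longer exposure is 4x the shorter one, plus noise
rng = np.random.default_rng(0)
short = rng.uniform(0.1, 0.2, size=(64, 64))
long_ = np.clip(short * 4 + rng.normal(0, 1e-3, short.shape), 0, 1)
print(round(estimate_exposure_ratio(short, long_), 2))  # ≈ 4.0
```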
Recommended from our members
Perceptual model for adaptive local shading and refresh rate
When the rendering budget is limited by power or time, it is necessary to find the combination of rendering parameters, such as resolution and refresh rate, that could deliver the best quality. Variable-rate shading (VRS), introduced in the last generations of GPUs, enables fine control of the rendering quality, in which each 16×16 image tile can be rendered with a different ratio of shader executions. We take advantage of this capability and propose a new method for adaptive control of local shading and refresh rate. The method analyzes texture content, on-screen velocities, luminance, and effective resolution and suggests the refresh rate and a VRS state map that maximizes the quality of animated content under a limited budget. The method is based on the new content-adaptive metric of judder, aliasing, and blur, which is derived from the psychophysical models of contrast sensitivity. To calibrate and validate the metric, we gather data from literature and also collect new measurements of motion quality under variable shading rates, different velocities of motion, texture content, and display capabilities, such as refresh rate, persistence, and angular resolution. The proposed metric and adaptive shading method are implemented as a game engine plugin. Our experimental validation shows a substantial increase in preference of our method over rendering with a fixed resolution and refresh rate, and an existing motion-adaptive technique.
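Choosing a per-tile VRS rate under a shading budget can be framed as a discrete allocation problem. A greedy sketch under stated assumptions: the rates, costs, and `quality[tile][rate]` table are illustrative (in the paper those scores come from the calibrated perceptual metric), and greedy refinement by quality-gain per extra shader invocation is a simplification of the paper's optimization:

```python
import heapq

RATES = [4, 2, 1]            # 4x4, 2x2, 1x1 shading rate (coarse -> fine)
COST = {4: 1, 2: 4, 1: 16}   # relative shader invocations per 16x16 tile

def allocate(quality, budget):
    """Greedily upgrade the tile whose next refinement buys the most
    predicted quality per extra shader invocation, until the budget is hit.

    quality: dict mapping tile id -> list of quality scores, one per rate.
    """
    level = {t: 0 for t in quality}                 # index into RATES
    spent = sum(COST[RATES[0]] for _ in quality)
    heap = []
    for t in quality:                               # seed first upgrades
        gain = quality[t][1] - quality[t][0]
        extra = COST[RATES[1]] - COST[RATES[0]]
        heapq.heappush(heap, (-gain / extra, t))
    while heap:
        _, t = heapq.heappop(heap)
        nxt = level[t] + 1
        extra = COST[RATES[nxt]] - COST[RATES[level[t]]]
        if spent + extra > budget:
            continue                                # skip unaffordable upgrade
        spent += extra
        level[t] = nxt
        if nxt + 1 < len(RATES):                    # queue the next refinement
            gain = quality[t][nxt + 1] - quality[t][nxt]
            extra2 = COST[RATES[nxt + 1]] - COST[RATES[nxt]]
            heapq.heappush(heap, (-gain / extra2, t))
    return {t: RATES[i] for t, i in level.items()}

# tile 'A' benefits strongly from refinement, 'B' barely does
quality = {'A': [0.2, 0.8, 0.9], 'B': [0.5, 0.6, 0.65]}
print(allocate(quality, budget=20))  # → {'A': 1, 'B': 2}
```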
Analysis of reported error in Monte Carlo rendered images
Evaluating image quality in Monte Carlo rendered images is an important aspect of the rendering process as we often need to determine the relative quality between images computed using different algorithms and with varying amounts of computation. The use of a gold-standard, reference image, or ground truth is a common method to provide a baseline with which to compare experimental results. We show that if not chosen carefully, the quality of reference images used for image quality assessment can skew results, leading to significant misreporting of error. We present an analysis of error in Monte Carlo rendered images and discuss practices to avoid or be aware of when designing an experiment.
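The core pitfall can be demonstrated numerically: when the reference image carries residual Monte Carlo noise that is independent of the image under test, the measured MSE is inflated by the reference's own variance. A small synthetic sketch (the noise levels are illustrative, not from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.uniform(0, 1, size=10_000)                   # stand-in for the true image

estimate  = truth + rng.normal(0, 0.05, truth.shape)     # image under test
good_ref  = truth + rng.normal(0, 0.001, truth.shape)    # well-converged reference
noisy_ref = truth + rng.normal(0, 0.05, truth.shape)     # under-converged reference

mse = lambda a, b: float(np.mean((a - b) ** 2))
print(mse(estimate, good_ref))   # close to the true MSE of 0.05**2 = 0.0025
print(mse(estimate, noisy_ref))  # roughly double: reference noise adds its own variance
```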
A perceptual model of motion quality for rendering with adaptive refresh-rate and resolution
Limited GPU performance budgets and transmission bandwidths mean that real-time rendering often has to compromise on the spatial resolution or temporal resolution (refresh rate). A common practice is to keep either the resolution or the refresh rate constant and dynamically control the other variable. But this strategy is non-optimal when the velocity of displayed content varies. To find the best trade-off between the spatial resolution and refresh rate, we propose a perceptual visual model that predicts the quality of motion given an object velocity and predictability of motion. The model considers two motion artifacts to establish an overall quality score: non-smooth (juddery) motion, and blur. Blur is modeled as a combined effect of eye motion, finite refresh rate and display resolution. To fit the free parameters of the proposed visual model, we measured eye movement for predictable and unpredictable motion, and conducted psychophysical experiments to measure the quality of motion from 50 Hz to 165 Hz. We demonstrate the utility of the model with our on-the-fly motion-adaptive rendering algorithm that adjusts the refresh rate of a G-Sync-capable monitor based on a given rendering budget and observed object motion. Our psychophysical validation experiments demonstrate that the proposed algorithm performs better than constant-refresh-rate solutions, showing that motion-adaptive rendering is an attractive technique for driving variable-refresh-rate displays.
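The blur component has a simple first-order form: on a sample-and-hold display tracked by smooth-pursuit eye motion, the hold-type blur extent is roughly velocity divided by refresh rate, while judder shrinks faster with refresh rate. A toy score illustrating this trade-off (the weights and the judder proxy are illustrative assumptions, not the paper's calibrated model):

```python
def motion_quality(velocity_px_s, refresh_hz, w_blur=1.0, w_judder=50.0):
    """Toy quality score (higher is better) for content moving at a given
    velocity on a sample-and-hold display at a given refresh rate."""
    blur = velocity_px_s / refresh_hz           # hold-type blur extent, pixels
    judder = velocity_px_s / refresh_hz ** 2    # crude non-smoothness proxy
    return -(w_blur * blur + w_judder * judder)

# higher refresh rates score better for fast motion
assert motion_quality(1000, 165) > motion_quality(1000, 50)
```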
Modeling a Generic Tone-mapping Operator
Rafał Mantiuk and Hans-Peter Seidel. EUROGRAPHICS 2008 / G. Drettakis and R. Scopigno (Guest Editors), Volume 27 (2008), Number 2
[Figure caption fragment: the lower-right half of each image is the result of the same generic tone-mapping operator (TMO); the parameters of the generic TMO may be adjusted to mimic a broad range of operators.]
Although several new tone-mapping operators are proposed each year, there is no reliable method to validate their performance or to tell how different they are from one another. In order to analyze and understand the behavior of tone-mapping operators, we model their mechanisms by fitting a generic operator to an HDR image and its tone-mapped LDR rendering. We demonstrate that the majority of both global and local tone-mapping operators can be well approximated by computationally inexpensive image processing operations, such as a per-pixel tone curve, a modulation transfer function and color saturation adjustment. The results produced by such a generic tone-mapping algorithm are often visually indistinguishable from much more expensive algorithms, such as the bilateral filter. We show the usefulness of our generic tone-mapper in backward-compatible HDR image compression, the black-box analysis of existing tone-mapping algorithms and the synthesis of new algorithms that are combinations of existing operators.
Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation: Display algorithms; I.4.2 [Image Processing and Computer Vision]: Enhancement: Greyscale manipulation, sharpening and deblurring
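The simplest piece of such a fit, recovering a global tone curve from an HDR image and its tone-mapped LDR rendering, can be sketched by binning log-luminance and averaging the LDR response per bin. A hedged sketch (function name and binning scheme are illustrative; the paper's generic operator also fits a modulation transfer function and color saturation):

```python
import numpy as np

def fit_tone_curve(hdr_lum, ldr_lum, n_bins=32):
    """Recover a global tone curve: mean LDR value per log-luminance bin."""
    log_l = np.log10(hdr_lum.ravel())
    ldr = ldr_lum.ravel()
    edges = np.linspace(log_l.min(), log_l.max(), n_bins + 1)
    idx = np.clip(np.digitize(log_l, edges) - 1, 0, n_bins - 1)
    curve = np.array([ldr[idx == b].mean() for b in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])   # bin centers in log10 luminance
    return centers, curve

# sanity check: a known logarithmic tone mapping is recovered as monotone
rng = np.random.default_rng(2)
hdr = 10 ** rng.uniform(-2, 4, 10_000)               # 6 log-units of luminance
ldr = np.clip((np.log10(hdr) + 2) / 6, 0, 1)         # the "black-box" TMO
x, y = fit_tone_curve(hdr, ldr)
```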
Selected Problems of High Dynamic Range Video Compression and GPU-based Contrast Domain Tone Mapping
The main goal of High Dynamic Range Imaging (HDRI) is precise
reproduction of real world appearance in terms of intensity levels
and color gamut at all stages of image and video processing from
acquisition to display. In our work, we investigate the problem of
lossy HDR image and video compression and provide a number of
novel solutions, which are optimized for storage efficiency or backward
compatibility with existing compression standards. To take
advantage of HDR information even for traditional low-dynamic
range displays, we design tone mapping algorithms, which adjust
HDR contrast ranges in a scene to those available in typical display
devices.
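The basic operation of a global tone-mapping algorithm can be illustrated with a simple logarithmic curve that compresses HDR scene contrast into a display-referred range (a generic textbook sketch, not the contrast-domain GPU method developed in the thesis):

```python
import numpy as np

def tonemap_log(hdr, white=None):
    """Map HDR luminance values (> 0) to display-referred [0, 1] with a
    logarithmic curve; `white` is the luminance mapped to 1.0."""
    white = white if white is not None else hdr.max()
    return np.log1p(hdr) / np.log1p(white)

hdr = np.array([0.01, 1.0, 100.0, 10_000.0])   # six orders of magnitude
ldr = tonemap_log(hdr)
print(ldr)   # monotonically increasing, last value is 1.0
```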
Cluster-Based Color Space Optimizations
Transformations between different color spaces and gamuts are ubiquitous operations performed on images. Often, these transformations involve information loss, for example when mapping from color to grayscale for printing, from multispectral or multiprimary data to tristimulus spaces, or from one color gamut to another. In all these applications, there exists a straightforward “natural” mapping from the source space to the target space, but the mapping is not bijective, resulting in information loss due to metamerism and similar effects. We propose a cluster-based approach for optimizing the transformation for individual images in a way that preserves as much of the information as possible from the source space while staying as faithful as possible to the natural mapping. Our approach can be applied to a host of color transformation problems including color to gray, gamut mapping, conversion of multispectral and multiprimary data to tristimulus colors, and image optimization for color deficient viewers.
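A minimal sketch of the cluster-based idea for the color-to-gray case, under loud assumptions: a tiny k-means over RGB stands in for the clustering, the Rec. 709 luma of each cluster center plays the role of the "natural" mapping, and colliding clusters are simply pushed apart by a fixed gap (the paper's per-image optimization is more principled than this nudge):

```python
import numpy as np

def kmeans(pixels, k, iters=20):
    """Tiny Lloyd's k-means with deterministic farthest-point initialization."""
    centers = [pixels[0]]
    for _ in range(k - 1):
        d = np.min([((pixels - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(pixels[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((pixels[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(0)
    return centers, labels

def color_to_gray(pixels, k=4, min_sep=0.05):
    centers, labels = kmeans(pixels, k)
    gray = centers @ np.array([0.2126, 0.7152, 0.0722])  # "natural" mapping
    order = np.argsort(gray)
    for a, b in zip(order, order[1:]):   # push apart clusters that collide
        if gray[b] - gray[a] < min_sep:
            gray[b] = gray[a] + min_sep
    return gray[labels]

# two iso-luminant clusters (red vs a matching gray) stay distinguishable
rng = np.random.default_rng(1)
pixels = np.vstack([np.tile([1.0, 0.0, 0.0], (50, 1)),
                    np.tile([0.2126] * 3, (50, 1))])
pixels += rng.normal(0, 0.01, pixels.shape)
g = color_to_gray(pixels, k=2)
```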