Illumination Estimation Based Color to Grayscale Conversion Algorithms
In this paper, a new adaptive approach, namely the illumination estimation approach, is introduced into the color to grayscale conversion technique. In this approach, some assumptions are made to calculate the weight contributions of the red, green, and blue components during the conversion process. Two color to grayscale conversion algorithms are developed under this approach, namely the Gray World Assumption Color to Grayscale Conversion (GWACG) and Shade of Gray Assumption Color to Grayscale (SGACG) conversion algorithms. Based on extensive experimental results, the proposed algorithms outperform conventional conversion techniques by producing grayscale images with higher brightness, contrast, and amount of detail preserved. For this reason, these proposed algorithms are suitable for pre- and post-processing of digital images.
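The weighting idea behind the two algorithms can be sketched as follows: estimate the illuminant per channel (the channel mean for the gray-world assumption, a Minkowski p-norm for shades of gray) and use the normalized estimates as channel weights. This is a minimal illustration of the underlying assumptions, not the authors' exact GWACG/SGACG formulations:

```python
import numpy as np

def gray_world_grayscale(img):
    """Convert RGB to grayscale with channel weights taken from a
    gray-world illuminant estimate (a sketch; the paper's exact
    GWACG weighting may differ)."""
    img = img.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel illuminant estimate
    weights = means / means.sum()             # normalize so weights sum to 1
    return img @ weights                      # weighted sum of R, G, B

def shades_of_gray_grayscale(img, p=6):
    """Same idea, but estimate the illuminant with a Minkowski
    p-norm (shades-of-gray assumption)."""
    img = img.astype(np.float64)
    est = np.mean(img.reshape(-1, 3) ** p, axis=0) ** (1.0 / p)
    weights = est / est.sum()
    return img @ weights
```

Because the weights sum to one, each output pixel is a convex combination of its R, G, and B values and stays within the input range.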
Rank-Based Illumination Estimation
A new two-stage illumination estimation method based on the concept of rank is presented. The method first estimates the illuminant locally in subwindows using a ranking of digital counts in each color channel, and then combines the local subwindow estimates, again based on a ranking of the local estimates. The proposed method unifies the MaxRGB and Grayworld methods. Despite its simplicity, the performance of the method is found to be competitive with other state-of-the-art methods for estimating the chromaticity of the overall scene illumination.
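A minimal sketch of the two-stage rank idea, assuming quantiles as the rank statistic and the median as the combination rule (the paper's exact rank rules may differ); with q = 1.0 the local statistic becomes the channel maximum, recovering MaxRGB-like behavior:

```python
import numpy as np

def rank_based_illuminant(img, win=32, q=0.95):
    """Two-stage rank-based illuminant estimate (sketch).
    Stage 1: in each subwindow, take a high quantile (a rank
    statistic) of the digital counts in each color channel.
    Stage 2: combine the local estimates by rank again, here
    via the per-channel median. Returns a unit chromaticity
    direction."""
    h, w, _ = img.shape
    local_estimates = []
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            patch = img[y:y + win, x:x + win].reshape(-1, 3).astype(np.float64)
            local_estimates.append(np.quantile(patch, q, axis=0))
    est = np.median(np.array(local_estimates), axis=0)  # rank-based combination
    return est / np.linalg.norm(est)
```

On a scene rendered under a single illuminant, the recovered direction should align with the illuminant chromaticity.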
EMLight: Lighting Estimation via Spherical Distribution Approximation
Illumination estimation from a single image is critical in 3D rendering and
has been investigated extensively in the computer vision and computer
graphics research communities. However, existing works estimate
illumination by either regressing light parameters or generating illumination
maps, both of which are often hard to optimize or tend to produce inaccurate predictions.
We propose Earth Mover Light (EMLight), an illumination estimation framework
that leverages a regression network and a neural projector for accurate
illumination estimation. We decompose the illumination map into spherical light
distribution, light intensity and the ambient term, and define the illumination
estimation as a parameter regression task for the three illumination
components. Motivated by the Earth Mover distance, we design a novel spherical
mover's loss that guides the network to regress light distribution parameters
accurately by exploiting the subtleties of spherical distributions. Under the
guidance of the predicted spherical distribution, light intensity and ambient
term, the neural projector synthesizes panoramic illumination maps with
realistic light frequency. Extensive experiments show that EMLight achieves
accurate illumination estimation and the generated relighting in 3D object
embedding exhibits superior plausibility and fidelity as compared with
state-of-the-art methods.
Comment: Accepted to AAAI 202
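The spherical mover's loss builds on the Earth Mover distance between discrete light distributions defined on unit-sphere anchor points. A small illustration of that distance, solved exactly as a linear program with angular ground cost (the paper instead uses a differentiable approximation inside the network; the anchor-point setup here is an assumption for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def spherical_emd(weights_p, weights_q, anchors):
    """Earth Mover distance between two discrete distributions on
    the same unit-sphere anchor points, with angular ground cost.
    Solves the transport problem exactly as a linear program."""
    n = len(anchors)
    # ground cost: great-circle angle between every pair of anchors
    cost = np.arccos(np.clip(anchors @ anchors.T, -1.0, 1.0)).ravel()
    # flow conservation: mass leaving anchor i sums to p_i,
    # mass arriving at anchor j sums to q_j
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0
        A_eq[n + i, i::n] = 1.0
    b_eq = np.concatenate([weights_p, weights_q])
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun
```

Moving all mass between two orthogonal anchor directions costs exactly the angle between them (pi/2), while identical distributions cost zero.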
Illumination Estimation from Dichromatic Planes
Adopting the dichromatic reflection model under the assumption of neutral interface reflection, the color of the illuminating light can be estimated by intersecting the planes described by the color responses of two or more different materials. From the color response of any given region, most approaches estimate a single plane on the assumption that only a single material is imaged. This assumption, however, is often violated in cluttered scenes. In this paper, rather than a single planar model, several coexisting planes are used to explain the observed color response. In estimating the illuminant, a set of candidate lights is assessed for goodness of fit given the assumed number of coexisting planes. The candidate light giving the minimum-error fit is then chosen as representative of the scene illuminant. The performance of the proposed approach is explored on real images.
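The plane-intersection step can be sketched directly: fit a plane through the origin to each region's RGB responses and take the cross product of the two plane normals as the illuminant direction. This illustrates the basic dichromatic geometry only, not the paper's multi-plane candidate-assessment procedure:

```python
import numpy as np

def dichromatic_plane_normal(colors):
    """Fit a plane through the origin to a region's RGB responses;
    under the dichromatic model the plane is spanned by the body
    color and the illuminant color. Returns the unit normal
    (the smallest right singular vector)."""
    _, _, vt = np.linalg.svd(np.asarray(colors, float), full_matrices=False)
    return vt[-1]

def illuminant_from_two_regions(colors_a, colors_b):
    """The illuminant direction lies in both dichromatic planes,
    so it is the (normalized) cross product of their normals."""
    n1 = dichromatic_plane_normal(colors_a)
    n2 = dichromatic_plane_normal(colors_b)
    light = np.cross(n1, n2)
    light = light * (np.sign(light.sum()) or 1.0)  # prefer a positive direction
    return light / np.linalg.norm(light)
```

With two synthetic materials sharing one illuminant, the recovered direction matches the illuminant chromaticity up to numerical precision.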
Joint Material and Illumination Estimation from Photo Sets in the Wild
Faithful manipulation of shape, material, and illumination in 2D Internet
images would greatly benefit from a reliable factorization of appearance into
material (i.e., diffuse and specular) and illumination (i.e., environment
maps). On the one hand, current methods that produce very high-fidelity
results typically require controlled settings, expensive devices, or
significant manual effort. On the other hand, methods that are automatic and
work on 'in the wild' Internet images often extract only low-frequency
lighting or diffuse materials. In this work, we propose to make use of a set of
photographs in order to jointly estimate the non-diffuse materials and sharp
lighting in an uncontrolled setting. Our key observation is that seeing
multiple instances of the same material under different illumination (i.e.,
environment), and different materials under the same illumination provide
valuable constraints that can be exploited to yield a high-quality solution
(i.e., specular materials and environment illumination) for all the observed
materials and environments. Similar constraints also arise when observing
multiple materials in a single environment, or a single material across
multiple environments. The core of this approach is an optimization procedure
that uses two neural networks that are trained on synthetic images to predict
good gradients in parametric space given observation of reflected light. We
evaluate our method on a range of synthetic and real examples to generate
high-quality estimates, qualitatively compare our results against
state-of-the-art alternatives via a user study, and demonstrate
photo-consistent image manipulation that is otherwise very challenging to
achieve.
Deep Quantigraphic Image Enhancement via Comparametric Equations
Most recent methods of deep image enhancement can be generally classified
into two types: decompose-and-enhance and illumination estimation-centric. The
former is usually less efficient, and the latter is constrained by the strong
assumption that the image reflectance is the desired enhancement result. To
alleviate this constraint while retaining high efficiency, we propose a novel
trainable module that diversifies the conversion from the low-light image and
illumination map to the enhanced image. It formulates image enhancement as a
comparametric equation parameterized by a camera response function and an
exposure compensation ratio. By incorporating this module in an illumination
estimation-centric DNN, our method improves the flexibility of deep image
enhancement, limits the computational burden to illumination estimation, and
allows for fully unsupervised learning adaptable to the diverse demands of
different tasks.
Comment: Published in ICASSP 2023. For GitHub code, see
https://github.com/nttcslab/con
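The comparametric formulation can be illustrated with a simple gamma camera response function: invert the CRF to recover scene radiance, scale by a per-pixel exposure compensation ratio, and re-apply the CRF. The gamma CRF and the 1/L ratio are illustrative assumptions; the paper parameterizes and learns both:

```python
import numpy as np

def comparametric_enhance(low, illum, gamma=2.2, eps=1e-6):
    """Comparametric enhancement sketch: map the low-light image
    through the inverse CRF, apply an exposure compensation ratio
    derived from the illumination map (here simply 1/L), and map
    back through the CRF. Inputs are in [0, 1]."""
    q = np.clip(low, eps, 1.0) ** gamma        # inverse CRF: image -> radiance
    k = 1.0 / np.clip(illum, eps, 1.0)         # exposure compensation ratio
    return np.clip((k * q) ** (1.0 / gamma), 0.0, 1.0)
```

For a uniformly dim input with a matching dim illumination map, the output is brightened while remaining in the valid range.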