Iterative Prompt Learning for Unsupervised Backlit Image Enhancement
We propose a novel unsupervised backlit image enhancement method, abbreviated
as CLIP-LIT, by exploring the potential of Contrastive Language-Image
Pre-Training (CLIP) for pixel-level image enhancement. We show that the
open-world CLIP prior not only aids in distinguishing between backlit and
well-lit images, but also in perceiving heterogeneous regions with different
luminance, facilitating the optimization of the enhancement network. Unlike
high-level and image manipulation tasks, directly applying CLIP to enhancement
tasks is non-trivial, owing to the difficulty in finding accurate prompts. To
solve this issue, we devise a prompt learning framework that first learns an
initial prompt pair by constraining the text-image similarity between the
prompt (negative/positive sample) and the corresponding image (backlit
image/well-lit image) in the CLIP latent space. Then, we train the enhancement
network based on the text-image similarity between the enhanced result and the
initial prompt pair. To further improve the accuracy of the initial prompt
pair, we iteratively fine-tune the prompt learning framework to reduce the
distribution gaps between the backlit images, enhanced results, and well-lit
images via rank learning, boosting the enhancement performance. Our method
alternates between updating the prompt learning framework and enhancement
network until visually pleasing results are achieved. Extensive experiments
demonstrate that our method outperforms state-of-the-art methods in terms of
visual quality and generalization ability, without requiring any paired data.Comment: Accepted to ICCV 2023 as Oral. Project page:
https://zhexinliang.github.io/CLIP_LIT_page
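The heart of the method is a pair of losses defined by text-image similarity in CLIP space. The following is a minimal PyTorch sketch of those two losses under simplifying assumptions of our own: the prompt pair is modeled directly as two learnable vectors in CLIP's joint embedding space (the paper instead learns prompt token embeddings fed through CLIP's text encoder), `clip_model` is any frozen encoder exposing `encode_image` (e.g. the openai/CLIP package), and the iterative rank-learning refinement is omitted.

```python
# Hedged sketch of CLIP-LIT-style prompt learning; names below are ours.
import torch
import torch.nn.functional as F

class PromptPair(torch.nn.Module):
    """Learnable negative/positive prompt embeddings in CLIP space
    (a simplification: the paper learns prompt token embeddings)."""
    def __init__(self, dim: int = 512, temperature: float = 0.07):
        super().__init__()
        self.prompts = torch.nn.Parameter(torch.randn(2, dim))  # [negative, positive]
        self.temperature = temperature

    def logits(self, image_features: torch.Tensor) -> torch.Tensor:
        img = F.normalize(image_features, dim=-1)
        txt = F.normalize(self.prompts, dim=-1)
        return img @ txt.t() / self.temperature  # (batch, 2) scaled cosine similarities

def prompt_init_loss(clip_model, prompt_pair, backlit, well_lit):
    """Step 1: pull backlit images toward the negative prompt and well-lit
    images toward the positive prompt (binary classification in CLIP space)."""
    with torch.no_grad():  # the CLIP image encoder stays frozen
        feats = clip_model.encode_image(torch.cat([backlit, well_lit])).float()
    logits = prompt_pair.logits(feats)
    labels = torch.cat([
        torch.zeros(backlit.size(0), dtype=torch.long),   # 0 -> negative prompt
        torch.ones(well_lit.size(0), dtype=torch.long),   # 1 -> positive prompt
    ]).to(logits.device)
    return F.cross_entropy(logits, labels)

def enhancement_loss(clip_model, prompt_pair, enhanced):
    """Step 2: train the enhancement network so its output moves toward the
    positive (well-lit) prompt; gradients flow through encode_image."""
    feats = clip_model.encode_image(enhanced).float()
    logits = prompt_pair.logits(feats)
    target = torch.ones(enhanced.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, target)
```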
Division Gets Better: Learning Brightness-Aware and Detail-Sensitive Representations for Low-Light Image Enhancement
Low-light image enhancement strives to improve contrast, adjust visibility,
and correct distortions in color and texture. Existing methods
usually pay more attention to improving the visibility and contrast via
increasing the lightness of low-light images, while disregarding the
significance of color and texture restoration for high-quality images. To
address this issue, we propose a novel luminance and chrominance dual-branch
network, termed LCDBNet, for low-light image enhancement, which divides the
task into two sub-tasks, i.e., luminance adjustment and chrominance
restoration. Specifically, LCDBNet is composed of two branches, namely
luminance adjustment network (LAN) and chrominance restoration network (CRN).
LAN is responsible for learning brightness-aware features by leveraging
long-range dependency and local attention correlation, while CRN concentrates
on learning detail-sensitive features via multi-level wavelet decomposition.
Finally, a fusion network is designed to blend their learned features to
produce visually impressive images. Extensive experiments conducted on seven
benchmark datasets validate the effectiveness of our proposed LCDBNet, and the
results show that LCDBNet achieves superior performance on multiple
reference and no-reference quality metrics compared to other
state-of-the-art competitors. Our code and pretrained model will be available.
Comment: 14 pages, 16 figures
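For illustration only, a minimal two-branch sketch in the spirit of the described architecture follows. It assumes a YCbCr-style split into luminance and chrominance, and the paper's long-range attention, local attention correlation, and multi-level wavelet decomposition are all reduced to plain convolutions here, so it shows the branch/fusion structure rather than the actual LCDBNet blocks.

```python
# Hedged structural sketch, not the paper's implementation.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class LCDBNetSketch(nn.Module):
    def __init__(self, width: int = 32):
        super().__init__()
        # LAN stand-in: brightness-aware features from the luminance channel
        self.lan = nn.Sequential(conv_block(1, width), conv_block(width, width))
        # CRN stand-in: detail-sensitive features from the chrominance channels
        self.crn = nn.Sequential(conv_block(2, width), conv_block(width, width))
        # Fusion network: blend branch features into an enhanced RGB image
        self.fuse = nn.Sequential(conv_block(2 * width, width), nn.Conv2d(width, 3, 1))

    def forward(self, ycbcr: torch.Tensor) -> torch.Tensor:
        y, cbcr = ycbcr[:, :1], ycbcr[:, 1:]       # split luminance / chrominance
        feats = torch.cat([self.lan(y), self.crn(cbcr)], dim=1)
        return torch.sigmoid(self.fuse(feats))     # enhanced image in [0, 1]

# Usage: out = LCDBNetSketch()(torch.rand(1, 3, 64, 64))
```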
DiffLLE: Diffusion-guided Domain Calibration for Unsupervised Low-light Image Enhancement
Existing unsupervised low-light image enhancement methods lack sufficient
effectiveness and generalization in practical applications. We suppose this is
because of the absence of explicit supervision and the inherent gap between
real-world scenarios and the training data domain. In this paper, we develop
Diffusion-based domain calibration to realize more robust and effective
unsupervised Low-Light Enhancement, called DiffLLE. Since the diffusion model
exhibits impressive denoising capability and has been trained on massive clean
images, we adopt it to bridge the gap between the real low-light domain and
training degradation domain, while providing efficient priors of real-world
content for unsupervised models. Specifically, we adopt a naive unsupervised
enhancement algorithm to realize preliminary restoration and design two
zero-shot plug-and-play modules based on diffusion model to improve
generalization and effectiveness. The Diffusion-guided Degradation Calibration
(DDC) module narrows the gap between real-world and training low-light
degradation through diffusion-based domain calibration and a lightness
enhancement curve, which makes the enhancement model perform robustly even in
complex in-the-wild degradations. Due to the limited enhancement effect of the
unsupervised model, we further develop the Fine-grained Target domain
Distillation (FTD) module to find a more visually friendly solution space. It
exploits the priors of the pre-trained diffusion model to generate
pseudo-references, which shrinks the preliminary restored results from a coarse
normal-light domain to a finer, high-quality clean one, addressing the lack of
strong explicit supervision for unsupervised methods. Benefiting from these,
our approach even outperforms some supervised methods by using only a simple
unsupervised baseline. Extensive experiments demonstrate the superior
effectiveness of the proposed DiffLLE.
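A minimal sketch of the FTD idea, pseudo-references distilled from a pretrained diffusion model, is given below. The callables `enhancer` and `diffusion_refine` are hypothetical stand-ins for the naive unsupervised enhancer and an SDEdit-style add-noise-then-denoise pass with a pretrained diffusion model; the L1 distillation loss is our simplification, not necessarily the paper's objective.

```python
# Hedged sketch of pseudo-reference distillation; callables are placeholders.
import torch
import torch.nn.functional as F

@torch.no_grad()
def make_pseudo_reference(diffusion_refine, coarse: torch.Tensor) -> torch.Tensor:
    """Project a coarsely enhanced image into the diffusion model's clean-image
    domain; the result serves as explicit supervision for the enhancer."""
    return diffusion_refine(coarse)

def ftd_loss(enhancer, diffusion_refine, low_light: torch.Tensor) -> torch.Tensor:
    coarse = enhancer(low_light)                       # preliminary restoration
    target = make_pseudo_reference(diffusion_refine, coarse)
    return F.l1_loss(coarse, target)                   # pull toward the finer domain
```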
Subspecific Designation of the U.S.A. Interior Highlands Population of Argynnis (Speyeria) diana (Cramer, 1777) (Nymphalidae: Heliconiinae: Argynnini: Argynnina)
A subspecific designation is proposed for the North American Interior Highlands population of Argynnis diana, based on four factors: mtDNA haplotype differences from nominotypical A. diana of the Appalachian Mountains; wing shape differences in males between the two regions; adult wing size; and the tendency for Interior Highlands females to show tan coloration in the submarginal row of rectangular spots in the subapical region of the dorsal forewings.
Digital Cinemas
Digital technologies have altered the terms of all the major areas of film: finance, production and post-production, distribution and consumption. This chapter looks at these through the lens of debates over historical continuities and discontinuities in each area, looking especially at the variety of production and dissemination formats from blockbusters to zero-budget film-making, the impact of technical standardisation, the affordances of different digital technologies, notably bitmap and vector, and the role of platforms like YouTube and their alternatives. The possibilities and challenges of 'world cinema' as a concept in this terrain will be the central theme.
ALL-E: Aesthetics-guided Low-light Image Enhancement
Evaluating the performance of low-light image enhancement (LLE) is highly
subjective, thus making integrating human preferences into image enhancement a
necessity. Existing methods fail to consider this and instead rely on a series
of potentially valid heuristic criteria for training enhancement models. In this
paper, we propose a new paradigm, i.e., aesthetics-guided low-light image
enhancement (ALL-E), which introduces aesthetic preferences to LLE and
motivates training in a reinforcement learning framework with an aesthetic
reward. Each pixel, functioning as an agent, refines itself by recursive
actions, i.e., its corresponding adjustment curve is estimated sequentially.
Extensive experiments show that integrating aesthetic assessment improves both
subjective experience and objective evaluation. Our results on various
benchmarks demonstrate the superiority of ALL-E over state-of-the-art methods.
Source code and models are available on the project page.
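A minimal sketch of the per-pixel recursive curve adjustment reads as follows. It assumes a Zero-DCE-style quadratic curve LE(x) = x + a * x * (1 - x) applied for a fixed number of steps, with one predicted parameter map per step; the aesthetic reward model and the reinforcement-learning update are out of scope here.

```python
# Hedged sketch of recursive curve-based enhancement; the curve form is an
# assumption borrowed from Zero-DCE, not necessarily ALL-E's exact action.
import torch
import torch.nn as nn

class CurveAgent(nn.Module):
    """Predicts per-pixel curve parameters and applies them recursively,
    mirroring the 'each pixel is an agent taking sequential actions' view."""
    def __init__(self, steps: int = 8):
        super().__init__()
        self.steps = steps
        self.policy = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3 * steps, 3, padding=1), nn.Tanh(),  # a in [-1, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.policy(x)                                  # (B, 3*steps, H, W)
        for i in range(self.steps):
            ai = a[:, 3 * i : 3 * (i + 1)]                  # per-step curve map
            x = x + ai * x * (1.0 - x)                      # recursive adjustment
        return x.clamp(0.0, 1.0)

# Usage: enhanced = CurveAgent()(torch.rand(1, 3, 64, 64))
```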