Self-Reference Deep Adaptive Curve Estimation for Low-Light Image Enhancement
In this paper, we propose a two-stage low-light image enhancement method called
Self-Reference Deep Adaptive Curve Estimation (Self-DACE). In the first stage,
we present an intuitive, lightweight, fast, and unsupervised luminance
enhancement algorithm. The algorithm is based on a novel low-light enhancement
curve that can be used to locally boost image brightness. We also propose a new
loss function with a simplified physical model designed to preserve natural
images' color, structure, and fidelity. We use a vanilla CNN to map each pixel
through deep Adaptive Adjustment Curves (AAC) while preserving the local image
structure. In the second stage, we introduce a corresponding denoising scheme
to remove the latent noise in the darkness. We approximately model the noise in
the dark and deploy a Denoising-Net to estimate and remove the noise after the
first stage. Exhaustive qualitative and quantitative analysis shows that our
method outperforms existing state-of-the-art algorithms on multiple real-world
datasets.
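The abstract does not spell out the form of the Adaptive Adjustment Curves; as a rough illustration of how this family of curve-based enhancement works, here is a sketch assuming a quadratic curve x + αx(1−x) applied iteratively with per-pixel α maps (the function names and the fixed α values are hypothetical, not taken from the paper; in Self-DACE the α maps would be predicted by the CNN):

```python
import numpy as np

def adjustment_curve(x, alpha):
    """One application of a quadratic adjustment curve.

    Maps pixel values x in [0, 1] to x + alpha * x * (1 - x), which
    boosts brightness for alpha > 0 while keeping the endpoints
    0 and 1 fixed.
    """
    return x + alpha * x * (1.0 - x)

def enhance(image, alpha_maps):
    """Apply the curve iteratively with per-pixel alpha maps.

    image      : float array in [0, 1], shape (H, W)
    alpha_maps : list of float arrays in [-1, 1], shape (H, W),
                 one per iteration (here supplied by hand for
                 illustration).
    """
    out = image
    for alpha in alpha_maps:
        out = adjustment_curve(out, alpha)
    return np.clip(out, 0.0, 1.0)
```

Because α varies per pixel, dark regions can be boosted strongly while already-bright regions are left nearly unchanged, which is what makes the adjustment local.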
Self-Aligned Concave Curve: Illumination Enhancement for Unsupervised Adaptation
Low light conditions not only degrade human visual experience, but also
reduce the performance of downstream machine analytics. Although many works
have been designed for low-light enhancement or domain adaptive machine
analytics, the former pays little attention to high-level vision, while the
latter neglects the potential of image-level signal adjustment. How to restore
underexposed images/videos from the perspective of machine vision has long been
overlooked. In this paper, we are the first to propose a learnable illumination
enhancement model for high-level vision. Inspired by real camera response
functions, we assume that the illumination enhancement function should be a
concave curve, and propose to enforce this concavity through a discrete integral.
With the intention of adapting illumination from the perspective of machine
vision without task-specific annotated data, we design an asymmetric
cross-domain self-supervised training strategy. Our model architecture and
training designs mutually benefit each other, forming a powerful unsupervised
normal-to-low light adaptation framework. Comprehensive experiments demonstrate
that our method surpasses existing low-light enhancement and adaptation methods
and shows superior generalization on various low-light vision tasks, including
classification, detection, action recognition, and optical flow estimation.
Project website: https://daooshee.github.io/SACC-Website/
Comment: This paper has been accepted by ACM Multimedia 202
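The abstract states that concavity is enforced through a discrete integral but does not give the exact parameterization. The sketch below illustrates one way such a construction can work: integrate non-negative increments sorted in decreasing order, so the cumulative sum is monotone with non-increasing steps, i.e. a discrete concave curve (all names and details here are assumptions, not the paper's code):

```python
import numpy as np

def concave_curve(slopes):
    """Build a concave, monotone curve on [0, 1] by discrete integration.

    slopes : unnormalized scores, shape (n_bins,). They are mapped to
             non-negative increments sorted in decreasing order; the
             cumulative sum of such increments is increasing with
             non-increasing steps, hence discretely concave.
    """
    inc = np.sort(np.abs(slopes))[::-1]          # non-negative, non-increasing
    curve = np.concatenate([[0.0], np.cumsum(inc)])
    return curve / curve[-1]                     # normalize to [0, 1]

def apply_curve(image, curve):
    """Look up each pixel (in [0, 1]) in the discrete curve."""
    idx = np.clip((image * (len(curve) - 1)).astype(int), 0, len(curve) - 1)
    return curve[idx]
```

Concavity matters here because real camera response functions are concave: a concave mapping brightens shadows more than highlights without clipping.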
Weak lensing analysis of MS 1008-1224 with the VLT
We present a gravitational lensing analysis of the cluster of galaxies MS
1008-1224 (z=0.30), based on very deep observations obtained using the VLT with
FORS and ISAAC during the science verification phase. We reconstructed the
projected mass distribution from B,V,R,I bands using two different methods
independently. The mass maps are remarkably similar, which confirms that the PSF
correction worked well. The ISAAC and FORS data were combined to measure the
photometric redshifts and constrain the redshift distribution of the lensed
sources. The total mass inferred from the weak shear is 2.3 x 10^{14} h^{-1}
M_sun on large scales, in agreement with the X-ray mass. The measured mass
profile is well fit by both a Navarro-Frenk-White (NFW) model and an isothermal
sphere with a core radius, although the NFW fit is slightly better. In the
inner regions, the
lensing mass is about 2 times higher than the X-ray mass, which supports the
view that complex physical processes in the innermost parts of clusters are
responsible for the X-ray/lensing mass discrepancy. The central part of the
cluster is composed of two mass peaks whose center of mass is 15 arcseconds
north of the cD galaxy. This provides an explanation for the 15 arcsecond
offset between the cD and the center of the X-ray map reported elsewhere. The
optical, X-ray and the mass distributions show that MS 1008-1224 is composed of
many subsystems which are probably undergoing a merger. MS 1008-1224 shows a
remarkable case of cluster-cluster lensing. The photometric redshifts show an
excess of galaxies located 30 arcseconds south-west of the cD galaxy at a
redshift of about 0.9 which is lensed by MS 1008-1224. These results show the
importance of obtaining BVRIJK images simultaneously. The VLT is a unique tool
to provide such datasets.
Comment: 22 pages, submitted to A&A, paper with `big' figures available at
ftp://ftp.cita.utoronto.ca/pub/waerbeke/ms1008paper
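For reference, the two mass models compared in this abstract have standard density-profile forms: the NFW profile has a cuspy r^{-1} inner slope, while the cored isothermal sphere flattens at the center. A minimal sketch (parameter values are illustrative, not fitted to MS 1008-1224):

```python
import numpy as np

def rho_nfw(r, rho_s, r_s):
    """Navarro-Frenk-White density profile:
    rho(r) = rho_s / ((r/r_s) * (1 + r/r_s)^2).
    Diverges as r^-1 toward the center, falls as r^-3 at large radii."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def rho_isothermal_core(r, rho_0, r_c):
    """Isothermal sphere with core radius r_c:
    rho(r) = rho_0 / (1 + (r/r_c)^2).
    Flat inside the core, falls as r^-2 at large radii."""
    return rho_0 / (1.0 + (r / r_c) ** 2)
```

The different inner slopes are why lensing data in the central regions can discriminate between the two models.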
Enlighten-anything:When Segment Anything Model Meets Low-light Image Enhancement
Image restoration is a low-level visual task, and most CNN methods are
designed as black boxes, lacking transparency and intrinsic aesthetics. Many
unsupervised approaches ignore the degradation of visible information in
low-light scenes, which seriously affects the aggregation of complementary
information and prevents fusion algorithms from producing satisfactory results
under extreme conditions. In this paper, we propose
Enlighten-anything, which is able to enhance and fuse the semantic intent of
SAM segmentation with low-light images to obtain fused images with good visual
perception. The generalization ability of unsupervised learning is greatly
improved, and experiments on the LOL dataset show that our method improves
PSNR by 3 dB over the baseline and SSIM by 8%. The zero-shot learning of SAM
introduces a powerful aid for unsupervised low-light enhancement. The source
code of Enlighten-anything can be obtained from
https://github.com/zhangbaijin/enlighten-anythin
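The reported gain is measured in PSNR (dB). For context, here is the standard PSNR definition (the helper name is ours, not from the paper); note that a 3 dB gain corresponds to halving the mean squared error:

```python
import numpy as np

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM, the other metric quoted, is a structural similarity index in [0, 1]; ready-made implementations of both exist in `skimage.metrics`.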
KinD-LCE Curve Estimation And Retinex Fusion On Low-Light Image
Low-light images often suffer from noise and color distortion. Object
detection, semantic segmentation, instance segmentation, and other tasks are
challenging when working with low-light images because of image noise and
chromatic aberration. We also found that the conventional Retinex theory loses
information in adjusting the image for low-light tasks. In response to the
aforementioned problem, this paper proposes an algorithm for low illumination
enhancement. The proposed method, KinD-LCE, uses a light curve estimation
module to enhance the illumination map in the Retinex decomposed image,
improving the overall image brightness. An illumination map and reflection map
fusion module was also proposed to restore the image details and reduce detail
loss. Additionally, a total variation (TV) loss function was applied to
eliminate noise. Our method was trained on the GladNet dataset, known for its
diverse collection of low-light images, tested against the Low-Light dataset,
and evaluated using the ExDark dataset for downstream tasks, demonstrating
competitive performance with a PSNR of 19.7216 and an SSIM of 0.8213.
Comment: Accepted by Signal, Image and Video Processing
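The TV loss mentioned above is a standard smoothness regularizer. A minimal sketch of the common anisotropic form (this is the textbook definition, not necessarily the paper's exact variant):

```python
import numpy as np

def tv_loss(image):
    """Anisotropic total-variation loss: mean absolute difference between
    horizontally and vertically adjacent pixels. Penalizing it suppresses
    high-frequency noise while leaving large, sparse edges relatively cheap."""
    dh = np.abs(image[:, 1:] - image[:, :-1])  # horizontal neighbors
    dv = np.abs(image[1:, :] - image[:-1, :])  # vertical neighbors
    return dh.mean() + dv.mean()
```

A constant image has zero TV loss, while pixel-level noise such as a checkerboard pattern is maximally penalized, which is why the loss helps eliminate noise amplified by low-light enhancement.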