159 research outputs found
An Efficient Edge Detection Technique for Hazy Images using DCP
Images of outdoor scenes are typically degraded by turbid media in the atmosphere, such as haze, fog, and smoke. The absorption and scattering of light in such conditions reduce image quality: degraded images lose contrast and exhibit color distortion relative to the original. Edge detection is a further challenge on such degraded images. Several research efforts aim to reduce the haze present in an image; however, although haze removal techniques do reduce haze, their results often sacrifice the natural appearance of the original image as a penalty. We propose an effective way of finding edges in hazy images. First, the dark channel prior method is used to eliminate unwanted haze from the original image. Statistics show that this method works effectively for images taken in outdoor hazy environments; the key observation is that, within a local patch, at least one color channel contains pixels of very low intensity. The results show that this method compares favorably with other contrast improvement techniques. Second, we apply the Sobel edge detection operator to find the edges of the resulting image.
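The two-stage pipeline sketched in this abstract (dark channel prior dehazing, then Sobel edge detection) can be illustrated roughly as follows. This is a minimal NumPy sketch of the general technique, not the authors' implementation; the patch size, `omega`, and `t0` defaults are common assumptions from the dehazing literature.

```python
import numpy as np

def dark_channel(img, patch=7):
    """Per-pixel minimum over color channels, then minimum over a local patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(padded, (patch, patch))
    return win.min(axis=(2, 3))

def dehaze(img, patch=7, omega=0.95, t0=0.1):
    """Minimal dark-channel-prior dehazing (He et al. style sketch)."""
    dc = dark_channel(img, patch)
    # Atmospheric light: mean color of the brightest ~0.1% dark-channel pixels
    n = max(1, dc.size // 1000)
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate, then scene radiance recovery
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)

def sobel_edges(gray):
    """Gradient magnitude via Sobel kernels (edge-padded cross-correlation)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(gray, 1, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    gx = (win * kx).sum(axis=(2, 3))
    gy = (win * ky).sum(axis=(2, 3))
    return np.hypot(gx, gy)
```

Usage: `edges = sobel_edges(dehaze(hazy_rgb).mean(axis=2))`, where `hazy_rgb` is a float image in [0, 1].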
Image dehazing based on partitioning reconstruction and entropy-based alternating fast-weighted guided filters
A robust image dehazing algorithm based on the first-order scattering of the image degradation model is proposed. This work makes three contributions toward image dehazing: (i) a robust method for assessing the global irradiance from the most hazy-opaque regions of the imagery; (ii) recovery of more detailed scene depth information through enhancement of the transmission map using scene partitions and entropy-based alternating fast-weighted guided filters; and (iii) extraction of crucial model parameters from in-scene information. This paper briefly outlines the principle of the proposed technique and compares the dehazed results with four other dehazing algorithms on a variety of imagery. The dehazed images have been assessed through a quality figure of merit, and experiments show that the proposed algorithm effectively removes haze and achieves markedly better dehazed image quality than the other state-of-the-art dehazing methods employed in this work.
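The entropy-based alternating fast-weighted guided filters in contribution (ii) build on the basic guided filter, which is widely used to refine coarse transmission maps. A minimal NumPy sketch of the plain guided filter (after He et al.), not the paper's entropy-based variant, is shown below; the radius and `eps` defaults are illustrative assumptions.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1) x (2r+1) window at every pixel (edge-padded)."""
    pad = np.pad(x, r, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(pad, (2 * r + 1, 2 * r + 1))
    return win.mean(axis=(2, 3))

def guided_filter(I, p, r=8, eps=1e-3):
    """Edge-preserving smoothing of p, guided by I (basic guided filter).
    In dehazing, p is a coarse transmission map and I is the grayscale
    hazy image used as the guide."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mI * mp       # local covariance of I and p
    var_I = box_mean(I * I, r) - mI * mI        # local variance of I
    a = cov_Ip / (var_I + eps)                  # per-window linear coefficient
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)  # averaged local linear model
```

The filter fits a local linear model `q = a*I + b` in each window, so the output inherits edges from the guide `I` while smoothing `p` elsewhere.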
Learned Perceptual Image Enhancement
Learning a typical image enhancement pipeline involves minimization of a loss
function between enhanced and reference images. While L1 and L2 losses are
perhaps the most widely used functions for this purpose, they do not
necessarily lead to perceptually compelling results. In this paper, we show
that adding a learned no-reference image quality metric to the loss can
significantly improve enhancement operators. This metric is implemented using a
CNN (convolutional neural network) trained on a large-scale dataset labelled
with aesthetic preferences of human raters. This loss allows us to conveniently
perform back-propagation in our learning framework to simultaneously optimize
for similarity to a given ground truth reference and perceptual quality. This
perceptual loss is only used to train parameters of image processing operators,
and does not impose any extra complexity at inference time. Our experiments
demonstrate that this loss can be effective for tuning a variety of operators
such as local tone mapping and dehazing.
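The core idea of this abstract, augmenting a fidelity loss with a learned no-reference quality term, can be sketched as below. Here `quality_score` is a toy stand-in (an assumption for illustration) for the aesthetics-trained CNN described in the abstract, and the weight `w` is arbitrary.

```python
import numpy as np

def l1_loss(y, ref):
    """Fidelity term: mean absolute error against the reference image."""
    return float(np.abs(y - ref).mean())

def quality_score(y):
    """Toy stand-in for the learned no-reference quality metric.
    The real metric is a CNN trained on human aesthetic ratings; this
    proxy just rewards mid-range mean luminance and some local contrast."""
    score = 1.0 - abs(float(y.mean()) - 0.5) - max(0.0, 0.1 - float(y.std()))
    return float(np.clip(score, 0.0, 1.0))

def total_loss(y, ref, w=0.05):
    """Combined objective: fidelity plus a perceptual penalty (1 - quality).
    In the paper both terms are differentiable, so the enhancement
    operator's parameters can be tuned by back-propagation."""
    return l1_loss(y, ref) + w * (1.0 - quality_score(y))
```

Because the perceptual term only shapes training, the enhancement operator itself is unchanged at inference time, matching the abstract's claim of no extra inference cost.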
Detail Preserving Low Illumination Image and Video Enhancement Algorithm Based on Dark Channel Prior
In low-illumination situations, insufficient light reaching the monitoring device results in poor visibility of useful information, which cannot meet the needs of practical applications. To overcome these problems, a detail-preserving low-illumination video image enhancement algorithm based on the dark channel prior is proposed in this paper. First, a dark channel refinement method is proposed, in which a structure prior is imposed on the initial dark channel to improve image brightness. Second, an anisotropic guided filter (AnisGF) is used to refine the transmission, which preserves the edges of the image. Finally, a detail enhancement algorithm is proposed to avoid insufficient detail in the initially enhanced image. To avoid video flicker, subsequent video frames are enhanced based on the brightness of the first enhanced frame. Qualitative and quantitative analysis shows that the proposed algorithm is superior to the comparison algorithms, ranking first in average gradient, edge intensity, contrast, and the patch-based contrast quality index. It can be effectively applied to the enhancement of surveillance video images and to wider computer vision applications.
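The flicker-avoidance step (anchoring later frames to the brightness of the first enhanced frame) might look like the following sketch. `enhance` is any per-frame enhancement operator, and the simple global gain used here is an illustrative assumption, not the paper's exact scheme.

```python
import numpy as np

def anchor_brightness(frames, enhance):
    """Enhance the first frame, then scale later frames' enhanced output so
    their mean brightness tracks the first enhanced frame; a constant
    brightness target across frames reduces temporal flicker."""
    first = enhance(frames[0])
    target = float(first.mean())
    out = [first]
    for f in frames[1:]:
        e = enhance(f)
        gain = target / max(float(e.mean()), 1e-6)
        out.append(np.clip(e * gain, 0.0, 1.0))
    return out
```

For example, `anchor_brightness(frames, lambda f: np.clip(f ** 0.6, 0, 1))` would apply a gamma-style brightening while keeping mean brightness stable across the sequence.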
Enhancing Visibility in Nighttime Haze Images Using Guided APSF and Gradient Adaptive Convolution
Visibility in hazy nighttime scenes is frequently reduced by multiple
factors, including low light, intense glow, light scattering, and the presence
of multicolored light sources. Existing nighttime dehazing methods often
struggle with handling glow or low-light conditions, resulting in either
excessively dark visuals or unsuppressed glow outputs. In this paper, we
enhance the visibility from a single nighttime haze image by suppressing glow
and enhancing low-light regions. To handle glow effects, our framework learns
from the rendered glow pairs. Specifically, a light source aware network is
proposed to detect light sources of night images, followed by the APSF (Angular
Point Spread Function)-guided glow rendering. Our framework is then trained on
the rendered images, resulting in glow suppression. Moreover, we utilize
gradient-adaptive convolution to capture edges and textures in hazy scenes. By
leveraging extracted edges and textures, we enhance the contrast of the scene
without losing important structural details. To boost low-light intensity, our
network learns an attention map that is then adjusted by gamma correction. This
attention has high values on low-light regions and low values on haze and glow
regions. Extensive evaluation on real nighttime haze images demonstrates the
effectiveness of our method, which achieves a PSNR of 30.38 dB, outperforming
state-of-the-art methods by 13 on the GTA5 nighttime haze dataset. Our data and
code are available at \url{https://github.com/jinyeying/nighttime_dehaze}.
Comment: Accepted to ACM MM 2023
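The gamma-adjusted attention described above (high on low-light regions, low on haze and glow regions) can be sketched as follows. The luminance proxy, gamma value, and boost strength here are illustrative assumptions, not the behavior of the released code.

```python
import numpy as np

def lowlight_attention(img, gamma=2.2):
    """Attention map that is high in dark regions and low in bright
    (haze/glow) regions; gamma correction sharpens the separation."""
    lum = img.mean(axis=2)        # rough luminance proxy in [0, 1]
    return (1.0 - lum) ** gamma   # dark pixels -> near 1, bright -> near 0

def boost_lowlight(img, gamma=2.2, strength=1.5):
    """Brighten pixels in proportion to the attention map, so low-light
    regions are boosted while haze/glow regions are left mostly untouched."""
    att = lowlight_attention(img, gamma)
    return np.clip(img * (1.0 + strength * att[..., None]), 0.0, 1.0)
```

Raising `gamma` concentrates the boost on only the darkest regions, which is the role gamma correction plays for the learned attention map in the abstract.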
- …