Non-Homogeneous Haze Removal via Artificial Scene Prior and Bidimensional Graph Reasoning
Due to the lack of natural scene and haze prior information, it is highly challenging to completely remove haze from a single image without distorting its visual content. Fortunately, real-world haze usually presents a non-homogeneous distribution, which provides many valuable clues in partially well-preserved regions. In this paper, we propose a Non-Homogeneous Haze Removal Network (NHRN) based on an artificial scene prior and bidimensional graph reasoning. First, we apply gamma correction iteratively to simulate multiple artificial shots under different exposure conditions; their differing haze degrees enrich the underlying scene prior. Second, beyond utilizing the local neighboring relationship, we build a bidimensional graph reasoning module that conducts non-local filtering in the spatial and channel dimensions of feature maps, modeling their long-range dependencies and propagating the natural scene prior from well-preserved nodes to nodes contaminated by haze. We evaluate our method on different benchmark datasets. The results demonstrate that our method achieves superior performance over many state-of-the-art algorithms on both single-image dehazing and hazy image understanding tasks.
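The multi-exposure simulation in this abstract amounts to repeatedly applying gamma correction with different exponents. A minimal NumPy sketch, assuming the input image is normalized to [0, 1]; the specific gamma values here are illustrative and not taken from the paper:

```python
import numpy as np

def gamma_exposures(img, gammas=(0.5, 1.0, 2.0)):
    """Simulate artificial shots under different exposures via gamma correction.

    img: float array with values in [0, 1].
    Returns one gamma-corrected copy per exponent; gamma < 1 brightens
    (revealing dark regions), gamma > 1 darkens (suppressing haze glow).
    """
    img = np.clip(np.asarray(img, dtype=float), 0.0, 1.0)
    return [img ** g for g in gammas]

# a single hazy pixel of intensity 0.81 under three artificial exposures
shots = gamma_exposures(np.array([0.81]))
# shots[0] -> 0.9 (brightened), shots[1] -> 0.81 (unchanged), shots[2] -> 0.6561 (darkened)
```

In the paper these differently exposed copies are stacked as extra inputs so the network can pick up scene cues that survive at some exposure level.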
A Fast-Dehazing Technique using Generative Adversarial Network model for Illumination Adjustment in Hazy Videos
Haze significantly lowers the quality of captured photos and videos. Beyond degrading image quality, this can compromise the reliability of monitoring equipment and even be dangerous. Recent years have seen an increase in problems caused by foggy conditions, necessitating the development of real-time dehazing techniques. Intelligent vision systems, such as surveillance and monitoring systems, rely fundamentally on the quality of their input images, which has a significant impact on object detection accuracy. This paper presents a fast video dehazing technique using a Generative Adversarial Network (GAN) model. The haze in the input video is estimated from scene depth extracted with a pre-trained monocular depth ResNet model. Based on the amount of haze, an appropriate model is selected that has been trained for that specific haze condition. The novelty of the proposed work is that the generator model is kept simple to produce faster results in real time, while the discriminator is kept complex to make the generator more effective. The traditional loss function is replaced with a Visual Geometry Group (VGG) feature loss for better dehazing. The proposed model produced better results than existing models: the Peak Signal-to-Noise Ratio (PSNR) obtained for most frames is above 32, and the execution time is under 60 milliseconds, which makes the proposed model well suited for video dehazing.
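The VGG feature loss mentioned above compares images in a learned feature space rather than pixel by pixel. The toy NumPy sketch below substitutes a single fixed edge kernel for the pretrained VGG network (an assumption for illustration only; the paper uses actual VGG feature maps), but it shows the core idea: the loss is a mean squared error between feature responses, so it penalizes structural differences more than raw intensity differences.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive valid 2-D correlation; a stand-in for one VGG conv layer."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def feature_loss(pred, target, kernel):
    """Mean squared error between feature maps instead of raw pixels."""
    diff = conv2d_valid(pred, kernel) - conv2d_valid(target, kernel)
    return float(np.mean(diff ** 2))

# a horizontal-gradient kernel acts as the toy feature extractor
edge = np.array([[1.0, -1.0]])
img = np.arange(16.0).reshape(4, 4)
# a uniform brightness shift changes every pixel but leaves edge features
# identical, so the feature loss is zero while a pixel loss would not be
shifted_loss = feature_loss(img, img + 1.0, edge)
```

This insensitivity to uniform intensity shifts is one reason perceptual losses tend to produce sharper, more structure-faithful dehazed frames than plain L1/L2 pixel losses.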