196 research outputs found
Fully Point-wise Convolutional Neural Network for Modeling Statistical Regularities in Natural Images
Modeling statistical regularity plays an essential role in ill-posed image
processing problems. Recently, deep learning based methods have been presented
to implicitly learn statistical representation of pixel distributions in
natural images and leverage it as a constraint to facilitate subsequent tasks,
such as color constancy and image dehazing. However, existing CNN
architectures are sensitive to the variability and diversity of pixel
intensities within and between local regions, which may result in inaccurate
statistical representations. To address this problem, this paper presents a novel fully
point-wise CNN architecture for modeling statistical regularities in natural
images. Specifically, we propose to randomly shuffle the pixels in the original
images and use the shuffled image as input so that the CNN focuses on the
statistical properties. Moreover, since the pixels in the shuffled image are
independent and identically distributed, we can replace all the large
convolution kernels in the CNN with point-wise (1x1) convolution kernels while
maintaining the representation ability. Experimental results on two
applications: color constancy and image dehazing, demonstrate the superiority
of our proposed network over the existing architectures, i.e., using roughly
1/10 to 1/100 of the network parameters and computational cost while achieving
comparable performance. Comment: 9 pages, 7 figures. To appear in ACM MM 201
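The core idea of the abstract above — that spatially shuffling pixels destroys local structure but preserves the image's channel statistics, after which a 1x1 (point-wise) convolution suffices — can be sketched in a few lines. This is a minimal NumPy illustration; the array shapes, seed, and channel counts are illustrative, not the paper's implementation.

```python
import numpy as np

def shuffle_pixels(img, rng):
    """Randomly permute pixel positions (H*W) while keeping each
    pixel's channel vector intact, as in the paper's input transform."""
    h, w, c = img.shape
    flat = img.reshape(-1, c)
    return flat[rng.permutation(h * w)].reshape(h, w, c)

def pointwise_conv(feat, weight, bias):
    """A 1x1 convolution is just a per-pixel linear map over channels:
    (H, W, C_in) @ (C_in, C_out) -> (H, W, C_out)."""
    return feat @ weight + bias

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
shuffled = shuffle_pixels(img, rng)

# Per-channel statistics are invariant under the spatial shuffle,
# which is why a statistics-oriented network loses nothing here.
print(np.allclose(img.mean(axis=(0, 1)), shuffled.mean(axis=(0, 1))))  # True

weight = rng.standard_normal((3, 16))
bias = np.zeros(16)
out = pointwise_conv(shuffled, weight, bias)
print(out.shape)  # (8, 8, 16)
```

Because the shuffled pixels carry no spatial correlation, larger kernels would only average over independent samples, which is the intuition behind replacing them with 1x1 kernels.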
Rich Feature Distillation with Feature Affinity Module for Efficient Image Dehazing
Single-image haze removal is a long-standing hurdle for computer vision
applications. Several works have been focused on transferring advances from
image classification, detection, and segmentation to the niche of image
dehazing, primarily focusing on contrastive learning and knowledge
distillation. However, these approaches are computationally expensive, raising
concerns about their applicability to on-the-edge use cases. This work
introduces a simple, lightweight, and efficient framework for single-image
haze removal, exploiting rich "dark-knowledge" information from a lightweight
pre-trained super-resolution model via the notion of heterogeneous knowledge
distillation. We designed a feature affinity module to maximize the flow of
rich feature semantics from the super-resolution teacher to the student
dehazing network. To evaluate the efficacy of our proposed framework, we
examine its performance as a plug-and-play addition to a baseline model. Our
experiments are carried out on the RESIDE-Standard dataset to demonstrate the
robustness of our framework to both synthetic and real-world domains. The
extensive qualitative and quantitative results establish the effectiveness of
the framework, achieving gains of up to 15% (PSNR) while reducing the model
size by 20 times. Comment: Preprint version. Accepted at Opti
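A common way to realize the kind of feature-level knowledge transfer described above is to match pairwise affinities between spatial positions of the teacher's and student's feature maps; matching affinities rather than raw features lets networks with different channel widths be compared. The sketch below is a generic NumPy version of such an affinity loss, not the paper's exact module; the feature shapes are assumptions.

```python
import numpy as np

def affinity_matrix(feat):
    """Pairwise cosine similarity between spatial positions of a
    feature map: (C, H, W) -> (H*W, H*W) affinity matrix."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    return f.T @ f

def affinity_distillation_loss(student_feat, teacher_feat):
    """Mean squared distance between student and teacher affinity
    matrices; channel counts may differ since affinities are (H*W, H*W)."""
    a_s = affinity_matrix(student_feat)
    a_t = affinity_matrix(teacher_feat)
    return np.mean((a_s - a_t) ** 2)

rng = np.random.default_rng(0)
teacher = rng.standard_normal((64, 8, 8))  # e.g. SR teacher features
student = rng.standard_normal((32, 8, 8))  # e.g. dehazing student features
loss = affinity_distillation_loss(student, teacher)
print(loss >= 0.0)  # True

# Identical features give exactly zero loss.
print(np.isclose(affinity_distillation_loss(teacher, teacher), 0.0))  # True
```

In training, such a loss term would be added to the student's reconstruction objective, letting the frozen super-resolution teacher shape the student's intermediate representations.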
All-in-one aerial image enhancement network for forest scenes
Drone monitoring plays an irreplaceable role in forest firefighting thanks to its wide-range observation and real-time messaging. However, aerial images often suffer from various degradations before high-level visual tasks can be performed, including but not limited to smoke detection, fire classification, and regional localization. Most recent image enhancement methods target a particular type of degradation, so practical applications must keep a separate model in memory for each scenario. Moreover, such a paradigm wastes computational and storage resources on determining the type of degradation, making it difficult to meet the real-time and lightweight requirements of real-world deployments. In this paper, we propose an All-in-one Image Enhancement Network (AIENet) that can restore various degraded images within a single network. Specifically, we design a new multi-scale receptive field image enhancement block that better reconstructs high-resolution details of target regions of different sizes. As a plug-and-play module, it can be embedded in any learning-based model, giving it strong flexibility and generalization in practical applications. Taking three challenging image enhancement tasks encountered in drone monitoring as examples, we conduct task-specific and all-in-one image enhancement experiments on a synthetic forest dataset. The results show that the proposed AIENet outperforms state-of-the-art image enhancement algorithms both quantitatively and qualitatively. Furthermore, extra experiments on high-level vision detection show the promising performance of our method compared with recent baselines.
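The multi-scale receptive field idea in the abstract above — parallel branches that see the image at different spatial extents, fused for the output — can be illustrated with box filters of different kernel sizes standing in for convolution branches. This is a rough NumPy sketch of the general pattern, not AIENet's actual block; the kernel sizes and fusion by stacking are assumptions.

```python
import numpy as np

def box_filter(x, k):
    """Mean filter with odd kernel size k and 'same' edge padding,
    computed via 2-D cumulative sums; stands in for one conv branch
    with a k x k receptive field."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col so window sums index cleanly
    h, w = x.shape
    return (c[k:k + h, k:k + w] - c[:h, k:k + w]
            - c[k:k + h, :w] + c[:h, :w]) / (k * k)

def multiscale_block(x, ks=(1, 3, 5, 7)):
    """Parallel branches with increasing receptive fields, fused by
    stacking along a channel axis -- the rough shape of a multi-scale
    enhancement block."""
    return np.stack([box_filter(x, k) for k in ks], axis=0)

out = multiscale_block(np.ones((6, 6)))
print(out.shape)  # (4, 6, 6)

# A constant image is preserved by every branch (mean of a constant).
print(np.allclose(out, 1.0))  # True
```

Small-kernel branches preserve fine detail while large-kernel branches capture broader context, which is why combining them helps reconstruct target regions of different sizes.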
Mutual Information-driven Triple Interaction Network for Efficient Image Dehazing
Multi-stage architectures have exhibited efficacy in image dehazing, which
usually decomposes a challenging task into multiple more tractable sub-tasks
and progressively estimates latent hazy-free images. Despite the remarkable
progress, existing methods still suffer from the following shortcomings: (1)
limited exploration of frequency domain information; (2) insufficient
information interaction; (3) severe feature redundancy. To remedy these issues,
we propose a novel Mutual Information-driven Triple interaction Network
(MITNet) based on spatial-frequency dual domain information and two-stage
architecture. Specifically, the first stage, named amplitude-guided haze
removal, recovers the amplitude spectrum of the hazy images for haze removal,
and the second stage, named phase-guided structure refinement, learns the
transformation and refinement of the phase spectrum. To facilitate
the information exchange between two stages, an Adaptive Triple Interaction
Module (ATIM) is developed to simultaneously aggregate cross-domain,
cross-scale, and cross-stage features; the fused features are further used to
generate content-adaptive dynamic filters, which are applied to enhance the
global context representation. In addition, we impose the mutual
information minimization constraint on paired scale encoder and decoder
features from both stages. Such an operation can effectively reduce information
redundancy and enhance cross-stage feature complementarity. Extensive
experiments on multiple public datasets show that our MITNet achieves superior
performance with lower model complexity. The code and models are available at
https://github.com/it-hao/MITNet. Comment: Accepted in ACM MM 202
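The amplitude/phase split underlying the two stages above is the standard Fourier decomposition of an image: the amplitude spectrum carries intensity and haze-like global statistics, while the phase spectrum carries structure. A minimal NumPy sketch of that decomposition and the hand-off between stages (not MITNet's networks) is:

```python
import numpy as np

def amp_phase(img):
    """Split an image into amplitude and phase spectra via the 2-D FFT."""
    f = np.fft.fft2(img)
    return np.abs(f), np.angle(f)

def recombine(amplitude, phase):
    """Rebuild a spatial image from a (possibly restored) amplitude
    spectrum and a phase spectrum -- the hand-off point between an
    amplitude-restoration stage and a phase-refinement stage."""
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))

rng = np.random.default_rng(0)
hazy = rng.random((16, 16))
amp, pha = amp_phase(hazy)

# The decomposition is exact: recombining the unmodified amplitude
# and phase reconstructs the original image (up to float error).
print(np.allclose(recombine(amp, pha), hazy))  # True
```

In a two-stage dehazer of this kind, the first network would predict a clean amplitude spectrum from `amp` and the second would refine the structure encoded in `pha`, with the final image produced by `recombine`.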