16 research outputs found
Mapping and Deep Analysis of Image Dehazing: Coherent Taxonomy, Datasets, Open Challenges, Motivations, and Recommendations
Our study aims to review and analyze the most relevant studies in the image dehazing field. Several aspects were deemed necessary to provide a broad understanding of the surveyed literature: the datasets used, the challenges researchers have faced, motivations, and recommendations for diminishing the obstacles reported in the literature. A systematic protocol was employed to search all relevant articles on image dehazing, with variations in keywords, in addition to searching for evaluation and benchmark studies. The search was conducted on three online databases, namely IEEE Xplore, Web of Science (WOS), and ScienceDirect (SD), covering 2008 to 2021. These indices were selected because they are sufficient in terms of coverage. After applying the inclusion and exclusion criteria, 152 articles were included in the final set. A total of 55 of the 152 articles focused on various studies that conducted image dehazing, and 13 of the 152 were review papers based on scenarios and general overviews. Finally, most of the included articles (84/152) centered on the development of image dehazing algorithms for real-time scenarios. Image dehazing removes unwanted visual effects and is often considered an image enhancement technique; it requires a fully automated algorithm that works in real-time outdoor applications, a reliable evaluation method, and datasets covering different weather conditions. Many relevant studies have been conducted to meet these critical requirements. We also conducted an objective image quality assessment comparison of various image dehazing algorithms. In conclusion, unlike other review papers, our study distinctly reflects different observations on image dehazing areas.
We believe that the results of this study can serve as a useful guideline for practitioners who are looking for a comprehensive view of image dehazing.
Mutual Information-driven Triple Interaction Network for Efficient Image Dehazing
Multi-stage architectures have exhibited efficacy in image dehazing: they
usually decompose a challenging task into multiple more tractable sub-tasks
and progressively estimate latent haze-free images. Despite the remarkable
progress, existing methods still suffer from the following shortcomings: (1)
limited exploration of frequency domain information; (2) insufficient
information interaction; (3) severe feature redundancy. To remedy these issues,
we propose a novel Mutual Information-driven Triple interaction Network
(MITNet) based on spatial-frequency dual domain information and two-stage
architecture. Specifically, the first stage, named amplitude-guided haze
removal, recovers the amplitude spectrum of hazy images for haze removal,
while the second stage, named phase-guided structure refinement, learns the
transformation and refinement of the phase spectrum. To facilitate
the information exchange between two stages, an Adaptive Triple Interaction
Module (ATIM) is developed to simultaneously aggregate cross-domain,
cross-scale, and cross-stage features; the fused features are further
used to generate content-adaptive dynamic filters, which are then applied to
enhance the global context representation. In addition, we impose the mutual
information minimization constraint on paired scale encoder and decoder
features from both stages. Such an operation can effectively reduce information
redundancy and enhance cross-stage feature complementarity. Extensive
experiments on multiple public datasets show that our MITNet achieves
superior performance with lower model complexity. The code and models are
available at https://github.com/it-hao/MITNet.
Comment: Accepted in ACM MM 202
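The amplitude/phase split behind the two-stage design rests on the standard Fourier decomposition: the amplitude spectrum mainly carries intensity and illumination (which haze degrades), while the phase spectrum carries structure. The following is a minimal 1-D sketch with toy data and a naive stdlib DFT, not the paper's network; the signals and the idea of swapping in a recovered amplitude spectrum are illustrative assumptions.

```python
import cmath

def dft(signal):
    """Naive 1-D discrete Fourier transform (stdlib only)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(spectrum):
    """Inverse DFT; returns the real part (inputs here are real signals)."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

# Toy "signals": haze mainly perturbs amplitudes; structure lives in phase.
clean = [0.2, 0.9, 0.4, 0.7]
hazy = [0.5, 0.8, 0.6, 0.75]

amp_clean = [abs(c) for c in dft(clean)]          # target amplitude spectrum
phase_hazy = [cmath.phase(c) for c in dft(hazy)]  # structure of the hazy input

# Stage-1 idea: combine a recovered (here: clean) amplitude spectrum with the
# hazy phase; stage 2 would then refine the phase spectrum itself.
recombined = [a * cmath.exp(1j * p) for a, p in zip(amp_clean, phase_hazy)]
restored = idft(recombined)
```

This only illustrates why amplitude and phase can be treated as separable targets; in MITNet both spectra are estimated by learned sub-networks.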
Deep Learning for Image Restoration and Robotic Vision
The traditional model-based approach requires the formulation of a mathematical model, and such a model often has limited performance. The quality of an image may degrade for a variety of reasons: the scene may be affected by weather conditions such as haze, rain, and snow, or noise may be introduced during image processing and transmission (e.g., artifacts generated during compression). The goal of image restoration is to restore the image to a desirable quality, both subjectively and objectively. Agricultural robotics is gaining interest these days since most agricultural work is lengthy and repetitive. Computer vision is crucial to robots, especially autonomous ones. However, it is challenging to derive a precise mathematical model that describes the aforementioned problems. Compared with the traditional approach, the learning-based approach has an edge since it does not require any model to describe the problem. Moreover, the learning-based approach now delivers best-in-class performance on most vision problems, such as image dehazing, super-resolution, and image recognition.
In this dissertation, we address the problems of image restoration and robotic vision with deep learning. These two problems are highly related from a network architecture perspective: it is essential to select appropriate networks when dealing with different problems. Specifically, we solve the problems of single image dehazing, High Efficiency Video Coding (HEVC) loop filtering and super-resolution, and computer vision for an autonomous robot. Our technical contributions are threefold. First, we propose to reformulate haze as signal-dependent noise, which allows us to uncover it by learning a structural residual. Based on this novel reformulation, we solve dehazing with a recursive deep residual network and a generative adversarial network, which emphasize objective and perceptual quality, respectively. Second, we replace traditional filters in HEVC with a Convolutional Neural Network (CNN) filter. We show that our CNN filter achieves a 7% BD-rate saving compared with traditional filters such as the bilateral and deblocking filters. We also propose to incorporate a multi-scale CNN super-resolution module into HEVC; such a post-processing module improves visual quality under extremely low bandwidth. Third, a transfer learning technique is implemented to support vision and autonomous decision making of a precision pollination robot. Good experimental results are reported with real-world data.
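The "haze as signal-dependent noise" reformulation can be illustrated with the standard atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)), where J is the clean scene radiance, t the transmission, and A the airlight. The residual I − J = (1 − t)·(A − J) depends on J itself, which is what makes the noise signal-dependent. Below is a toy closed-form sketch with hand-picked values; the dissertation's networks estimate the residual rather than computing it like this.

```python
# Atmospheric scattering model: I(x) = J(x) * t(x) + A * (1 - t(x)).
def hazy_pixel(J, t, A):
    """Hazy intensity for clean radiance J, transmission t, airlight A."""
    return J * t + A * (1.0 - t)

def residual(J, t, A):
    """Structural residual I - J = (1 - t) * (A - J): depends on J,
    so haze behaves like signal-dependent noise."""
    return (1.0 - t) * (A - J)

J, t, A = 0.3, 0.6, 0.9            # toy clean intensity, transmission, airlight
I = hazy_pixel(J, t, A)
assert abs((I - J) - residual(J, t, A)) < 1e-12

# Once the residual is (here: exactly) known, subtracting it dehazes the pixel;
# in the dissertation a residual network learns this quantity from data.
J_hat = I - residual(J, t, A)
```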
Vision Sensors and Edge Detection
The Vision Sensors and Edge Detection book reflects a selection of recent developments within the area of vision sensors and edge detection. There are two sections in this book. The first section presents vision sensors with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection, and the second one covers image processing techniques such as image measurements, image transformations, filtering, and parallel computing.
Intelligent Transportation Related Complex Systems and Sensors
Building around innovative services related to different modes of transport and traffic management, intelligent transport systems (ITS) are being widely adopted worldwide to improve the efficiency and safety of the transportation system. They enable users to be better informed and to make safer, more coordinated, and smarter decisions on the use of transport networks. Current ITSs are complex systems, made up of several components/sub-systems characterized by time-dependent interactions among themselves. Some examples of these transportation-related complex systems include road traffic sensors, autonomous/automated cars, smart cities, smart sensors, virtual sensors, traffic control systems, smart roads, logistics systems, smart mobility systems, and many others that are emerging from niche areas. The efficient operation of these complex systems requires: i) efficient solutions to the issues of sensors/actuators used to capture and control the physical parameters of these systems, as well as the quality of data collected from these systems; ii) tackling complexities using simulations and analytical modelling techniques; and iii) applying optimization techniques to improve the performance of these systems. This volume includes twenty-four papers, which cover scientific concepts, frameworks, architectures, and various other ideas on analytics, trends, and applications of transportation-related data.
Sea surface wind and wave parameter estimation from X-band marine radar images with rain detection and mitigation
In this research, the application of X-band marine radar backscatter images for sea surface
wind and wave parameter estimation with rain detection and mitigation is investigated.
In the presence of rain, the rain echoes in the radar image blur the wave signatures
and negatively affect estimation accuracy. Hence, in order to improve estimation accuracy,
it is meaningful to detect the presence of those rain echoes and mitigate their influence on
estimation results. Since rain alters radar backscatter intensity distribution, features are extracted
from the normalized histogram of each radar image. Then, a support vector machine
(SVM)-based rain detection model is proposed to classify radar images as obtained
under rainless or rainy conditions. The classification accuracy shows significant improvement
compared to the existing threshold-based method. By further observing images obtained
under rainy conditions, it is found that many of them are only partially contaminated by rain
echoes. Therefore, in order to segment between rain-contaminated regions and those that
are less or unaffected by rain, two types of methods are developed based on unsupervised
learning techniques and convolutional neural network (CNN), respectively. Specifically, for
the unsupervised learning-based method, texture features are first extracted from each pixel
and then fed into a self-organizing map (SOM)-based clustering model, which is able
to conduct pixel-based identification of rain-contaminated regions. As for the CNN-based
method, a SegNet-based semantic segmentation CNN is first designed and then trained using
images with manually annotated labels. Both shipborne and shore-based marine radar
data are used to train and validate the proposed methods and high classification accuracies
of around 90% are obtained.
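The thesis builds its rain-detection features from the normalized intensity histogram of each radar image. The sketch below shows only that feature-extraction step with made-up toy intensity lists and a hypothetical bin count; the SVM training itself is omitted, and the specific feature layout is an assumption, not the thesis's exact pipeline.

```python
def normalized_histogram(intensities, bins=8, lo=0.0, hi=1.0):
    """Normalized intensity histogram of one radar image (stdlib only).
    Returns a length-`bins` feature vector that sums to 1."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in intensities:
        idx = min(int((v - lo) / width), bins - 1)  # clamp hi into last bin
        counts[idx] += 1
    n = len(intensities)
    return [c / n for c in counts]

# Toy pixel intensities: rain tends to flatten/shift the backscatter
# distribution, so a classifier (an SVM in the thesis) can separate
# the two feature vectors.
rainless = [0.1, 0.15, 0.2, 0.1, 0.9, 0.85]   # peaked, bimodal distribution
rainy = [0.3, 0.45, 0.5, 0.6, 0.55, 0.4]      # energy concentrated mid-range

f_dry = normalized_histogram(rainless)
f_wet = normalized_histogram(rainy)
```

Each image then contributes one fixed-length vector, which is the form an SVM expects as input.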
Due to the similarities between how haze affects terrestrial images and how rain affects
marine radar images, a type of CNN for image dehazing purposes, i.e., DehazeNet, is
applied to rain-contaminated regions in radar images for correcting the influence of rain,
which reduces the estimation error of wind direction significantly. Besides, after extracting
histogram and texture features from rain-corrected radar images, a support vector regression
(SVR)-based model, which achieves high estimation accuracy, is trained for wind speed
estimation. Finally, a convolutional gated recurrent unit (CGRU) network is designed and
trained for significant wave height (SWH) estimation. As an end-to-end system, the proposed
network is able to generate estimation results directly from radar image sequences
by extracting multi-scale spatial and temporal features in radar image sequences automatically.
Compared to the classic signal-to-noise ratio (SNR)-based method, the CGRU-based model
shows significant improvement in both estimation accuracy (under both rainless and rainy
conditions) and computational efficiency.
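A convolutional gated recurrent unit combines GRU-style temporal gating with convolutional feature extraction over the image sequence. The scalar sketch below shows only the GRU recurrence with hand-set toy weights; the actual CGRU replaces each scalar product with a convolution over radar image features and learns its parameters from data.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h, x, w):
    """One scalar GRU update. `w` holds toy, hand-set weights; in a CGRU
    these scalar products become convolutions over feature maps."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)            # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)            # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde                # gated interpolation

weights = {"wz": 0.5, "uz": 0.3, "wr": 0.4, "ur": 0.2, "wh": 0.9, "uh": 0.7}

h = 0.0
for x in [0.2, 0.5, 0.8]:   # toy stand-in for a radar image sequence
    h = gru_step(h, x, weights)
# `h` now summarizes the temporal sequence; a regression head would map the
# accumulated state to the significant wave height (SWH) estimate.
```

The gating is what lets the recurrence accumulate multi-frame temporal context while tanh keeps the state bounded.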