136 research outputs found
A Gated Cross-domain Collaborative Network for Underwater Object Detection
Underwater object detection (UOD) plays a significant role in aquaculture and
marine environmental protection. Considering the challenges posed by low
contrast and low-light conditions in underwater environments, several
underwater image enhancement (UIE) methods have been proposed to improve the
quality of underwater images. However, using only the enhanced images does not
improve UOD performance, since enhancement may unavoidably remove or alter
critical patterns and details of underwater objects. In contrast, we believe
that exploring the complementary information from the two domains is beneficial
for UOD. The raw image preserves the natural characteristics of the scene and
texture information of the objects, while the enhanced image improves the
visibility of underwater objects. Based on this perspective, we propose a Gated
Cross-domain Collaborative Network (GCC-Net) to address the challenges of poor
visibility and low contrast in underwater environments, which comprises three
dedicated components. Firstly, a real-time UIE method is employed to generate
enhanced images, which can improve the visibility of objects in low-contrast
areas. Secondly, a cross-domain feature interaction module is introduced to
facilitate interaction between raw and enhanced image features and mine their
complementary information. Thirdly, to prevent contamination from unreliable
generated results, a gated feature fusion module is proposed to adaptively
control the fusion ratio of cross-domain information. Our method presents a new
UOD paradigm from the perspective of cross-domain information interaction and
fusion. Experimental results demonstrate that the proposed GCC-Net achieves
state-of-the-art performance on four underwater datasets.
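The gated feature fusion idea above can be illustrated with a minimal sketch. This is not the authors' implementation: the learned gating convolution is replaced here by a single linear map (`w`, `b` are hypothetical stand-ins), but the core mechanism — a sigmoid gate computed from both domains that sets a per-element fusion ratio — is the same.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(raw_feat, enh_feat, w, b):
    """Adaptively fuse raw-domain and enhanced-domain features.

    A gate computed from both domains decides, per element, how much of
    each to keep, so unreliable enhancement results can be suppressed.
    `w` and `b` stand in for a small learned conv layer.
    """
    # The gate looks at both domains jointly.
    joint = np.concatenate([raw_feat, enh_feat], axis=-1)
    gate = sigmoid(joint @ w + b)  # values in (0, 1)
    # Convex combination: gate weights the raw features,
    # (1 - gate) weights the enhanced features.
    return gate * raw_feat + (1.0 - gate) * enh_feat
```

Because the gate lies in (0, 1), every fused value is a convex combination of the two domains, so a corrupted enhanced feature can be almost entirely gated out rather than contaminating the fusion.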
Learnable Differencing Center for Nighttime Depth Perception
Depth completion is the task of recovering dense depth maps from sparse ones,
usually with the help of color images. Existing image-guided methods perform
well on daytime depth perception self-driving benchmarks, but struggle in
nighttime scenarios with poor visibility and complex illumination. To address
these challenges, we propose a simple yet effective framework called LDCNet.
Our key idea is to use Recurrent Inter-Convolution Differencing (RICD) and
Illumination-Affinitive Intra-Convolution Differencing (IAICD) to enhance the
nighttime color images and reduce the negative effects of the varying
illumination, respectively. RICD explicitly estimates global illumination by
differencing two convolutions with different kernels, treating the
small-kernel-convolution feature as the center of the large-kernel-convolution
feature in a new perspective. IAICD softly alleviates local relative light
intensity by differencing a single convolution, where the center is dynamically
aggregated based on neighboring pixels and the estimated illumination map in
RICD. On both nighttime depth completion and depth estimation tasks, extensive
experiments demonstrate the effectiveness of our LDCNet, reaching the state of
the art.
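The differencing idea behind RICD can be sketched in a few lines. This is an assumption-laden illustration, not the paper's method: the learned convolution kernels are replaced by fixed uniform (box) kernels, and the recurrence is omitted. What it shows is the stated principle that subtracting a small-kernel response (a local "center") from a large-kernel response isolates slowly varying, illumination-like structure.

```python
import numpy as np

def box_filter(img, k):
    """Mean filter with a k x k uniform kernel (zero padding)."""
    pad = k // 2
    padded = np.pad(img, pad)
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def convolution_differencing(img, small_k=3, large_k=7):
    """Difference of a large-kernel and a small-kernel response.

    The small-kernel feature acts as the 'center' of the large-kernel
    feature; their difference captures wide-context (illumination-like)
    variation relative to local detail. Box kernels here stand in for
    the learned kernels used in the actual network.
    """
    small = box_filter(img, small_k)  # local center
    large = box_filter(img, large_k)  # wider context
    return large - small
```

On a region of uniform intensity, both responses agree and the difference vanishes; where illumination changes over a wider scale than the small kernel, the residual becomes nonzero, which is the signal the differencing is meant to expose.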
Mapping and Deep Analysis of Image Dehazing: Coherent Taxonomy, Datasets, Open Challenges, Motivations, and Recommendations
Our study aims to review and analyze the most relevant studies in the image dehazing field. Several aspects are covered to provide a broad understanding of the surveyed literature: the datasets used, the challenges researchers have faced, their motivations, and recommendations for diminishing the obstacles reported in the literature. A systematic protocol is employed to search all relevant articles on image dehazing, with variations in keywords, in addition to searching for evaluation and benchmark studies. The search covers three online databases, namely, IEEE Xplore, Web of Science (WOS), and ScienceDirect (SD), from 2008 to 2021; these indices are selected because they are sufficient in terms of coverage. After applying the inclusion and exclusion criteria, we include 152 articles in the final set. A total of 55 of the 152 articles focused on studies that conducted image dehazing, and 13 of the 152 covered review papers based on scenarios and general overviews. Most of the included articles (84/152) centered on the development of image dehazing algorithms for real-time scenarios. Image dehazing removes unwanted visual effects and is often considered an image enhancement technique, which requires a fully automated algorithm that works in real-time outdoor applications, a reliable evaluation method, and datasets covering different weather conditions. Many relevant studies have been conducted to meet these critical requirements. We also conducted an objective image quality assessment comparison of various image dehazing algorithms. In conclusion, unlike other review papers, our study distinctly reflects different observations on image dehazing areas.
We believe that the results of this study can serve as a useful guideline for practitioners who are looking for a comprehensive view on image dehazing.