Is Underwater Image Enhancement All Object Detectors Need?
Underwater object detection is a crucial and challenging problem in marine
engineering and aquatic robotics. The difficulty stems partly from the
degradation of underwater images caused by light selective absorption and
scattering. Intuitively, enhancing underwater images can benefit high-level
applications like underwater object detection. However, it is still unclear
whether all object detectors need underwater image enhancement as
pre-processing. We therefore pose the questions "Does underwater image
enhancement really improve underwater object detection?" and "How does
underwater image enhancement contribute to underwater object detection?". With
these two questions, we conduct extensive studies. Specifically, we use 18
state-of-the-art underwater image enhancement algorithms, covering traditional,
CNN-based, and GAN-based algorithms, to pre-process underwater object detection
data. Then, we retrain 7 popular deep learning-based object detectors using the
corresponding results enhanced by different algorithms, obtaining 126
underwater object detection models. Coupled with 7 object detection models
retrained using raw underwater images, we employ these 133 models to
comprehensively analyze the effect of underwater image enhancement on
underwater object detection. We expect this study can provide sufficient
exploration to answer the aforementioned questions and draw more attention of
the community to the joint problem of underwater image enhancement and
underwater object detection. The pre-trained models and results are publicly
available and will be regularly updated. Project page:
https://github.com/BIGWangYuDong/lqit/tree/main/configs/detection/uw_enhancement_affect_detection
Comment: 17 pages, 9 figures
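The model count in the study above follows from a simple grid: every enhancer is paired with every detector, plus one raw-image baseline per detector. A minimal sketch of that count, with placeholder names rather than the actual methods from the paper:

```python
# Experimental grid: 18 enhancement algorithms x 7 detectors retrained
# per enhancer, plus 7 detectors retrained on raw underwater images.
# The names are placeholders, not the methods used in the study.
enhancers = [f"enh_{i}" for i in range(18)]   # traditional, CNN-, GAN-based
detectors = [f"det_{j}" for j in range(7)]    # deep learning detectors

enhanced = [(e, d) for e in enhancers for d in detectors]
raw = [("raw", d) for d in detectors]

print(len(enhanced), len(enhanced) + len(raw))   # 126 133
```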
Joint Perceptual Learning for Enhancement and Object Detection in Underwater Scenarios
Underwater degraded images greatly challenge existing algorithms to detect
objects of interest. Recently, researchers have attempted to adopt attention
mechanisms or composite connections to improve the feature representation of
detectors. However, this solution does not eliminate the impact of
degradation on image content such as color and texture, achieving minimal
improvements. Another feasible solution for underwater object detection is to
develop sophisticated deep architectures in order to enhance image quality or
features. Nevertheless, the visually appealing output of these enhancement
modules does not necessarily yield high accuracy for deep detectors.
More recently, some multi-task learning methods jointly learn underwater
detection and image enhancement, achieving promising improvements. However,
these methods typically involve huge architectures and expensive computation,
rendering inference inefficient. Clearly, underwater object detection and
image enhancement are two interrelated tasks, and leveraging information from
one task can benefit the other. Based on these observations, we propose a
bilevel optimization formulation for jointly learning underwater object
detection and image enhancement, which we then unroll into a dual perception network
(DPNet) for the two tasks. DPNet with one shared module and two task subnets
learns from the two different tasks, seeking a shared representation. The
shared representation provides more structural details for image enhancement
and rich content information for object detection. Finally, we derive a
cooperative training strategy to optimize parameters for DPNet. Extensive
experiments on real-world and synthetic underwater datasets demonstrate that
our method produces visually pleasing images and achieves higher detection accuracy.
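The shared-representation idea above can be illustrated with a toy cooperative training loop: two stand-in task losses (for enhancement and detection) both pull on one shared parameter via averaged gradient steps. The quadratic losses and targets here are illustrative assumptions, not DPNet's actual objectives.

```python
# Toy cooperative training: one shared parameter, two task losses.
# Each loss is a stand-in quadratic (shared - target)^2; the shared
# representation settles where it serves both tasks.
shared = 0.0
t_enh, t_det = 2.0, 4.0          # toy targets for the two tasks
lr = 0.1
for _ in range(200):
    g_enh = 2 * (shared - t_enh)  # gradient of the "enhancement" loss
    g_det = 2 * (shared - t_det)  # gradient of the "detection" loss
    shared -= lr * (g_enh + g_det) / 2
print(round(shared, 3))  # -> 3.0, the cooperative optimum between the tasks
```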
Perceptual underwater image enhancement with deep learning and physical priors
Underwater image enhancement, as a pre-processing step to support the following object detection task, has drawn considerable attention in the field of underwater navigation and ocean exploration. However, most existing underwater image enhancement strategies tend to treat enhancement and detection as two fully independent modules with no interaction, and such separate optimisation does not always help the following object detection task. In this paper, we propose two perceptual enhancement models, each of which uses a deep enhancement model with a detection perceptor. The detection perceptor provides feedback information in the form of gradients to guide the enhancement model to generate patch-level visually pleasing or detection-favourable images. In addition, due to the lack of training data, a hybrid underwater image synthesis model, which fuses physical priors and data-driven cues, is proposed to synthesise training data and generalise our enhancement model to real-world underwater images. Experimental results show the superiority of our proposed method over several state-of-the-art methods on both real-world and synthetic underwater datasets.
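The perceptor-feedback idea can be sketched on a single toy parameter: the enhancement model is updated by the gradient of a downstream (stand-in) detection loss on its own output, rather than by an image-quality loss alone. Everything here, including the scalar "gain" parameter and the target value, is an illustrative assumption.

```python
# Perceptor-style feedback: an enhancement parameter is driven by the
# gradient of a stand-in detection loss on the enhanced output.
gain = 1.0                      # toy enhancement parameter
pixel, target = 0.2, 0.8        # raw intensity, detection-favourable level
lr = 0.5
for _ in range(300):
    enhanced = gain * pixel
    det_grad = 2 * (enhanced - target) * pixel  # d/d(gain) of (enhanced - target)^2
    gain -= lr * det_grad
print(round(gain * pixel, 3))  # -> 0.8: output pushed to the detection-favourable level
```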
Quality Enhancement for Underwater Images using Various Image Processing Techniques: A Survey
Underwater images are essential for identifying the activity of underwater objects, and they play a vital role in exploring and utilizing aquatic resources. Underwater images exhibit low contrast, various kinds of noise, and object imbalance due to the lack of light intensity. CNN-based deep learning approaches have improved low-resolution underwater photos over the last decade. Nevertheless, these techniques still suffer from problems such as high error rates in terms of MSE, PSNR, and SSIM. Various methods are studied that effectively handle different distorted underwater scenes and improve contrast and color deviation compared with other algorithms. In terms of the color richness of the resulting images and the execution time, the latest algorithms still show deficiencies. In future work, the structure of these algorithms can be further adjusted to shorten execution time, and optimization of color compensation under different color deviations will also be a focus of research. With the wide application of underwater vision in different scientific fields, underwater image enhancement can play an increasingly significant role in image processing for underwater research and underwater archaeology. Most target images of current algorithms are shallow-water images; when an artificial light source is added for deep-water images, the raw images face more diverse noise, and image enhancement faces greater challenges. As a result, this study investigates the numerous existing systems for quality enhancement of underwater images using various image processing techniques. We identify gaps and challenges in current systems and build on them for future improvement.
The outcome of this overview is a future problem statement that extends this research and overcomes the challenges faced by previous researchers, while also improving accuracy by reducing MSE and enhancing PSNR.
Physics-Aware Semi-Supervised Underwater Image Enhancement
Underwater images normally suffer from degradation due to the transmission
medium of water bodies. Both traditional prior-based approaches and deep
learning-based methods have been used to address this problem. However, the
inflexible assumption of the former often impairs their effectiveness in
handling diverse underwater scenes, while the generalization of the latter to
unseen images is usually weakened by insufficient data. In this study, we
leverage both the physics-based underwater Image Formation Model (IFM) and deep
learning techniques for Underwater Image Enhancement (UIE). To this end, we
propose a novel Physics-Aware Dual-Stream Underwater Image Enhancement Network,
i.e., PA-UIENet, which comprises a Transmission Estimation Stream (T-Stream) and
an Ambient Light Estimation Stream (A-Stream). This network fulfills the UIE
task by explicitly estimating the degradation parameters of the IFM. We also
adopt an IFM-inspired semi-supervised learning framework, which exploits both
the labeled and unlabeled images, to address the issue of insufficient data.
Our method performs better than, or at least comparably to, eight baselines
across five testing sets in the degradation estimation and UIE tasks. This
is likely because it not only models the degradation but also learns the
characteristics of diverse underwater scenes.
Comment: 12 pages, 5 figures
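The Image Formation Model underlying the two streams above is commonly written as I = J·t + A·(1 − t), where J is the clear scene radiance, t the transmission (the T-Stream's target), and A the ambient light (the A-Stream's target). Given estimated t and A, enhancement inverts the model; a minimal sketch with illustrative values:

```python
# Simplified underwater Image Formation Model (IFM):
#     I = J * t + A * (1 - t)
# Inverting it recovers the clear radiance J from the observation I.
def restore(I, t, A, t_min=0.1):
    """Invert I = J*t + A*(1 - t); clamp t to avoid division blow-up."""
    return (I - A * (1.0 - t)) / max(t, t_min)

J = 0.7                     # "true" clear pixel (illustrative)
t, A = 0.5, 0.9             # transmission and ambient light estimates
I = J * t + A * (1 - t)     # degraded observation = 0.8
print(round(restore(I, t, A), 6))   # -> 0.7, the clear value recovered
```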
Wavelet-based Fourier Information Interaction with Frequency Diffusion Adjustment for Underwater Image Restoration
Underwater images are subject to intricate and diverse degradation,
inevitably affecting the effectiveness of underwater visual tasks. However,
most approaches primarily operate in the raw pixel space of images, which
limits the exploration of the frequency characteristics of underwater images,
leading to an inadequate utilization of deep models' representational
capabilities in producing high-quality images. In this paper, we introduce a
novel Underwater Image Enhancement (UIE) framework, named WF-Diff, designed to
fully leverage the characteristics of frequency domain information and
diffusion models. WF-Diff consists of two detachable networks: Wavelet-based
Fourier information interaction network (WFI2-net) and Frequency Residual
Diffusion Adjustment Module (FRDAM). With our full exploration of the frequency
domain information, WFI2-net aims to achieve preliminary enhancement of
frequency information in the wavelet space. Our proposed FRDAM can further
refine the high- and low-frequency information of the initial enhanced images,
which can be viewed as a plug-and-play universal module to adjust the detail of
the underwater images. With the above techniques, our algorithm achieves
state-of-the-art performance on real-world underwater image datasets as well
as competitive visual quality.
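The frequency split that WF-Diff builds on can be sketched with a one-level Haar wavelet: the signal separates into a low-frequency approximation band and a high-frequency detail band, which can be refined independently (FRDAM's role) and then recombined. The 1-D signal below is a toy stand-in, and the bands are left unmodified to show exact reconstruction.

```python
import numpy as np

# One-level Haar wavelet split into low (approximation) and high
# (detail) frequency bands, followed by the inverse transform.
x = np.array([4.0, 6.0, 10.0, 12.0])
low  = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
high = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients

# A residual adjustment module would modify the bands here; we leave
# them unchanged so the inverse reproduces the input exactly.
rec = np.empty_like(x)
rec[0::2] = (low + high) / np.sqrt(2)
rec[1::2] = (low - high) / np.sqrt(2)
print(np.allclose(rec, x))   # True: perfect reconstruction
```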
UWFormer: Underwater Image Enhancement via a Semi-Supervised Multi-Scale Transformer
Underwater images often exhibit poor quality, imbalanced coloration, and low
contrast due to the complex and intricate interaction of light, water, and
objects. Despite the significant contributions of previous underwater
enhancement techniques, there exist several problems that demand further
improvement: (i) Current deep learning methodologies depend on Convolutional
Neural Networks (CNNs) that lack multi-scale enhancement and also have limited
global perception fields. (ii) The scarcity of paired real-world underwater
datasets poses a considerable challenge, and the utilization of synthetic image
pairs risks overfitting. To address the aforementioned issues, this paper
presents a Multi-scale Transformer-based Network called UWFormer for enhancing
images at multiple frequencies via semi-supervised learning, in which we
propose a Nonlinear Frequency-aware Attention mechanism and a Multi-Scale
Fusion Feed-forward Network for low-frequency enhancement. Additionally, we
introduce a specialized underwater semi-supervised training strategy, proposing
a Subaqueous Perceptual Loss function to generate reliable pseudo labels.
Experiments using full-reference and non-reference underwater benchmarks
demonstrate that our method outperforms state-of-the-art methods in terms of
both quantitative metrics and visual quality.
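The pseudo-labelling step in such semi-supervised training can be reduced to a confidence gate: a teacher scores unlabeled samples, and only predictions passing a threshold become training targets. The fixed threshold below is a simplified stand-in for the reliability check the paper attributes to its Subaqueous Perceptual Loss; the scores are illustrative.

```python
# Minimal pseudo-labelling sketch: admit only teacher predictions
# whose confidence clears a reliability threshold.
unlabeled = [0.95, 0.40, 0.88, 0.10]   # toy teacher confidences
threshold = 0.8
pseudo_labeled = [c for c in unlabeled if c >= threshold]
print(len(pseudo_labeled))   # -> 2 samples admitted as pseudo labels
```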