A Non-Reference Evaluation of Underwater Image Enhancement Methods Using a New Underwater Image Dataset
The rise of vision-based environmental, marine, and oceanic exploration research highlights the need for underwater image enhancement techniques that mitigate water-induced degradations such as blurriness and low color contrast. This paper presents an evaluation of common underwater image enhancement techniques using a new underwater image dataset. The collected dataset comprises 100 images of aquatic plants taken at shallow depths of up to three meters at three different locations in Lake Superior, USA, via a Remotely Operated Vehicle (ROV) equipped with a high-definition RGB camera. In particular, we use our dataset to benchmark nine state-of-the-art image enhancement models at three different depths using a set of common non-reference image quality evaluation metrics. We then provide a comparative analysis of the models' performance at different depths and highlight the best-performing ones. The results show that the selected image enhancement models can produce considerably better-quality images, with some models performing better than others at certain depths.
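The abstract does not name the specific non-reference metrics used, so as a hedged illustration only, the sketch below computes histogram entropy, one simple no-reference quality cue (higher entropy loosely tracks restored contrast and detail after enhancement). The function name and sample values are ours, not the paper's.

```python
import math

def shannon_entropy(pixels, levels=256):
    """Shannon entropy of an intensity histogram: a simple
    no-reference quality cue (an enhanced image with richer
    contrast typically has a higher-entropy histogram)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

# A flat (hazy-looking) image scores zero; a spread-out one scores higher.
flat = [128] * 64            # every pixel the same intensity
varied = list(range(64))     # 64 distinct intensities
assert shannon_entropy(flat) == 0.0
assert shannon_entropy(varied) == 6.0   # 64 equiprobable bins -> log2(64)
```

Real benchmarks in this area typically combine several such cues (e.g., colorfulness and sharpness terms) rather than relying on one statistic.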
Semantic-aware Texture-Structure Feature Collaboration for Underwater Image Enhancement
Underwater image enhancement has become an attractive topic as a significant
technology in marine engineering and aquatic robotics. However, the limited
number of datasets and imperfect hand-crafted ground truth weaken its
robustness to unseen scenarios, and hamper the application to high-level vision
tasks. To address the above limitations, we develop an efficient and compact
enhancement network in collaboration with a high-level semantic-aware
pretrained model, aiming to exploit its hierarchical feature representation as
an auxiliary for low-level underwater image enhancement. Specifically, we
characterize the shallow-layer features of the semantic-aware model as
textures and the deep-layer features as structures, and propose a
multi-path Contextual Feature Refinement Module (CFRM) to refine features in
multiple scales and model the correlation between different features. In
addition, a feature dominative network is devised to perform channel-wise
modulation on the aggregated texture and structure features for the adaptation
to different feature patterns of the enhancement network. Extensive experiments
on benchmarks demonstrate that the proposed algorithm achieves more appealing
results and outperforms state-of-the-art methods by large margins. We also
apply the proposed algorithm to the underwater salient object detection task to
reveal the favorable semantic-aware ability for high-level vision tasks. The
code is available at STSC.
Comment: Accepted by ICRA202
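The channel-wise modulation this abstract mentions can be sketched in miniature. The toy below (function names and values are ours, not the STSC implementation) scales each feature channel by a sigmoid gate, the basic operation behind adaptive channel recalibration:

```python
import math

def channel_modulate(features, gates):
    """Channel-wise modulation: multiply every value in a feature
    channel by a sigmoid-squashed gate, so the network can emphasize
    or suppress whole channels (e.g., texture- vs structure-dominated
    ones). Purely illustrative, not the paper's layer."""
    scales = [1.0 / (1.0 + math.exp(-g)) for g in gates]
    return [[v * s for v in ch] for ch, s in zip(features, scales)]

# Two channels with identical content: a strongly positive gate keeps
# channel 0 almost intact, a strongly negative gate suppresses channel 1.
feats = [[1.0, 2.0], [1.0, 2.0]]
out = channel_modulate(feats, gates=[10.0, -10.0])
assert out[0][0] > 0.99 and out[1][0] < 0.01
```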
Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception
Due to the uneven scattering and absorption of different light wavelengths in
aquatic environments, underwater images suffer from low visibility and clear
color deviations. With the advancement of autonomous underwater vehicles,
extensive research has been conducted on learning-based underwater enhancement
algorithms. These works can generate visually pleasing enhanced images and
mitigate the adverse effects of degraded images on subsequent perception tasks.
However, learning-based methods are inherently fragile to adversarial
attacks, which can significantly disrupt their results. In this work,
we introduce a collaborative adversarial resilience network, dubbed CARNet, for
underwater image enhancement and subsequent detection tasks. Concretely, we
first introduce an invertible network with strong perturbation-perceptual
abilities to isolate attacks from underwater images, preventing interference
with image enhancement and perceptual tasks. Furthermore, we propose a
synchronized attack training strategy with both visual-driven and
perception-driven attacks, enabling the network to discern and remove various
types of attacks. Additionally, we incorporate an attack pattern discriminator
to heighten the robustness of the network against different attacks. Extensive
experiments demonstrate that the proposed method produces visually appealing
enhanced images and achieves on average 6.71% higher detection mAP than
state-of-the-art methods.
Comment: 9 pages, 9 figures
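For readers unfamiliar with the attack family involved, a minimal FGSM-style perturbation on a toy linear model (entirely our illustration, not CARNet's attack strategy or architecture) shows how a visual-driven attack nudges an input in the direction that raises the loss:

```python
def fgsm_perturb(x, w, y, eps):
    """FGSM-style attack on a toy linear 'model' pred = w . x with
    squared loss (pred - y)^2: step each input element by eps in the
    sign of the loss gradient. The toy model and names are ours."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    grad = [2.0 * (pred - y) * wi for wi in w]          # dL/dx
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

x, w, y = [0.5, 0.5], [1.0, -1.0], 1.0
x_adv = fgsm_perturb(x, w, y, eps=0.1)
loss = lambda xs: (sum(wi * xi for wi, xi in zip(w, xs)) - y) ** 2
assert loss(x_adv) > loss(x)   # the perturbation increases the loss
```

Defenses like the one described above aim to isolate such perturbations from the image signal before enhancement and detection run on it.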
SGUIE-Net: Semantic Attention Guided Underwater Image Enhancement with Multi-Scale Perception
Due to the wavelength-dependent light attenuation, refraction and scattering,
underwater images usually suffer from color distortion and blurred details.
However, because paired underwater images with undistorted references are
scarce, training deep enhancement models for diverse degradation types is
quite difficult. To boost the performance of data-driven approaches,
it is essential to establish more effective learning mechanisms that mine
richer supervised information from limited training sample resources. In this
paper, we propose a novel underwater image enhancement network, called
SGUIE-Net, in which we introduce semantic information as high-level guidance
across different images that share common semantic regions. Accordingly, we
propose a semantic region-wise enhancement module that perceives the
degradation of different semantic regions at multiple scales and feeds it back
to the global attention features extracted at the original scale. This
strategy helps achieve robust and visually pleasing enhancement of different
semantic objects, thanks to the guidance of semantic information for
differentiated enhancement. More importantly, for those degradation types that
are not common in the training sample distribution, the guidance connects them
with the already well-learned types according to their semantic relevance.
Extensive experiments on publicly available datasets and our proposed dataset
demonstrate the impressive performance of SGUIE-Net. The code and
proposed dataset are available at: https://trentqq.github.io/SGUIE-Net.htm
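A minimal sketch of per-region processing guided by a semantic mask (the gain values, labels, and function are illustrative assumptions, not SGUIE-Net's actual module) shows the core idea of applying different corrections to different semantic regions of one image:

```python
def regionwise_enhance(image, mask, gains):
    """Toy semantic region-wise enhancement: each pixel is scaled by
    a gain chosen by its semantic label, then clipped to [0, 1].
    A stand-in for learned per-region enhancement, not the paper's."""
    return [[min(1.0, px * gains[lbl]) for px, lbl in zip(row_i, row_m)]
            for row_i, row_m in zip(image, mask)]

img  = [[0.2, 0.2], [0.8, 0.8]]   # normalized intensities
mask = [[0, 0], [1, 1]]           # 0 = water background, 1 = object
out = regionwise_enhance(img, mask, gains={0: 1.0, 1: 2.0})
assert out[0] == [0.2, 0.2]       # background left untouched
assert out[1] == [1.0, 1.0]       # object brightened, clipped at 1.0
```

In the actual network the per-region correction is learned and multi-scale rather than a fixed scalar gain, but the mask-conditioned routing is the same basic mechanism.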