Dense Scattering Layer Removal
We propose a new model, together with an advanced optimization scheme, to
separate a thick scattering-media layer from a single natural image. The method
handles challenging underwater scenes as well as images taken in fog and
sandstorms, both of which suffer significantly reduced visibility. It addresses
a critical issue -- namely, that originally unnoticeable impurities are greatly
magnified once the scattering-media layer is removed -- with transmission-aware
optimization. We introduce non-local structure-aware regularization to properly
constrain transmission estimation without introducing halo artifacts. A
selective-neighbor criterion converts the unconventional constrained
optimization problem into an unconstrained one that can be solved efficiently.
Comment: 10 pages, 10 figures, SIGGRAPH Asia 2013 Technical Brief
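The abstract above builds on the standard scattering-media image formation
model, I = J·t + A·(1−t), where J is the scene radiance, t the transmission,
and A the veiling light. A minimal sketch of inverting that model per pixel,
assuming t and A have already been estimated; the function name and the t_min
clamp are illustrative, not the paper's exact formulation:

```python
def recover_radiance(I, t, A, t_min=0.1):
    """Invert the scattering model I = J*t + A*(1 - t) for one pixel.

    I: observed intensity in [0, 1]; t: estimated transmission;
    A: global veiling light. t is clamped to t_min so that noise and
    impurities are not magnified where the medium is thickest (the
    issue the abstract calls 'originally unnoticeable impurities')."""
    t_clamped = max(t, t_min)
    return (I - A * (1.0 - t)) / t_clamped
```

The clamp trades exact inversion in dense regions for stability, which is
why transmission-aware regularization (as in the abstract) matters.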
Underwater Single Image Color Restoration Using Haze-Lines and a New Quantitative Dataset
Underwater images suffer from color distortion and low contrast, because
light is attenuated while it propagates through water. Attenuation under water
varies with wavelength, unlike terrestrial images where attenuation is assumed
to be spectrally uniform. The attenuation depends both on the water body and
the 3D structure of the scene, making color restoration difficult.
Unlike existing single-image underwater enhancement techniques, our method
takes into account multiple spectral profiles of different water types. By
estimating just two additional global parameters -- the attenuation ratios of
the blue-red and blue-green color channels -- the problem is reduced to single
image dehazing, where all color channels have the same attenuation coefficients.
Since the water type is unknown, we evaluate different parameters out of an
existing library of water types. Each type leads to a different restored image
and the best result is automatically chosen based on color distribution.
We collected a dataset of images taken in different locations with varying
water properties, showing color charts in the scenes. Moreover, to obtain
ground truth, the 3D structure of the scene was calculated based on stereo
imaging. This dataset enables a quantitative evaluation of restoration
algorithms on natural images and shows the advantage of our method.
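The reduction described above rests on per-channel Beer-Lambert attenuation,
t_c = exp(−β_c·d). A small sketch, under that assumption, of how the two
global attenuation ratios β_B/β_R and β_B/β_G let the red and green
transmissions be derived from the blue one; function and parameter names are
illustrative, not taken from the paper:

```python
def channel_transmissions(t_blue, ratio_BR, ratio_BG):
    """Derive red/green transmissions from the blue-channel one.

    With t_c = exp(-beta_c * d), it follows that
    t_R = t_B ** (beta_R / beta_B), i.e. t_B raised to the inverse of
    the blue-red ratio ratio_BR = beta_B / beta_R (likewise for green).
    These ratios are the 'two additional global parameters'."""
    t_red = t_blue ** (1.0 / ratio_BR)
    t_green = t_blue ** (1.0 / ratio_BG)
    return t_red, t_green
```

In water, red attenuates fastest, so ratio_BR is typically below 1 and the
derived red transmission decays faster with depth than the blue one.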
Unsupervised Single Image Underwater Depth Estimation
Depth estimation from a single underwater image is one of the most
challenging problems and is highly ill-posed. Due to the absence of large
generalized underwater depth datasets and the difficulty in obtaining ground
truth depth-maps, supervised learning techniques such as direct depth
regression cannot be used. In this paper, we propose an unsupervised method for
depth estimation from a single underwater image taken 'in the wild' by using
haze as a cue for depth. Our approach is based on indirect depth-map estimation
where we learn the mapping functions between unpaired RGB-D terrestrial images
and arbitrary underwater images to estimate the required depth-map. We propose
a method which is based on the principles of cycle-consistent learning and uses
dense-block based auto-encoders as generator networks. We evaluate and compare
our method both quantitatively and qualitatively on various underwater images
with diverse attenuation and scattering conditions and show that our method
produces state-of-the-art results for unsupervised depth estimation from a
single underwater image.
Comment: Accepted for publication at IEEE International Conference on Image
Processing (ICIP), 201
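The cycle-consistent learning the abstract relies on penalizes the round trip
through both mapping functions: an image mapped to the other domain and back
should return to itself. A toy sketch of the L1 cycle-consistency term, with
G and F as placeholder scalar mappings standing in for the paper's dense-block
auto-encoder generators:

```python
def cycle_consistency_loss(x, G, F, dist=lambda a, b: abs(a - b)):
    """L1 cycle loss |F(G(x)) - x|.

    G maps one domain to the other (e.g. underwater image -> terrestrial
    RGB-D domain), F maps back. In training, this term is minimized for
    samples of both domains so the mappings stay mutually consistent."""
    return dist(F(G(x)), x)
```

In the actual method, minimizing this loss on unpaired data is what lets a
depth map be estimated without underwater ground truth.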
Single Image Restoration for Participating Media Based on Prior Fusion
This paper describes a method to restore degraded images captured in
participating media -- fog, turbid water, sandstorms, etc. Unlike related
work that deals with only one medium, we obtain generality by using an
image formation model and a fusion of new image priors. The model considers the
image color variation produced by the medium. The proposed restoration method
is based on the fusion of these priors and supported by statistics collected on
images acquired in both non-participating and participating media. The key to
the method is to fuse two complementary measures -- local contrast and color
data. The obtained results on underwater and foggy images demonstrate the
capabilities of the proposed method. Moreover, we evaluated our method using a
special dataset for which a ground-truth image is available.
Comment: This paper is under consideration at Pattern Recognition Letters
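A generic sketch of fusing two complementary per-pixel estimates, one driven
by a local-contrast prior and one by a color prior, as a confidence-weighted
blend. This is an illustrative fusion rule under assumed inputs, not
necessarily the paper's exact formulation:

```python
def fuse_priors(t_contrast, t_color, w_contrast, w_color, eps=1e-6):
    """Confidence-weighted fusion of two per-pixel transmission
    estimates (flattened to 1-D lists for simplicity).

    Where the contrast prior is reliable (high w_contrast) its estimate
    dominates; where the color prior is reliable, that one does. eps
    avoids division by zero when both confidences vanish."""
    fused = []
    for tc, tk, wc, wk in zip(t_contrast, t_color, w_contrast, w_color):
        fused.append((wc * tc + wk * tk) / (wc + wk + eps))
    return fused
```

Fusing complementary cues this way is what gives a single method coverage
across fog, turbid water, and sandstorms, per the abstract's claim.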
Real-world Underwater Enhancement: Challenges, Benchmarks, and Solutions
Underwater image enhancement is such an important low-level vision task with
many applications that numerous algorithms have been proposed in recent years.
These algorithms, developed upon various assumptions, demonstrate successes in
different respects using different data sets and different metrics. In this
work, we set up an undersea image capturing system and construct a large-scale
Real-world Underwater Image Enhancement (RUIE) data set divided into three
subsets. The three subsets target three challenging aspects of enhancement,
i.e., image visibility quality, color casts, and higher-level
detection/classification, respectively. We conduct extensive and systematic
experiments on RUIE to evaluate the effectiveness and limitations of various
algorithms to enhance visibility and correct color casts on images with
hierarchical categories of degradation. Moreover, underwater image enhancement
in practice usually serves as a preprocessing step for mid-level and high-level
vision tasks. We thus exploit the object detection performance on enhanced
images as a brand new task-specific evaluation criterion. The findings from
these evaluations not only confirm what is commonly believed, but also suggest
promising solutions and new directions for visibility enhancement, color
correction, and object detection on real-world underwater images.
Comment: arXiv admin note: text overlap with arXiv:1712.04143 by other authors
Visual-Quality-Driven Learning for Underwater Vision Enhancement
The image processing community has witnessed remarkable advances in enhancing
and restoring images. Nevertheless, restoring the visual quality of underwater
images remains a great challenge. End-to-end frameworks might fail to enhance
the visual quality of underwater images since in several scenarios it is not
feasible to provide the ground truth of the scene radiance. In this work, we
propose a CNN-based approach that does not require ground truth data since it
uses a set of image quality metrics to guide the restoration learning process.
The experiments showed that our method improved the visual quality of
underwater images while preserving their edges, and that it also performed
well on the UCIQE metric.
Comment: Accepted for publication and presented at the 2018 IEEE International
Conference on Image Processing (ICIP
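Training against no-reference quality metrics, as described above, can be
sketched as a weighted sum of metric shortfalls that serves as the loss, so
no ground-truth radiance is needed. The metrics and weights below are
placeholders, not the paper's actual metric set:

```python
def quality_driven_loss(image, metrics, weights):
    """No-reference training loss from image-quality metrics.

    'metrics' are callables returning a quality score in [0, 1]
    (1 = best); the loss is the weighted shortfall from perfect
    quality, so gradient descent pushes scores upward without any
    ground-truth image."""
    return sum(w * (1.0 - m(image)) for m, w in zip(metrics, weights))
```

In the real system the metrics would be differentiable so the CNN can be
trained end-to-end against them.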
Single Image Dehazing through Improved Atmospheric Light Estimation
Image contrast enhancement for outdoor vision is important for smart car
auxiliary transport systems. The video frames captured in poor weather
conditions are often characterized by poor visibility. Most image dehazing
algorithms rely on hard-threshold assumptions or user input to estimate the
atmospheric light. However, the brightest pixels are sometimes objects such as
car lights or streetlights, especially in smart car auxiliary transport
systems, so simply using a hard threshold may cause a wrong estimation.
In this paper, we propose an optimized single-image dehazing method that
estimates the atmospheric light efficiently and removes haze through the
estimation of a semi-globally adaptive filter. The enhanced images exhibit
little noise and good exposure in dark regions. The textures and edges of the
processed images are also enhanced significantly.
Comment: Multimedia Tools and Applications (2015
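A common robust alternative to a hard brightest-pixel rule, in the spirit of
the abstract: average the intensities of the haziest pixels (largest
dark-channel values) while skipping near-saturated pixels that are likely car
lights or streetlights. The thresholds and the flattened 1-D layout here are
illustrative assumptions:

```python
def estimate_atmospheric_light(gray, dark, top_frac=0.001, sat_thresh=0.98):
    """Estimate atmospheric light from flattened intensity ('gray') and
    dark-channel ('dark') values in [0, 1].

    Take the top fraction of dark-channel pixels (most haze-opaque),
    then average their intensities, excluding near-saturated pixels
    that are probably active light sources rather than airlight."""
    idx = sorted(range(len(dark)), key=lambda i: dark[i], reverse=True)
    k = max(1, int(len(idx) * top_frac))
    vals = [gray[i] for i in idx[:k] if gray[i] < sat_thresh]
    if not vals:  # every candidate saturated: fall back to plain mean
        vals = [gray[i] for i in idx[:k]]
    return sum(vals) / len(vals)
```

Averaging over a candidate set instead of taking one maximum is what makes
the estimate stable against isolated bright objects.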
Towards Real-Time Advancement of Underwater Visual Quality with GAN
Low visual quality has kept underwater robotic vision out of a wide range of
applications. Although several algorithms have been developed, real-time and
adaptive methods are still lacking for real-world tasks. In this paper, we address
this difficulty based on generative adversarial networks (GAN), and propose a
GAN-based restoration scheme (GAN-RS). In particular, we develop a multi-branch
discriminator including an adversarial branch and a critic branch for the
purpose of simultaneously preserving image content and removing underwater
noise. In addition to adversarial learning, a novel dark channel prior loss
also encourages the generator to produce realistic images. More specifically, an
underwater index is investigated to describe underwater properties, and a loss
function based on the underwater index is designed to train the critic branch
for underwater noise suppression. Through extensive comparisons on visual
quality and feature restoration, we confirm the superiority of the proposed
approach. Consequently, the GAN-RS can adaptively improve underwater visual
quality in real time and induce an overall superior restoration performance.
Finally, a real-world experiment is conducted on the seabed for grasping marine
products, and the results are quite promising. The source code is publicly
available at https://github.com/SeanChenxy/GAN_RS
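The dark channel prior behind the GAN-RS loss can be computed as a per-pixel
minimum over the color channels and a local patch; clear images have
near-zero dark channels, so penalizing large values pushes generator outputs
toward haze-free statistics. A minimal pure-Python sketch (a real
implementation would use vectorized minimum filters):

```python
def dark_channel(img, patch=3):
    """Dark channel of an H x W x 3 image given as nested lists.

    For each pixel, take the minimum over the RGB channels and over a
    patch x patch neighborhood (clipped at the borders). Large values
    indicate haze/veiling light, which a dark-channel-prior loss
    penalizes."""
    H, W = len(img), len(img[0])
    r = patch // 2
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            vals = []
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < H and 0 <= xx < W:
                        vals.append(min(img[yy][xx]))  # min over RGB
            out[y][x] = min(vals)  # min over the patch
    return out
```

A training loss could then be, e.g., the mean of this map over generator
outputs; that specific reduction is an assumption, not taken from the paper.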
Marine Wireless Big Data: Efficient Transmission, Related Applications, and Challenges
The vast volume of marine wireless sampling data and its continuously
explosive growth herald the coming of the era of marine wireless big data.
These data pose two challenges: how to deliver them quickly, reliably, and
sustainably in extremely hostile marine environments, and how to apply them
after collection. In this article, we first propose an architecture of
heterogeneous marine networks that flexibly exploits the existing underwater
wireless techniques as a potential solution for fast data transmission. We then
investigate the possibilities of, and develop schemes for, energy-efficient
and reliable undersea transmission with little or no data-rate reduction.
After discussing data transmission, we summarize the possible
applications of the collected big data and particularly focus on the problems
of applying these data in sea-surface object detection and marine object
recognition. Open issues and challenges that need to be further explored
regarding transmission and detection/recognition are also discussed in the
article.
Comment: 7 pages, 5 figures, accepted by the IEEE Wireless Communications
Depth Estimation on Underwater Omni-directional Images Using a Deep Neural Network
In this work, we exploit a depth estimation Fully Convolutional Residual
Neural Network (FCRN) for in-air perspective images to estimate the depth of
underwater perspective and omni-directional images. We train one conventional
and one spherical FCRN for underwater perspective and omni-directional images,
respectively. The spherical FCRN is derived from the perspective FCRN via a
spherical longitude-latitude mapping. For that, the omni-directional camera is
modeled as a sphere, while images captured by it are displayed in the
longitude-latitude form. Due to the lack of underwater datasets, we synthesize
images in both data-driven and theoretical ways, which are used in training and
testing. Finally, experiments are conducted on these synthetic images, and the
results are presented both qualitatively and quantitatively. The comparison
between the ground truth and the estimated depth maps indicates the effectiveness of
our method.
Comment: 7 pages, 8 figures, 1 table, accepted by the 2019 ICRA workshop
"Underwater Robotics Perception"
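The longitude-latitude camera model the spherical FCRN assumes maps each
equirectangular pixel to a direction on the unit sphere. A small sketch of
that mapping; the axis conventions below are one common choice, not
necessarily the paper's:

```python
import math

def pixel_to_sphere(u, v, width, height):
    """Map an equirectangular (longitude-latitude) pixel (u, v) to a
    unit-sphere direction.

    u spans longitude in [-pi, pi) across the image width; v spans
    latitude in [pi/2, -pi/2] down the image height. Returns (x, y, z)
    with x toward longitude 0 at the equator."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))
```

Deriving a spherical network from a perspective one amounts to resampling
convolutions consistently with this mapping.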