4,879 research outputs found
Visual-Quality-Driven Learning for Underwater Vision Enhancement
The image processing community has witnessed remarkable advances in enhancing
and restoring images. Nevertheless, restoring the visual quality of underwater
images remains a great challenge. End-to-end frameworks might fail to enhance
the visual quality of underwater images since in several scenarios it is not
feasible to provide the ground truth of the scene radiance. In this work, we
propose a CNN-based approach that does not require ground truth data since it
uses a set of image quality metrics to guide the restoration learning process.
The experiments showed that our method improved the visual quality of
underwater images while preserving their edges, and also performed well on
the UCIQE metric.
Comment: Accepted for publication and presented at the 2018 IEEE International Conference on Image Processing (ICIP).
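The ground-truth-free guidance idea can be sketched as follows. This is a toy stand-in, not the paper's method: a crude no-reference score (global contrast plus mean gradient magnitude) takes the place of the actual set of image quality metrics used to guide the restoration network.

```python
import numpy as np

def no_reference_quality(img):
    """Toy no-reference quality score: global contrast plus mean
    gradient magnitude (edge strength). Higher is better.

    Illustrative stand-in only; the paper's actual quality metrics
    are not reproduced here.
    """
    img = img.astype(np.float64)
    contrast = img.std()                      # global contrast term
    gy, gx = np.gradient(img)                 # spatial gradients
    edge_strength = np.hypot(gx, gy).mean()   # mean edge magnitude
    return contrast + edge_strength

def quality_driven_loss(restored):
    """Loss to *minimize* during training: negative quality score,
    so gradient descent pushes the network toward higher quality
    without any ground-truth radiance image."""
    return -no_reference_quality(restored)

# A flat (low-contrast) image scores worse than a structured one.
flat = np.full((8, 8), 0.5)
structured = np.tile(np.array([0.0, 1.0]), (8, 4))
assert quality_driven_loss(structured) < quality_driven_loss(flat)
```

The key point is that the loss depends only on the restored output, never on a reference image, which is what makes training feasible when the true scene radiance is unavailable.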
Quality Enhancement for Underwater Images using Various Image Processing Techniques: A Survey
Underwater images are essential for identifying the activity of underwater objects and play a vital role in exploring and utilizing aquatic resources. Underwater images exhibit low contrast, various kinds of noise, and object imbalance due to the lack of light intensity. CNN-based deep learning approaches have improved low-resolution underwater photos over the last decade. Nevertheless, these techniques still have problems, such as high MSE, low PSNR, and high SSIM error rates. Various methods are studied that effectively treat different distorted underwater scenes and improve contrast and color deviation compared to other algorithms. In terms of the color richness of the resulting images and of execution time, the latest algorithms still show deficiencies: shortening execution time and optimizing color compensation under different color deviations remain the focus of future research. With the wide application of underwater vision in different scientific fields, underwater image enhancement can play an increasingly significant role in underwater research and underwater archaeology. Most of the target images of current algorithms are shallow-water images; when an artificial light source is added to deep-water images, the raw images face more diverse noise, and enhancement becomes more challenging. As a result, this study surveys the existing systems for quality enhancement of underwater images using various image processing techniques, identifying the gaps and challenges of current systems to build on for future improvement.
The outcome of this overview is a future problem statement to advance this research and overcome the challenges faced by previous researchers, and, on the other hand, to improve accuracy in terms of reducing MSE, enhancing PSNR, etc.
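Since the survey repeatedly appeals to MSE and PSNR as error measures, a minimal sketch of how they are computed may help; it assumes 8-bit images with a peak value of 255.

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and an enhanced image."""
    return np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the enhanced
    image is closer to the reference."""
    err = mse(ref, img)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)

ref = np.zeros((4, 4))
noisy = ref + 10.0                 # uniform error of 10 -> MSE = 100
print(mse(ref, noisy))             # 100.0
print(round(psnr(ref, noisy), 2))  # 28.13
```

Lower MSE and higher PSNR indicate a better enhancement, which is why the survey frames "high MSE" and low PSNR as failure modes.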
Real-time Model-based Image Color Correction for Underwater Robots
Recently, a new underwater image formation model showed that the
coefficients related to the direct and backscatter transmission signals are
dependent on the type of water, camera specifications, water depth, and imaging
range. This paper proposes an underwater color correction method that
integrates this new model on an underwater robot, using information from a
pressure depth sensor for water depth and a visual odometry system for
estimating scene distance. Experiments were performed with and without a color
chart over coral reefs and a shipwreck in the Caribbean. We demonstrate the
performance of our proposed method by comparing it with other statistics-,
physics-, and learning-based color correction methods. Applications for our
proposed method include improved 3D reconstruction and more robust underwater
robot navigation.
Comment: Accepted at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
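A formation model of this kind can be sketched as below: the observed color is the scene radiance attenuated over the imaging range plus accumulated backscatter, and correction inverts the model given the range. The coefficients here are arbitrary placeholders; in the paper they depend on water type, camera specifications, and water depth.

```python
import numpy as np

# Illustrative coefficients only; real values depend on water type,
# camera, water depth, and imaging range.
BETA_D = 0.35   # direct-signal attenuation coefficient (per metre)
BETA_B = 0.60   # backscatter coefficient (per metre)
B_INF  = 0.80   # veiling-light (backscatter at infinity) intensity

def degrade(J, z):
    """Forward model: scene radiance J observed through z metres of water."""
    return J * np.exp(-BETA_D * z) + B_INF * (1.0 - np.exp(-BETA_B * z))

def correct(I, z):
    """Invert the model given the imaging range z (e.g. from visual
    odometry); water depth would further modulate the coefficients."""
    backscatter = B_INF * (1.0 - np.exp(-BETA_B * z))
    return (I - backscatter) * np.exp(BETA_D * z)

J = np.array([0.2, 0.5, 0.9])   # true scene radiance
I = degrade(J, z=3.0)           # simulated underwater observation
assert np.allclose(correct(I, 3.0), J)
```

Knowing z per pixel from odometry (and depth from the pressure sensor) is what lets the robot run this inversion in real time rather than estimating the medium from image statistics.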
Development of remote sensing technology in New Zealand, part 1. Mapping land use and environmental studies in New Zealand, part 2. Indigenous forest assessment, part 3. Seismotectonic, structural, volcanologic and geomorphic study of New Zealand, part 4
The author has identified the following significant results. As part of the tape reformatting process, a simple coded picture output program was developed. It represents each pixel's radiance level by one character of a 47-character set on a non-overprinting line printer. It has not only aided in locating areas for the reformatting process, but has also formed the foundation for a supervised clustering package, which in turn has led to a simplistic but effective thematic mapping package
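The coded-picture idea translates directly into a few lines of code: quantize each pixel's radiance level into one of 47 bins and emit one printer line per image row. The character ramp below is an arbitrary stand-in for the original printer character set.

```python
# 47 printable characters, roughly ordered dark-to-bright; an arbitrary
# stand-in for the original line-printer character set.
RAMP = " .:-=+*#%@ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"
assert len(RAMP) == 47

def coded_picture(levels, max_level=255):
    """levels: 2D list of radiance values in [0, max_level].
    Returns one line of characters per image row."""
    lines = []
    for row in levels:
        lines.append("".join(
            RAMP[min(46, v * 47 // (max_level + 1))] for v in row))
    return "\n".join(lines)

print(coded_picture([[0, 128, 255], [255, 128, 0]]))
```

The same binning step is essentially a one-dimensional clustering of radiance levels, which is presumably why the program could serve as a foundation for a supervised clustering package.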
MedGAN: Medical Image Translation using GANs
Image-to-image translation is considered a new frontier in the field of
medical image analysis, with numerous potential applications. However, a large
portion of recent approaches offers individualized solutions based on
specialized task-specific architectures or require refinement through
non-end-to-end training. In this paper, we propose a new framework, named
MedGAN, for medical image-to-image translation which operates on the image
level in an end-to-end manner. MedGAN builds upon recent advances in the field
of generative adversarial networks (GANs) by merging the adversarial framework
with a new combination of non-adversarial losses. We utilize a discriminator
network as a trainable feature extractor which penalizes the discrepancy
between the translated medical images and the desired modalities. Moreover,
style-transfer losses are utilized to match the textures and fine-structures of
the desired target images to the translated images. Additionally, we present a
new generator architecture, titled CasNet, which enhances the sharpness of the
translated medical outputs through progressive refinement via encoder-decoder
pairs. Without any application-specific modifications, we apply MedGAN on three
different tasks: PET-CT translation, correction of MR motion artefacts and PET
image denoising. Perceptual analysis by radiologists and quantitative
evaluations illustrate that MedGAN outperforms other existing translation
approaches.
Comment: 16 pages, 8 figures.
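The non-adversarial side of such an objective can be sketched with a toy linear "discriminator" standing in for the real network: a perceptual term compares discriminator features of the translated and target images, and a style term compares the Gram matrices of those features. All names and shapes below are illustrative assumptions, not MedGAN's actual architecture.

```python
import numpy as np

def disc_features(img, W):
    """Stand-in 'discriminator feature extractor': one linear layer with
    a ReLU. In MedGAN, the real discriminator's hidden activations play
    this role."""
    return np.maximum(0.0, W @ img.ravel())

def gram(f):
    """Gram matrix of a feature vector, as used by style-transfer
    losses to compare textures and fine structures."""
    f = f.reshape(-1, 1)
    return f @ f.T

def combined_loss(translated, target, W, lam_percep=1.0, lam_style=1.0):
    """Non-adversarial part of a MedGAN-style objective: a perceptual
    term on discriminator features plus a style term on their Gram
    matrices. The adversarial term from the GAN game is omitted here."""
    ft, fy = disc_features(translated, W), disc_features(target, W)
    percep = np.mean((ft - fy) ** 2)
    style = np.mean((gram(ft) - gram(fy)) ** 2)
    return lam_percep * percep + lam_style * style

W = np.arange(64, dtype=float).reshape(4, 16) / 64.0  # toy feature weights
target = np.ones((4, 4))                              # toy target modality
# A perfect translation incurs zero loss; a perturbed one does not.
assert combined_loss(target, target, W) == 0.0
assert combined_loss(target + 0.5, target, W) > 0.0
```

Because the feature extractor is the (trainable) discriminator itself, the perceptual and style terms sharpen as the discriminator improves, which is the motivation for reusing it rather than a fixed pretrained network.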
Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption
The standard Neural Radiance Fields (NeRF) paradigm employs a viewer-centered
methodology, entangling the aspects of illumination and material reflectance
into emission solely from 3D points. This simplified rendering approach
presents challenges in accurately modeling images captured under adverse
lighting conditions, such as low light or over-exposure. Motivated by the
ancient Greek emission theory that posits visual perception as a result of rays
emanating from the eyes, we slightly refine the conventional NeRF framework to
train NeRF under challenging light conditions and generate normal-light
condition novel views unsupervised. We introduce the concept of a "Concealing
Field," which assigns transmittance values to the surrounding air to account
for illumination effects. In dark scenarios, we assume that object emissions
maintain a standard lighting level but are attenuated as they traverse the air
during the rendering process. The Concealing Field thus compels NeRF to learn
reasonable density and colour estimations for objects even in dimly lit
situations. Similarly, the Concealing Field can mitigate over-exposed emissions
during the rendering stage. Furthermore, we present a comprehensive multi-view
dataset captured under challenging illumination conditions for evaluation. Our
code and dataset are available at https://github.com/cuiziteng/Aleth-NeRF
Comment: AAAI 2024, code available at https://cuiziteng.github.io/Aleth_NeRF_web/. Modified version of previous paper arXiv:2303.0580
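The Concealing Field mechanism can be sketched in a toy single-ray volume renderer: alongside the standard density-based weights, each sample carries a concealing transmittance in (0, 1] whose running product attenuates emissions as they traverse the air. Densities, colours, and concealing values below are illustrative only.

```python
import numpy as np

def render_ray(sigma, color, conceal, delta=1.0):
    """Volume rendering along one ray with a toy 'Concealing Field'.

    sigma:   per-sample density
    color:   per-sample emitted color (assumed normal-light level)
    conceal: per-sample concealing transmittance in (0, 1]; values < 1
             dim the rendered image without touching object density
             or color.
    """
    alpha = 1.0 - np.exp(-sigma * delta)       # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], (1.0 - alpha)[:-1])))
    weights = trans * alpha                    # standard NeRF weights
    veil = np.cumprod(conceal)                 # accumulated concealing
    return np.sum(weights * veil * color)

sigma = np.array([0.0, 0.0, 5.0])   # a surface at the third sample
color = np.array([0.0, 0.0, 0.8])   # its normal-light emission
bright = render_ray(sigma, color, conceal=np.ones(3))     # no concealment
dark = render_ray(sigma, color, conceal=np.full(3, 0.5))  # dim scene
assert dark < bright
```

Training fits the dark observations with the concealing values active, so dropping them at test time (setting them to 1) renders the same geometry and colours at a normal lighting level, unsupervised.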
Novel deep learning architectures for marine and aquaculture applications
Alzayat Saleh's research applied artificial intelligence and machine learning to autonomously recognise fish and their morphological features from digital images. He created new deep learning architectures that solved various computer vision problems specific to the marine and aquaculture context, and found that these techniques can facilitate aquaculture management and environmental protection. Fisheries and conservation agencies can use his results for better monitoring strategies and sustainable fishing practices