A Bio-Inspired Multi-Exposure Fusion Framework for Low-light Image Enhancement
Low-light images are not conducive to human observation and computer vision
algorithms due to their low visibility. Although many image enhancement
techniques have been proposed to solve this problem, existing methods
inevitably introduce contrast under- and over-enhancement. Inspired by the
human visual system, we design a multi-exposure fusion framework for low-light
image enhancement. Based on the framework, we propose a dual-exposure fusion
algorithm to provide accurate contrast and lightness enhancement.
Specifically, we first design the weight matrix for image fusion using
illumination estimation techniques. Then we introduce our camera response model
to synthesize multi-exposure images. Next, we find the best exposure ratio so
that the synthetic image is well-exposed in the regions where the original
image is under-exposed. Finally, the enhanced result is obtained by fusing the
input image and the synthetic image according to the weight matrix. Experiments
show that our method obtains results with less contrast and lightness
distortion than several state-of-the-art methods.
Comment: Project website: https://baidut.github.io/BIMEF
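The steps above (an illumination-based weight map, a camera response model to synthesize a second exposure, and weighted fusion) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the max-channel illumination estimate and the beta-gamma response parameters are illustrative assumptions.

```python
import numpy as np

def dual_exposure_fusion(img, k=2.5, a=-0.3293, b=1.1258):
    """Sketch of dual-exposure fusion for a float RGB image in [0, 1].

    k    : exposure ratio for the synthetic image (illustrative value).
    a, b : parameters of an assumed beta-gamma camera response model.
    """
    # 1. Crude illumination estimate: per-pixel maximum over channels.
    illum = img.max(axis=2)
    # 2. Weight map: bright (well-exposed) pixels keep the original image.
    weight = np.sqrt(illum)[..., None]
    # 3. Beta-gamma camera response model synthesizes a brighter exposure.
    gamma = k ** a
    beta = np.exp(b * (1.0 - gamma))
    synthetic = beta * img ** gamma
    # 4. Fuse input and synthetic exposure according to the weight map.
    return np.clip(weight * img + (1.0 - weight) * synthetic, 0.0, 1.0)
```

On under-exposed regions the weight is small, so the brighter synthetic exposure dominates; on well-exposed regions the original pixels pass through largely unchanged.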
Generation of High Dynamic Range Illumination from a Single Image for the Enhancement of Undesirably Illuminated Images
This paper presents an algorithm that enhances undesirably illuminated images
by generating and fusing multi-level illuminations from a single image. The
input image is first decomposed into illumination and reflectance components by
using an edge-preserving smoothing filter. Then the reflectance component is
scaled up to improve the image details in bright areas. The illumination
component is scaled up and down to generate several illumination images that
correspond to certain camera exposure values different from the original. The
virtual multi-exposure illuminations are blended into an enhanced illumination,
where we also propose a method to generate appropriate weight maps for the tone
fusion. Finally, an enhanced image is obtained by multiplying the equalized
illumination and enhanced reflectance. Experiments show that the proposed
algorithm produces visually pleasing output and yields objective results
comparable to conventional enhancement methods, while requiring a modest
computational load.
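For a single-channel image, the decompose-scale-blend-recombine pipeline above might look like the following sketch; the box blur standing in for the edge-preserving filter, the gain values, and the Gaussian well-exposedness weights are all illustrative assumptions:

```python
import numpy as np

def enhance_virtual_exposures(img, gains=(0.5, 1.0, 2.0), eps=1e-6):
    """Sketch of virtual multi-exposure enhancement for a float
    grayscale image in [0, 1]; gains are illustrative exposure scales."""
    # Crude illumination estimate: 5x5 local mean (stand-in for an
    # edge-preserving smoothing filter).
    k = 5
    pad = np.pad(img, k // 2, mode='edge')
    illum = np.zeros_like(img)
    for i in range(k):
        for j in range(k):
            illum += pad[i:i + img.shape[0], j:j + img.shape[1]]
    illum /= k * k
    reflect = img / (illum + eps)                  # reflectance component
    # Virtual exposures: scale the illumination up and down.
    exposures = [np.clip(illum * g, 0.0, 1.0) for g in gains]
    # Well-exposedness weight maps favour mid-tone illumination.
    weights = [np.exp(-((e - 0.5) ** 2) / 0.08) for e in exposures]
    wsum = np.sum(weights, axis=0) + eps
    fused = sum(w * e for w, e in zip(weights, exposures)) / wsum
    # Recombine the blended illumination with the reflectance.
    return np.clip(fused * reflect, 0.0, 1.0)
```

A dark input pulls weight toward the brightened virtual exposure, so the blended illumination lifts shadows while the reflectance preserves detail.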
Real-world Underwater Enhancement: Challenges, Benchmarks, and Solutions
Underwater image enhancement is an important low-level vision task with many
applications, and numerous algorithms have been proposed in recent years.
These algorithms, developed upon various assumptions, demonstrate successes
from various aspects using different data sets and different metrics. In this
work, we set up an undersea image capturing system and construct a large-scale
Real-world Underwater Image Enhancement (RUIE) data set divided into three
subsets. The three subsets target three challenging aspects for enhancement,
i.e., image visibility quality, color casts, and higher-level
detection/classification, respectively. We conduct extensive and systematic
experiments on RUIE to evaluate the effectiveness and limitations of various
algorithms to enhance visibility and correct color casts on images with
hierarchical categories of degradation. Moreover, underwater image enhancement
in practice usually serves as a preprocessing step for mid-level and high-level
vision tasks. We thus exploit the object detection performance on enhanced
images as a brand new task-specific evaluation criterion. The findings from
these evaluations not only confirm what is commonly believed, but also suggest
promising solutions and new directions for visibility enhancement, color
correction, and object detection on real-world underwater images.
Comment: arXiv admin note: text overlap with arXiv:1712.04143 by other authors
Enabling Pedestrian Safety using Computer Vision Techniques: A Case Study of the 2018 Uber Inc. Self-driving Car Crash
Human lives are important. The decision to allow self-driving vehicles to
operate on our roads carries great weight. This has been a hot topic of debate
between policy-makers, technologists and public safety institutions. The recent
Uber Inc. self-driving car crash, resulting in the death of a pedestrian, has
strengthened the argument that autonomous vehicle technology is still not ready
for deployment on public roads. In this work, we analyze the Uber car crash and
shed light on the question, "Could the Uber Car Crash have been avoided?". We
apply state-of-the-art Computer Vision models to this highly practical
scenario. More generally, our experimental results are an evaluation of various
image enhancement and object recognition techniques for enabling pedestrian
safety in low-lighting conditions, using the Uber crash as a case study.
Comment: 10 pages, 8 figures, 3 tables
Enhancing the Accuracy of Biometric Feature Extraction Fusion Using Gabor Filter and Mahalanobis Distance Algorithm
Biometric recognition systems have advanced significantly in the last decade
and their use in specific applications will increase in the near future. The
ability to conduct meaningful comparisons and assessments will be crucial to
successful deployment and increasing biometric adoption. Even the best single
modality, used in a unimodal biometric system, cannot fully deliver high
recognition rates. Multimodal biometric systems are able to mitigate some of the
limitations encountered in unimodal biometric systems, such as
non-universality, distinctiveness, non-acceptability, noisy sensor data, spoof
attacks, and performance. More reliable recognition accuracy and performance
are achievable when different modalities are combined and different
algorithms or techniques are used. The work presented in this
paper focuses on a bimodal biometric system using face and fingerprint. An
image enhancement technique (histogram equalization) is used to enhance the
face and fingerprint images. Salient features of the face and fingerprint were
extracted using the Gabor filter technique. Dimensionality reduction was
carried out on the features extracted from both images using principal
component analysis. A feature-level fusion algorithm (the Mahalanobis
distance technique) is used to combine the unimodal features. The performance
of the proposed approach is validated and shown to be effective.
Comment: Focused on extraction of features from two different modalities (face
and fingerprint) using the Gabor filter
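A feature-level fusion step of this kind might be sketched as below; the feature dimensions, the identity gallery, and the covariance matrix are hypothetical placeholders (real Gabor/PCA features would feed in here):

```python
import numpy as np

def mahalanobis_match(face_feat, finger_feat, gallery, cov):
    """Hypothetical sketch of feature-level fusion with a Mahalanobis
    score: concatenate the per-modality feature vectors, then score each
    enrolled template by Mahalanobis distance; the smallest distance
    gives the matched identity."""
    probe = np.concatenate([face_feat, finger_feat])  # fused feature vector
    inv_cov = np.linalg.inv(cov)                      # precision matrix
    dists = []
    for template in gallery:
        d = probe - template
        dists.append(float(np.sqrt(d @ inv_cov @ d)))
    return int(np.argmin(dists)), dists
```

With an identity covariance this reduces to Euclidean matching; a covariance estimated from training features lets correlated dimensions count less.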
Natural Color Image Enhancement based on Modified Multiscale Retinex Algorithm and Performance Evaluation using Wavelet Energy
This paper presents a new color image enhancement technique based on a
modified MultiScale Retinex (MSR) algorithm, and the visual quality of the
enhanced images is evaluated using a new metric, namely wavelet energy. The
color image enhancement is achieved by down-sampling the value component of
the HSV-converted image into three scales (normal, medium, and fine) after a
contrast-stretching operation. These down-sampled value components are
enhanced using the MSR algorithm. The value component is reconstructed by
averaging each pixel of the lower-scale image with that of the upper-scale
image after up-sampling the lower-scale image. This process replaces dark
pixels with the average of the lower-scale and upper-scale pixels, while
retaining the bright pixels. The quality of the reconstructed images in the
proposed method is found to be good and far better than that of other methods. The
performance of the proposed scheme is evaluated using a new wavelet-domain
assessment criterion, referred to as wavelet energy. This scheme computes the
energy of both the original and the enhanced image in the wavelet domain. The
number of edge details, as well as the wavelet energy, is lower in a
poor-quality image than in a naturally enhanced image. The experimental
results confirm that the proposed wavelet-energy-based color image quality
assessment technique efficiently characterizes both the local and global
details of the enhanced image.
Comment: 10 pages, 3 figures, Recent Advances in Intelligent Informatics,
Advances in Intelligent Systems and Computing Volume 235, 2014, pp 83-9
Study of optical techniques for the Ames unitary wind tunnel: Digital image processing, part 6
A survey of digital image processing techniques and processing systems for aerodynamic images has been conducted. These images covered many types of flows and were generated by many types of flow diagnostics. These include laser vapor screens, infrared cameras, laser holographic interferometry, Schlieren, and luminescent paints. Some general digital image processing systems, imaging networks, optical sensors, and image computing chips were briefly reviewed. Possible digital imaging network systems for the Ames Unitary Wind Tunnel were explored.
Improved underwater image enhancement algorithms based on partial differential equations (PDEs)
The experimental results of improved underwater image enhancement algorithms
based on partial differential equations (PDEs) are presented in this report.
This second work extends the previous study and incorporates several
improvements into the revised algorithm. Experiments show evidence of the
improvements when compared to previously proposed approaches and other
conventional algorithms found in the literature.
Comment: 22 pages, 6 figures
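The report does not specify its PDEs here, but a classic PDE used for image enhancement is Perona-Malik anisotropic diffusion; the following sketch is an illustrative stand-in for this family of methods, not the report's algorithm:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Perona-Malik anisotropic diffusion on a float grayscale image:
    smooths noise while preserving edges. kappa and dt are illustrative
    choices of edge threshold and time step."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients in the four compass directions.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance: small across strong edges,
        # so diffusion smooths flat regions but halts at boundaries.
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        # Explicit Euler update of the diffusion equation.
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

Each iteration is one explicit time step of the diffusion equation; running more iterations trades noise suppression against fine-texture loss.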
Towards Real-Time Advancement of Underwater Visual Quality with GAN
Low visual quality has kept underwater robotic vision from a wide range of
applications. Although several algorithms have been developed, real-time and
adaptive methods remain lacking for real-world tasks. In this paper, we address
this difficulty based on generative adversarial networks (GAN), and propose a
GAN-based restoration scheme (GAN-RS). In particular, we develop a multi-branch
discriminator including an adversarial branch and a critic branch for the
purpose of simultaneously preserving image content and removing underwater
noise. In addition to adversarial learning, a novel dark channel prior loss
also encourages the generator to produce realistic images. More specifically, an
underwater index is investigated to describe underwater properties, and a loss
function based on the underwater index is designed to train the critic branch
for underwater noise suppression. Through extensive comparisons on visual
quality and feature restoration, we confirm the superiority of the proposed
approach. Consequently, the GAN-RS can adaptively improve underwater visual
quality in real time and induce an overall superior restoration performance.
Finally, a real-world experiment is conducted on the seabed for grasping marine
products, and the results are quite promising. The source code is publicly
available at https://github.com/SeanChenxy/GAN_RS
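The dark channel prior mentioned above can be computed directly; a sketch follows (the patch size is an illustrative choice, and this is the classic prior rather than GAN-RS's exact loss implementation):

```python
import numpy as np

def dark_channel(img, patch=7):
    """Dark channel of a float RGB image in [0, 1]: the per-patch minimum
    over both space and color channels."""
    mins = img.min(axis=2).astype(float)          # channel-wise minimum
    h, w = mins.shape
    pad = np.pad(mins, patch // 2, mode='edge')
    out = np.full_like(mins, np.inf)
    for i in range(patch):                        # spatial minimum filter
        for j in range(patch):
            out = np.minimum(out, pad[i:i + h, j:j + w])
    return out

def dark_channel_loss(generated, patch=7):
    """Sketch of a dark-channel-style loss: haze-free natural images have
    a near-zero dark channel, so penalize its mean on generator output."""
    return float(dark_channel(generated, patch).mean())
```

Haze and underwater scattering lift the minimum intensity everywhere, so a hazy image scores a larger dark-channel loss than a clear one.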
Quality Enhancement for Underwater Images using Various Image Processing Techniques: A Survey
Underwater images are essential for identifying the activity of underwater
objects and play a vital role in exploring and utilizing aquatic resources.
Underwater images suffer from low contrast, various kinds of noise, and
object imbalance due to the lack of light intensity. CNN-based deep learning
approaches have improved low-resolution underwater photos during the last
decade. Nevertheless, those techniques still have problems, such as high MSE,
low PSNR, and high SSIM error rates. Various methods are studied that
effectively treat different distorted underwater scenes and improve contrast
and color deviation compared to other algorithms. In terms of the color
richness of the resulting images and the execution time, the latest
algorithms still have deficiencies. In future work, the structure of our
algorithm will be further adjusted to shorten the execution time, and
optimization of the color compensation method under different color
deviations will also be a focus of future research. With the wide application
of underwater vision in different scientific research fields, underwater
image enhancement can play an increasingly significant role in image
processing for underwater research and underwater archaeology. Most of the
target images of current algorithms are shallow-water images; when an
artificial light source is added to deep-water images, the raw images face
more diverse noise, and image enhancement faces more challenges. As a result,
this study investigates the numerous existing systems used for quality
enhancement of underwater images using various image processing techniques.
We identify gaps and challenges in current systems and build on them for
future improvement. The outcome of this overview is a future problem
statement that extends this research and overcomes the challenges faced by
previous researchers, while also improving accuracy in terms of reducing MSE
and enhancing PSNR.
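The MSE and PSNR figures of merit discussed throughout this survey are straightforward to compute; a minimal reference sketch (8-bit peak value assumed):

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between a reference and a test image."""
    return float(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the
    reference. peak is the maximum representable value (255 for 8-bit)."""
    m = mse(ref, test)
    return float('inf') if m == 0 else float(10.0 * np.log10(peak ** 2 / m))
```

Reducing MSE and raising PSNR, as the survey proposes, are two views of the same quantity: PSNR is a log-scaled inverse of MSE relative to the peak value.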