A Perceptually Optimized and Self-Calibrated Tone Mapping Operator
With the increasing popularity and accessibility of high dynamic range (HDR)
photography, tone mapping operators (TMOs) for dynamic range compression are
in practical demand. In this paper, we develop a two-stage neural
network-based TMO that is self-calibrated and perceptually optimized. In Stage
one, motivated by the physiology of the early stages of the human visual
system, we first decompose an HDR image into a normalized Laplacian pyramid. We
then use two lightweight deep neural networks (DNNs), taking the normalized
representation as input and estimating the Laplacian pyramid of the
corresponding LDR image. We optimize the tone mapping network by minimizing the
normalized Laplacian pyramid distance (NLPD), a perceptual metric aligning with
human judgments of tone-mapped image quality. In Stage two, the input HDR image
is self-calibrated to compute the final LDR image. We feed the same HDR image
but rescaled with different maximum luminances to the learned tone mapping
network, and generate a pseudo-multi-exposure image stack with different detail
visibility and color saturation. We then train another lightweight DNN to fuse
the LDR image stack into a desired LDR image by maximizing a variant of the
structural similarity index for multi-exposure image fusion (MEF-SSIM), which
has been proven perceptually relevant to fused image quality. The proposed
self-calibration mechanism through MEF enables our TMO to accept uncalibrated
HDR images, while being physiology-driven. Extensive experiments show that our
method produces images with consistently better visual quality. Additionally,
since our method builds upon three lightweight DNNs, it is among the fastest
local TMOs. Comment: 20 pages, 18 figures
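The normalized Laplacian pyramid decomposition described in Stage one can be sketched as follows. This is a rough illustration only, not the paper's implementation: the binomial filter, the nearest-neighbour upsampling, the number of levels, and the normalization constant `sigma` are all placeholder choices.

```python
import numpy as np

def blur_downsample(x):
    """Blur with a separable 5-tap binomial kernel, then downsample by 2."""
    k = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    x = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, x)
    x = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, x)
    return x[::2, ::2]

def upsample(x, shape):
    """Nearest-neighbour upsample back to `shape` (a crude stand-in)."""
    y = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
    return y[:shape[0], :shape[1]]

def normalized_laplacian_pyramid(img, n_levels=3, sigma=0.17):
    """Decompose `img` into bandpass levels, each divided by a local
    luminance estimate -- a rough analogue of early visual normalization."""
    pyramid = []
    current = np.log(img + 1e-6)        # luminance compression, cf. early vision
    for _ in range(n_levels - 1):
        down = blur_downsample(current)
        low = upsample(down, current.shape)
        band = current - low            # bandpass (Laplacian) level
        norm = np.abs(low) + sigma      # crude local-luminance normalizer
        pyramid.append(band / norm)
        current = down
    pyramid.append(current)             # residual lowpass level
    return pyramid
```

In the abstract's pipeline, this normalized representation is what the tone-mapping DNNs consume, and the same pyramid structure underlies the NLPD metric used as the training loss.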
Learned Perceptual Image Enhancement
Learning a typical image enhancement pipeline involves minimization of a loss
function between enhanced and reference images. While L1 and L2 losses are
perhaps the most widely used functions for this purpose, they do not
necessarily lead to perceptually compelling results. In this paper, we show
that adding a learned no-reference image quality metric to the loss can
significantly improve enhancement operators. This metric is implemented using a
CNN (convolutional neural network) trained on a large-scale dataset labelled
with aesthetic preferences of human raters. This loss allows us to conveniently
perform back-propagation in our learning framework to simultaneously optimize
for similarity to a given ground truth reference and perceptual quality. This
perceptual loss is only used to train parameters of image processing operators,
and does not impose any extra complexity at inference time. Our experiments
demonstrate that this loss can be effective for tuning a variety of operators
such as local tone mapping and dehazing.
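The loss construction described above can be illustrated schematically: a fidelity term toward the reference plus a penalty that grows as a learned quality score drops. The `quality_score` below is a hypothetical stand-in for the paper's aesthetics-trained CNN, and the weight `lam` is an assumed hyperparameter.

```python
import numpy as np

def quality_score(img):
    """Hypothetical stand-in for a learned no-reference quality model:
    here it simply rewards contrast. The real model in the paper is a
    CNN trained on human aesthetic ratings."""
    return float(np.clip(img.std() * 4.0, 0.0, 1.0))

def enhancement_loss(enhanced, reference, lam=0.1):
    """Fidelity (L1) term plus a perceptual penalty that grows as the
    quality score drops. Used only at training time; inference-time
    operators carry no extra cost."""
    fidelity = np.abs(enhanced - reference).mean()
    perceptual = 1.0 - quality_score(enhanced)
    return fidelity + lam * perceptual
```

Because both terms are differentiable, gradients flow through the enhancement operator's parameters during back-propagation, matching the joint optimization the abstract describes.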
What You Hear Is What You See: Audio Quality Metrics From Image Quality Metrics
In this study, we investigate the feasibility of utilizing state-of-the-art
image perceptual metrics for evaluating audio signals by representing them as
spectrograms. The promise of this approach rests on the similarity between
the neural mechanisms of the auditory and visual pathways.
Furthermore, we customise one of the metrics which has a psychoacoustically
plausible architecture to account for the peculiarities of sound signals. We
evaluate the effectiveness of our proposed metric and several baseline metrics
on a music dataset, with promising results in terms of the correlation
between the metrics and the perceived quality of audio as rated by human
evaluators.
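The spectrogram-as-image idea can be sketched with a plain STFT and a simple full-reference image metric. PSNR is used here purely as a stand-in; the paper evaluates state-of-the-art perceptual image metrics, and the frame/hop sizes below are illustrative assumptions.

```python
import numpy as np

def spectrogram(signal, frame=256, hop=128):
    """Log-magnitude STFT: frame the signal, window it, take |FFT|."""
    window = np.hanning(frame)
    n = 1 + (len(signal) - frame) // hop
    frames = np.stack([signal[i*hop : i*hop + frame] * window for i in range(n)])
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

def image_psnr(a, b):
    """A basic full-reference image metric, applied here to spectrograms."""
    mse = np.mean((a - b) ** 2)
    peak = max(a.max(), b.max())
    return 10.0 * np.log10(peak ** 2 / (mse + 1e-12))

def audio_quality(reference, degraded):
    """Score degraded audio by comparing its spectrogram 'image'
    against the reference's."""
    return image_psnr(spectrogram(reference), spectrogram(degraded))
```

A sanity check of the approach: a lightly degraded signal should score higher than a heavily degraded one when both are compared against the same clean reference.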
Comparison of DCT, SVD and BFOA based multimodal biometric watermarking systems
Abstract: Digital image watermarking is a major approach for hiding biometric information, in which watermark data are concealed inside a host image with imperceptible changes to the picture. Most research in digital image watermarking aims at reliable improvements in robustness against attacks. A reversible, invisible watermarking scheme is used for a fingerprint-and-iris multimodal biometric system. A novel approach is used for fusing the different biometric modalities: fingerprint and iris modalities are extracted individually and fused using different fusion techniques. The performance of these fusion techniques is evaluated, and the Discrete Wavelet Transform fusion method is identified as the best. The best fused biometric template is then watermarked into a cover image. Various watermarking techniques, namely the Discrete Cosine Transform (DCT), Singular Value Decomposition (SVD) and the Bacterial Foraging Optimization Algorithm (BFOA), are applied to the fused biometric feature image, and the performance of the watermarking systems is compared using different metrics. The watermarked images are found to be robust against different attacks, and the BFOA watermarking technique is able to recover the biometric template.
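Of the three compared techniques, DCT-domain embedding is the most standard and can be sketched minimally as below. The chosen coefficient positions, the strength `alpha`, and the non-blind extraction (which requires the original host) are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (D @ D.T == I)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def embed_watermark(host, bits, alpha=8.0):
    """Additively embed +/-1 bits into mid-frequency DCT coefficients.
    `alpha` trades imperceptibility against robustness."""
    n = host.shape[0]
    D = dct_matrix(n)
    coeffs = D @ host @ D.T
    for j, b in enumerate(bits):          # a mid-band row, for illustration
        coeffs[n // 2, j] += alpha * (1 if b else -1)
    return D.T @ coeffs @ D               # inverse DCT back to pixel domain

def extract_watermark(watermarked, host, n_bits):
    """Non-blind extraction: compare against the original host's DCT."""
    n = host.shape[0]
    D = dct_matrix(n)
    diff = D @ watermarked @ D.T - D @ host @ D.T
    return [1 if diff[n // 2, j] > 0 else 0 for j in range(n_bits)]
```

Because the DCT matrix is orthonormal, the embedded perturbation survives the round trip exactly in the absence of attacks; robustness under compression or noise is what distinguishes the compared schemes.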
Generalization of form in visual pattern classification.
Human observers were trained to criterion in classifying compound Gabor signals with symmetry relationships, and were then tested with each of 18 blob-only versions of the learning set. Generalization to dark-only and light-only blob versions of the learning signals, as well as to dark-and-light blob versions, was found to be excellent, thus implying virtually perfect generalization of the ability to classify mirror-image signals. The hypothesis that the learning signals are internally represented in terms of a 'blob code' with explicit labelling of contrast polarities was tested by predicting observed generalization behaviour in terms of various types of signal representations (pixelwise, Laplacian pyramid, curvature pyramid, ON/OFF, local maxima of Laplacian and curvature operators) and a minimum-distance rule. Most representations could explain generalization for dark-only and light-only blob patterns but not for the high-thresholded versions thereof. This led to the proposal of a structure-oriented blob code. Whether such a code could be used in conjunction with simple classifiers or should be transformed into a propositional scheme of representation operated upon by a rule-based classification process remains an open question.