A Comparative Study of Quality and Content-Based Spatial Pooling Strategies in Image Quality Assessment
The process of quantifying image quality consists of engineering the quality
features and pooling these features to obtain a value or a map. There has been
a significant research interest in designing the quality features but pooling
is usually overlooked relative to feature design. In this work, we compare
state-of-the-art quality- and content-based spatial pooling strategies and show
that, although features are key in any image quality assessment method, pooling
also matters. We also propose a quality-based spatial pooling strategy that is
based on linearly weighted percentile pooling (WPP). Pooling strategies are
analyzed for squared error, SSIM, and PerSIM on the LIVE, multiply distorted
LIVE, and TID2013 image databases.
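The linearly weighted percentile pooling (WPP) idea can be sketched as follows. The specific percentiles and weights below are illustrative assumptions; the abstract does not state the paper's actual choices:

```python
import numpy as np

def weighted_percentile_pool(quality_map, percentiles=(5, 25, 50, 75, 95)):
    """Pool a local quality map into a scalar score by linearly weighting
    a set of its percentiles (illustrative sketch of WPP; the exact
    percentiles and weights used in the paper may differ)."""
    q = np.percentile(quality_map.ravel(), percentiles)
    # Linearly decreasing weights emphasize the worst-quality regions,
    # a common choice since severe local distortions dominate perception.
    w = np.linspace(1.0, 0.2, num=len(percentiles))
    return float(np.dot(w, q) / w.sum())

score = weighted_percentile_pool(np.random.rand(64, 64))
```

Any local quality map (e.g. a per-pixel SSIM map) can be passed in; the pooled score stays within the range of the map's values.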
A Neural Network based Framework for Effective Laparoscopic Video Quality Assessment
Video quality assessment is a challenging problem having a critical
significance in the context of medical imaging. For instance, in laparoscopic
surgery, the acquired video data suffers from different kinds of distortion
that not only hinder surgery performance but also affect the execution of
subsequent tasks in surgical navigation and robotic surgeries. For this reason,
in this paper we propose neural-network-based approaches for both
distortion classification and quality prediction. More precisely, a Residual
Network (ResNet) based approach is first developed for the joint ranking
and classification task. Then, this architecture is extended to make it
appropriate for the quality prediction task by using an additional Fully
Connected Neural Network (FCNN). To train the overall architecture (ResNet and
FCNN models), transfer learning and end-to-end learning approaches are
investigated. Experimental results, carried out on a new laparoscopic video
quality database, have shown the efficiency of the proposed methods compared to
recent conventional and deep learning-based approaches.
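The second stage described above, a fully connected head mapping ResNet features to a quality score, can be sketched in plain NumPy. The feature dimension and layer sizes below are hypothetical; the abstract does not give the paper's architecture details:

```python
import numpy as np

def fcnn_quality_head(features, W1, b1, W2, b2):
    """Map a pooled ResNet feature vector to a scalar quality score via a
    small fully connected network (hypothetical layer sizes; stands in for
    the FCNN head appended to the ResNet in the paper)."""
    h = np.maximum(0.0, features @ W1 + b1)  # ReLU hidden layer
    return float(h @ W2 + b2)                # linear output: quality score

rng = np.random.default_rng(0)
feat = rng.standard_normal(512)              # e.g. ResNet-18 pooled features
W1, b1 = rng.standard_normal((512, 64)) * 0.01, np.zeros(64)
W2, b2 = rng.standard_normal(64) * 0.01, 0.0
score = fcnn_quality_head(feat, W1, b1, W2, b2)
```

In the transfer-learning setting, only the head's weights would be trained on quality labels; end-to-end learning would also update the ResNet backbone producing `features`.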
Image Utility Assessment and a Relationship with Image Quality Assessment
Present quality assessment (QA) algorithms aim to generate scores for natural images consistent with subjective scores for the quality assessment task. For the quality assessment task, human observers evaluate a natural image based on its perceptual resemblance to a reference. Natural images communicate useful information to humans, and this paper investigates the utility assessment task, where human observers evaluate the usefulness of a natural image as a surrogate for a reference. Current QA algorithms implicitly assess utility insofar as an image that exhibits strong perceptual resemblance to a reference is also of high utility. However, a perceived quality score is not a proxy for a perceived utility score: a decrease in perceived quality may not affect the perceived utility. Two experiments are conducted to investigate the relationship between the quality assessment and utility assessment tasks. The results from these experiments provide evidence that any algorithm optimized to predict perceived quality scores cannot immediately predict perceived utility scores. Several QA algorithms are evaluated in terms of their ability to predict subjective scores for the quality and utility assessment tasks. Among the QA algorithms evaluated, the visual information fidelity (VIF) criterion, which is frequently reported to provide the highest correlation with perceived quality, predicted both perceived quality and utility scores reasonably. The consistent performance of VIF for both the tasks raised suspicions in light of the evidence from the psychophysical experiments. A thorough analysis of VIF revealed that it artificially emphasizes evaluations at finer image scales (i.e., higher spatial frequencies) over those at coarser image scales (i.e., lower spatial frequencies).
A modified implementation of VIF, denoted VIF*, is presented that provides statistically significant improvement over VIF for the quality assessment task and statistically worse performance for the utility assessment task. A novel utility assessment algorithm, referred to as the natural image contour evaluation (NICE), is introduced that conducts a comparison of the contours of a test image to those of a reference image across multiple image scales to score the test image. NICE demonstrates a viable departure from traditional QA algorithms that incorporate energy-based approaches and is capable of predicting perceived utility scores.
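The multi-scale contour comparison underlying NICE can be sketched as follows. The gradient-magnitude edge detector, threshold, and dyadic downsampling here are illustrative stand-ins; the abstract does not specify NICE's contour extraction or comparison rule:

```python
import numpy as np

def edge_map(img, thresh=0.1):
    # Simple gradient-magnitude edges: a stand-in for the paper's
    # contour extraction, which is not detailed in the abstract.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def nice_like_score(test, ref, n_scales=3):
    """Compare the contours of a test image to those of a reference
    across dyadic scales (illustrative sketch of the multi-scale
    contour comparison idea behind NICE)."""
    score = 0.0
    for _ in range(n_scales):
        t, r = edge_map(test), edge_map(ref)
        # Fraction of disagreeing edge pixels: lower means more similar.
        score += np.mean(t != r)
        test = test[::2, ::2]   # crude dyadic downsampling
        ref = ref[::2, ::2]
    return score / n_scales
```

A score of 0 means the contour maps agree at every scale; larger values indicate greater contour disagreement, and hence lower predicted utility under this sketch.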
Analysis of adversarial attacks against CNN-based image forgery detectors
With the ubiquitous diffusion of social networks, images are becoming a
dominant and powerful communication channel. Not surprisingly, they are also
increasingly subject to manipulations aimed at distorting information and
spreading fake news. In recent years, the scientific community has devoted
major efforts to contrast this menace, and many image forgery detectors have
been proposed. Currently, due to the success of deep learning in many
multimedia processing tasks, there is high interest towards CNN-based
detectors, and early results are already very promising. Recent studies in
computer vision, however, have shown CNNs to be highly vulnerable to
adversarial attacks, small perturbations of the input data which drive the
network towards erroneous classification. In this paper we analyze the
vulnerability of CNN-based image forensics methods to adversarial attacks,
considering several detectors and several types of attack, and testing
performance on a wide range of common manipulations, from easily detectable
to barely detectable ones.
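The adversarial perturbations discussed above are typically generated with gradient-based methods such as the Fast Gradient Sign Method (FGSM). The sketch below applies one FGSM step against a toy linear "forgery detector"; the linear model is an illustrative stand-in, since the paper attacks CNN-based detectors:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, b, y, eps=0.02):
    """One FGSM step against a toy detector sigmoid(w.x + b).
    The cross-entropy loss gradient w.r.t. the input is (p - y) * w;
    moving the input along the sign of that gradient increases the loss,
    pushing the detector toward a wrong decision. (Illustrative stand-in
    for attacking a CNN, where the gradient comes from backpropagation.)"""
    p = sigmoid(x @ w + b)
    grad = (p - y) * w                        # dLoss/dx for cross-entropy
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

Because each pixel moves by at most `eps`, the perturbation stays visually imperceptible for small `eps`, which is what makes such attacks hard to spot yet effective at flipping detector decisions.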