The application of visual saliency models in objective image quality assessment: a statistical evaluation
Advances in image quality assessment have shown the potential added value of including visual attention in objective quality assessment. Numerous models of visual saliency have been implemented and integrated into different image quality metrics (IQMs), but the resulting gain in reliability varies to a large extent. Insight into the causes and trends of this variation would be highly beneficial for further improving IQMs, but they are not yet fully understood. In this paper, an exhaustive statistical evaluation is conducted to assess the added value of computational saliency in objective image quality assessment, using 20 state-of-the-art saliency models and 12 of the best-known IQMs. Quantitative results show that the differences between saliency models in predicting human fixations are sufficient to yield a significant difference in performance gain when these saliency models are added to IQMs. Surprisingly, however, the extent to which an IQM can profit from adding a saliency model does not appear to be directly related to how well that saliency model predicts human fixations. Our statistical analysis provides useful guidance for applying saliency models in IQMs, in terms of saliency model dependence, IQM dependence, and image distortion dependence. The testbed and software are made publicly available to the research community.
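A common way such saliency models are integrated into IQMs is to weight the metric's local quality (distortion) map by a saliency map before pooling it into a single score. The helper below is an illustrative sketch of that idea only, assuming `quality_map` and `saliency_map` are same-shaped NumPy arrays; the function name and interface are hypothetical, not taken from the paper's testbed:

```python
import numpy as np

def saliency_weighted_pool(quality_map, saliency_map, eps=1e-8):
    """Pool a local quality map into a single score, weighting each
    pixel by its (normalized) visual saliency. Illustrative sketch of
    saliency-weighted pooling, not any specific IQM's implementation."""
    # Normalize saliency so the weights sum to one.
    weights = saliency_map / (saliency_map.sum() + eps)
    return float((quality_map * weights).sum())
```

With uniform saliency this reduces to plain average pooling; with saliency concentrated on a region, the score is dominated by the quality of that region, which is exactly the effect the evaluated IQM variants exploit.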
Salient Objects in Clutter: Bringing Salient Object Detection to the Foreground
We provide a comprehensive evaluation of salient object detection (SOD) models. Our analysis identifies a serious design bias in existing SOD datasets, which assume that each image contains at least one clearly outstanding salient object in low clutter. This design bias has led to saturated, high performance for state-of-the-art SOD models when evaluated on existing datasets. The models, however, still perform far from satisfactorily when applied to real-world daily scenes. Based on our analyses, we first identify 7 crucial aspects that a comprehensive and balanced dataset should fulfill. Then, we propose a new high-quality dataset and update the previous saliency benchmark. Specifically, our SOC (Salient Objects in Clutter) dataset includes images with salient and non-salient objects from daily object categories. Beyond object-category annotations, each salient image is accompanied by attributes that reflect common challenges in real-world scenes. Finally, we report an attribute-based performance assessment on our dataset. Comment: ECCV 201
How is Gaze Influenced by Image Transformations? Dataset and Model
Data size is the bottleneck for developing deep saliency models, because collecting eye-movement data is very time consuming and expensive. Most current studies on human attention and saliency modeling have used high-quality, stereotypical stimuli. In the real world, however, captured images undergo various types of transformations. Can we use these transformations to augment existing saliency datasets? Here, we first create a novel saliency dataset comprising fixations of 10 observers over 1900 images degraded by 19 types of transformations. Second, by analyzing eye movements, we find that observers look at different locations in transformed versus original images. Third, we use the new data on transformed images, called data augmentation transformation (DAT), to train deep saliency models. We find that label-preserving DATs with negligible impact on human gaze boost saliency prediction, whereas some other DATs that severely impact human gaze degrade performance. These valid, label-preserving augmentation transformations provide a solution for enlarging existing saliency datasets. Finally, we introduce a novel saliency model based on a generative adversarial network (dubbed GazeGAN). A modified U-Net is proposed as the generator of GazeGAN, which combines classic skip connections with a novel center-surround connection (CSC) in order to leverage multi-level features. We also propose a histogram loss based on the Alternative Chi-Square Distance (ACS HistLoss) to refine the saliency map in terms of luminance distribution. Extensive experiments and comparisons on 3 datasets indicate that GazeGAN achieves the best performance in terms of popular saliency evaluation metrics, and is more robust to various perturbations. Our code and data are available at:
https://github.com/CZHQuality/Sal-CFS-GAN
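The histogram-loss idea can be sketched in NumPy as an Alternative Chi-Square distance between the luminance histograms of two maps. This is only an illustrative approximation of the ACS HistLoss mentioned above; the paper's exact formulation, binning, and normalization may differ (see the linked repository for the actual implementation):

```python
import numpy as np

def acs_hist_distance(map_a, map_b, bins=256, eps=1e-8):
    """Alternative Chi-Square distance between the normalized luminance
    histograms of two maps with values in [0, 1]. Illustrative sketch of
    a histogram loss, not the paper's exact ACS HistLoss."""
    h_a, _ = np.histogram(map_a, bins=bins, range=(0.0, 1.0))
    h_b, _ = np.histogram(map_b, bins=bins, range=(0.0, 1.0))
    # Normalize counts to probability distributions.
    h_a = h_a / (h_a.sum() + eps)
    h_b = h_b / (h_b.sum() + eps)
    # Alternative chi-square: 2 * sum((a - b)^2 / (a + b)).
    return float(np.sum(2.0 * (h_a - h_b) ** 2 / (h_a + h_b + eps)))
```

The distance is zero for identical luminance distributions and grows as the predicted map's histogram drifts from the ground truth's, which is the property such a loss exploits during training.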
Visual Quality Assessment and Blur Detection Based on the Transform of Gradient Magnitudes
Abstract: Digital imaging and image-processing technologies have revolutionized the way in which we capture, store, receive, view, utilize, and share images. In image-based applications, images pass through different processing stages (e.g., acquisition, compression, and transmission) and are subjected to different types of distortions that degrade their visual quality. Image Quality Assessment (IQA) attempts to use computational models to automatically evaluate and estimate image quality in accordance with subjective evaluations. Moreover, with the fast development of computer vision techniques, it is important in practice to extract and understand the information contained in blurred images or regions.
The work in this dissertation focuses on reduced-reference visual quality assessment of images and textures, as well as perceptual-based spatially-varying blur detection.
A training-free, low-cost Reduced-Reference IQA (RRIQA) method is proposed that requires only a very small number of reduced-reference (RR) features. Extensive experiments performed on different benchmark databases demonstrate that the proposed RRIQA method delivers highly competitive performance compared with state-of-the-art RRIQA models for both natural and texture images.
In the context of texture, the effect of texture granularity on the quality of synthesized textures is studied. Moreover, two RR objective visual quality assessment methods that quantify the perceived quality of synthesized textures are proposed. Performance evaluations on two synthesized-texture databases demonstrate that the proposed RR metrics outperform full-reference (FR), no-reference (NR), and RR state-of-the-art quality metrics in predicting the perceived visual quality of synthesized textures.
Last but not least, an effective approach is proposed to address the spatially-varying blur detection problem from a single image without requiring any knowledge of the blur type, level, or camera settings. Evaluations of the proposed approach on diverse sets of blurry images with different blur types, levels, and content demonstrate that the proposed algorithm performs favorably against state-of-the-art methods, both qualitatively and quantitatively. Dissertation/Thesis: Doctoral Dissertation, Electrical Engineering, 201
Stereoscopic image quality assessment method based on binocular combination saliency model
The objective quality assessment of stereoscopic images plays an important role in three-dimensional (3D) technologies. In this paper, we propose an effective method to evaluate the quality of stereoscopic images afflicted by symmetric distortions. The major technical contribution of this paper is that both binocular combination behaviour and human 3D visual saliency characteristics are considered. In particular, a new 3D saliency map is developed, which not only greatly reduces the computational complexity by avoiding calculation of depth information, but also assigns appropriate weights to the image contents. Experimental results indicate that the proposed metric not only significantly outperforms conventional 2D quality metrics, but also achieves higher performance than existing 3D quality assessment models.
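Binocular combination of the kind such metrics model is often approximated with energy-based gain control, where each view's contribution at a pixel is weighted by its local energy. The snippet below is a simplified, hypothetical sketch of that behaviour (pixelwise squared intensity as "energy"), not the paper's actual combination model:

```python
import numpy as np

def binocular_combine(img_left, img_right, eps=1e-8):
    """Form a cyclopean image by weighting each view by its pixelwise
    energy (gain-control style). Simplified illustrative sketch, not
    the paper's binocular combination model."""
    energy_l = img_left ** 2
    energy_r = img_right ** 2
    # Each view's weight is its share of the total local energy.
    w_left = energy_l / (energy_l + energy_r + eps)
    return w_left * img_left + (1.0 - w_left) * img_right
```

Under symmetric distortions the two views carry similar energy and the combination degenerates toward a simple average, which is one reason symmetric-distortion assessment is the more tractable case addressed here.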