Active sampling, scaling and dataset merging for large-scale image quality assessment
The field of subjective assessment is concerned with eliciting human judgements about a set of stimuli. Collecting such data is costly and time-consuming, especially when the subjective study is to be conducted in a controlled environment and with specialized equipment. Thus, data from these studies are usually scarce. One area in which obtaining subjective measurements is difficult is image quality assessment. The results from these studies are used to develop and train automated, or objective, image quality metrics, which, with the advent of deep learning, require large amounts of versatile and heterogeneous data.
I present three main contributions in this dissertation. First, I propose a new active sampling method for the efficient collection of pairwise comparisons in subjective assessment experiments. In these experiments, observers are asked to express a preference between two conditions. However, many pairwise comparison protocols require a large number of comparisons to infer accurate scores, which may be infeasible when each comparison is time-consuming (e.g. videos) or expensive (e.g. medical imaging). This motivates the use of an active sampling algorithm that chooses only the most informative pairs for comparison. I demonstrate, with real and synthetic data, that my algorithm offers the highest accuracy of inferred scores for a fixed number of measurements compared to existing methods. Second, I propose a probabilistic framework to fuse the outcomes of different psychophysical experimental protocols, namely rating and pairwise comparison experiments. Such a method can be used for merging existing datasets of a subjective nature and for experiments in which both types of measurement are collected. Third, with a new dataset merging technique and by collecting additional cross-dataset quality comparisons, I create a Unified Photometric Image Quality (UPIQ) dataset with over 4,000 images by realigning and merging existing high-dynamic-range (HDR) and standard-dynamic-range (SDR) datasets. The realigned quality scores share the same unified quality scale across all datasets. I then use the new dataset to retrain existing HDR metrics and show that the dataset is sufficiently large for training deep architectures. I show the utility of the dataset and metrics in an application to image compression that accounts for viewing conditions, including screen brightness and viewing distance.
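To make the active sampling idea concrete, below is a minimal sketch of an information-driven pairwise-comparison loop. It is not the algorithm from the dissertation: it assumes a Thurstone Case V observer model, re-estimates scores by maximum likelihood after every trial, and simply picks the under-sampled pair whose predicted outcome is closest to a coin flip.

```python
"""Illustrative active-sampling loop for pairwise comparisons (a sketch, not
the dissertation's method). Scores follow a Thurstone Case V model,
P(i preferred over j) = Phi(q_i - q_j); the scale of q is arbitrary."""
import itertools
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(q, wins):
    """Negative log-likelihood of Thurstone Case V given a win-count matrix."""
    q = q - q.mean()                      # fix the free offset of the scale
    diff = q[:, None] - q[None, :]        # q_i - q_j for every ordered pair
    p = norm.cdf(diff).clip(1e-6, 1 - 1e-6)
    return -(wins * np.log(p)).sum()

def estimate_scores(wins):
    n = wins.shape[0]
    res = minimize(neg_log_likelihood, np.zeros(n), args=(wins,), method="L-BFGS-B")
    return res.x - res.x.mean()

def most_informative_pair(q, counts):
    """Pick the pair whose predicted outcome is most uncertain,
    down-weighting pairs that have already been measured many times."""
    best, best_score = None, -np.inf
    for i, j in itertools.combinations(range(len(q)), 2):
        p = norm.cdf(q[i] - q[j])
        score = p * (1 - p) / (1 + counts[i, j] + counts[j, i])
        if score > best_score:
            best, best_score = (i, j), score
    return best

# Toy usage: 6 conditions with hidden true scores, simulated observers.
rng = np.random.default_rng(0)
true_q = np.linspace(0, 2.5, 6)
wins = np.zeros((6, 6))
q_hat = np.zeros(6)
for trial in range(60):
    i, j = most_informative_pair(q_hat, wins)
    if rng.normal(true_q[i]) > rng.normal(true_q[j]):   # simulated judgement
        wins[i, j] += 1
    else:
        wins[j, i] += 1
    q_hat = estimate_scores(wins)
print(np.round(q_hat, 2))
```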
From pairwise comparisons and rating to a unified quality scale
The goal of psychometric scaling is the quantification of perceptual experiences: understanding the relationship between an external stimulus, its internal representation and the response. In this paper, we propose a probabilistic framework to fuse the outcomes of different psychophysical experimental protocols, namely rating and pairwise comparison experiments. Such a method can be used for merging existing datasets of a subjective nature and for experiments in which both types of measurement are collected. We analyze and compare the outcomes of both experimental protocols in terms of time and accuracy in a set of simulations and experiments with benchmark and real-world image quality assessment datasets, showing the necessity of scaling and the advantages of each protocol and of mixing them. Although most of our examples focus on image quality assessment, our findings generalize to any other subjective quality-of-experience task. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 725253, EyeCode), from EPSRC research grant EP/P007902/1 and from a Science Foundation Ireland (SFI) research grant under Grant Number 15/RP/2776. María Pérez-Ortiz did part of this work while at the University of Cambridge and University College London (under MURI grant EPSRC 542892).
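As a rough illustration of the kind of observer model such a fusion can rest on (the notation and formulation below are assumptions made for this sketch, not copied from the paper), pairwise comparisons and ratings can both be tied to the same latent quality scores and combined in a single likelihood:

```latex
% Illustrative observer model (notation assumed, not the paper's exact formulation).
% Pairwise comparisons: Thurstone Case V on latent quality scores q_i
\[ P(i \text{ preferred over } j) = \Phi\!\left(\frac{q_i - q_j}{\sqrt{2}\,\sigma}\right) \]
% Ratings: noisy, possibly rescaled observations of the same latent scores
\[ r_{ik} \sim \mathcal{N}\!\left(a\,q_i + b,\ \sigma_r^2\right) \]
% Both likelihoods share q; with c_{ij} the number of times i was chosen over j,
% maximising the joint log-likelihood
\[ \hat{q},\hat{a},\hat{b} = \arg\max_{q,a,b}\;
   \sum_{(i,j)} c_{ij}\,\log \Phi\!\left(\frac{q_i - q_j}{\sqrt{2}\,\sigma}\right)
   + \sum_{i,k} \log \mathcal{N}\!\left(r_{ik} \mid a\,q_i + b,\ \sigma_r^2\right) \]
% places ratings and comparisons on a single unified quality scale.
```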
A perceptual model of motion quality for rendering with adaptive refresh-rate and resolution
Limited GPU performance budgets and transmission bandwidths mean that real-time rendering often has to compromise on the spatial resolution or temporal resolution (refresh rate). A common practice is to keep either the resolution or the refresh rate constant and dynamically control the other variable. But this strategy is non-optimal when the velocity of displayed content varies. To find the best trade-off between the spatial resolution and refresh rate, we propose a perceptual visual model that predicts the quality of motion given an object velocity and predictability of motion. The model considers two motion artifacts to establish an overall quality score: non-smooth (juddery) motion, and blur. Blur is modeled as a combined effect of eye motion, finite refresh rate and display resolution. To fit the free parameters of the proposed visual model, we measured eye movement for predictable and unpredictable motion, and conducted psychophysical experiments to measure the quality of motion from 50 Hz to 165 Hz. We demonstrate the utility of the model with our on-the-fly motion-adaptive rendering algorithm that adjusts the refresh rate of a G-Sync-capable monitor based on a given rendering budget and observed object motion. Our psychophysical validation experiments demonstrate that the proposed algorithm performs better than constant-refresh-rate solutions, showing that motion-adaptive rendering is an attractive technique for driving variable-refresh-rate displays.
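A minimal sketch of the kind of control loop such a model enables is given below. The quality function, refresh-rate list and cost model are placeholders invented for illustration, not the model fitted in the paper: for each refresh rate the display supports, compute the resolution the rendering budget allows, score the combination, and pick the best.

```python
"""Illustrative motion-adaptive rendering control loop (not the paper's model).

Assumptions: the display supports a discrete set of refresh rates, rendering
cost scales roughly linearly with pixel count, and `motion_quality` is a
placeholder for a perceptual model scoring judder and blur."""

REFRESH_RATES_HZ = [50, 60, 75, 100, 120, 144, 165]

def motion_quality(velocity_deg_s: float, refresh_hz: float, res_scale: float) -> float:
    """Placeholder quality model: penalise judder at low refresh rates and
    blur from reduced resolution, both growing with object velocity."""
    judder = velocity_deg_s / refresh_hz          # larger per-frame step -> more judder
    blur = velocity_deg_s * (1.0 - res_scale)     # coarser pixels -> more perceived blur
    return -(judder + blur)

def choose_mode(velocity_deg_s: float, budget_ms: float, cost_ms_full_res: float):
    """Pick the (refresh rate, resolution scale) pair with the best predicted
    motion quality that still fits the per-frame rendering budget."""
    best = None
    for hz in REFRESH_RATES_HZ:
        frame_time_ms = 1000.0 / hz
        # Fraction of full-resolution pixels affordable at this refresh rate.
        res_scale = min(1.0, min(frame_time_ms, budget_ms) / cost_ms_full_res)
        if res_scale <= 0.0:
            continue
        q = motion_quality(velocity_deg_s, hz, res_scale)
        if best is None or q > best[0]:
            best = (q, hz, res_scale)
    return best

# Example: fast motion, 8 ms budget, 10 ms to render a full-resolution frame.
print(choose_mode(velocity_deg_s=30.0, budget_ms=8.0, cost_ms_full_res=10.0))
```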
Visually lossless image compression under varying display brightness and viewing distance
This dataset for visually lossless image compression is intended for the evaluation of image-artifact visibility metrics. The dataset contains visually lossless thresholds (VLTs) for images depicting a varied selection of contents and compressed with two codecs (JPEG and WebP), which were viewed on monitors of different peak luminance (10 cd/m^2 and 220 cd/m^2)
and at different viewing distances (30 pixels-per-degree (ppd) and 60 ppd).
Refer to the README.md and the paper for more details.
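For illustration, one possible way to load and summarize such a threshold table is sketched below; the file name and column names are assumptions made for the example, and the actual layout is documented in the dataset's README.md.

```python
"""Illustrative loading of a visually-lossless-threshold (VLT) table.

The file name and columns (image, codec, peak_luminance_cd_m2, ppd,
vlt_quality) are assumptions for this example; see README.md for the
actual schema."""
import pandas as pd

vlt = pd.read_csv("vlt_thresholds.csv")  # hypothetical file name

# Mean VLT per viewing condition: codec x display brightness x angular resolution.
summary = (vlt.groupby(["codec", "peak_luminance_cd_m2", "ppd"])["vlt_quality"]
              .mean()
              .reset_index())
print(summary)

# A conservative per-image threshold for a dim display viewed at 60 ppd.
dim_60ppd = vlt[(vlt.peak_luminance_cd_m2 == 10) & (vlt.ppd == 60)]
print(dim_60ppd.groupby("image")["vlt_quality"].min())
```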
Cross-content quality scaling of TID2013 image quality dataset
This dataset improves the accuracy and consistency of the
quality scores for the TID2013 image quality dataset:
http://www.ponomarenko.info/tid2013.htm (Version 1.0)
Details of these improved quality scores can be found in the paper:
Aliaksei Mikhailiuk, María Pérez-Ortiz and Rafał K. Mantiuk
"Psychometric scaling of TID2013 dataset"
Proc. of 10th International Conference on Quality of Multimedia Experience
(QoMEX 2018)
http://www.cl.cam.ac.uk/~rkm38/pdfs/mikhailiuk2018tid_psych_scaling.pdf
More details about the included data files are available in the README.txt file.
UPIQ: Unified Photometric Image Quality dataset
The Unified Photometric Image Quality (UPIQ) dataset is intended for
the training and evaluation of full-reference HDR image quality metrics.
The dataset contains 84 reference images and 4159 distorted images
from four datasets, TID2013 [1] (SDR), LIVE [2] (SDR), Narwaria et al. [3] (HDR)
and Korshunov et al. [4] (HDR). Quality scores were obtained by re-aligning
existing datasets to a common unified quality scale. This was achieved by collecting additional cross-dataset quality comparisons and re-scaling existing data with a psychometric scaling method. Images in the dataset are represented in absolute photometric and colorimetric units,
corresponding to light emitted from a display.
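Because the images are stored in absolute photometric units, they are tied to a display model. The sketch below shows a generic gain-gamma-offset mapping from SDR pixel values to emitted luminance; the parameter values are illustrative assumptions only, and the actual conversion used for UPIQ is described in its documentation.

```python
"""Illustrative gain-gamma-offset display model mapping SDR pixel values to
absolute luminance (cd/m^2). The peak luminance, black level and gamma below
are example values, not the ones used to build UPIQ."""
import numpy as np

def sdr_to_luminance(rgb, peak_cd_m2=220.0, black_cd_m2=0.2, gamma=2.2):
    """Map display-encoded RGB in [0, 1] to emitted luminance per channel."""
    rgb = np.clip(np.asarray(rgb, dtype=np.float64), 0.0, 1.0)
    return (peak_cd_m2 - black_cd_m2) * rgb ** gamma + black_cd_m2

# Example: mid-grey pixel on a 220 cd/m^2 display.
print(sdr_to_luminance([0.5, 0.5, 0.5]))
```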
[1] Ponomarenko, N., Jin, L., Ieremeiev, O., Lukin, V., Egiazarian, K., Astola, J., Vozel, B.,
Chehdi, K., Carli, M., Battisti, F., Kuo, C.-C.J.: Image database TID2013: Peculiarities, results
and perspectives. Signal Processing: Image Communication 30, 57–77 (2015)
[2] Sheikh, H., Sabir, M., Bovik, A.: A Statistical Evaluation of Recent Full Reference
Image Quality Assessment Algorithms. IEEE Transactions on Image Processing 15(11),
3440–3451 (2006). https://doi.org/10.1109/TIP.2006.881959
[3] Narwaria, M., P. Da Silva, M., Le Callet, P., Pepion, R.: Tone mapping-based
high-dynamic-range image compression: study of optimization criterion and perceptual quality.
Optical Engineering 52(10) (2013). https://doi.org/10.1117/1.OE.52.10.102008
[4] Korshunov, P., Hanhart, P., Richter, T., Artusi, A., Mantiuk, R., Ebrahimi, T.:
Subjective quality assessment database of HDR images compressed with JPEG XT. In: 2015
Seventh International Workshop on Quality of Multimedia Experience (QoMEX). pp. 1–6 (May 2015).
https://doi.org/10.1109/QoMEX.2015.714811
UPIQ: Unified Photometric Image Quality dataset (04.2021)
The Unified Photometric Image Quality (UPIQ) dataset is intended for
the training and evaluation of full-reference HDR image quality metrics.
The dataset contains 84 reference images and 4159 distorted images
from four datasets, TID2013 [1] (SDR), LIVE [2] (SDR), Narwaria et al. [3] (HDR)
and Korshunov et al. [4] (HDR). Quality scores were obtained by re-aligning
existing datasets to a common unified quality scale. This was
achieved by collecting additional cross-dataset quality comparisons
and re-scaling existing data with a psychometric scaling method. Images
in the dataset are represented in absolute photometric and colorimetric
units, corresponding to light emitted from a display.
This is an updated version of the dataset with the fixed pix_per_deg column. See README.md.
[1] Ponomarenko, N., Jin, L., Ieremeiev, O., Lukin, V., Egiazarian, K., Astola, J., Vozel, B.,
Chehdi, K., Carli, M., Battisti, F., Kuo, C.-C.J.: Image database TID2013: Peculiarities, results
and perspectives. Signal Processing: Image Communication 30, 57–77 (2015)
[2] Sheikh, H., Sabir, M., Bovik, A.: A Statistical Evaluation of Recent Full Reference
Image Quality Assessment Algorithms. IEEE Transactions on Image Processing 15(11),
3440–3451 (2006). https://doi.org/10.1109/TIP.2006.881959
[3] Narwaria, M., P. Da Silva, M., Le Callet, P., Pepion, R.: Tone mapping-based
high-dynamic-range image compression: study of optimization criterion and perceptual quality.
Optical Engineering 52(10) (2013). https://doi.org/10.1117/1.OE.52.10.102008
[4] Korshunov, P., Hanhart, P., Richter, T., Artusi, A., Mantiuk, R., Ebrahimi, T.:
Subjective quality assessment database of HDR images compressed with JPEG XT. In: 2015
Seventh International Workshop on Quality of Multimedia Experience (QoMEX). pp. 1–6 (May 2015).
https://doi.org/10.1109/QoMEX.2015.714811
Consolidated Dataset and Metrics for High-Dynamic-Range Image Quality
The increasing popularity of high-dynamic-range (HDR) image and video content
brings the need for metrics that can predict the severity of image
impairments as seen on displays of different brightness levels and dynamic
range. Such metrics should be trained and validated on a sufficiently large
subjective image quality dataset to ensure robust performance. As the existing
HDR quality datasets are limited in size, we created a Unified Photometric
Image Quality dataset (UPIQ) with over 4,000 images by realigning and merging
existing HDR and standard-dynamic-range (SDR) datasets. The realigned quality
scores share the same unified quality scale across all datasets. Such
realignment was achieved by collecting additional cross-dataset quality
comparisons and re-scaling data with a psychometric scaling method. Images in
the proposed dataset are represented in absolute photometric and colorimetric
units, corresponding to light emitted from a display. We use the new dataset to
retrain existing HDR metrics and show that the dataset is sufficiently large
for training deep architectures. We show the utility of the dataset in
brightness-aware image compression.
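As a rough sketch of what brightness-aware compression can look like in practice (the metric call, the quality threshold and the sweep below are placeholders, not the method evaluated in the paper): compress progressively harder and keep the smallest JPEG whose predicted quality, for the target display brightness, still stays above a visually-lossless threshold.

```python
"""Illustrative brightness-aware JPEG compression loop (a sketch only).

`predict_quality` stands in for a photometric quality metric, e.g. one
retrained on UPIQ, that scores a distorted image against its reference for a
given display peak luminance; the threshold value is an assumption."""
import io
from PIL import Image

def predict_quality(reference: Image.Image, distorted: Image.Image,
                    peak_cd_m2: float) -> float:
    """Placeholder for a UPIQ-trained metric; higher means closer to lossless."""
    raise NotImplementedError("plug in an actual photometric quality metric")

def smallest_acceptable_jpeg(reference: Image.Image, peak_cd_m2: float,
                             threshold: float = -0.5) -> bytes:
    """Return the smallest JPEG whose predicted quality stays above threshold."""
    ref = reference.convert("RGB")
    best = None
    for q in range(95, 4, -5):                      # sweep JPEG quality downwards
        buf = io.BytesIO()
        ref.save(buf, format="JPEG", quality=q)
        distorted = Image.open(io.BytesIO(buf.getvalue())).convert("RGB")
        if predict_quality(ref, distorted, peak_cd_m2) >= threshold:
            best = buf.getvalue()                   # still visually acceptable
        else:
            break                                   # quality dropped below threshold
    return best
```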