
    Framework for reproducible objective video quality research with case study on PSNR implementations

    Reproducibility is an important and recurrent issue in objective video quality research because the presented algorithms are complex, depend on specific implementations in software packages, or need their parameters trained on a particular, sometimes unpublished, dataset. Textual descriptions often lack the required detail: even for the simple Peak Signal to Noise Ratio (PSNR), several variants exist for images and videos, differing in particular in the choice of the peak value and the temporal pooling. This work presents results achieved through the analysis of objective video quality measures evaluated on a reproducible large-scale database containing about 60,000 HEVC-coded video sequences. We focus on PSNR, one of the most widespread measures, considering its two most common definitions. The sometimes largely different results obtained by applying the two definitions highlight the importance of strict reproducibility in video quality research in particular. Reproducibility is also often a question of computational power: PSNR is a computationally inexpensive algorithm running faster than real time, but complex algorithms cannot reasonably be developed and evaluated on the aforementioned 160 hours of video sequences. Therefore, techniques to select subsets of coding parameters are introduced. Results show that an accurate selection can preserve the variety of results seen on the large database at much lower complexity. Finally, our accompanying SoftwareX paper presents the software framework that allows full reproducibility of all the research results presented here, and shows how the same framework can be used to produce derived work for other measures or indexes proposed by other researchers, which we strongly encourage for integration in our open framework.
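
    The two PSNR definitions discussed above differ chiefly in temporal pooling. A minimal sketch in Python/NumPy, assuming 8-bit video with peak value 255; function names are illustrative, not the paper's framework API:

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """PSNR in dB for a single frame, given a peak value."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def video_psnr_mean_of_frames(ref_frames, dist_frames, peak=255.0):
    """Definition (a): average the per-frame PSNR values."""
    return float(np.mean([psnr(r, d, peak) for r, d in zip(ref_frames, dist_frames)]))

def video_psnr_of_mean_mse(ref_frames, dist_frames, peak=255.0):
    """Definition (b): compute PSNR of the average per-frame MSE."""
    mses = [np.mean((r.astype(np.float64) - d.astype(np.float64)) ** 2)
            for r, d in zip(ref_frames, dist_frames)]
    return float(10.0 * np.log10(peak ** 2 / np.mean(mses)))
```

    By Jensen's inequality, definition (a) is never smaller than definition (b), and the two diverge exactly when per-frame MSE varies over time, which is why the choice of pooling matters for reproducibility.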

    Development of a tool for video quality measurement

    This work focuses on the study of the metrics used to evaluate the quality of video sequences. Chapter 2 shows that methods for evaluating the quality of a sequence can be classified as subjective or objective. Subjective methods are accurate but costly in time and resources; objective methods are less accurate but can be automated. We focus on the latter, whose goal is to achieve an accuracy as close as possible to that of the subjective methods. Chapter 3 describes ten objective quality metrics in general terms and proposes Matlab implementations of each algorithm. Chapter 4 compares the effectiveness of the methods presented in Chapter 3. Finally, Chapter 5 presents the conclusions. Universidad de Sevilla. Grado en Ingeniería de las Tecnologías de Telecomunicación.

    Investigating Prediction Accuracy of Full Reference Objective Video Quality Measures through the ITS4S Dataset

    Large subjectively annotated datasets are crucial to the development and testing of objective video quality measures (VQMs). In this work we focus on the recently released ITS4S dataset. Relying on statistical tools, we show that the content of the dataset is rather heterogeneous from the point of view of quality assessment. Such diversity naturally makes the dataset a worthy asset for validating the accuracy of VQMs. In particular, we study the ability of VQMs to model how the spatial activity of the content reduces or increases the visibility of distortion. The study reveals that VQMs are likely to overestimate the perceived quality of processed video sequences whose source is characterized by few spatial details. We then propose an approach that models the impact of spatial activity on distortion visibility when objectively assessing the visual quality of a content. The effectiveness of the proposal is validated on the ITS4S dataset as well as on the Netflix public dataset.
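
    Spatial activity of the kind discussed above is commonly quantified by the per-frame spatial information (SI) measure of ITU-T Rec. P.910: the standard deviation of the Sobel gradient magnitude. A self-contained sketch follows; the paper's actual activity measure may differ:

```python
import numpy as np

def spatial_information(frame):
    """SI per ITU-T P.910: std-dev of the Sobel gradient magnitude."""
    f = frame.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T

    def filt3(img, k):
        # 'valid'-mode 3x3 filtering via explicit shifts (no SciPy needed)
        h, w = img.shape[0] - 2, img.shape[1] - 2
        out = np.zeros((h, w))
        for i in range(3):
            for j in range(3):
                out += k[i, j] * img[i:i + h, j:j + w]
        return out

    gx, gy = filt3(f, kx), filt3(f, ky)
    return float(np.sqrt(gx ** 2 + gy ** 2).std())
```

    A flat frame yields SI = 0, while frames rich in edges yield large SI; a VQM correction term could then be conditioned on this value, in the spirit of the approach described above.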

    Image quality assessment using two-dimensional complex mel-cepstrum

    Assessment of visual quality plays a crucial role in modeling, implementation, and optimization of image- and video-processing applications. Image quality assessment (IQA) techniques extract features from images to generate objective scores. Feature-based IQA methods generally consist of two complementary phases: (1) feature extraction and (2) feature pooling. For feature extraction in the IQA framework, various algorithms have been used and, recently, the two-dimensional (2-D) mel-cepstrum (2-DMC) feature extraction scheme has provided promising results in a feature-based IQA framework. However, the 2-DMC feature extraction scheme completely loses image-phase information, which may contain high-frequency characteristics and important structural components of the image. In this work, the "2-D complex mel-cepstrum" is proposed for feature extraction in an IQA framework. The method integrates Fourier transform phase information into the 2-DMC, which was shown to be an efficient feature extraction scheme for assessment of image quality. Support vector regression is used for feature pooling, providing a mapping between the proposed features and the subjective scores. Experimental results show that the proposed technique obtains promising results for the IQA problem by making use of the image-phase information. © 2016 SPIE and IS&T.
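
    The core idea, retaining Fourier phase in the cepstral representation, can be sketched as below. This simplified version omits both the mel-frequency warping of the 2-DMC and the 2-D phase unwrapping a true complex cepstrum requires, so it is an illustration of the principle, not the paper's implementation:

```python
import numpy as np

def complex_cepstrum_2d(img, eps=1e-12):
    """Simplified 2-D complex cepstrum: inverse FFT of log|F| + j*phase(F).

    Unlike a magnitude-only cepstrum, the phase term preserves structural
    information. eps guards against log(0) at spectral nulls.
    """
    F = np.fft.fft2(img.astype(np.float64))
    log_spec = np.log(np.abs(F) + eps) + 1j * np.angle(F)
    return np.fft.ifft2(log_spec).real
```

    In a full IQA pipeline, low-order coefficients of this array would be pooled (e.g., by support vector regression, as the abstract describes) into a single quality score.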

    A cost-effective cloud computing framework for accelerating multimedia communication simulations

    Multimedia communication research and development often requires computationally intensive simulations in order to develop and investigate the performance of new optimization algorithms. Depending on the complexity of the algorithms, testing an adequate set of conditions may require several days. The traditional approach to speeding up this type of relatively small simulation, which requires several develop-simulate-reconfigure cycles, is to run the simulations in parallel on a few computers and leave those computers idle while the next simulation cycle is being developed. This work proposes a new cost-effective framework based on cloud computing for accelerating the development process, in which resources are obtained on demand and paid for only when actually used. Issues are addressed both analytically and practically by running actual test cases, i.e., simulations of video communications over a packet-lossy network, using a commercial cloud computing service. A software framework has also been developed to simplify the management of the virtual machines in the cloud. Results show that using the considered cloud computing service is economically convenient with respect to a solution using dedicated computers, especially in terms of reduced development time and costs, when the development time is longer than one hour. If more development time is needed between simulations, the economic advantage progressively shrinks as the computational complexity of the simulation increases.
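
    The break-even reasoning above can be sketched with a toy cost model: the cloud bills only for simulation hours, while dedicated machines accrue (amortized) cost through idle development time as well. All rates, names, and the single-rate comparison are illustrative assumptions, not the paper's actual analysis:

```python
def cycle_cost_cloud(sim_hours, n_vms, vm_hourly_rate):
    """One develop-simulate cycle in the cloud: pay only while simulating."""
    return n_vms * vm_hourly_rate * sim_hours

def cycle_cost_dedicated(dev_hours, sim_hours, n_machines, amortized_rate):
    """Dedicated machines accrue amortized cost for the whole cycle,
    including the idle development phase."""
    return n_machines * amortized_rate * (dev_hours + sim_hours)

def breakeven_dev_hours(sim_hours, vm_hourly_rate, amortized_rate):
    """Development time above which the cloud becomes cheaper, assuming the
    same machine count on both sides (it cancels):
    n*vr*s < n*ar*(d+s)  <=>  d > s*(vr - ar)/ar."""
    return sim_hours * (vm_hourly_rate - amortized_rate) / amortized_rate
```

    With an assumed cloud rate five times the amortized dedicated rate, a 2-hour simulation breaks even at 8 hours of development per cycle; the longer the machines would otherwise sit idle, the stronger the case for on-demand resources.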

    Low bit Rate Video Quality Analysis Using NRDPF-VQA Algorithm

    In this work, we propose NRDPF-VQA (No-Reference Distortion Patch Features Video Quality Assessment), a model that measures the quality of H.264/AVC (Advanced Video Coding) video. The proposed method takes advantage of contrast changes in the video caused by luminance changes. The proposed quality metric was tested using the LIVE video database. The experimental results compare the performance of the new index with other NR-VQA models that require training, on the LIVE, CSIQ, and VQEG HDTV video databases. The resulting values are compared against human DMOS scores.

    Hybrid video quality prediction: reviewing video quality measurement for widening application scope

    A tremendous number of objective video quality measurement algorithms have been developed during the last two decades. Most of them either measure a very limited aspect of the perceived video quality or they measure broad ranges of quality with limited prediction accuracy. This paper lists several perceptual artifacts that may be computationally measured by an isolated algorithm, along with some of the modeling approaches that have been proposed to predict the resulting quality from those algorithms. These algorithms usually have a very limited application scope but have been verified carefully. The paper continues with a review of some standardized and well-known video quality measurement algorithms that are meant for a wide range of applications and thus have a larger scope. Their individual artifact prediction accuracy is usually lower, but some of them were validated to perform sufficiently well for standardization. Several difficulties and shortcomings in developing a general-purpose model with high prediction performance are identified, such as the lack of a common objective quality scale or the behavior of individual indicators when confronted with stimuli that are outside their prediction scope. The paper concludes with a systematic framework approach to tackle the development of a hybrid video quality measurement in a joint research collaboration. Polish National Centre for Research and Development (NCRD) SP/I/1/77065/10; Swedish Governmental Agency for Innovation Systems (Vinnova).

    Influence of affective image content on subjective quality assessment

    Image quality assessment (IQA) enables distortions introduced into an image (e.g., through lossy compression or broadcast) to be measured and evaluated for severity. It is unclear to what degree affective image content may influence this process. In this study, participants (n=25) were found to be unable to disentangle affective image content from objective image quality in a standard IQA procedure (single stimulus numerical categorical scale). We propose that this issue deserves consideration, particularly in single stimulus IQA techniques, in which a small number of handpicked images, not necessarily representative of the gamut of affect seen in real broadcasting and unrated for affective content, serve as stimuli.