
    Data Analysis in Multimedia Quality Assessment: Revisiting the Statistical Tests

    Assessment of multimedia quality relies heavily on subjective evaluation, typically carried out by human subjects in the form of preferences or continuous ratings. Such data are crucial for the analysis of different multimedia processing algorithms, as well as for the validation of objective (computational) methods for the same purpose. To that end, statistical testing provides a theoretical framework for drawing meaningful inferences and making well-grounded conclusions and recommendations. While parametric tests (such as the t-test and ANOVA) and error estimates such as confidence intervals are popular and widely used in the community, there appears to be a certain degree of confusion in their application. Specifically, the assumptions of normality and homogeneity of variance are often not well understood. Therefore, the main goal of this paper is to revisit these assumptions from a theoretical perspective and, in the process, provide useful insights into their practical implications. Experimental results on both simulated and real data are presented to support the arguments made. Software implementing the recommendations is also made publicly available, in order to achieve the goal of reproducible research.
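
    A minimal sketch of the workflow the paper argues for, using SciPy (the ratings below are simulated placeholders, not data from the paper): verify normality and homogeneity of variance before reaching for a parametric test, and fall back to a non-parametric alternative when the assumptions fail.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated subjective ratings for two multimedia processing
# algorithms (placeholder data, not from the paper).
ratings_a = rng.normal(loc=3.5, scale=0.8, size=30)
ratings_b = rng.normal(loc=4.0, scale=1.2, size=30)

# Assumption check 1: normality of each group (Shapiro-Wilk).
_, p_norm_a = stats.shapiro(ratings_a)
_, p_norm_b = stats.shapiro(ratings_b)

# Assumption check 2: homogeneity of variance (Levene's test).
_, p_var = stats.levene(ratings_a, ratings_b)

if min(p_norm_a, p_norm_b) > 0.05:
    # equal_var=False selects Welch's t-test, which does not
    # assume equal variances across the two groups.
    _, p = stats.ttest_ind(ratings_a, ratings_b,
                           equal_var=bool(p_var > 0.05))
    print(f"t-test p-value: {p:.4f}")
else:
    # Non-parametric fallback when normality is rejected.
    _, p = stats.mannwhitneyu(ratings_a, ratings_b)
    print(f"Mann-Whitney U p-value: {p:.4f}")
```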

    Scalable image quality assessment with 2D mel-cepstrum and machine learning approach

    Measurement of image quality is of fundamental importance to numerous image and video processing applications. Objective image quality assessment (IQA) is a two-stage process comprising (a) extraction of important information while discarding the redundant, and (b) pooling of the detected features using appropriate weights. Neither stage is easy to tackle, owing to the complex nature of the human visual system (HVS). In this paper, we first investigate image features based on the two-dimensional (2D) mel-cepstrum for the purpose of IQA. It is shown that these features are effective because they represent structural information, which is crucial for IQA. Moreover, they are also beneficial in a reduced-reference scenario, where only partial reference image information is used for quality assessment. We address the second stage by exploiting machine learning. In our opinion, the well-established methodology of machine learning/pattern recognition has not been adequately used for IQA so far; we believe it to be an effective tool for feature pooling, since the required weights/parameters can be determined in a more convincing way via training with ground truth obtained from subjective scores. This helps to overcome the limitations of existing pooling methods, which tend to be overly simplistic and lack theoretical justification. We therefore propose a new metric by formulating IQA as a pattern recognition problem. Extensive experiments conducted using six publicly available image databases (a total of 3211 images with diverse distortions) and one video database (with 78 video sequences) demonstrate the effectiveness and efficiency of the proposed metric in comparison with seven relevant existing metrics. © 2011 Elsevier Ltd. All rights reserved.
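
    A rough sketch of the two-stage idea, under stated simplifications: a plain 2D cepstrum stands in for the paper's mel-warped version (the mel-scale frequency warping is omitted), and a generic support vector regressor stands in for the authors' learner; all data below are simulated placeholders.

```python
import numpy as np
from sklearn.svm import SVR

def cepstrum_2d(img, keep=10):
    """Plain 2D cepstral features: log magnitude spectrum -> inverse
    FFT (simplified: the paper's mel-scale warping is omitted)."""
    log_mag = np.log1p(np.abs(np.fft.fft2(img)))
    cep = np.real(np.fft.ifft2(log_mag))
    # Keep a small low-order block as a reduced-reference descriptor.
    return cep[:keep, :keep].ravel()

# Simulated training data: feature differences between reference and
# distorted images, paired with placeholder subjective scores (MOS).
rng = np.random.default_rng(0)
refs = rng.random((50, 64, 64))
dists = refs + rng.normal(scale=0.05, size=refs.shape)
X = np.array([cepstrum_2d(r) - cepstrum_2d(d) for r, d in zip(refs, dists)])
y = rng.uniform(1, 5, size=50)

# Stage two: pool the features with a learned regressor rather
# than fixed, hand-tuned weights.
model = SVR(kernel="rbf").fit(X, y)
print(model.predict(X[:3]))
```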

    Saliency detection for stereoscopic images

    Saliency detection techniques have been widely used in various 2D multimedia processing applications. The emerging applications of stereoscopic display now require new saliency detection models for stereoscopic images. In contrast to saliency detection for 2D images, depth features have to be taken into account for stereoscopic images. In this paper, we propose a new stereoscopic saliency detection framework based on the feature contrast of color, luminance, texture, and depth. These four types of features are extracted from DCT coefficients to represent the energy of image patches. A Gaussian model of the spatial distance between image patches is adopted to account for both local and global contrast. A new fusion method is designed to combine the feature maps into the final saliency map for stereoscopic images. Experimental results on a recent eye-tracking database show the superior performance of the proposed method over existing ones in saliency estimation for 3D images.
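
    A toy sketch of contrast-based saliency with Gaussian spatial weighting, under assumptions: the per-patch descriptors here are random stand-ins for the colour/luminance/texture/depth energies the paper extracts from DCT coefficients, and the fusion step is omitted.

```python
import numpy as np

def patch_saliency(features, positions, sigma=0.2):
    """Contrast-based saliency: each patch scores the feature-space
    distance to every other patch, weighted by a Gaussian of their
    normalised spatial distance (nearby patches weigh more)."""
    saliency = np.zeros(len(features))
    for i in range(len(features)):
        d_feat = np.linalg.norm(features - features[i], axis=1)
        d_spat = np.linalg.norm(positions - positions[i], axis=1)
        weights = np.exp(-d_spat ** 2 / (2 * sigma ** 2))
        saliency[i] = np.sum(weights * d_feat)
    return saliency / saliency.max()

# Stand-in per-patch descriptors: one row per patch, holding e.g.
# colour, luminance, texture and depth energies (random here).
rng = np.random.default_rng(0)
feats = rng.random((100, 4))
pos = rng.random((100, 2))  # normalised patch centres in [0, 1]^2
print(patch_saliency(feats, pos)[:5])
```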

    Robustness and prediction accuracy of machine learning for objective visual quality assessment

    Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms at work in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specifically when the feature set adopted for prediction is suboptimal. A Principal Component Regression based algorithm and a Feed-Forward Neural Network are compared when pooling Structural Similarity Index (SSIM) features perturbed with noise. The neural network adapts better to noise and intrinsically favours features according to their salient content.
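
    A small sketch of the comparison described above, assuming scikit-learn: principal component regression (PCA followed by linear regression) versus a small feed-forward network, both pooling noisy stand-in features. The three-feature design and the noise level are illustrative assumptions, not the study's setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Stand-in per-image features (e.g. SSIM-style luminance, contrast
# and structure terms) with additive noise simulating a suboptimal
# feature set; targets are placeholder quality scores.
X_clean = rng.random((200, 3))
y = X_clean @ np.array([0.2, 0.3, 0.5]) + rng.normal(scale=0.01, size=200)
X_noisy = X_clean + rng.normal(scale=0.1, size=X_clean.shape)

# Principal Component Regression: project onto the leading principal
# components, then fit a linear model in that subspace.
pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X_noisy, y)

# A small feed-forward neural network pooling the same noisy features.
ffnn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                    random_state=0).fit(X_noisy, y)

print("PCR  R^2:", round(pcr.score(X_noisy, y), 3))
print("FFNN R^2:", round(ffnn.score(X_noisy, y), 3))
```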

    Objective and subjective evaluation of High Dynamic Range video compression

    A number of High Dynamic Range (HDR) video compression algorithms proposed to date have either been developed in isolation or only partially compared with each other. Previous evaluations were conducted using quality assessment error metrics which, for the most part, were developed for the assessment of Low Dynamic Range (LDR) videos. This paper presents a comprehensive objective and subjective evaluation of six published HDR video compression algorithms. The objective evaluation was undertaken on a large set of 39 HDR video sequences using seven numerical error metrics: PSNR, logPSNR, puPSNR, puSSIM, Weber MSE, HDR-VDP and HDR-VQM. The subjective evaluation involved six short-listed sequences and two ranking-based subjective experiments with hidden reference, at two different output bit rates, with 32 participants each, who were asked to rank distorted HDR video footage against an uncompressed version of the same footage. Results suggest a strong correlation between the objective and subjective evaluations. Furthermore, non-backward-compatible compression algorithms appear to perform better at lower output bit rates than backward-compatible algorithms across the settings used in this evaluation.
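
    Of the listed metrics, logPSNR is the simplest to illustrate. The sketch below is an assumed formulation (PSNR computed over shifted log10 luminance, with the log-domain range as peak), not necessarily the exact definition used in the paper, and the HDR frames are simulated.

```python
import numpy as np

def psnr(ref, test, peak):
    """Classic PSNR for a given peak value."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def log_psnr(ref, test):
    """PSNR over shifted log10 luminance, so coding errors in dark and
    bright regions of an HDR frame are weighted more evenly (an assumed
    stand-in for the perceptually uniform encodings behind puPSNR)."""
    l_ref = np.log10(ref)
    l_test = np.log10(test)
    lo = l_ref.min()
    return psnr(l_ref - lo, l_test - lo, l_ref.max() - lo)

# Simulated HDR luminance frames in cd/m^2 (illustrative only).
rng = np.random.default_rng(0)
ref = rng.uniform(0.01, 4000.0, size=(256, 256))
test = np.clip(ref * rng.normal(1.0, 0.02, size=ref.shape), 1e-4, None)
print(f"logPSNR: {log_psnr(ref, test):.2f} dB")
```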

    Optimisation of age at first calving in Karan Fries cattle

    The study was conducted on age at first calving (AFC) records spread over a period of 15 years for Karan Fries crossbred cattle maintained at the Livestock Research Centre. Data from 676 cows were collected and analysed by the least-squares technique to examine the effect of non-genetic factors on AFC. Period of birth was classified into 5 periods (I-V) and season of calving into 4 seasons (winter, summer, rainy and autumn). The effect of period of birth on AFC was significant, while season of calving had a non-significant effect. The overall least-squares mean of AFC was 1043.40±6.64 days. To optimise AFC with regard to milk productivity, an analysis was carried out by the class-interval method: AFC was classified into 7 classes and the mean milk yield of each class was obtained by the least-squares technique. The optimum AFC was found to be 885–1100 days, based on higher milk yield and the number of animals observed in the different classes. The study concluded that the optimum age at first calving could be achieved through proper nutrition and management practices. However, in determining the optimum AFC, emphasis should be placed on maximising profit rather than milk production alone.
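
    A sketch of the class-interval step, with simulated records standing in for the study's data (the yield curve below is an invented placeholder): bin AFC into 7 classes and compare the mean milk yield per class.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated records: age at first calving (days) and first-lactation
# milk yield (kg); the yield curve is an invented placeholder.
afc = np.clip(rng.normal(1043, 150, size=676), 700, 1600)
milk = 3000 - 0.004 * (afc - 990) ** 2 + rng.normal(0, 150, size=676)

# Class-interval method: bin AFC into 7 classes and compare the
# mean milk yield observed in each class.
edges = np.linspace(afc.min(), afc.max(), 8)
labels = np.digitize(afc, edges[1:-1])
for k in range(7):
    mask = labels == k
    if not mask.any():
        continue  # skip classes with no animals
    print(f"AFC {edges[k]:7.1f}-{edges[k + 1]:7.1f} days: "
          f"n={mask.sum():3d}, mean yield={milk[mask].mean():7.1f} kg")
```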