
    Full Reference Objective Video Quality Assessment with Temporal Consideration

    Video quality assessment (VQA) is an extension of image quality assessment (IQA). A video is a series of images arranged in time sequence, so IQA methods can be used to assess video quality. However, a video has three-dimensional data: two spatial dimensions and one temporal dimension. IQA methods assess only the spatial effects and ignore the temporal effects and distortions, which makes them inappropriate, and potentially inaccurate, for assessing video quality. To be applicable in real-time scenarios, VQA methods have to be reliable and correlate well with the judgement of the human visual system (HVS). Furthermore, they have to be computationally efficient to give fast results. Current VQA methods correlate well with subjective scores but have high computational complexity. In this thesis, two VQA methods with lower computational complexity, Index1 and Index2, are proposed. Index1 applies the Just Noticeable Difference (JND) concept to both the spatial and temporal parts of the video; for the temporal part, JND is combined with temporal information to account for temporal distortions. Index2 is based on the previous work on the mean difference structural similarity index (MD-SSIM); its temporal part deals with the variation of temporal information. Both proposed methods are compared with state-of-the-art VQA methods in terms of performance and computational complexity, and are found to have acceptable performance with lower computational complexity.
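
    The thesis pairs JND with temporal information to capture temporal distortions. One common way to quantify temporal information (in the style of ITU-T P.910) is the standard deviation of the pixel-wise difference between consecutive frames. The sketch below is illustrative, not the thesis's actual formulation; the frame format (lists of grayscale rows) and function names are assumptions:

```python
import math

def frame_difference_std(prev, curr):
    """Standard deviation of the pixel-wise difference between two
    equally sized grayscale frames (lists of rows)."""
    diffs = [c - p for rp, rc in zip(prev, curr) for p, c in zip(rp, rc)]
    n = len(diffs)
    mean = sum(diffs) / n
    return math.sqrt(sum((d - mean) ** 2 for d in diffs) / n)

def temporal_information(frames):
    """Temporal information of a clip: the maximum frame-difference
    standard deviation over time (the P.910-style definition)."""
    return max(frame_difference_std(a, b) for a, b in zip(frames, frames[1:]))
```

    A perfectly static clip yields zero temporal information, while any motion or temporal distortion raises the score.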

    New spatiotemporal method for assessing video quality

    The existence of temporal effects and temporal distortions in a video differentiates the way it is assessed from an image. Temporal effects and distortions can enhance or depress the visibility of spatial effects in a video, so the temporal part of a video plays a significant role in determining its quality. In this study, a spatiotemporal video quality assessment (VQA) method is proposed in view of the importance of temporal effects and distortions in assessing video quality. Instead of measuring frame quality on a frame-by-frame basis, the quality of several averaged frames is measured. The proposed spatiotemporal VQA method is a significant improvement over image quality assessment (IQA) methods applied on a frame basis. When combined with IQA methods, it has performance comparable with state-of-the-art VQA methods, and the computational complexity of the proposed temporal method is lower than that of current VQA methods.
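
    The core idea, scoring averaged groups of frames rather than every individual frame, can be sketched as follows. PSNR stands in for whatever frame-level IQA metric is applied; the group size, mean pooling, and all names are assumptions for illustration, not the study's actual design:

```python
import math

def average_frames(frames):
    """Pixel-wise mean of a group of equally sized grayscale frames."""
    n = len(frames)
    return [[sum(f[r][c] for f in frames) / n
             for c in range(len(frames[0][0]))]
            for r in range(len(frames[0]))]

def psnr(ref, dist, peak=255.0):
    """PSNR between two frames; a stand-in for any frame-level IQA metric."""
    sq = [(a - b) ** 2 for ra, rb in zip(ref, dist) for a, b in zip(ra, rb)]
    mse = sum(sq) / len(sq)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def spatiotemporal_score(ref_frames, dist_frames, group=2):
    """Score averaged frame groups instead of individual frames,
    then pool the group scores by their mean."""
    scores = []
    for i in range(0, len(ref_frames) - group + 1, group):
        scores.append(psnr(average_frames(ref_frames[i:i + group]),
                           average_frames(dist_frames[i:i + group])))
    return sum(scores) / len(scores)
```

    Averaging before scoring halves (or more) the number of metric evaluations, which is one plausible source of the lower computational complexity the abstract reports.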

    Full-Reference Edge-Based Objective Quality Assessment of Natural and Screen Content Images

    Nowadays, screen content images (SCIs) are gaining popularity alongside natural images (NIs), and quality assessment (QA) methods are needed for both types of images to ensure a good quality of experience. In this thesis, two generalized objective QA methods are proposed for NIs and SCIs: the Curvelet-based Method (CurM) and the Edge Magnitude and Direction Method (EMaD). Modelling a generalized QA method that works for both types of images is complicated because NIs and SCIs have dissimilar statistical properties; moreover, some properties of NIs and SCIs conflict with one another, which makes the modelling more challenging. The proposed methods assess the perceptual quality of an image based on gradient information. CurM extracts the gradient information through the Curvelet transform, whose coefficients denote the gradient information in terms of magnitude and direction; unlike the usual practice, CurM considers the gradient direction over the full 360 degrees. EMaD, on the other hand, filters the images with a Prewitt kernel to obtain the edge magnitude and direction. Based on the filter results, the image is classified into low- and high-gradient regions, and the high-gradient regions are filtered again with a bigger kernel. After the gradient information is extracted by either method, the gradient information of the reference and target images is compared to compute a similarity score, which indicates the quality of the target image relative to the reference image. The performance comparison shows that the proposed methods can assess the perceived quality of NIs and SCIs with high accuracy: CurM and EMaD achieve weighted-average Spearman correlation coefficients of 0.9063 and 0.9124, respectively, over the LIVE, SIQAD, and SCID databases.
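
    As a rough illustration of the Prewitt-based half of this approach, the sketch below computes 3x3 Prewitt gradient magnitudes and compares them between a reference and a target image with an SSIM-style similarity ratio. The region classification, the second larger-kernel pass, and the direction comparison are omitted, and the stabilising constant c is an assumption:

```python
import math

PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PREWITT_Y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def prewitt_gradients(img):
    """3x3 Prewitt gradient magnitude and direction at each interior pixel."""
    h, w = len(img), len(img[0])
    grads = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = sum(PREWITT_X[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(PREWITT_Y[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            grads.append((math.hypot(gx, gy), math.atan2(gy, gx)))
    return grads

def gradient_similarity(ref, tgt, c=1e-3):
    """SSIM-style similarity of gradient magnitudes between a reference
    and a target image; c stabilises flat (zero-gradient) regions."""
    sims = [(2 * m1 * m2 + c) / (m1 ** 2 + m2 ** 2 + c)
            for (m1, _), (m2, _) in zip(prewitt_gradients(ref),
                                        prewitt_gradients(tgt))]
    return sum(sims) / len(sims)
```

    Identical images score 1, and a target whose edges have been erased scores near 0, matching the intuition that edge degradation drives perceived quality loss.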

    Temporal video quality assessment method involving structural similarity index

    In this paper, a video quality assessment (VQA) method that focuses on the temporal part is proposed. It is based on the Structural Similarity Index (SSIM) and on previous work in which the differences between frames are measured. The proposed VQA method has lower computational complexity and acceptable performance compared with state-of-the-art VQA methods.

    Video quality assessment method: MD-SSIM

    In this paper, video quality assessment (VQA) for compression losses is the main focus. A new method, MD-SSIM (Mean Squared Error Difference SSIM), is used for detecting the spatial distortion. For the temporal part, the differences between the SSIM scores of consecutive frames are used to form the quality scores. In addition, this method has a higher computational speed and competitive performance compared with other existing algorithms.
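
    A minimal sketch of the temporal idea: pool the differences between consecutive frames' SSIM scores, so that an unstable score trace signals temporal distortion. The paper's exact pooling rule is not reproduced here; the mean absolute difference below is one plausible choice, and the function name is an assumption:

```python
def temporal_fluctuation(frame_scores):
    """Mean absolute difference between consecutive per-frame quality
    scores (e.g. SSIM); larger fluctuation suggests stronger temporal
    distortion, so smaller is better."""
    diffs = [abs(b - a) for a, b in zip(frame_scores, frame_scores[1:])]
    return sum(diffs) / len(diffs)
```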

    An error-based video quality assessment method with temporal information

    Videos are amongst the most popular online media for Internet users nowadays. Thus, it is of utmost importance that videos transmitted through the Internet or other transmission media have minimal data loss and acceptable visual quality. Video quality assessment (VQA) is a useful tool for determining the quality of a video without human intervention. A new VQA method, termed Error and Temporal Structural Similarity (EaTSS), is proposed in this paper. EaTSS is based on a combination of error signals, a weighted Structural Similarity Index (SSIM) and the difference of temporal information. The error signals are used to weight the computed SSIM map, from which the quality score is then computed; this is a better alternative to the usual SSIM index, in which the quality score is the plain average of the SSIM map. For the temporal part, second-order time-differential information is used for the quality score computation. From the experiments, EaTSS is found to have competitive performance and faster computational speed compared with other existing VQA algorithms.
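
    Two of the ingredients above can be sketched independently: pooling a similarity map with error-proportional weights instead of a plain mean, and taking the second-order time differential of a frame sequence. Both functions are illustrative reconstructions under assumed data formats (lists of grayscale rows), not the paper's actual implementation:

```python
def error_weighted_pool(sim_map, ref, dist, eps=1e-6):
    """Pool a per-pixel similarity map with weights proportional to the
    local error magnitude: heavily distorted pixels influence the final
    score more than a plain average would allow."""
    flat_sim = [s for row in sim_map for s in row]
    weights = [abs(a - b) + eps
               for ra, rb in zip(ref, dist) for a, b in zip(ra, rb)]
    return sum(s * w for s, w in zip(flat_sim, weights)) / sum(weights)

def second_order_temporal(frames):
    """Second-order time differential: the difference-of-differences over
    each run of three consecutive frames. Uniform brightness drift cancels
    out; abrupt temporal changes survive."""
    return [[[(c - b) - (b - a)
              for a, b, c in zip(r0, r1, r2)]
             for r0, r1, r2 in zip(f0, f1, f2)]
            for f0, f1, f2 in zip(frames, frames[1:], frames[2:])]
```

    Note how a steady linear fade produces a zero second-order signal, so only genuinely abrupt temporal events contribute to the temporal score.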

    A generalized quality assessment method for natural and screen content images

    A generalized objective quality assessment method is proposed for natural images and screen content images. Since natural images and screen content images have different statistical properties, the modelling of a generalized quality assessment method that works for both types of images is complicated because some properties of natural images and screen content images conflict with one another. The proposed method assesses the perceptual quality of an image based on edge magnitude and direction. In this method, an image is first separated into regions with high and low gradients. Gradient is used due to the small perceptual span of the human visual system for textual content. For high gradient regions, a small kernel size of Prewitt operators is used to obtain the gradient magnitude and direction. Correspondingly, a bigger kernel size of Prewitt operators is utilized for low gradient regions. Visual quality indices are computed from both regions and pooled to obtain the final quality index. From the performance comparison, it is shown that the proposed method could assess the perceived quality of natural images and screen content images with high accuracy.

    A Just Noticeable Difference-Based Video Quality Assessment Method with Low Computational Complexity

    A Just Noticeable Difference (JND)-based video quality assessment (VQA) method is proposed. This method, termed JVQ, applies the JND concept to the structural similarity (SSIM) index to measure spatial quality. JVQ incorporates three features, i.e. luminance adaptation, contrast masking, and texture masking. In JVQ, the concept of JND is refined and more features are considered. For the spatial part, minor distortions in the distorted frames are ignored and considered imperceptible. For the temporal part, the SSIM index is simplified and used to measure the temporal video quality. A similar JND concept, comprising temporal masking, is also applied in the temporal quality evaluation: pixels with large variation over time are considered not distorted because distortions in these pixels are hardly perceivable. The final JVQ index is the arithmetic mean of the spatial and temporal quality indices. JVQ is found to achieve good correlation with subjective scores. In addition, this method has a low computational cost compared with existing state-of-the-art metrics.
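
    To illustrate the spatial JND idea of ignoring imperceptible distortions, the toy sketch below discards pixel differences smaller than a luminance-adaptive threshold. The threshold curve is invented purely for illustration; the paper's actual luminance-adaptation, contrast-masking and texture-masking models are not reproduced:

```python
def jnd_threshold(bg_luma):
    """A toy luminance-adaptation threshold: the eye tolerates larger
    differences in very dark and very bright regions than at mid-grey.
    The exact curve here is illustrative, not the paper's model."""
    if bg_luma < 60:
        return 8 - bg_luma / 10            # dark: larger tolerance
    if bg_luma > 170:
        return 2 + (bg_luma - 170) / 20    # bright: tolerance grows again
    return 2                               # mid-grey: most sensitive

def perceptible_errors(ref, dist):
    """Zero out pixel differences below the JND threshold, keeping only
    distortions a viewer would plausibly notice."""
    return [[abs(a - b) if abs(a - b) > jnd_threshold(a) else 0
             for a, b in zip(ra, rb)]
            for ra, rb in zip(ref, dist)]
```

    Only the errors that survive this masking step would feed into the spatial quality index, which is how minor distortions end up "ignored and considered imperceptible".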

    A Modified Structural Similarity Index With Low Computational Complexity

    The Structural Similarity Index (SSIM) has been a benchmark method for image quality assessment (IQA) due to its simplicity and good performance. In this paper, we propose a modified SSIM method that reduces the computational complexity while achieving comparable performance. Instead of computing similarities on local windows, the proposed method computes global information similarities. It also omits the luminance similarity part of SSIM because of its less crucial role in assessing image quality. The presented results show that the proposed method has a much lower computational time and comparable performance compared with SSIM.
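
    A rough sketch of the global-statistics idea: compute an SSIM-style contrast/structure comparison from whole-image means, variances and covariance, with no sliding window and no luminance term. The constant follows the common SSIM choice for C2, but the formula is an illustration of the approach, not the paper's exact method:

```python
def global_stats(img):
    """Flattened pixels, global mean and global variance of a grayscale
    image given as a list of rows."""
    pixels = [p for row in img for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return pixels, mean, var

def global_ssim(ref, dist, c2=(0.03 * 255) ** 2):
    """SSIM-style contrast/structure term computed once from global
    statistics, skipping both the local window and the luminance term."""
    pa, ma, va = global_stats(ref)
    pb, mb, vb = global_stats(dist)
    cov = sum((a - ma) * (b - mb) for a, b in zip(pa, pb)) / len(pa)
    return (2 * cov + c2) / (va + vb + c2)
```

    Replacing thousands of windowed computations with one global pass is the source of the speed-up; the trade-off is the loss of spatial localisation of distortions.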

    An application of genetic algorithm for designing a Wiener-model controller to regulate the pH value in a pilot plant

    Proceedings of the IEEE Conference on Evolutionary Computation, ICEC21055-106