    BIQ2021: A Large-Scale Blind Image Quality Assessment Database

    The assessment of the perceptual quality of digital images is becoming increasingly important as a result of the widespread use of digital multimedia devices. Smartphones and high-speed internet are just two examples of technologies that have multiplied the amount of multimedia content available, which makes obtaining the representative dataset required for objective quality assessment training a significant challenge. The Blind Image Quality Assessment Database, BIQ2021, is presented in this article. By selecting images with naturally occurring distortions and reliable labeling, the dataset addresses the challenge of obtaining representative images for no-reference image quality assessment. The dataset consists of three sets of images: those taken without the intention of using them for image quality assessment, those taken with intentionally introduced natural distortions, and those taken from an open-source image-sharing platform. An effort is made to maintain a diverse collection of images from various devices, containing a variety of object types and varying degrees of foreground and background information. To obtain reliable scores, these images are subjectively scored in a laboratory environment using a single-stimulus method. The database contains the subjective scores, human-subject statistics, and the standard deviation of each image. The dataset's Mean Opinion Scores (MOS) make it useful for assessing visual quality. Additionally, the proposed database is used to evaluate existing blind image quality assessment approaches, and the scores are analyzed using Pearson and Spearman's correlation coefficients. The image database and MOS are freely available for use and benchmarking.
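
    A minimal sketch of the evaluation protocol the abstract mentions: comparing a blind IQA model's predicted scores against subjective MOS using the Pearson (PLCC) and Spearman (SRCC) correlation coefficients. The score arrays below are made-up placeholders, not BIQ2021 data.

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    # placeholder subjective Mean Opinion Scores and model predictions
    mos = np.array([72.3, 45.1, 88.0, 30.5, 61.7])
    predicted = np.array([68.9, 50.2, 90.1, 28.7, 59.3])

    plcc, _ = pearsonr(predicted, mos)    # linear agreement with subjective quality
    srcc, _ = spearmanr(predicted, mos)   # monotonic (rank-order) agreement
    print(f"PLCC: {plcc:.3f}, SRCC: {srcc:.3f}")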

    Blind image quality evaluation using perception based features

    This paper proposes a novel no-reference Perception-based Image Quality Evaluator (PIQUE) for real-world imagery. A majority of the existing methods for blind image quality assessment rely on opinion-based supervised learning for quality score prediction. Unlike these methods, we propose an opinion-unaware methodology that attempts to quantify distortion without the need for any training data. Our method relies on extracting local features for predicting quality. Additionally, to mimic human behavior, we estimate quality only from perceptually significant spatial regions. Further, the choice of our features enables us to generate a fine-grained block-level distortion map. Our algorithm is competitive with the state of the art, as evaluated on several popular datasets including LIVE IQA, TID, and CSIQ. Finally, our algorithm has low computational complexity despite working at the block level.
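
    A rough sketch of the block-level processing idea described above, assuming a grayscale image supplied as a 2-D NumPy array. The activity and distortion measures used here are simplified stand-ins for illustration, not PIQUE's actual features.

    import numpy as np

    def block_quality_map(gray, block=16, activity_thresh=0.1):
        # crop to a whole number of blocks
        h = gray.shape[0] - gray.shape[0] % block
        w = gray.shape[1] - gray.shape[1] % block
        qmap = np.zeros((h // block, w // block))
        for i in range(0, h, block):
            for j in range(0, w, block):
                patch = gray[i:i + block, j:j + block].astype(np.float64)
                activity = patch.std() / 255.0      # proxy for spatial activity
                if activity < activity_thresh:
                    continue                        # skip perceptually flat blocks
                # stand-in block distortion score based on local roughness
                qmap[i // block, j // block] = activity
        return qmap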

    Using the Natural Scenes’ Edges for Assessing Image Quality Blindly and Efficiently

    Two real blind/no-reference (NR) image quality assessment (IQA) algorithms in the spatial domain are developed. To measure image quality, the introduced approach gathers a set of novel features based on the edges of natural scenes, motivated by the human eye's heightened sensitivity to the information carried by the edges and contours of an image. The effectiveness of the proposed technique in quantifying image quality has been studied. The gathered features are formed using both Weibull distribution statistics and two sharpness functions to devise two separate NR IQA algorithms. The presented algorithms need neither training on databases of human judgments nor prior knowledge about expected distortions, so they are genuinely no-reference IQA algorithms. Unlike many no-reference IQA methods, the model used in this study is generic and is not tailored to any particular distortion type. When the proposed algorithms are tested on the LIVE database, experiments show that they correlate well with subjective opinion scores. They also significantly outperform the popular full-reference peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) methods, as well as the recently developed no-reference Natural Image Quality Evaluator (NIQE) model.
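
    A hedged sketch of one ingredient named above, fitting a Weibull distribution to the gradient (edge) magnitudes of a grayscale image; the paper's full feature set and sharpness functions are not reproduced, and the function name is illustrative.

    import numpy as np
    from scipy.stats import weibull_min

    def weibull_edge_features(gray):
        # gradient magnitudes as a simple stand-in for edge responses
        gy, gx = np.gradient(gray.astype(np.float64))
        mag = np.hypot(gx, gy).ravel()
        mag = mag[mag > 1e-6]                       # keep non-trivial responses
        # two-parameter Weibull fit (location fixed at zero)
        shape, loc, scale = weibull_min.fit(mag, floc=0)
        return shape, scale                         # candidate NR IQA features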

    Blind Quality Assessment for Image Superresolution Using Deep Two-Stream Convolutional Networks

    Numerous image superresolution (SR) algorithms have been proposed for reconstructing high-resolution (HR) images from inputs with lower spatial resolutions. However, effectively evaluating the perceptual quality of SR images remains a challenging research problem. In this paper, we propose a no-reference/blind deep neural network-based SR image quality assessor (DeepSRQ). To learn more discriminative feature representations of various distorted SR images, the proposed DeepSRQ is a two-stream convolutional network with two sub-networks operating on the distorted structure and texture images of the SR output. Unlike traditional image distortions, the artifacts of SR images degrade both image structure and texture quality. We therefore choose a two-stream scheme that captures different properties of the SR input instead of learning features directly from a single image stream. Considering the characteristics of the human visual system (HVS), the structure stream focuses on extracting features of structural degradations, while the texture stream focuses on changes in textural distributions. In addition, to augment the training data and ensure category balance, we propose a stride-based adaptive cropping approach for further improvement. Experimental results on three publicly available SR image quality databases demonstrate the effectiveness and generalization ability of the proposed DeepSRQ compared with state-of-the-art image quality assessment algorithms.
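
    An illustrative PyTorch sketch of a two-stream layout in the spirit of DeepSRQ, not the authors' implementation: one stream for the structure input, one for the texture input, with the concatenated features regressed to a quality score. Layer widths are placeholders.

    import torch
    import torch.nn as nn

    def make_stream():
        # small CNN; both streams share the same shape but not the same weights
        return nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    class TwoStreamSRQ(nn.Module):
        def __init__(self):
            super().__init__()
            self.structure_stream = make_stream()   # structural degradations
            self.texture_stream = make_stream()     # textural distribution changes
            self.regressor = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, structure_img, texture_img):
            feats = torch.cat([self.structure_stream(structure_img),
                               self.texture_stream(texture_img)], dim=1)
            return self.regressor(feats)            # predicted quality score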