
    Terahertz Security Image Quality Assessment by No-reference Model Observers

    To enable the development of objective image quality assessment (IQA) algorithms for terahertz (THz) security images, we constructed the THz security image database (THSID), comprising a total of 181 THz security images at a resolution of 127×380. The main distortion types in THz security images were first analyzed in order to design subjective evaluation criteria for acquiring mean opinion scores. Subsequently, existing no-reference IQA algorithms, namely five opinion-aware approaches (NFERM, GMLF, DIIVINE, BRISQUE and BLIINDS2) and eight opinion-unaware approaches (QAC, SISBLIM, NIQE, FISBLIM, CPBD, S3 and Fish_bb), were executed to evaluate THz security image quality. The statistical results demonstrated the superiority of Fish_bb over the other tested IQA approaches for assessing THz image quality, with PLCC (SROCC) values of 0.8925 (-0.8706) and an RMSE value of 0.3993. Linear regression analysis and a Bland-Altman plot further verified that Fish_bb could substitute for subjective IQA. Nonetheless, for the classification of THz security images, we prefer S3 as a criterion for ranking THz security image grades because of its relatively low false positive rate (24.69%) in classifying bad THz image quality into the acceptable category. Interestingly, owing to the specific properties of THz images, the average pixel intensity performed better than the more sophisticated IQA algorithms above, with PLCC, SROCC and RMSE of 0.9001, -0.8800 and 0.3857, respectively. This study will help users such as researchers or security staff to obtain THz security images of good quality. Currently, our research group is attempting to make this research more comprehensive. Comment: 13 pages, 8 figures, 4 tables
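The PLCC, SROCC and RMSE figures quoted above can be reproduced for any metric-versus-MOS comparison with a few lines of NumPy; the negative SROCC values in the abstract simply indicate a metric that decreases as perceived quality increases. A minimal sketch, with hypothetical scores (the tie-free double-argsort ranking below is a simplification of a full Spearman computation):

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

def srocc(x, y):
    """Spearman rank-order correlation: Pearson on the ranks
    (valid when there are no ties)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(np.asarray(x)), rank(np.asarray(y)))

def rmse(x, y):
    """Root-mean-square error between predictions and MOS."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(np.mean((x - y) ** 2)))

# Hypothetical objective scores vs. mean opinion scores.
pred = [0.2, 0.5, 0.4, 0.9, 0.7]
mos = [1.0, 2.0, 1.8, 4.5, 3.6]
print(plcc(pred, mos), srocc(pred, mos), rmse(pred, mos))
```

In practice a nonlinear (e.g. logistic) mapping is usually fitted before computing PLCC and RMSE; the sketch omits that step.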

    Image Quality Assessment: Addressing the Data Shortage and Multi-Stage Distortion Challenges

    Visual content constitutes the vast majority of the ever-increasing global Internet traffic, highlighting the central role it plays in our daily lives. The perceived quality of such content can be degraded by a number of distortions introduced during acquisition, storage, transmission under bandwidth constraints, and display. Since subjective evaluation of such large volumes of visual content is impossible, the development of perceptually well-aligned and practically applicable objective image quality assessment (IQA) methods has taken on crucial importance to ensure the delivery of an adequate quality of experience to the end user. Substantial strides have been made over the last two decades in designing perceptual quality methods, and three major paradigms are now well established in IQA research: Full-Reference (FR), Reduced-Reference (RR), and No-Reference (NR), which require complete, partial, and no access to the pristine reference content, respectively. Notwithstanding the progress made so far, significant challenges restrict the development of practically applicable IQA methods. In this dissertation we aim to address two major challenges: 1) the data shortage challenge, and 2) the multi-stage distortion challenge. NR or blind IQA (BIQA) methods usually rely on machine learning methods, such as deep neural networks (DNNs), to learn a quality model by training on subject-rated IQA databases. Owing to the constraints of subjective testing, such annotated datasets are quite small-scale, containing at best a few thousand images. This is in sharp contrast to the area of visual recognition, where tens of millions of annotated images are available. This data challenge has become a major hurdle to the breakthrough of DNN-based IQA approaches.
We address the data challenge by developing the largest IQA dataset to date, the Waterloo Exploration-II database, which consists of 3,570 pristine images and around 3.45 million distorted images, generated using content-adaptive distortion parameters and comprising both singly and multiply distorted content. As a prerequisite for developing an alternative annotation mechanism, we conduct the largest performance evaluation survey in the IQA area to date to ascertain the top-performing FR and fused FR methods. Based on the findings of this survey, we develop a technique called Synthetic Quality Benchmark (SQB) to automatically assign perceptual quality labels to large-scale IQA datasets. We train a DNN-based BIQA model, called EONSS, on the SQB-annotated Waterloo Exploration-II database. Extensive tests on a large collection of completely independent, subject-rated IQA datasets show that EONSS outperforms the state of the art in BIQA, both in perceptual quality prediction performance and in computation time, demonstrating the efficacy of our approach to the data challenge. In practical media distribution systems, visual content undergoes a number of degradations as it is transmitted along the delivery chain, making it multiply distorted. Yet research in IQA has mainly focused on the simplistic case of singly distorted content. In many practical systems, apart from the final multiply distorted content, earlier degraded versions of the content are also available. However, the three major IQA paradigms (FR, RR, and NR) are unable to take advantage of this additional information. To address this challenge, we make one of the first attempts to study the behavior of multiple simultaneous distortion combinations in a two-stage distortion pipeline.
Next, we introduce a new major IQA paradigm, called degraded-reference (DR) IQA, to evaluate the quality of multiply distorted images by also taking into consideration their respective degraded references. We construct two datasets for DR IQA model development, DR IQA database V1 and V2. These datasets follow the pattern of the Waterloo Exploration-II database and contain 32,912 SQB-annotated distorted images, composed of both singly distorted degraded references and multiply distorted content. We develop both distortion-behavior-based and SVR-based DR IQA models. Extensive testing on an independent set of IQA datasets, including three subject-rated datasets, demonstrates that by utilizing the additional information available in the form of degraded references, the DR IQA models perform significantly better than their BIQA counterparts, establishing DR IQA as a new paradigm in IQA.

    Blind Quality Assessment for Image Superresolution Using Deep Two-Stream Convolutional Networks

    Numerous image superresolution (SR) algorithms have been proposed for reconstructing high-resolution (HR) images from input images with lower spatial resolutions. However, effectively evaluating the perceptual quality of SR images remains a challenging research problem. In this paper, we propose a no-reference/blind deep neural network-based SR image quality assessor (DeepSRQ). To learn more discriminative feature representations of various distorted SR images, the proposed DeepSRQ is a two-stream convolutional network with two subcomponents, one for the structure and one for the texture of distorted SR images. Unlike traditional image distortions, the artifacts of SR images degrade both image structure and texture quality. Therefore, we choose a two-stream scheme that captures different properties of the SR inputs instead of directly learning features from a single image stream. Considering the characteristics of the human visual system (HVS), the structure stream focuses on extracting features of structural degradations, while the texture stream focuses on changes in textural distributions. In addition, to augment the training data and ensure category balance, we propose a stride-based adaptive cropping approach for further improvement. Experimental results on three publicly available SR image quality databases demonstrate the effectiveness and generalization ability of our proposed DeepSRQ method compared with state-of-the-art image quality assessment algorithms.
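The abstract does not spell out the cropping rule, so the following is only a plausible sketch of what a stride-based adaptive cropping step could look like: the stride is derived from the image size so that a fixed grid of fixed-size crops spans the whole image regardless of its dimensions. The crop size (32) and grid size (4×4) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def adaptive_crops(img, crop=32, n_per_axis=4):
    """Extract an n_per_axis x n_per_axis grid of crops of size `crop`,
    with the stride adapted to the image dimensions so the crops span
    the image. Assumes the image is at least `crop` pixels per side."""
    h, w = img.shape[:2]
    # Largest uniform stride that keeps the last crop inside the image.
    sy = (h - crop) // (n_per_axis - 1)
    sx = (w - crop) // (n_per_axis - 1)
    crops = []
    for i in range(n_per_axis):
        for j in range(n_per_axis):
            y, x = i * sy, j * sx
            crops.append(img[y:y + crop, x:x + crop])
    return crops

img = np.random.rand(96, 128)
patches = adaptive_crops(img)
print(len(patches), patches[0].shape)
```

Because every image yields the same number of equally sized crops, each source image contributes equally to the training set, which is one way to keep the categories balanced.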

    NEW LEARNING FRAMEWORKS FOR BLIND IMAGE QUALITY ASSESSMENT MODEL

    The focus of this thesis is image quality assessment, specifically the problem of assessing the quality of an image blindly, i.e., without reference information. Significant efforts have been made over the last decade to develop objective blind models that can assess image quality as perceived by humans. Various models have been introduced, achieving highly competitive performance and correlating strongly with subjective perceptual measures. However, these models still have limitations that must be addressed before they can be viable replacements for traditional image metrics across a wide range of image processing applications. This thesis addresses several of these limitations. It first proposes a new framework to learn a blind image quality model that has minimal training requirements, operates locally, and can identify the distortion in the assessed image. To increase the model's performance, the thesis then modifies the framework to account for an aspect of human visual tendency that is often ignored by previous models. Finally, the thesis presents another framework that enables a model to simultaneously learn quality prediction for images affected by different distortion types.

    Magnetic Resonance Image Quality Assessment by Using Non-Maximum Suppression and Entropy Analysis

    An investigation of diseases using magnetic resonance (MR) imaging requires automatic image quality assessment methods able to exclude low-quality scans. Such methods can also be employed for the optimization of imaging system parameters or the evaluation of image processing algorithms. Therefore, in this paper, a novel blind image quality assessment (BIQA) method for the evaluation of MR images is introduced. It is observed that the result of filtering using non-maximum suppression (NMS) strongly depends on the perceptual quality of the input image. Hence, in the method, the image is first processed by the NMS with various levels of acceptable local intensity difference. The quality is then efficiently expressed by the entropy of the sequence of extrema counts obtained with the thresholded NMS. The proposed BIQA approach is compared with ten state-of-the-art techniques on a dataset containing MR images and subjective scores provided by 31 experienced radiologists. The Pearson, Spearman, and Kendall correlation coefficients and the root-mean-square error for the method on this dataset were 0.6741, 0.3540, 0.2428, and 0.5375, respectively. The extensive experimental evaluation of the BIQA methods reveals that the introduced measure outperforms related techniques by a large margin, correlating better with human scores.
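The pipeline described above, thresholded NMS at several acceptance levels followed by the entropy of the resulting extrema counts, can be sketched as follows. The 4-neighbour maximum test and the threshold values are illustrative stand-ins for the paper's actual NMS filter:

```python
import numpy as np

def count_extrema(img, t):
    """Count local maxima that exceed all four neighbours by more
    than t (a simple stand-in for thresholded NMS filtering)."""
    c = img[1:-1, 1:-1]
    keep = ((c > img[:-2, 1:-1] + t) & (c > img[2:, 1:-1] + t) &
            (c > img[1:-1, :-2] + t) & (c > img[1:-1, 2:] + t))
    return int(keep.sum())

def nms_entropy(img, thresholds=(0.01, 0.02, 0.05, 0.1, 0.2)):
    """Shannon entropy of the extrema-count sequence, used here as an
    illustrative blind quality index."""
    counts = np.array([count_extrema(img, t) for t in thresholds], float)
    if counts.sum() == 0:
        return 0.0
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
noisy = rng.random((64, 64))
smooth = np.zeros((64, 64))
print(nms_entropy(noisy), nms_entropy(smooth))
```

The intuition is that a degraded image changes how quickly extrema disappear as the acceptance threshold grows, which the entropy of the count sequence summarizes in a single reference-free number.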

    Image quality assessment based on the perceived structural similarity index of an image

    Image quality assessment (IQA) plays a very important role and has wide applications in image acquisition, storage, transmission and processing. In designing IQA models, the human visual system (HVS) characteristics they incorporate play an important role in improving their performance. In this paper, combining image distortion characteristics with HVS characteristics and building on the structural similarity index (SSIM) model, a novel IQA model based on the perceived structural similarity index (PSIM) of an image is proposed. In the method, a perception model for how the HVS perceives real images is first proposed, combining the contrast sensitivity, frequency sensitivity, luminance nonlinearity and masking characteristics of the HVS. Then, to simulate HVS perception, real images are processed with the proposed perception model to eliminate their visual redundancy, yielding perceived images. Finally, based on the idea and modeling method of SSIM, combined with the features of the perceived images, the PSIM model is constructed. To illustrate the performance of PSIM, 5,335 distorted images with 41 distortion types from four image databases (TID2013, CSIQ, LIVE and CID) are used to evaluate it from three aspects: overall IQA on each database, IQA for each distortion type, and IQA for specific distortion types. According to the combined criteria of precision, generalization performance and complexity, the IQA results are compared with those of 12 existing IQA models. The experimental results show that the accuracy (PLCC) of PSIM is, on average, 9.91% higher than that of SSIM across the four databases, and that its performance is better than that of the 12 existing IQA models. Combining the experimental results with theoretical analysis, the proposed PSIM model is shown to be an effective and excellent IQA model.
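PSIM applies an SSIM-style comparison to perceptually preprocessed images. As background, a global-statistics version of the SSIM core (without PSIM's perception model, which the abstract does not specify in detail) looks like this:

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """SSIM computed from whole-image statistics for two images with
    dynamic range [0, L]. Standard SSIM uses a sliding window and
    averages local scores; the global form keeps the sketch short."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx**2 + my**2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
dist = np.clip(ref + 0.1 * rng.standard_normal(ref.shape), 0, 1)
print(ssim_global(ref, ref), ssim_global(ref, dist))
```

In PSIM the same kind of comparison would be applied after both images have passed through the perception model, so that visually redundant differences no longer affect the score.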

    Understanding perceived quality through visual representations

    The formatting of images can be considered an optimization problem whose cost function is a quality assessment algorithm. There is a trade-off between bit budget per pixel and quality. To maximize quality and minimize the bit budget, we need to measure perceived quality. In this thesis, we focus on understanding perceived quality through visual representations that are based on visual system characteristics and color perception mechanisms. Specifically, we use the contrast sensitivity mechanisms in retinal ganglion cells and the suppression mechanisms in cortical neurons. We utilize color difference equations and color name distances to mimic pixel-wise color perception, and a bio-inspired model to formulate center-surround effects. Based on these formulations, we introduce two novel image quality estimators, PerSIM and CSV, and a new image quality-assistance method, BLeSS. We combine our findings from the visual system and color perception with data-driven methods to generate visual representations and measure their quality. The majority of existing data-driven methods require subjective scores or degraded images. In contrast, we follow an unsupervised approach that only utilizes generic images. We introduce a novel unsupervised image quality estimator, UNIQUE, and extend it with multiple models and layers to obtain MS-UNIQUE and DMS-UNIQUE. In addition to introducing quality estimators, we analyze the role of spatial pooling and boosting in image quality assessment. Ph.D.
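Of the color difference equations mentioned, the simplest is CIE76, a Euclidean distance in CIELAB; the thesis does not say which equation it uses, and the Lab triples below are hypothetical:

```python
import numpy as np

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB.
    Later formulas (CIE94, CIEDE2000) add perceptual weighting
    terms that this simplest form omits."""
    return float(np.linalg.norm(np.asarray(lab1, float) -
                                np.asarray(lab2, float)))

# Hypothetical (L*, a*, b*) triples for two nearby colors.
print(delta_e76([50, 10, -10], [52, 8, -6]))
```

Applied pixel-wise to two Lab-converted images, this yields a color-difference map that can serve as one ingredient of a perceptual quality estimate.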

    Beyond the Ultra-deep Frontier Fields And Legacy Observations (BUFFALO): a high-resolution strong + weak-lensing view of Abell 370

    The HST treasury program BUFFALO provides extended wide-field imaging of the six Hubble Frontier Fields galaxy clusters. Here we present the combined strong- and weak-lensing analysis of Abell 370, a massive cluster at z = 0.375. From the reconstructed total projected mass distribution in the 6 arcmin × 6 arcmin BUFFALO field of view, we obtain the distribution of massive substructures outside the cluster core and report the presence of a total of seven candidates, each with mass ∼5 × 10^13 M_⊙. Combining the total mass distribution derived from lensing with multi-wavelength data, we evaluate the physical significance of each candidate substructure and conclude that 5 of the 7 substructure candidates seem reliable, and that the mass distribution in Abell 370 is extended along the north-west and south-east directions. While this finding is in general agreement with previous studies, our detailed spatial reconstruction provides new insights into the complex mass distribution at large cluster-centric radius. We explore the impact of the extended mass reconstruction on the model of the cluster core and, in particular, attempt to physically explain the presence of an important external shear component that is necessary to obtain a low root-mean-square separation between the model-predicted and observed positions of the multiple images in the cluster core. The substructures can account for at most half the amplitude of the external shear, suggesting that more effort is needed to fully replace it with more physically motivated mass components. We provide public access to all the lensing data used, as well as the different lens models. Comment: 29 pages, 17 figures, 3 tables