
    Boosting in Image Quality Assessment

    In this paper, we analyze the effect of boosting in image quality assessment through multi-method fusion. Existing multi-method studies focus on proposing a single quality estimator; in contrast, we investigate the generalizability of multi-method fusion as a framework. In addition to support vector machines, which are commonly used in multi-method fusion, we propose using neural networks for boosting. To span different types of image quality assessment algorithms, we use quality estimators based on fidelity, perceptually-extended fidelity, structural similarity, spectral similarity, color, and learning. In the experiments, we perform k-fold cross validation using the LIVE, the multiply distorted LIVE, and the TID 2013 databases, and the performance of image quality assessment algorithms is measured via accuracy-, linearity-, and ranking-based metrics. Based on the experiments, we show that boosting methods generally improve the performance of image quality assessment and that the level of improvement depends on the type of boosting algorithm. Our experimental results also indicate that boosting the worst-performing quality estimator with two or more additional methods leads to statistically significant performance enhancements independent of the boosting technique, and that neural network-based boosting outperforms support vector machine-based boosting when two or more methods are fused. Comment: Paper: 6 pages, 5 tables, 1 figure; Presentation: 16 slides [Ancillary files]
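    A minimal sketch of this fusion setup, assuming a precomputed matrix of per-image scores from several base quality estimators and the corresponding subjective scores (both random placeholders here, not the paper's data): an SVM and a small neural network are each trained to fuse the base scores and compared under k-fold cross validation.

```python
# Multi-method fusion ("boosting") for IQA: fuse base estimator scores
# with an SVR and an MLP, evaluated by k-fold cross validation.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((100, 6))   # 100 images x 6 base quality estimators (placeholder data)
y = rng.random(100)        # subjective scores / MOS (placeholder data)

# SVM-based fusion, as commonly used in prior multi-method studies.
svr = SVR(kernel="rbf", C=1.0)
# Neural-network-based fusion, as proposed in the paper.
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)

for name, model in [("SVR fusion", svr), ("MLP fusion", mlp)]:
    # k-fold cross validation, mirroring the paper's evaluation protocol.
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {r2.mean():.3f}")
```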

    Multi-measures fusion based on multi-objective genetic programming for full-reference image quality assessment

    In this paper, we exploit the flexibility of multi-objective fitness functions and combine the model-structure selection ability of standard genetic programming (GP) with the parameter estimation power of classical regression, via multi-gene genetic programming (MGGP), to propose a new fusion technique for image quality assessment (IQA) called Multi-measures Fusion based on Multi-Objective Genetic Programming (MFMOGP). This technique automatically selects the most significant measures for aggregation from 16 full-reference IQA measures and finds the weights of a weighted sum of their outputs, while simultaneously optimizing for both accuracy and complexity. The resulting well-performing fusions of IQA measures are evaluated on the four largest publicly available image databases and compared against state-of-the-art full-reference IQA approaches. The comparison reveals that the proposed approach outperforms other recently developed state-of-the-art fusion approaches.
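    The final form of the fused model is a weighted sum over a selected subset of measure outputs. The toy sketch below fits such weights by ordinary least squares on placeholder data; in MFMOGP itself, the subset and the weights come from multi-gene genetic programming that trades off accuracy against model complexity.

```python
# Weighted-sum fusion of a selected subset of FR-IQA measure outputs.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.random((200, 16))      # 200 images x 16 FR-IQA measure outputs (placeholder)
mos = rng.random(200)               # subjective scores (placeholder)

selected = [0, 3, 7, 12]            # indices the search retained (illustrative choice)
A = np.column_stack([scores[:, selected], np.ones(len(mos))])  # add bias column
w, *_ = np.linalg.lstsq(A, mos, rcond=None)   # stand-in for GP-found weights

fused = A @ w                       # fused quality prediction for each image
print("weights:", np.round(w, 3))
```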

    UNIQUE: Unsupervised Image Quality Estimation

    In this paper, we estimate perceived image quality using sparse representations obtained from generic image databases through an unsupervised learning approach. A color space transformation, mean subtraction, and a whitening operation are used to enhance the descriptiveness of images by reducing spatial redundancy; a linear decoder is used to obtain sparse representations; and a thresholding stage is used to formulate suppression mechanisms in a visual system. The linear decoder is trained with 7 GB worth of data, corresponding to 100,000 8x8 image patches randomly sampled from nearly 1,000 images in the ImageNet 2013 database. A patch-wise training approach is preferred to maintain local information. The proposed quality estimator, UNIQUE, is tested on the LIVE, the Multiply Distorted LIVE, and the TID 2013 databases and compared with thirteen quality estimators. Experimental results show that UNIQUE is generally a top-performing quality estimator in terms of accuracy, consistency, linearity, and monotonic behavior. Comment: 12 pages, 5 figures, 2 tables
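    The pipeline lends itself to a compact sketch. The code below applies mean subtraction, ZCA whitening, a linear projection, and thresholding to patches from a reference/distorted pair, then compares the resulting feature vectors; the decoder weights and patches are random placeholders standing in for the ImageNet-trained decoder and real image data.

```python
# UNIQUE-style feature extraction on one image pair (placeholder data).
import numpy as np

def preprocess(patches):
    # Mean subtraction and ZCA whitening to reduce spatial redundancy.
    patches = patches - patches.mean(axis=1, keepdims=True)
    cov = np.cov(patches, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    zca = vecs @ np.diag(1.0 / np.sqrt(vals + 1e-5)) @ vecs.T
    return patches @ zca

rng = np.random.default_rng(2)
W = rng.standard_normal((192, 400))         # placeholder linear decoder: 8x8x3 patches -> 400 atoms

def unique_features(patches, thr=0.025):
    z = preprocess(patches) @ W             # sparse representation
    z[np.abs(z) < thr] = 0.0                # thresholding as a suppression mechanism
    return z.ravel()

ref = rng.random((500, 192))                              # placeholder reference patches
dst = ref + 0.1 * rng.standard_normal(ref.shape)          # placeholder distorted patches
# Quality estimate: similarity between the two feature vectors.
q = np.corrcoef(unique_features(ref), unique_features(dst))[0, 1]
print(f"estimated quality: {q:.3f}")
```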

    UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content

    Recent years have witnessed an explosion of user-generated content (UGC) videos shared and streamed over the Internet, thanks to the evolution of affordable and reliable consumer capture devices and the tremendous popularity of social media platforms. Accordingly, there is a great need for accurate video quality assessment (VQA) models for UGC/consumer videos to monitor, control, and optimize this vast content. Blind quality prediction of in-the-wild videos is quite challenging, since the quality degradations of UGC content are unpredictable, complicated, and often commingled. Here we contribute to advancing the UGC-VQA problem by conducting a comprehensive evaluation of leading no-reference/blind VQA (BVQA) features and models on a fixed evaluation architecture, yielding new empirical insights on both subjective video quality studies and VQA model design. By employing a feature selection strategy on top of leading VQA model features, we are able to extract 60 of the 763 statistical features used by the leading models to create a new fusion-based BVQA model, which we dub the VIDeo quality EVALuator (VIDEVAL), and which effectively balances the trade-off between VQA performance and efficiency. Our experimental results show that VIDEVAL achieves state-of-the-art performance at considerably lower computational cost than other leading models. Our study protocol also defines a reliable benchmark for the UGC-VQA problem, which we believe will facilitate further research on deep learning-based VQA modeling, as well as perceptually-optimized efficient UGC video processing, transcoding, and streaming. To promote reproducible research and public evaluation, an implementation of VIDEVAL has been made available online: https://github.com/tu184044109/VIDEVAL_release. Comment: 13 pages, 11 figures, 11 tables
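    Structurally, VIDEVAL is a feature-selection stage followed by a regressor. The sketch below mimics that shape with generic scikit-learn components (SelectKBest and an SVR) on placeholder data; the actual feature pool and selection strategy are in the released code linked above.

```python
# Feature selection over a large BVQA feature pool, then regression.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.random((300, 763))   # 300 videos x 763 candidate features (placeholder)
y = rng.random(300)          # subjective quality scores (placeholder)

model = make_pipeline(
    SelectKBest(f_regression, k=60),  # keep 60 of 763 features, as in VIDEVAL
    SVR(kernel="rbf"),                # generic stand-in regressor
)
model.fit(X, y)
print("predicted:", model.predict(X[:3]))
```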

    No-Reference Quality Assessment of Contrast-Distorted Images using Contrast Enhancement

    No-reference image quality assessment (NR-IQA) aims to measure image quality without a reference image. However, contrast distortion has been overlooked in current NR-IQA research. In this paper, we propose a very simple but effective metric for predicting the quality of contrast-altered images, based on the observation that a high-contrast image is often more similar to its contrast-enhanced version. Specifically, we first generate an enhanced image through histogram equalization. We then calculate the similarity between the original image and the enhanced one using the structural similarity index (SSIM) as the first feature. Further, we calculate the histogram-based entropies of, and the cross entropies between, the original and enhanced images, yielding four additional features. Finally, we learn a regression module that fuses the aforementioned five features to infer the quality score. Experiments on four publicly available databases validate the superiority and efficiency of the proposed technique. Comment: Draft version
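    The five features are straightforward to sketch. Assuming a grayscale image in [0, 1], the code below computes SSIM between the image and its histogram-equalized version, plus the histogram entropies of and cross entropies between the two; a learned regressor (not shown) would map the features to a quality score.

```python
# Five contrast-distortion features: SSIM vs. equalized image, two
# entropies, and two cross entropies.
import numpy as np
from skimage import exposure
from skimage.metrics import structural_similarity

def hist_probs(img, bins=256):
    p, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = p.astype(float) / p.sum()
    return p + 1e-12                          # avoid log(0)

def contrast_features(img):
    enh = exposure.equalize_hist(img)         # contrast-enhanced version
    ssim = structural_similarity(img, enh, data_range=1.0)
    p, q = hist_probs(img), hist_probs(enh)
    h_img = -np.sum(p * np.log2(p))           # entropy of original
    h_enh = -np.sum(q * np.log2(q))           # entropy of enhanced
    ce_pq = -np.sum(p * np.log2(q))           # cross entropy, original vs enhanced
    ce_qp = -np.sum(q * np.log2(p))           # cross entropy, enhanced vs original
    return [ssim, h_img, h_enh, ce_pq, ce_qp]

img = np.random.default_rng(4).random((64, 64)) * 0.5  # low-contrast toy image
print(np.round(contrast_features(img), 3))
```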

    A Probabilistic Quality Representation Approach to Deep Blind Image Quality Prediction

    Blind image quality assessment (BIQA) remains a very challenging problem due to the unavailability of a reference image. Deep learning based BIQA methods have been attracting increasing attention in recent years, yet it remains a difficult task to train a robust deep BIQA model because of the very limited number of training samples with human subjective scores. Most existing methods learn a regression network to minimize the prediction error of a scalar image quality score. However, such a scheme ignores the fact that an image will receive divergent subjective scores from different subjects, which cannot be adequately represented by a single scalar number. This is particularly true for complex, real-world distorted images. Moreover, images may broadly differ in their distributions of assigned subjective scores. Recognizing this, we propose a new representation of perceptual image quality, called probabilistic quality representation (PQR), to describe the image subjective score distribution, whereby a more robust loss function can be employed to train a deep BIQA model. The proposed PQR method is shown not only to speed up the convergence of deep model training, but also to greatly improve the achievable level of quality prediction accuracy relative to scalar quality score regression methods. The source code is available at https://github.com/HuiZeng/BIQA_Toolbox. Comment: Added the link to the source code
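    A minimal sketch of the PQR training target: the network predicts a distribution over quality levels, and the loss compares it to the empirical distribution of one image's subjective scores. The bin edges and softmax head below are illustrative choices, not the paper's exact formulation.

```python
# Distribution-based loss for BIQA: cross entropy between the predicted
# quality distribution and the empirical distribution of subject ratings.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pqr_loss(logits, subject_scores, bins):
    # Empirical distribution of one image's subjective scores over quality bins.
    counts, _ = np.histogram(subject_scores, bins=bins)
    target = counts / counts.sum()
    pred = softmax(logits)
    return -np.sum(target * np.log(pred + 1e-12))   # cross entropy

bins = np.linspace(0, 100, 6)                 # five quality levels over MOS in [0, 100]
scores = np.array([62, 70, 55, 68, 73, 60])   # one image's ratings from several subjects
logits = np.zeros(5)                          # untrained network output (placeholder)
print(f"PQR loss: {pqr_loss(logits, scores, bins):.3f}")
```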

    Efficient No-Reference Quality Assessment and Classification Model for Contrast Distorted Images

    In this paper, an efficient Minkowski Distance based Metric (MDM) for no-reference (NR) quality assessment of contrast-distorted images is proposed. It is shown that higher orders of the Minkowski distance, together with entropy, provide accurate quality predictions for contrast-distorted images. The proposed metric makes its predictions by extracting only three features from the distorted image, followed by a regression analysis. Furthermore, the proposed features can classify the type of contrast distortion with high accuracy. Experimental results on four datasets (CSIQ, TID2013, CCID2014, and SIQAD) show that the proposed metric, despite its very low complexity, provides better quality predictions than state-of-the-art NR metrics. The MATLAB source code of the proposed metric is publicly available at http://www.synchromedia.ca/system/files/MDM.zip. Comment: 6 pages, 4 figures, 4 tables
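    One plausible reading of the three features is sketched below: higher-order Minkowski distances of pixel values from the image mean (closely related to higher central moments), plus histogram entropy. This is an assumption-laden illustration, not the paper's exact definition; a regressor would map the features to a quality score.

```python
# Minkowski-distance-style features for contrast distortion (hedged reading).
import numpy as np

def minkowski_feature(img, p):
    d = np.abs(img - img.mean())
    return (d ** p).mean() ** (1.0 / p)       # order-p Minkowski distance from the mean

def entropy(img, bins=256):
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    prob = h / h.sum()
    prob = prob[prob > 0]
    return -np.sum(prob * np.log2(prob))

img = np.random.default_rng(5).random((64, 64))   # placeholder grayscale image in [0, 1]
features = [minkowski_feature(img, 2), minkowski_feature(img, 4), entropy(img)]
print(np.round(features, 3))
```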

    Predictive No-Reference Assessment of Video Quality

    Among the various means of evaluating the quality of video streams, No-Reference (NR) methods have low computational cost and may be executed on thin clients. Thus, NR algorithms are ideal candidates for real-time quality assessment, automated quality control and, in particular, adaptive mobile streaming. Yet existing NR approaches are often inaccurate compared to Full-Reference (FR) algorithms, especially under lossy network conditions. In this work, we present an NR method that combines machine learning with simple NR metrics to achieve a quality index comparable in accuracy to the Video Quality Metric (VQM) Full-Reference algorithm. Our method is tested on an extensive dataset (960 videos), under lossy network conditions and considering nine different machine learning algorithms. Overall, we achieve over 97% correlation with VQM, while allowing real-time assessment of video quality of experience in realistic streaming scenarios. Comment: 13 pages, 8 figures, IEEE Selected Topics on Signal Processing
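    The core idea reduces to supervised regression: map cheap NR metrics computed per video to the score the expensive FR algorithm (VQM) would produce. The sketch below uses a random forest on synthetic placeholder features; any of the nine learners the paper compares could take its place.

```python
# Learn a mapping from cheap NR metrics to an FR (VQM) quality target.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
nr_features = rng.random((960, 4))     # e.g., blockiness, blur, noise, bitrate (placeholders)
# Synthetic VQM target correlated with the NR features (placeholder data).
vqm = nr_features @ np.array([0.4, 0.3, 0.2, 0.1]) + 0.05 * rng.standard_normal(960)

train, test = slice(0, 800), slice(800, 960)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(nr_features[train], vqm[train])
r, _ = pearsonr(model.predict(nr_features[test]), vqm[test])
print(f"correlation with VQM on held-out videos: {r:.3f}")
```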

    Image analysis and statistical modelling for measurement and quality assessment of ornamental horticulture crops in glasshouses

    Image analysis for ornamental crops is discussed, with examples from the bedding plant industry. Feed-forward artificial neural networks are used to segment top- and side-view images of three contrasting species of bedding plants. The segmented images provide objective measurements of leaf and flower cover, colour, uniformity and leaf canopy height. On each imaging occasion, each pack was scored for quality by an assessor panel; image analysis can explain 88.5%, 81.7% and 70.4% of the panel quality scores for the three species, respectively. Stereoscopy for crop height and uniformity is outlined briefly. The methods discussed here could be used for crop grading at marketing, or for monitoring and assessment of growing crops within a glasshouse during all stages of production.
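    The measurement step after segmentation is simple to sketch. Assuming a per-pixel label mask produced by the network (0 = background, 1 = leaf, 2 = flower; a hypothetical encoding), the code below computes class cover per pack and summarizes uniformity as the spread of cover across packs.

```python
# Cover and uniformity measurements from segmentation masks (placeholder masks).
import numpy as np

def cover_fractions(mask):
    total = mask.size
    return {"leaf": (mask == 1).sum() / total,
            "flower": (mask == 2).sum() / total}

rng = np.random.default_rng(7)
packs = [rng.integers(0, 3, size=(100, 100)) for _ in range(6)]  # one mask per pack
leaf_cover = [cover_fractions(m)["leaf"] for m in packs]
print("mean leaf cover: %.2f, uniformity (std): %.3f"
      % (np.mean(leaf_cover), np.std(leaf_cover)))
```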

    A proposal project for a blind image quality assessment by learning distortions from the full reference image quality assessments

    Full text link
    This short paper presents a prospective plan to build a no-reference image quality assessment method. Its main goal is to deliver both an objective score and a distortion map for a given distorted image, without knowledge of its reference image. Comment: International Workshop on Quality of Multimedia Experience, 2012, Melbourne, Australia