
    Quality Assessment for CRT and LCD Color Reproduction Using a Blind Metric

    This paper deals with image quality assessment, a field that is attracting the attention of several research teams in both academia and industry and that plays an important role in applications across the imaging chain, from acquisition to projection. A large number of objective image quality metrics have been developed during the last decade. These metrics are more or less correlated with end-user feedback and can be separated into three categories: 1) Full Reference (FR) metrics, which evaluate the impairment by comparison with a reference image; 2) Reduced Reference (RR) metrics, which represent an image by a set of extracted features and compare these features with those of the distorted image; and 3) No Reference (NR) metrics, which measure known distortions such as blockiness or blurriness without the use of a reference. Unfortunately, the quality assessment community has not achieved a universal image quality model, and only empirical models established through psychophysical experimentation are generally used. In this paper, we focus on the third category to evaluate the quality of CRT (Cathode Ray Tube) and LCD (Liquid Crystal Display) color reproduction, using a blind metric based on modeling part of the behavior of the human visual system. The objective results are validated by single-media and cross-media subjective tests, which allows us to study the feasibility of simulating one display on another used as a reference.
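As an illustration of the no-reference category, a common blind cue is blur measured as the variance of the image Laplacian. This is a minimal sketch of that generic NR heuristic, not the HVS-based metric proposed in the paper:

```python
import numpy as np

def laplacian_variance(gray):
    """No-reference sharpness score: variance of the discrete Laplacian.

    Higher values indicate a sharper image; low values suggest blur.
    `gray` is a 2D array of intensities.
    """
    g = np.asarray(gray, dtype=float)
    # 4-neighbor Laplacian on the interior pixels
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())
```

A sharp, high-frequency pattern scores higher than a flat (maximally blurred) one, which is the behavior a blockiness/blurriness detector exploits.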

    Designing a Cockpit for Image Quality Evaluation

    Image Quality (IQ) as assessed by humans is a concept that is hard to define, since it relies on many different features, including both low-level and high-level visual characteristics. Image luminance, contrast, color distribution, smoothness, and the presence of noise or geometric distortions are examples of low-level cues that usually contribute to image quality. Aesthetic canons and trends, the placement of objects in the scene, and the significance and message of the imaged visual content are instances of the high-level (i.e., semantic) concepts that may be involved in image quality assessment. Although subjective evaluation of IQ is very popular in many applications (e.g., image restoration, colorization and noise removal), it may be scarcely reliable due to subjectivity issues and biases. Therefore, an objective evaluation, i.e., an image quality assessment based on visual features extracted from the image and mathematically modelled, is highly desirable, since it guarantees the repeatability of the results and enables the automation of image quality measurements. The crucial point here lies in the detection of visual elements salient for IQ. Many objective, numerical measures have been proposed in the literature. They differ from one another in the features considered relevant to IQ and in the presence of a reference image, an image of "perfect" quality with which to compare the image to be evaluated. Objective measures are thus broadly classified as full-reference, reduced-reference or no-reference, according to the availability of reference information. Due to the complexity of the IQ assessment process, a single measure may not be robust and accurate enough to capture and numerically summarize all the aspects concurring to IQ. Therefore, we propose to employ multiple objective IQ measures assembled in a cockpit of objective IQ measures.
This cockpit is designed not only to offer an extensive analysis and overview of features relevant to IQ, but also to serve as a tool for automating the selection of machine vision algorithms devoted to image enhancement. In this work we describe a preliminary version of such a cockpit, and we employ it to assess a set of images of the same scene acquired under different conditions, with different devices, or even processed by computer algorithms.
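A cockpit of this kind can be sketched as a registry of independent objective measures evaluated on the same image. The three example measures below (mean luminance, RMS contrast, a crude noise proxy) are illustrative stand-ins, not the measures used in the paper:

```python
import numpy as np

def mean_luminance(img):
    return float(np.mean(img))

def rms_contrast(img):
    # RMS contrast: standard deviation of intensities
    return float(np.std(img))

def noise_estimate(img):
    # Crude noise proxy: mean absolute horizontal difference
    return float(np.mean(np.abs(np.diff(img, axis=1))))

def iq_cockpit(img, measures):
    """Assemble several objective IQ measures into a single report,
    giving an overview instead of relying on one number."""
    return {name: fn(img) for name, fn in measures.items()}

report = iq_cockpit(np.eye(4), {"luminance": mean_luminance,
                                "contrast": rms_contrast,
                                "noise": noise_estimate})
```

Keeping the measures behind a common interface is what makes it easy to later rank enhancement algorithms by any subset of the cockpit's outputs.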

    Wound Image Quality From a Mobile Health Tool for Home-Based Chronic Wound Management With Real-Time Quality Feedback: Randomized Feasibility Study

    BACKGROUND Travel to clinics for chronic wound management is burdensome to patients. Remote assessment and management of wounds using mobile and telehealth approaches can reduce this burden and improve patient outcomes. An essential step in wound documentation is the capture of wound images, but poor image quality can have a negative influence on the reliability of the assessment. To date, no study has investigated the quality of remotely acquired wound images and whether these are suitable for wound self-management and telemedical interpretation of wound status. OBJECTIVE Our goal was to develop a mobile health (mHealth) tool for the remote self-assessment of digital ulcers (DUs) in patients with systemic sclerosis (SSc). We aimed to define and validate objective measures for assessing the image quality, evaluate whether an automated feedback feature based on real-time assessment of image quality improves the overall quality of acquired wound images, and evaluate the feasibility of deploying the mHealth tool for home-based chronic wound self-monitoring by patients with SSc. METHODS We developed an mHealth tool composed of a wound imaging and management app, a custom color reference sticker, and a smartphone holder. We introduced 2 objective image quality parameters based on the sharpness and presence of the color checker to assess the quality of the image during acquisition and enable a quality feedback mechanism in an advanced version of the app. We randomly assigned patients with SSc and DU to the 2 device groups (basic and feedback) to self-document their DU at home over 8 weeks. The color checker detection ratio (CCDR) and color checker sharpness (CCS) were compared between the 2 groups. We evaluated the feasibility of the mHealth tool by analyzing the usability feedback from questionnaires, user behavior and timings, and the overall quality of the wound images. 
RESULTS A total of 21 patients were enrolled, of whom 15 were included in the image quality analysis. The average CCDR was 0.96 (191/199) in the feedback group and 0.86 (158/183) in the basic group. The feedback group showed significantly higher (P<.001) CCS compared to the basic group. The usability questionnaire results showed that the majority of patients were satisfied with the tool but could benefit from disease-specific adaptations. The median assessment duration was <50 seconds in all patients, indicating that the mHealth tool was efficient to use and could be integrated into the daily routine of patients. CONCLUSIONS We developed an mHealth tool that enables patients with SSc to acquire good-quality DU images and demonstrated that it is feasible to deploy such an app in this patient group. The feedback mechanism improved the overall image quality. The introduced technical solutions constitute a further step towards reliable and trustworthy digital health for home-based self-management of wounds.
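The CCDR reported above is a simple ratio of detections to acquired images and can be sketched directly. The detection flags here are hypothetical inputs; the actual color-checker detection in the app is not shown:

```python
def color_checker_detection_ratio(detections):
    """CCDR: fraction of acquired wound images in which the color
    reference sticker was detected.

    `detections` is a sequence of booleans, one per acquired image.
    """
    return sum(detections) / len(detections)
```

For example, 191 detections out of 199 images, as in the feedback group, yields a CCDR of about 0.96.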

    A statistical reduced-reference method for color image quality assessment

    Although color is a fundamental feature of human visual perception, it has been largely unexplored in reduced-reference (RR) image quality assessment (IQA) schemes. In this paper, we propose a natural scene statistics (NSS) method that efficiently uses this information. It is based on the statistical deviation between the steerable pyramid coefficients of the reference color image and those of the degraded one. We propose and analyze the multivariate generalized Gaussian distribution (MGGD) to model the underlying statistics. In order to quantify the degradation, we develop and evaluate two measures, based respectively on the geodesic distance between two MGGDs and on the closed form of the Kullback-Leibler divergence (KLD). We performed an extensive evaluation of both metrics in various color spaces (RGB, HSV, CIELAB and YCrCb) using the TID 2008 benchmark and the FRTV Phase I validation process. Experimental results demonstrate the effectiveness of the proposed framework in achieving good consistency with human visual perception. Furthermore, the best configuration is obtained with the CIELAB color space associated with the KLD deviation measure.
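To give a feel for the KLD-based deviation measure, the sketch below implements the well-known closed-form KL divergence for the multivariate Gaussian special case of the MGGD; the general MGGD expression used in the paper involves the shape parameter and is not reproduced here:

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """Closed-form KL divergence KL(N0 || N1) between two multivariate
    Gaussians N(mu0, S0) and N(mu1, S1):

        0.5 * ( tr(S1^-1 S0) + (mu1-mu0)^T S1^-1 (mu1-mu0)
                - k + ln(det S1 / det S0) )
    """
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = np.asarray(mu1, dtype=float) - np.asarray(mu0, dtype=float)
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```

In an RR scheme, the two parameter sets would be fitted to the pyramid coefficients of the reference and degraded images, and only the parameters (not the full image) need to be transmitted.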

    On color image quality assessment using natural image statistics

    Color distortion can significantly damage perceived visual quality; however, most existing reduced-reference quality measures are designed for grayscale images. In this paper, we consider a basic extension of well-known image-statistics-based quality assessment measures to color images. In order to evaluate the impact of color information on the efficiency of these measures, two color spaces are investigated: RGB and CIELAB. Results of an extensive evaluation using the TID 2013 benchmark demonstrate that a significant improvement can be achieved for a large number of distortion types when the CIELAB color representation is used.
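Evaluating measures in CIELAB first requires a color-space conversion. Below is a self-contained sketch of the standard sRGB-to-CIELAB (D65) transform, independent of the paper's specific quality measures:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] (last axis = R, G, B) to CIELAB
    under the D65 illuminant, using the standard formulas."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma (linearize)
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB primaries, D65)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    xyz /= np.array([0.95047, 1.0, 1.08883])   # normalize by D65 white
    # XYZ -> Lab nonlinearity
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16 / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```

Once in CIELAB, per-channel statistics (e.g. of L*, a*, b* subband coefficients) can be compared between reference and distorted images exactly as in the grayscale case.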

    Full Reference Objective Quality Assessment for Reconstructed Background Images

    With an increased interest in applications that require a clean background image, such as video surveillance, object tracking, street view imaging and location-based services on web-based maps, multiple algorithms have been developed to reconstruct a background image from cluttered scenes. Traditionally, statistical measures and existing image quality techniques have been applied to evaluate the quality of the reconstructed background images. Though these quality assessment methods have been widely used in the past, their performance in evaluating the perceived quality of the reconstructed background image has not been verified. In this work, we discuss the shortcomings of existing metrics and propose a full-reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales using a probability summation model to predict the perceived quality of the reconstructed background image given a reference image. To compare the performance of the proposed quality index with existing image quality assessment measures, we construct two different datasets consisting of reconstructed background images and corresponding subjective scores. The quality assessment measures are evaluated by correlating their objective scores with human subjective ratings. The correlation results show that the proposed RBQI outperforms all the existing approaches. Additionally, the constructed datasets and the corresponding subjective scores provide a benchmark to evaluate the performance of future metrics developed to evaluate the perceived quality of reconstructed background images.
    Comment: Associated source code: https://github.com/ashrotre/RBQI; associated database: https://drive.google.com/drive/folders/1bg8YRPIBcxpKIF9BIPisULPBPcA5x-Bk?usp=sharing (email for permissions: ashrotreasuedu)
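The probability summation model used to pool evidence across scales can be sketched in its simplest independent-detection form. The per-scale probabilities here are assumed inputs; RBQI's actual per-scale terms are derived from color and structure comparisons against the reference:

```python
def probability_summation(p_scales):
    """Pool per-scale distortion-detection probabilities into one score.

    Probability summation: a distortion is perceived if it is detected
    at ANY scale, assuming independent detection across scales:
        P = 1 - prod_s (1 - p_s)
    """
    q = 1.0
    for p in p_scales:
        q *= 1.0 - p
    return 1.0 - q
```

The pooled score is always at least as large as the strongest single-scale evidence, so a distortion visible at only one scale still drives the overall quality prediction.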

    WESPE: Weakly Supervised Photo Enhancer for Digital Cameras

    Low-end and compact mobile cameras demonstrate limited photo quality, mainly due to space, hardware and budget constraints. In this work, we propose a deep learning solution that automatically translates photos taken by cameras with limited capabilities into DSLR-quality photos. We tackle this problem by introducing a weakly supervised photo enhancer (WESPE), a novel image-to-image Generative Adversarial Network-based architecture. The proposed model is trained under weak supervision: unlike previous works, there is no need for strong supervision in the form of a large annotated dataset of aligned original/enhanced photo pairs. The sole requirement is two distinct datasets: one from the source camera, and one composed of arbitrary high-quality images that can be crawled from the Internet; the visual content they exhibit may be unrelated. Hence, our solution is repeatable for any camera: collecting the data and training can be achieved in a couple of hours. In this work, we emphasize extensive evaluation of the obtained results. Besides standard objective metrics and a subjective user study, we train a virtual rater in the form of a separate CNN that mimics human raters on Flickr data, and use this network to obtain reference scores for both original and enhanced photos. Our experiments on the DPED, KITTI and Cityscapes datasets, as well as on pictures from several generations of smartphones, demonstrate that WESPE produces results comparable to or better than those of state-of-the-art strongly supervised methods.
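Enhancer objectives of this kind typically combine several loss terms; one of them, a total-variation regularizer of the sort used in WESPE-style enhancers to keep outputs smooth, is simple enough to sketch. This numpy version is for illustration only; the actual model is trained with a gradient-based deep learning framework:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation of a 2D image: the summed absolute
    differences between neighboring pixels, vertically and horizontally.
    Used as a regularizer to penalize noisy, non-smooth outputs."""
    dh = np.abs(np.diff(img, axis=0)).sum()   # vertical neighbors
    dw = np.abs(np.diff(img, axis=1)).sum()   # horizontal neighbors
    return float(dh + dw)
```

During training, this term is weighted against the adversarial and content losses so that smoothing does not erase real texture.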