
    Bridging the Gap Between Imaging Performance and Image Quality Measures

    Imaging system performance measures and Image Quality Metrics (IQMs) are reviewed from a systems engineering perspective, focusing on the spatial quality of still image capture systems. We classify IQMs broadly as: Computational IQMs (CP-IQM), Multivariate Formalism IQMs (MF-IQM), Image Fidelity Metrics (IF-IQM), and Signal Transfer Visual IQMs (STV-IQM). Comparison of each genre finds STV-IQMs well suited to capture system quality evaluation: they incorporate performance measures relevant to optical systems design, such as the Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS), and their bottom-up, modular approach enables system components to be optimised separately. We suggest that correlation between STV-IQMs and observer quality scores is limited by three factors: current MTF and NPS measures do not characterise the scene-dependent performance introduced by imaging system non-linearities; the contrast sensitivity models employed do not account for contextual masking effects; and cognitive factors are not considered. We hypothesise that implementing scene-and-process-dependent MTF (SPD-MTF) and NPS (SPD-NPS) measures should mitigate errors originating from scene-dependent system performance. Further, we propose implementing contextual contrast detection and discrimination models to better represent low-level visual performance in image quality analysis. Finally, we discuss image quality optimisation functions that may close the gap between contrast detection/discrimination and quality.
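    As an illustration of the signal-transfer visual approach described above, the sketch below cascades a system MTF with a contrast sensitivity function and integrates over spatial frequency. The CSF form (Mannos-Sakrison) and the normalisation are illustrative assumptions, not the specific metrics reviewed in the paper.

    ```python
    import numpy as np

    def csf(f):
        # Mannos-Sakrison contrast sensitivity function (f in cycles/degree);
        # an illustrative stand-in for the visual models used by STV-IQMs
        return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

    def _trapz(y, x):
        # simple trapezoidal integration (avoids NumPy version differences)
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    def stv_quality(mtf, freqs):
        # CSF-weighted area under the MTF, normalised so a perfect
        # (all-pass) system scores 1.0
        w = csf(freqs)
        return _trapz(mtf * w, freqs) / _trapz(w, freqs)

    freqs = np.linspace(0.1, 60.0, 500)        # cycles/degree
    perfect = np.ones_like(freqs)              # ideal signal transfer
    blurred = np.exp(-freqs / 10.0)            # low-pass (blurring) system
    print(stv_quality(perfect, freqs))         # 1.0
    print(stv_quality(blurred, freqs))         # < 1.0
    ```

    A blurrier system attenuates the frequencies the CSF weights most heavily, so its score drops below the ideal system's score of 1.0.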

    Scene-Dependency of Spatial Image Quality Metrics

    This thesis is concerned with the measurement of spatial imaging performance and the modelling of spatial image quality in digital capturing systems. Spatial imaging performance and image quality relate to the objective and subjective reproduction of luminance contrast signals by the system, respectively; they are critical to overall perceived image quality. The Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) describe the signal (contrast) transfer and noise characteristics of a system, respectively, with respect to spatial frequency. They are both, strictly speaking, only applicable to linear systems, since they are founded upon linear system theory. Many contemporary capture systems use adaptive image signal processing, such as denoising and sharpening, to optimise output image quality. These non-linear processes change their behaviour according to characteristics of the input signal (i.e. the scene being captured). This behaviour renders system performance "scene-dependent" and difficult to measure accurately. The MTF and NPS are traditionally measured from test charts containing suitable predefined signals (e.g. edges, sinusoidal exposures, noise or uniform luminance patches). These signals trigger adaptive processes at uncharacteristic levels, since they are unrepresentative of natural scene content. Thus, for systems using adaptive processes, the resultant MTFs and NPSs are not representative of performance "in the field" (i.e. capturing real scenes). Spatial image quality metrics for capturing systems aim to predict the relationship between MTF and NPS measurements and subjective ratings of image quality. They cascade both measures with contrast sensitivity functions that describe human visual sensitivity with respect to spatial frequency. The most recent metrics designed for adaptive systems use MTFs measured using the dead leaves test chart, which is more representative of natural scene content than the abovementioned test charts.
This marks a step toward modelling image quality with respect to real scene signals. This thesis presents novel scene-and-process-dependent MTFs (SPD-MTF) and NPSs (SPD-NPS). They are measured from imaged pictorial scene (or dead leaves target) signals to account for system scene-dependency. Further, a number of spatial image quality metrics are revised to account for capture system and visual scene-dependency: their MTF and NPS parameters are replaced with SPD-MTFs and SPD-NPSs, and their standard visual functions are replaced with contextual detection (cCSF) or discrimination (cVPF) functions. In addition, two novel spatial image quality metrics are presented (the log Noise Equivalent Quanta (NEQ) and Visual log NEQ) that implement SPD-MTFs and SPD-NPSs. The metrics, SPD-MTFs and SPD-NPSs were validated by analysing measurements from simulated image capture pipelines that applied either linear or adaptive image signal processing. The SPD-NPS measures displayed little evidence of measurement error, and the metrics performed most accurately when they used SPD-NPSs measured from images of scenes. The benefit of deriving SPD-MTFs from images of scenes was, however, traded off against measurement bias; most metrics performed most accurately with SPD-MTFs derived from dead leaves signals. Implementing the cCSF or cVPF did not increase metric accuracy. The log NEQ and Visual log NEQ metrics proposed in this thesis were highly competitive, outperforming metrics of the same genre. They were also more consistent than the IEEE P1858 Camera Phone Image Quality (CPIQ) metric when their input parameters were modified. The advantages and limitations of all performance measures and metrics are discussed, along with their practical implementation and relevant applications.
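The Noise Equivalent Quanta mentioned above combines the MTF and NPS into a single frequency-dependent signal-to-noise description, NEQ(ν) = s²·MTF²(ν)/NPS(ν). A minimal sketch of a log-NEQ-style summary value follows; the scaling and the mean-log pooling are assumptions for illustration, not the thesis's actual metrics.

```python
import numpy as np

def neq(mtf, nps, mean_signal=1.0):
    # Noise Equivalent Quanta per spatial frequency:
    # NEQ = signal^2 * MTF^2 / NPS
    return (mean_signal ** 2) * mtf ** 2 / nps

def log_neq_score(mtf, nps):
    # toy summary value: mean log10(NEQ) over the measured passband
    return float(np.mean(np.log10(neq(mtf, nps))))

freqs = np.linspace(0.05, 0.5, 64)          # cycles/pixel
mtf = np.exp(-3.0 * freqs)                  # falling transfer function
quiet = np.full_like(freqs, 1e-4)           # low-noise power spectrum
noisy = np.full_like(freqs, 1e-2)           # high-noise power spectrum
print(log_neq_score(mtf, quiet) > log_neq_score(mtf, noisy))  # True
```

A noisier system lowers NEQ at every frequency, so any reasonable pooling of log NEQ ranks it below the quieter system.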

    A saliency dispersion measure for improving saliency-based image quality metrics

    Objective image quality metrics (IQMs) potentially benefit from the addition of visual saliency. However, challenges to optimising the performance of saliency-based IQMs remain. A previous eye-tracking study has shown that gaze is concentrated in fewer places in images with highly salient features than in images lacking salient features. From this, it can be inferred that the former are more likely to benefit from adding a saliency term to an IQM. To understand whether these ideas still hold when using computational saliency instead of eye-tracking data, we first conducted a statistical evaluation using 15 state-of-the-art saliency models and 10 well-known IQMs. We then used the results to devise an algorithm that adaptively incorporates saliency in IQMs for natural scenes, based on saliency dispersion. Experimental results demonstrate that this can give significant improvement.
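    A minimal sketch of the dispersion idea above: quantify how spread out a saliency map is (here via Shannon entropy, an assumed proxy for dispersion) and saliency-weight an IQM's local error map only when saliency is concentrated. The entropy measure, the threshold, and the pooling rule are illustrative assumptions, not the algorithm from this paper.

    ```python
    import numpy as np

    def saliency_dispersion(sal):
        # Shannon entropy (nats) of the normalised saliency map;
        # low entropy = saliency concentrated in few places
        p = sal.ravel() / sal.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    def pooled_score(local_err, sal, disp_threshold):
        # adaptive pooling: weight local errors by saliency only when
        # dispersion is below the threshold; otherwise plain average
        if saliency_dispersion(sal) < disp_threshold:
            w = sal / sal.sum()
            return float((local_err * w).sum())
        return float(local_err.mean())

    rng = np.random.default_rng(0)
    err = rng.random((8, 8))                  # toy local error map
    peaked = np.zeros((8, 8)); peaked[2, 3] = 1.0   # concentrated saliency
    flat = np.ones((8, 8))                          # dispersed saliency
    print(saliency_dispersion(peaked))        # 0.0
    print(saliency_dispersion(flat))          # log(64), about 4.16
    ```

    With the peaked map, pooling reduces to the error at the single salient location; with the flat map, the metric falls back to its ordinary spatial average.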

    Facial Image Verification and Quality Assessment System - FaceIVQA

    Although several techniques have been proposed for predicting biometric system performance from quality values, much of this research was based on no-reference assessment using a single quality attribute measured directly from the data. These techniques have proved inappropriate for facial verification scenarios, and inefficient, because no single quality attribute can sufficiently measure the quality of a facial image. In this research work, a facial image verification and quality assessment framework (FaceIVQA) was developed. Different algorithms and methods were implemented in FaceIVQA to extract the faceness, pose, illumination, contrast and similarity quality attributes using an objective full-reference image quality assessment approach. Structured image verification experiments were conducted on the surveillance camera (SCface) database to collect individual quality scores and algorithm matching scores from FaceIVQA using three recognition algorithms, namely principal component analysis (PCA), linear discriminant analysis (LDA) and a commercial recognition SDK. FaceIVQA produced accurate and consistent facial image assessment data. The results show that it accurately assigns quality scores to probe image samples. The resulting quality scores can be assigned to images captured for enrolment or recognition and can be used as input to quality-driven biometric fusion systems. DOI: http://dx.doi.org/10.11591/ijece.v3i6.503
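    As a sketch of how per-attribute quality scores like those above might feed a quality-driven fusion stage, the following combines hypothetical attribute scores with a weighted mean. The attribute names come from the abstract; the scores, weights, and the fusion rule itself are illustrative assumptions, not FaceIVQA's implementation.

    ```python
    def overall_quality(attrs, weights):
        # hypothetical fusion: weighted mean of per-attribute scores,
        # each assumed to lie in [0, 1]
        total_w = sum(weights.values())
        return sum(attrs[k] * weights[k] for k in attrs) / total_w

    attrs = {"faceness": 0.9, "pose": 0.7, "illumination": 0.6,
             "contrast": 0.8, "similarity": 0.75}
    weights = {k: 1.0 for k in attrs}   # equal weights as a placeholder
    print(round(overall_quality(attrs, weights), 2))  # 0.75
    ```

    In a real quality-driven system the weights would be tuned against verification performance rather than fixed equal.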