
    On color image quality assessment using natural image statistics

    Color distortion can significantly degrade perceived visual quality; however, most existing reduced-reference quality measures are designed for grayscale images. In this paper, we consider a basic extension of well-known image-statistics-based quality assessment measures to color images. In order to evaluate the impact of color information on the efficiency of the measures, two color spaces are investigated: RGB and CIELAB. Results of an extensive evaluation on the TID 2013 benchmark demonstrate that a significant improvement can be achieved for a large number of distortion types when the CIELAB color representation is used.
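    The abstract does not specify the exact statistics used, but a reduced-reference comparison of this kind can be illustrated with a minimal sketch: convert both images from sRGB to CIELAB, summarize each channel by a few moments, and compare the feature vectors. The per-channel moment features and the distance are hypothetical stand-ins, not the paper's measure.

```python
import numpy as np

def srgb_to_lab(rgb):
    # rgb in [0, 1]; linearize the sRGB transfer function
    rgb = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ under the D65 illuminant
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = rgb @ M.T
    xyz /= np.array([0.95047, 1.0, 1.08883])  # D65 white point
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def channel_stats(img):
    # crude reduced-reference features: per-channel mean, std, excess kurtosis
    flat = img.reshape(-1, img.shape[-1])
    mu = flat.mean(0)
    sd = flat.std(0)
    kurt = ((flat - mu) ** 4).mean(0) / (sd ** 4 + 1e-12) - 3
    return np.concatenate([mu, sd, kurt])

def rr_distance(ref, dist):
    # distance between the two feature vectors; larger = more distorted
    return float(np.linalg.norm(channel_stats(ref) - channel_stats(dist)))
```

    The reduced-reference setting only requires transmitting `channel_stats(ref)` alongside the image, not the full reference.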

    A statistical reduced-reference method for color image quality assessment

    Although color is a fundamental feature of human visual perception, it has been largely unexplored in reduced-reference (RR) image quality assessment (IQA) schemes. In this paper, we propose a natural scene statistics (NSS) method that efficiently uses this information. It is based on the statistical deviation between the steerable pyramid coefficients of the reference color image and those of the degraded one. We propose and analyze the multivariate generalized Gaussian distribution (MGGD) to model the underlying statistics. In order to quantify the degradation, we develop and evaluate two measures, based respectively on the geodesic distance between two MGGDs and on the closed form of the Kullback-Leibler divergence (KLD). We performed an extensive evaluation of both metrics in various color spaces (RGB, HSV, CIELAB and YCbCr) using the TID 2008 benchmark and the FRTV Phase I validation process. Experimental results demonstrate the effectiveness of the proposed framework in achieving good consistency with human visual perception. Furthermore, the best configuration is obtained with the CIELAB color space combined with the KLD measure.
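    The paper's closed-form KLD is for general MGGDs; for shape parameter 1 the MGGD reduces to the zero-mean multivariate Gaussian, whose closed-form KL divergence is a minimal sketch of the idea: fit a covariance to the pyramid coefficients of each image and measure the divergence between the fitted models.

```python
import numpy as np

def kl_zero_mean_gaussian(S0, S1):
    # Closed-form KL( N(0, S0) || N(0, S1) ) -- the Gaussian special case
    # (shape parameter 1) of the MGGD divergence used in the paper.
    d = S0.shape[0]
    trace_term = np.trace(np.linalg.inv(S1) @ S0)
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (trace_term - d + logdet1 - logdet0)

def coeff_covariance(coeffs):
    # coeffs: (n_samples, d) array of subband coefficient vectors
    # (e.g. one row per pixel, one column per steerable pyramid orientation)
    return np.cov(coeffs, rowvar=False)
```

    A larger divergence between the reference-image and degraded-image coefficient models indicates a stronger perceived degradation.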

    Image Quality Assessment by Saliency Maps

    Image Quality Assessment (IQA) is a challenging problem in image processing applications. The goal of IQA is to replace human judgment of perceived image quality with a machine evaluation. A large number of methods have been proposed to evaluate the quality of an image that may be corrupted by noise or distorted during acquisition, transmission, compression, etc. Many of these methods nevertheless disagree with human judgment because they are poorly correlated with human visual perception. In recent years, the most advanced IQA models and metrics have treated visual saliency as a fundamental component. The aim of visual saliency is to produce a saliency map that replicates the behaviour of the human visual system (HVS) during the visual attention process. In this paper we study the relationship between different kinds of visual saliency maps and IQA measures. In particular, we perform extensive comparisons between saliency-based IQA measures and traditional objective IQA measures. Since the saliency literature offers many different approaches to computing saliency maps, we investigate which one is best suited to IQA metrics.
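    One common way a saliency map is folded into an objective IQA measure (a generic illustration, not necessarily the exact schemes compared in this paper) is to weight a per-pixel error map by the normalized saliency map, so that distortions in attention-grabbing regions count more:

```python
import numpy as np

def saliency_weighted_mse(ref, dist, saliency):
    # Normalize the saliency map into a weight distribution over pixels,
    # then average the squared-error map under those weights.
    w = saliency / (saliency.sum() + 1e-12)
    err = (ref.astype(float) - dist.astype(float)) ** 2
    return float((w * err).sum())
```

    With a uniform saliency map this reduces to plain MSE; a good saliency model concentrates the weight where distortions are most noticeable to a human observer.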

    Saliency-Aware Spatio-Temporal Artifact Detection for Compressed Video Quality Assessment

    Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying the various types of PEAs are critical to improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e., blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e., flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with low computational cost and high consistency with human visual perception. For temporal artifacts, the self-attention-based TimeSformer is improved to detect them. Based on these six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.

    Influence of Auditory Cues on the visually-induced Self-Motion Illusion (Circular Vection) in Virtual Reality

    This study investigated whether the visually induced self-motion illusion (“circular vection”) can be enhanced by adding a matching auditory cue (the sound of a fountain that is also visible in the visual stimulus). Twenty observers viewed rotating photorealistic pictures of a market place projected onto a curved projection screen (FOV: 54°×45°). Three conditions were randomized in a repeated-measures within-subject design: no sound, mono sound, and spatialized sound using a generic head-related transfer function (HRTF). Adding mono sound increased convincingness ratings marginally but did not affect any of the other measures of vection or presence. Spatializing the fountain sound, however, improved vection (convincingness and vection build-up time) and presence ratings significantly. Note that this facilitation was found even though the visual stimulus was of high quality and realism, and is known to be a powerful vection-inducing stimulus. Thus, HRTF-based auralization using headphones can be employed to improve visual VR simulations both in terms of self-motion perception and overall presence.

    Developmental coordination disorder: a focus on handwriting

    Background. Developmental coordination disorder (DCD) is the term used to refer to children who present with motor coordination difficulties unexplained by a general medical condition, intellectual disability or known neurological impairment. Difficulties with handwriting are often included in descriptions of DCD, including that provided in DSM-5 (APA, 2013). However, surprisingly few studies have examined handwriting in DCD in a systematic way, and those that are available have been conducted outside the UK, in alphabets other than the Latin-based alphabet. In order to gain a better understanding of the nature of the 'slowness' so commonly reported in children with DCD, this thesis aimed to examine the handwriting of children with DCD in detail by considering the handwriting product, the process, the child's perspective, the teacher's perspective and some popular clinical measures including strength, visual perception and force variability. Compositional quality was also evaluated to examine the impact of poor handwriting on the wider task of writing. Method. Twenty-eight 8-14-year-old children with a diagnosis of DCD participated in the study, together with 28 typically developing age- and gender-matched controls. Participants completed the four handwriting tasks from the Detailed Assessment of Speed of Handwriting (DASH) and wrote their own name, all on a digitising writing tablet. The number of words written, the speed of pen movements and the time spent pausing during the tasks were calculated. Participants were also assessed on spelling, reading, receptive vocabulary, visual perception, visual motor integration, grip strength and the quality of their composition. Results. The findings confirmed what many professionals report: children with DCD produce less text than their peers. However, this was not due to slow movement execution, but rather to a higher percentage of time spent pausing, in particular in pauses over 10 seconds. The location of the pauses within words indicated a lack of automaticity in the handwriting of children with DCD. The DCD group scored below their peers on legibility, grip strength and measures of visual perception, and had poorer compositional quality. Individual data highlighted heterogeneous performance profiles in children with DCD, and there was little agreement and no significant association between teachers' and therapists' measures of handwriting. Conclusions. A new model incorporating handwriting within the broader context of writing is proposed as a lens through which therapists can consider handwriting in children with DCD. The model incorporates the findings from this thesis and discusses avenues for future research in this area.

    Variances on Students’ Blended Learning Perception According to Learning Style Preferences

    Blended learning, particularly the use of online-based technologies, provides teachers and learners with opportunities for a more flexible teaching-learning environment based on individual learning preferences. This paper investigates the variation in learners' perception of blended learning in terms of their learning styles. One hundred thirteen (113) students enrolled in Statistics during the second trimester of school year 2012-2013 participated in the study. A blended learning environment (BLE) questionnaire was designed to determine the students' perception of blended learning, and the Felder-Soloman Index of Learning Styles (ILS) measured the students' learning styles. Additional data were gathered from interviews and focus-group discussions to record students' reactions to the BLE. Using SPSS, the data from each instrument were described and analyzed. Students' views on blended learning revealed moderate to very high perception of items related to ease of use and accessibility, quality of content, usage and purpose, and general outcome. On the ILS, the active-reflective dimension reported 55% active learners; the sensing-intuitive dimension reported 61% preference for sensing learning; the visual-verbal dimension revealed 47% visual learners; and the sequential-global dimension showed 58% sequential preference. Overall, results revealed that students' perception of blended learning differs between active-reflective and visual-verbal learners, whereas learners classified as sensing-intuitive and sequential-global do not vary significantly in blended learning perception. It appears that teachers should still consider students' learning styles in the design, implementation and evaluation of blended learning. The study concludes with several future research directions concerning the impact of teaching and learning styles on blended learning and the evaluation of e-learning styles.
    Keywords: blended learning, blended learning perception, learning styles, Felder-Soloman Index of Learning Styles (ILS)

    Evaluation of HVS models in the application of medical quality assessment

    In this study, four of the most widely used Human Visual System (HVS) models are applied to Magnetic Resonance (MR) images for a signal-detection task. Their performance is evaluated against a gold standard derived from radiologists' decisions. Task-based image quality assessment requires taking into account the specificities of human perception, for which various HVS models have been proposed. However, few works have evaluated and compared the suitability of these models for the assessment of medical image quality. Here we propose to score the performance of each HVS model using the AUC and its variance estimates as the figure of merit. The contribution of this work is twofold: firstly, the application of MRMC (multiple-reader, multiple-case) estimates independently of the HVS model's output range; secondly, the use of a radiologists' consensus as the gold standard, so that the estimated AUC measures the distance between the HVS model and radiologist perception.
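    The AUC is invariant to any monotone rescaling of a model's scores, which is what makes it usable independently of each HVS model's output range. A minimal sketch of the rank-based (Mann-Whitney) AUC estimate, with hypothetical score and label arrays:

```python
import numpy as np

def auc(scores, labels):
    # Mann-Whitney form of the AUC: the probability that a randomly chosen
    # signal-present case outscores a randomly chosen signal-absent case.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (greater + 0.5 * ties) / (pos.size * neg.size)
```

    Because only the ordering of the scores matters, passing the raw model outputs or any monotone transform of them yields the same AUC.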

    Sketch Plus Colorization Deep Convolutional Neural Networks for Photos Generation from Sketches

    In this paper, we introduce a method to generate photos from sketches using Deep Convolutional Neural Networks (DCNNs). This research proposes a method that combines a network that inverts sketches into photos (sketch inversion net) with a network that predicts color given grayscale images (colorization net). With this method, the quality of the generated photos is expected to be closer to that of the actual photos. We first artificially constructed uncontrolled conditions for the dataset. The dataset, which consists of hand-drawn sketches and their corresponding photos, was pre-processed using several data augmentation techniques to train the models to address the issues of rotation, scaling, shape, noise, and positioning. Validation was measured using two types of similarity measurements: pixel-difference-based metrics and human visual system (HVS) metrics, which mimic human perception in evaluating the quality of an image. The pixel-difference-based metrics consist of Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR), while the HVS metrics consist of the Universal Image Quality Index (UIQI) and Structural Similarity (SSIM). Our method gives the best quality of generated photos across all measures (844.04 for MSE, 19.06 for PSNR, 0.47 for UIQI, and 0.66 for SSIM).
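    For reference, the two pixel-difference metrics quoted above are computed directly from the image arrays (a minimal sketch; the windowed SSIM and UIQI computations are more involved and omitted here):

```python
import numpy as np

def mse(a, b):
    # Mean Squared Error between two images of the same shape
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, max_val=255.0):
    # Peak Signal-to-Noise Ratio in dB; higher means closer to the reference
    m = mse(a, b)
    return float("inf") if m == 0 else float(10 * np.log10(max_val ** 2 / m))
```

    With 8-bit images (`max_val=255`), the reported MSE of 844.04 corresponds to a PSNR of about 18.9 dB, consistent with the 19.06 dB figure above up to the exact averaging scheme.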