    Analysis of wavelet-based full reference image quality assessment algorithm

    Measurement of image quality plays an important role in numerous image processing applications, such as forensic science, image enhancement, and medical imaging. In recent years there has been growing interest among researchers in creating objective Image Quality Assessment (IQA) algorithms that correlate well with perceived quality, and significant progress has been made on the full-reference (FR) IQA problem in the past decade. In this paper, we compare five selected FR IQA algorithms on the TID2008 image database. The performance and evaluation results are presented in graphs and tables. The quantitative assessment shows that the wavelet-based IQA algorithms outperform the non-wavelet-based IQA methods, with the exception of the WASH algorithm, whose predictions are superior only for certain distortion types, since it takes into account the essential structural content of the image.
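    For context, a minimal sketch of how such comparisons are usually scored: each metric's objective outputs are correlated with the subjective mean opinion scores (MOS) supplied with the database. The function and the score arrays below are hypothetical illustrations, not values from the paper.

    ```python
    # Sketch: correlating objective IQA scores with subjective MOS, the
    # standard way FR metrics are compared on databases such as TID2008.
    import numpy as np
    from scipy.stats import spearmanr, pearsonr, kendalltau

    def evaluate_metric(objective_scores, mos):
        """Return the usual performance triplet (SROCC, PLCC, KROCC)."""
        srocc, _ = spearmanr(objective_scores, mos)
        plcc, _ = pearsonr(objective_scores, mos)
        krocc, _ = kendalltau(objective_scores, mos)
        return srocc, plcc, krocc

    # Hypothetical example: five images scored by one metric vs. their MOS.
    scores = np.array([0.91, 0.85, 0.62, 0.47, 0.30])
    mos    = np.array([6.1, 5.8, 4.2, 3.1, 2.0])
    print(evaluate_metric(scores, mos))
    ```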

    Edge Enhancement from Low-Light Image by Convolutional Neural Network and Sigmoid Function

    Due to camera resolution or lighting conditions, captured images are often over-exposed or under-exposed, so enhancement techniques are needed to remove these artifacts from recorded pictures. The objective of image enhancement and adjustment techniques is thus to improve the quality and characteristics of an image. In general, enhancement distorts the original numerical values of an image, so such techniques must be designed not to compromise image quality. Rather than restoring the degraded image, optimization extracts the characteristics of the image; improvement involves processing the degraded image to enhance its visual appearance. A great deal of research has been done in this field, one strand of which is deep learning. Most existing contrast enhancement methods adjust a tone curve to correct the contrast of an input image, but they do not work efficiently because of the limited amount of information contained in a single image. In this research, a CNN with edge adjustment is proposed. By applying the CNN with the edge adjustment technique, low-contrast input images can be adapted into high-quality enhanced ones. The result analysis shows that the developed technique offers significant advantages over existing methods.
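    As an illustration of the general idea, here is a minimal PyTorch sketch of a CNN enhancer whose output is bounded by a sigmoid; the layer sizes and the residual formulation are assumptions made for the sketch, not the paper's exact architecture.

    ```python
    # Minimal sketch of a CNN enhancer that maps a low-light RGB image to an
    # enhanced one, with a sigmoid keeping outputs in [0, 1]. Layer sizes are
    # illustrative only; the paper's exact architecture is not specified here.
    import torch
    import torch.nn as nn

    class SimpleEnhancer(nn.Module):
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 3, kernel_size=3, padding=1),
            )

        def forward(self, x):
            # Residual prediction plus sigmoid: the network learns an
            # adjustment on top of the input, and the sigmoid bounds the
            # final intensities.
            return torch.sigmoid(x + self.body(x))

    net = SimpleEnhancer()
    dummy = torch.rand(1, 3, 128, 128)   # a fake low-light image
    print(net(dummy).shape)              # torch.Size([1, 3, 128, 128])
    ```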

    Visual Quality Assessment and Blur Detection Based on the Transform of Gradient Magnitudes

    Digital imaging and image processing technologies have revolutionized the way in which we capture, store, receive, view, utilize, and share images. In image-based applications, through different processing stages (e.g., acquisition, compression, and transmission), images are subjected to different types of distortions which degrade their visual quality. Image Quality Assessment (IQA) attempts to use computational models to automatically evaluate and estimate image quality in accordance with subjective evaluations. Moreover, with the fast development of computer vision techniques, it is important in practice to extract and understand the information contained in blurred images or regions. The work in this dissertation focuses on reduced-reference visual quality assessment of images and textures, as well as perceptual-based spatially-varying blur detection. A training-free, low-cost Reduced-Reference IQA (RRIQA) method is proposed which requires a very small number of reduced-reference (RR) features. Extensive experiments on different benchmark databases demonstrate that the proposed RRIQA method delivers highly competitive performance compared with state-of-the-art RRIQA models for both natural and texture images. In the context of texture, the effect of texture granularity on the quality of synthesized textures is studied, and two RR objective visual quality assessment methods that quantify the perceived quality of synthesized textures are proposed. Performance evaluations on two synthesized-texture databases demonstrate that the proposed RR metrics outperform full-reference (FR), no-reference (NR), and RR state-of-the-art quality metrics in predicting the perceived visual quality of the synthesized textures. Finally, an effective approach is proposed for detecting spatially-varying blur from a single image without requiring any knowledge of the blur type, level, or camera settings. Evaluations on a diverse set of blurry images with different blur types, levels, and content demonstrate that the proposed algorithm performs favorably against state-of-the-art methods both qualitatively and quantitatively.
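    As a rough illustration of the gradient-magnitude idea (not the dissertation's algorithm), a local sharpness map can be built from the block DCT of gradient magnitudes: blurry regions concentrate their gradient energy in low frequencies, so the high-frequency share of each block is a crude blur indicator. The block size and the 4x4 low-frequency corner are arbitrary choices for the sketch.

    ```python
    # Illustrative sketch of blur detection from the transform of gradient
    # magnitudes: per block, measure how much DCT energy sits outside the
    # low-frequency corner. High values suggest a sharp region.
    import numpy as np
    from scipy.fft import dctn

    def sharpness_map(gray, block=16):
        gy, gx = np.gradient(gray.astype(np.float64))
        gm = np.hypot(gx, gy)                      # gradient magnitude
        h, w = gm.shape
        out = np.zeros((h // block, w // block))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = gm[i*block:(i+1)*block, j*block:(j+1)*block]
                coeffs = np.abs(dctn(patch, norm='ortho'))
                total = coeffs.sum() + 1e-12
                low = coeffs[:4, :4].sum()         # low-frequency corner
                out[i, j] = 1.0 - low / total      # high-frequency share
        return out

    # Hypothetical usage on a random "image":
    img = np.random.rand(128, 128)
    print(sharpness_map(img).shape)                # (8, 8) per-block map
    ```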

    Stereoscopic video quality assessment using binocular energy

    Stereoscopic imaging is becoming increasingly popular. However, to ensure the best quality of experience, there is a need to develop more robust and accurate objective metrics for stereoscopic content quality assessment. Existing stereoscopic image and video metrics are either extensions of conventional 2D metrics (with added depth or disparity information) or are based on relatively simple perceptual models. Consequently, they tend to lack the accuracy and robustness required for stereoscopic content quality assessment. This paper introduces full-reference stereoscopic image and video quality metrics based on a Human Visual System (HVS) model incorporating important physiological findings on binocular vision. The proposed approach rests on three contributions. First, it introduces a novel HVS model extending previous models to include the phenomena of binocular suppression and recurrent excitation. Second, an image quality metric based on the novel HVS model is proposed. Finally, an optimised temporal pooling strategy is introduced to extend the metric to the video domain. Both image and video quality metrics are obtained via a training procedure that establishes a relationship between subjective scores and objective measures of the HVS model. The metrics are evaluated using publicly available stereoscopic image/video databases as well as a new stereoscopic video database. An extensive experimental evaluation demonstrates the robustness of the proposed quality metrics, indicating a considerable improvement over the state of the art, with average correlations with subjective scores of 0.86 for the proposed stereoscopic image metric and 0.89 and 0.91 for the proposed stereoscopic video metrics.
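    For background, a minimal sketch of the classic binocular energy computation that HVS models of this kind build on: each view is filtered with a quadrature (even/odd) Gabor pair and the summed responses are squared. The filter parameters and the simple left-plus-right combination are illustrative assumptions, not the paper's model.

    ```python
    # Sketch of binocular energy: Gabor quadrature-pair responses of the
    # left and right views are summed and squared, giving a phase-invariant
    # energy map. Parameters are illustrative only.
    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_pair(size=31, wavelength=8.0, sigma=4.0, theta=0.0):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        env = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # isotropic envelope
        even = env * np.cos(2 * np.pi * xr / wavelength)
        odd  = env * np.sin(2 * np.pi * xr / wavelength)
        return even, odd

    def binocular_energy(left, right):
        even, odd = gabor_pair()
        le = fftconvolve(left,  even, mode='same')
        lo = fftconvolve(left,  odd,  mode='same')
        re = fftconvolve(right, even, mode='same')
        ro = fftconvolve(right, odd,  mode='same')
        # Energy of the summed monocular responses (simple binocular sum).
        return (le + re)**2 + (lo + ro)**2

    left  = np.random.rand(64, 64)                 # placeholder stereo pair
    right = np.random.rand(64, 64)
    print(binocular_energy(left, right).shape)     # (64, 64)
    ```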

    Low-Light Enhancement in the Frequency Domain

    Decreased visibility, intense noise, and biased color are common problems in low-light images. These visual disturbances further reduce the performance of high-level vision tasks such as object detection and tracking. To address this issue, image enhancement methods have been proposed to increase image contrast. However, most of them operate only in the spatial domain, where they can be severely influenced by noise while enhancing. Hence, in this work we propose a novel residual recurrent multi-wavelet convolutional neural network (R2-MWCNN), learned in the frequency domain, that can simultaneously increase image contrast and suppress noise. This end-to-end trainable network uses a multi-level discrete wavelet transform to divide input feature maps into distinct frequency bands, resulting in better denoising. A channel-wise loss function is proposed to correct color distortion for more realistic results. Extensive experiments demonstrate that the proposed R2-MWCNN outperforms state-of-the-art methods quantitatively and qualitatively.
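    To make the frequency-splitting idea concrete, the sketch below uses PyWavelets to decompose an image into a low-frequency approximation and per-level detail bands; inside the actual network this role would be played by a fixed convolutional DWT layer, and the level count here is arbitrary.

    ```python
    # Sketch: a multi-level discrete wavelet transform separates an image
    # (or feature map) into one coarse approximation and per-level detail
    # bands, so denoising and contrast handling can be applied per band.
    import numpy as np
    import pywt

    img = np.random.rand(256, 256)                 # placeholder input
    coeffs = pywt.wavedec2(img, 'haar', level=3)   # [cA3, (cH3,cV3,cD3), ..., (cH1,cV1,cD1)]

    cA = coeffs[0]
    print('approximation:', cA.shape)              # coarse low-frequency content
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        print(f'detail level {lvl}:', cH.shape)    # horizontal/vertical/diagonal bands

    # Perfect reconstruction: the bands carry all of the image's information.
    rec = pywt.waverec2(coeffs, 'haar')
    assert np.allclose(rec, img)
    ```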

    Low-complexity high prediction accuracy visual quality metrics and their applications in H.264/AVC encoding mode decision process

    In this thesis, we develop a new general framework for computing full-reference image quality scores in the discrete wavelet domain using the Haar wavelet. The proposed framework offers an excellent tradeoff between accuracy and complexity. In our framework, quality metrics are categorized as either map-based, which generate a quality (distortion) map to be pooled for the final score, e.g., structural similarity (SSIM), or non-map-based, which give only a final score, e.g., peak signal-to-noise ratio (PSNR). For map-based metrics, the framework defines a contrast map in the wavelet domain for pooling the quality maps. We also derive a formula that enables the framework to automatically calculate the appropriate level of wavelet decomposition for error-based metrics at a desired viewing distance. To account for very fine image details in quality assessment, the method defines a multi-level edge map for each image, comprising only the most informative image subbands. To clarify the application of the framework in computing quality scores, we give examples showing how it can be applied to improve well-known metrics such as SSIM, visual information fidelity (VIF), PSNR, and absolute difference. We compare the complexity of the algorithms obtained from the framework with Intel IPP-based H.264 baseline profile encoding using C/C++ implementations, and we evaluate the overall performance of the proposed metrics, including their prediction accuracy, on two well-known image quality databases and one video quality database. All the simulation results confirm that the proposed framework and quality assessment metrics improve prediction accuracy while reducing computational complexity. For example, using the framework, we can compute VIF at about 5% of the complexity of its original version, but with higher accuracy.

    In the next step, we study how H.264 coding mode decision can benefit from the developed metrics. We integrate the proposed SSEA metric as the distortion measure inside the H.264 mode decision process, using the H.264/AVC JM reference software as the implementation and verification platform. We propose a search algorithm to determine the Lagrange multiplier value for each quantization parameter (QP); the search is applied to three types of video sequences with different motion activity, and the resulting Lagrange multiplier values are tabulated for each. Based on the proposed framework, we also introduce a new quality metric, PSNRA, and use it in the mode decision part. The simulated rate-distortion (RD) curves show that, at the same PSNRA, the SSEA-based mode decision reduces the bitrate by about 5% on average compared to the conventional SSE-based approach for sequences with low and medium motion activity. Notably, the computational complexity is not increased at all by using the proposed SSEA-based approach instead of the conventional SSE-based method, so the proposed mode decision algorithm can be used in real-time video coding.
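    As a hedged sketch of the wavelet-domain idea (not the thesis's exact framework), one can compute SSIM on the Haar approximation subband instead of the full image; each decomposition level quarters the pixel count, which is where much of the complexity saving comes from. The helper name and the single-level default are assumptions, and the thesis's viewing-distance formula for choosing the level is not reproduced here.

    ```python
    # Sketch: quality assessment in the Haar wavelet domain. Keeping only
    # the LL (approximation) band shrinks the maps SSIM must process by 4x
    # per level, trading fine-detail sensitivity for speed.
    import numpy as np
    import pywt
    from skimage.metrics import structural_similarity

    def haar_domain_ssim(ref, dist, levels=1):
        a_ref, a_dist = ref.astype(np.float64), dist.astype(np.float64)
        for _ in range(levels):
            a_ref,  _ = pywt.dwt2(a_ref,  'haar')   # keep only the LL band
            a_dist, _ = pywt.dwt2(a_dist, 'haar')
        rng = a_ref.max() - a_ref.min()
        return structural_similarity(a_ref, a_dist, data_range=rng)

    ref  = np.random.rand(256, 256)
    dist = ref + 0.05 * np.random.randn(256, 256)   # hypothetical distortion
    print(haar_domain_ssim(ref, dist))
    ```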