    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, covering non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques are additionally required to derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle ill-posedness, a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite a certain level of progress, image deblurring, especially the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and spatially variant. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented. Comment: 53 pages, 17 figures
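The non-blind setting described above, recovering a latent sharp image given a known blur kernel, can be illustrated with a classic frequency-domain Wiener filter. This is a minimal sketch of one representative technique from the review's scope, not any specific method from the paper; the `snr` parameter is an assumed scalar signal-to-noise estimate that regularises the ill-posed inverse.

```python
import numpy as np

def wiener_deblur(blurry, kernel, snr=1e2):
    """Non-blind Wiener deconvolution under circular boundary conditions.

    blurry : 2-D observed image
    kernel : 2-D blur kernel (point spread function)
    snr    : assumed signal-to-noise ratio; larger values trust the data more
    """
    # Zero-pad the kernel to the image size and take its spectrum.
    H = np.fft.fft2(kernel, s=blurry.shape)
    G = np.fft.fft2(blurry)
    # Wiener filter: conj(H) / (|H|^2 + 1/snr). The 1/snr term keeps the
    # division stable where |H| is small, which is exactly where the
    # deblurring problem is ill-posed.
    F = np.conj(H) * G / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(F))
```

Blind deblurring would additionally have to estimate `kernel` itself, which is why the abstract singles it out as the harder case.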

    Multi-Task Learning Approach for Natural Images' Quality Assessment

    Blind image quality assessment (BIQA) is a method to predict the quality of a natural image without a reference image. Current BIQA models typically learn their predictions separately for different image distortions, ignoring the relationship between the learning tasks. As a result, a BIQA model may have great prediction performance for natural images affected by one particular type of distortion but be less effective when tested on others. In this paper, we propose to address this limitation by training our BIQA model simultaneously under different distortion conditions using a multi-task learning (MTL) technique. Given a set of training images, our Multi-Task Learning based Image Quality assessment (MTL-IQ) model first extracts spatial-domain BIQA features. The features are then used as input to a trace-norm regularisation based MTL framework to learn prediction models for different distortion classes simultaneously. For a test image of a known distortion, MTL-IQ selects a specific trained model to predict the image's quality score. For a test image of an unknown distortion, MTL-IQ first estimates the amount of each distortion present in the image using a support vector classifier. The probability estimates are then used to weight the image prediction scores from the different trained models. The weighted scores are then pooled to obtain the final image quality score. Experimental results on standard image quality assessment (IQA) databases show that MTL-IQ is highly correlated with human perceptual measures of image quality. It also obtains higher prediction performance in both overall and individual distortion cases compared to current BIQA models.
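The unknown-distortion path described above ends with a simple pooling step: each per-distortion model's quality prediction is weighted by the classifier's probability estimate for that distortion, and the weighted scores are summed. A minimal sketch of that pooling stage, with illustrative names (the actual MTL-IQ feature extraction and trace-norm training are not reproduced here):

```python
import numpy as np

def pool_quality_scores(model_scores, distortion_probs):
    """Combine per-distortion quality predictions into one score.

    model_scores    : quality score from each distortion-specific model
    distortion_probs: classifier probability estimate for each distortion
    """
    scores = np.asarray(model_scores, dtype=float)
    probs = np.asarray(distortion_probs, dtype=float)
    # Normalise defensively in case the estimates don't sum exactly to 1.
    probs = probs / probs.sum()
    # Probability-weighted sum of the per-model scores.
    return float(np.dot(probs, scores))
```

For example, scores of 30, 50, and 80 from three distortion models weighted by probabilities 0.2, 0.5, and 0.3 pool to a final score of 55.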

    Interpretable Hyperspectral AI: When Non-Convex Modeling meets Hyperspectral Remote Sensing

    Hyperspectral imaging, also known as image spectrometry, is a landmark technique in geoscience and remote sensing (RS). In the past decade, enormous efforts have been made to process and analyze these hyperspectral (HS) products, mainly by seasoned experts. However, with the ever-growing volume of data, the bulk of costs in manpower and material resources poses new challenges in reducing the burden of manual labor and improving efficiency. It is therefore urgent to develop more intelligent and automatic approaches for various HS RS applications. Machine learning (ML) tools with convex optimization have successfully undertaken the tasks of numerous artificial intelligence (AI)-related applications. However, their ability to handle complex practical problems remains limited, particularly for HS data, due to the effects of various spectral variabilities in the process of HS imaging and the complexity and redundancy of higher-dimensional HS signals. Compared to convex models, non-convex modeling, which is capable of characterizing more complex real scenes and providing model interpretability both technically and theoretically, has been proven to be a feasible solution for reducing the gap between challenging HS vision tasks and currently advanced intelligent data processing models.
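The convex/non-convex distinction the abstract draws can be made concrete with the two thresholding operators that underpin many sparse models: the convex l1 penalty yields soft thresholding, which biases every surviving coefficient toward zero, while a non-convex l0-style penalty yields hard thresholding, which leaves large coefficients untouched. This is a generic illustration of that trade-off, not any specific model from the paper:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the convex l1 penalty: shrink-and-kill.

    Every nonzero output is biased toward zero by lam.
    """
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    """Proximal operator of a non-convex l0-style penalty: keep-or-kill.

    Coefficients above the threshold pass through unbiased, which is one
    reason non-convex models can characterise scenes more faithfully.
    """
    return np.where(np.abs(x) > lam, x, 0.0)
```

Applied to the coefficients `[-3.0, 0.5, 2.0]` with `lam=1.0`, soft thresholding returns `[-2.0, 0.0, 1.0]` (large entries shrunk), while hard thresholding returns `[-3.0, 0.0, 2.0]` (large entries preserved).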