
    Super resolution and dynamic range enhancement of image sequences

    Camera producers try to increase the spatial resolution of a camera by reducing the size of the sites on the sensor array. However, shot noise causes the signal-to-noise ratio to drop as sensor sites get smaller, which motivates performing resolution enhancement in software. Super resolution (SR) image reconstruction aims to combine degraded images of a scene to form an image with higher resolution than any of the observations. High resolution images are in demand in biomedical imaging, surveillance, aerial/satellite imaging, and high-definition TV (HDTV) technology. Although extensive research has been conducted in SR, little attention has been given to increasing the resolution of images under illumination changes. In this study, a unique framework is proposed to increase the spatial resolution and dynamic range of a video sequence using Bayesian and Projection onto Convex Sets (POCS) methods. Incorporating camera response function estimation into image reconstruction allows dynamic range enhancement along with spatial resolution improvement. Photometrically varying input images complicate the process of projecting observations onto a common grid by violating brightness constancy. A contrast invariant feature transform is proposed in this thesis to register input images with high illumination variation. The proposed algorithm increases the repeatability rate of detected features among the frames of a video by computing the autocorrelation matrix from the gradients of contrast-stretched input images. The presented contrast invariant feature detection improves the repeatability rate of the Harris corner detector by around 25% on average. Joint multi-frame demosaicking and resolution enhancement is also investigated in this thesis. A color constancy constraint set is devised and incorporated into the POCS framework to increase the resolution of color-filter-array sampled images. The proposed method produces fewer demosaicking artifacts than the existing POCS method and higher visual quality in the final image.
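The idea of computing the gradient autocorrelation matrix on a contrast-stretched image can be sketched as follows. This is a minimal illustration, not the thesis's algorithm: it assumes a simple global linear stretch and accumulates the structure tensor over the whole image (a real Harris detector uses a local window per pixel), and the function names are hypothetical.

```python
def contrast_stretch(img):
    """Linearly rescale intensities to [0, 1] (hypothetical preprocessing step).
    Any positive affine contrast change a*I + b maps to the same stretched image."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    if hi == lo:
        return [[0.0] * len(row) for row in img]
    return [[(v - lo) / (hi - lo) for v in row] for row in img]

def harris_response(img, k=0.04):
    """Harris corner response from the gradient autocorrelation (structure
    tensor), accumulated over the whole image for brevity."""
    h, w = len(img), len(img[0])
    ixx = ixy = iyy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0  # central differences
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            ixx += gx * gx
            ixy += gx * gy
            iyy += gy * gy
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    return det - k * trace * trace
```

Because the stretch normalizes away any positive affine change in brightness and contrast, two photometrically different frames of the same scene yield identical responses after stretching, which is the mechanism behind the improved repeatability.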

    Spatio-spectral Image Reconstruction Using Non-local Filtering

    In many image processing tasks, pixels or blocks of pixels are missing or lost in only some channels. For example, during a defective transmission of an RGB image, one or more blocks in one color channel may be lost. Nearly all modern applications in image processing and transmission use at least three color channels, and some employ even more bands, for example in the infrared and ultraviolet areas of the light spectrum. Typically, only some pixels and blocks in a subset of color channels are distorted, so the other channels can be used to reconstruct the missing pixels; this is called spatio-spectral reconstruction. Current state-of-the-art methods rely purely on the local neighborhood, which works well for homogeneous regions. However, in high-frequency regions like edges or textures, these methods fail to properly model the relationship between color bands. Hence, this paper introduces non-local filtering for building a linear regression model that describes the inter-band relationship and is used to reconstruct the missing pixels. Our novel method increases the PSNR by 2 dB on average and yields visually much more appealing images in high-frequency regions.
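The core reconstruction step can be sketched as a least-squares regression between bands. This is a simplified local version under stated assumptions: the paper's method additionally selects and weights the sample pixels by non-local patch similarity, which is omitted here, and the function names are hypothetical.

```python
def fit_band_regression(known_pairs):
    """Least-squares fit of g = a*r + b from co-located samples where both a
    reference band r and the target band g are intact. The paper's non-local
    filtering would choose these samples from similar patches anywhere in the
    image rather than only the local neighborhood."""
    n = len(known_pairs)
    sx = sum(r for r, _ in known_pairs)
    sy = sum(g for _, g in known_pairs)
    sxx = sum(r * r for r, _ in known_pairs)
    sxy = sum(r * g for r, g in known_pairs)
    denom = n * sxx - sx * sx
    if denom == 0:
        return 0.0, sy / n  # constant reference band: fall back to the mean
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

def reconstruct_pixel(a, b, ref_value):
    """Predict a lost pixel in the target band from the intact reference band."""
    return a * ref_value + b
```

When the inter-band relationship really is close to linear, as it often is along an edge, the intact channel predicts the lost one well; purely local sampling breaks down exactly where too few reliable neighbors exist.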

    Mapping and Deep Analysis of Image Dehazing: Coherent Taxonomy, Datasets, Open Challenges, Motivations, and Recommendations

    Our study aims to review and analyze the most relevant studies in the image dehazing field. Several aspects are necessary for a broad understanding of the studies examined in the existing literature: the datasets that have been used, the challenges that other researchers have faced, motivations, and recommendations for diminishing the obstacles in the reported literature. A systematic protocol was employed to search all relevant articles on image dehazing, with variations in keywords, in addition to searching for evaluation and benchmark studies. The search was conducted on three online databases, namely IEEE Xplore, Web of Science (WOS), and ScienceDirect (SD), from 2008 to 2021; these indices were selected because they are sufficient in terms of coverage. After defining the inclusion and exclusion criteria, we included 152 articles in the final set. A total of 55 of the 152 articles focused on various studies that conducted image dehazing, and 13 of the 152 covered review papers based on scenarios and general overviews. Finally, most of the included articles (84/152) centered on the development of image dehazing algorithms for real-time scenarios. Image dehazing removes unwanted visual effects and is often considered an image enhancement technique, which requires a fully automated algorithm that works in real-time outdoor applications, a reliable evaluation method, and datasets based on different weather conditions. Many relevant studies have been conducted to meet these critical requirements. We also conducted an experimental comparison of various image dehazing algorithms using objective image quality assessment. In conclusion, unlike other review papers, our study distinctly reflects different observations on image dehazing areas. We believe that the results of this study can serve as a useful guideline for practitioners who are looking for a comprehensive view of image dehazing.
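The objective image quality assessment mentioned in the abstract is commonly based on metrics such as PSNR, which can be sketched as follows. This is a generic illustration of the metric, not the survey's evaluation code; the function name is hypothetical.

```python
import math

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized grayscale images,
    a standard full-reference quality metric for comparing a dehazed result
    against a haze-free reference: PSNR = 10 * log10(peak^2 / MSE)."""
    n = 0
    se = 0.0
    for ref_row, test_row in zip(reference, test):
        for r, t in zip(ref_row, test_row):
            se += (r - t) ** 2
            n += 1
    mse = se / n
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

Higher PSNR means the dehazed output is numerically closer to the reference; it is typically complemented by perceptual metrics, since pixel-wise error does not always track visual quality of dehazing results.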

    Comprehensive retinal image analysis: image processing and feature extraction techniques oriented to the clinical task

    Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation, a permanent record for the patients, and, most importantly, the ability to extract information about many diseases. Ophthalmology is a field that is heavily dependent on the analysis of digital images because they can aid in establishing an early diagnosis even before the first symptoms appear. This dissertation contributes to the digital analysis of such images and the problems that arise along the imaging pipeline, a field commonly referred to as retinal image analysis. We have dealt with and proposed solutions to problems that arise in retinal image acquisition and in the longitudinal monitoring of retinal disease evolution, specifically non-uniform illumination, poor image quality, automated focusing, and multichannel analysis. However, there are many unavoidable situations in which images of poor quality are acquired, such as retinal images blurred by aberrations in the eye. To address this problem we have proposed two approaches for blind deconvolution of blurred retinal images. In the first approach we consider the blur to be space-invariant; in the second we extend this work and propose a more general space-variant scheme. For the development of the algorithms we have built preprocessing solutions that enable the extraction of retinal features of medical relevance, such as the segmentation of the optic disc and the detection and visualization of longitudinal structural changes in the retina. Encouraging experimental results on real retinal images from the clinical setting demonstrate the applicability of our proposed solutions.
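The distinction between the two blur models can be made concrete with the forward model that blind deconvolution must invert. This is a minimal 1-D sketch, not the dissertation's method: it only shows the space-invariant case, where a single point-spread function (kernel) degrades every sample, and the function name is hypothetical.

```python
def blur_space_invariant(signal, kernel):
    """Forward model for space-invariant blur: every sample is degraded by the
    same kernel (one PSF for the whole image). Blind deconvolution must
    estimate both the sharp signal and this kernel; the space-variant case
    generalizes this by letting the kernel change with position. Zero padding
    is used at the borders."""
    k = len(kernel)
    half = k // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j in range(k):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * kernel[j]
        out.append(acc)
    return out
```

Applying this model to an impulse recovers the kernel itself, which is why the blur of a point-like structure (e.g. a small bright lesion) reveals the PSF that a blind method has to estimate.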

    Scaling Multidimensional Inference for Big Structured Data

    In information technology, big data is a collection of data sets so large and complex that it becomes difficult to process using traditional data processing applications [151]. In a world of increasing sensor modalities, cheaper storage, and more data-oriented questions, we are quickly passing the limits of tractable computation using traditional statistical analysis methods. Methods that often show great results on simple data have difficulty processing complicated multidimensional data. Accuracy alone can no longer justify unwarranted memory use and computational complexity; improving the scaling properties of these methods for multidimensional data is the only way to keep them relevant. In this work we explore methods for improving the scaling properties of parametric and nonparametric models. Namely, we focus on the structure of the data to lower the complexity of a specific family of problems. The two types of structure considered in this work are distributed optimization with separable constraints (Chapters 2-3) and scaling Gaussian processes for multidimensional lattice input (Chapters 4-5). By improving the scaling of these methods, we can expand their use to a wide range of applications which were previously intractable and open the door to new research questions.
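The lattice structure exploited for Gaussian processes can be illustrated with the standard Kronecker identity: when inputs lie on a grid and the kernel is separable per dimension, the covariance factors as K = A ⊗ B, and matrix-vector products can use (A ⊗ B) vec(X) = vec(B X Aᵀ) without ever forming the full matrix. This is a generic sketch of that identity under those assumptions, not the dissertation's implementation; all function names are hypothetical.

```python
def matmul(A, B):
    """Dense matrix product for small illustration matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def kron_matvec(A, B, x):
    """Compute y = (A kron B) x without forming the Kronecker product, via
    (A kron B) vec(X) = vec(B X A^T). Here x[i*m + j] pairs row i of A with
    row j of B (A is n x n, B is m x m, len(x) = n*m), so the cost is two
    small matrix products instead of one (n*m) x (n*m) product."""
    n, m = len(A), len(B)
    # Unstack x into an m x n matrix X: column i of X holds block i of x.
    X = [[x[i * m + j] for i in range(n)] for j in range(m)]
    Y = matmul(matmul(B, X), transpose(A))
    # Restack vec(Y) in the same block order.
    return [Y[j][i] for i in range(n) for j in range(m)]
```

For a d-dimensional lattice the same trick nests across dimensions, which is what turns otherwise cubic-cost GP inference into something tractable on large grids.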

    Segment-based image matting using inpainting to resolve ambiguities

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 209-211). By Heng Ping Nabil Christopher Moh, M.Eng.
    Image Matting and Compositing [6, 25] - the extraction of a foreground element from an image and overlaying it over a different background - are two important operations in digital image manipulation. The extraction of the foreground element and its composition over an existing background is performed using a mask known as an alpha matte, which is generated by Image Matting. The problem of Image Matting is inherently ill-posed and has no "correct" solution; however, several matting algorithms have been proposed. This thesis studies the popular Bayesian Matting [9] algorithm in detail and documents several problems with regard to its efficiency and accuracy. Inspired by these problems, this thesis proposes two major ideas: firstly, a new Segment-Based Matting Algorithm that incorporates shading and has a closed-form solution; secondly, a new general approach that uses Digital Inpainting - the technique of restoring defective areas in digital images - to resolve ambiguous areas in the alpha mattes. This thesis demonstrates that the combination of these ideas improves both the efficiency and accuracy of Image Matting. From the results obtained, this thesis proposes the following idea: the degree of local smoothness enforced in the alpha matte should depend on the local color distribution; the more similar the local foreground and background color distributions are, the greater the amount of smoothness enforced.
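The compositing operation that the alpha matte drives is the standard equation C = alpha*F + (1 - alpha)*B, applied per pixel and per channel. The sketch below illustrates that equation only, not the thesis's matting algorithm; the function name is hypothetical.

```python
def composite(alpha, fg, bg):
    """Composite a foreground over a background using an alpha matte:
    C = alpha*F + (1 - alpha)*B per pixel and per channel. `alpha` is a 2-D
    list of values in [0, 1]; `fg` and `bg` are same-shaped 2-D lists of
    (r, g, b) tuples."""
    return [
        [tuple(a * f + (1.0 - a) * b for f, b in zip(f_px, b_px))
         for a, f_px, b_px in zip(a_row, f_row, b_row)]
        for a_row, f_row, b_row in zip(alpha, fg, bg)
    ]
```

The matte's ambiguous regions are exactly the fractional-alpha pixels, where an error in alpha blends the wrong proportion of foreground and background color, which is why resolving them (here, via inpainting) matters for the final composite.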