39 research outputs found

    Hyperspectral Image Denoising Using Multiple Linear Regression and Bivariate Shrinkage with 2-D Dual-Tree Complex Wavelet in the Spectral Derivative Domain

    In this paper, a new denoising method is proposed for hyperspectral remote sensing images and tested on both simulated and real-life datacubes. A predicted datacube of the hyperspectral image is calculated by multiple linear regression in the spectral domain, exploiting the strong spectral correlation of the useful signal and the inter-band uncorrelatedness of the random noise terms in hyperspectral images. A two-dimensional dual-tree complex wavelet transform is then performed in the spectral derivative domain, where the noise level is temporarily elevated to avoid signal deformation during wavelet denoising, and bivariate shrinkage is used to shrink the wavelet coefficients. Simulated experiments demonstrate that the proposed method outperforms the reference denoising methods, improving the signal-to-noise ratio by 0.5 dB to 10 dB. The real-life data experiment shows that the proposed method is valid and effective.
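
    As a rough illustration of the multiple-linear-regression step described above, the sketch below predicts each band of a hyperspectral cube from its two neighbouring bands by least squares; the residual then serves as a crude noise estimate. The cube shape, the choice of two neighbouring bands as regressors, and all names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def mlr_predict_bands(cube):
    """Predict each band from its two spectral neighbours by multiple
    linear regression (ordinary least squares).

    cube : ndarray of shape (rows, cols, bands)
    Returns the predicted cube and the residual (rough noise estimate).
    """
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands)           # pixels x bands
    predicted = flat.copy()                  # boundary bands are left unchanged

    for b in range(1, bands - 1):
        # Regressors: the two neighbouring bands plus an intercept term.
        X = np.column_stack([flat[:, b - 1], flat[:, b + 1],
                             np.ones(flat.shape[0])])
        coeffs, *_ = np.linalg.lstsq(X, flat[:, b], rcond=None)
        predicted[:, b] = X @ coeffs

    residual = flat - predicted
    return (predicted.reshape(rows, cols, bands),
            residual.reshape(rows, cols, bands))

# Usage on a small synthetic cube with spectrally correlated bands plus noise.
rng = np.random.default_rng(0)
clean = np.cumsum(rng.normal(size=(32, 32, 20)), axis=2)   # spectrally smooth signal
noisy = clean + 0.5 * rng.normal(size=clean.shape)
pred, resid = mlr_predict_bands(noisy)
print("per-band noise std estimate:", resid.std(axis=(0, 1))[:5])
```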

    Signal and Image Denoising Using Wavelet Transform


    Effective denoising and classification of hyperspectral images using curvelet transform and singular spectrum analysis

    Hyperspectral imaging (HSI) classification has become a popular research topic in recent years, and effective feature extraction is an important step before the classification task. Traditionally, spectral feature extraction techniques are applied to the HSI data cube directly. This paper presents a novel algorithm for HSI feature extraction that exploits the curvelet-transformed domain via a relatively new spectral feature processing technique, singular spectrum analysis (SSA). Although the wavelet transform has been widely applied to HSI data analysis, the curvelet transform is employed here because it separates image geometric details from background noise more effectively. Using a support vector machine (SVM) classifier, experimental results show that features extracted by SSA on curvelet coefficients achieve better classification accuracy than features extracted on wavelet coefficients. Since the proposed approach relies mainly on SSA for feature extraction along the spectral dimension, it belongs to the spectral feature extraction category, and it has therefore also been compared with state-of-the-art spectral feature extraction techniques to demonstrate its efficacy. In addition, the proposed method is shown to remove undesirable artefacts introduced during the data acquisition process. By adding an extra spatial post-processing step to the classification map obtained with the proposed approach, we show that the classification performance is comparable with several recent spectral-spatial classification methods.
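
    For readers unfamiliar with SSA, the following sketch applies basic 1-D singular spectrum analysis to a single spectral profile: embed the signal in a trajectory (Hankel) matrix, take an SVD, keep the leading components, and reconstruct by anti-diagonal averaging. The window length and the number of retained components are illustrative choices, not values from the paper.

```python
import numpy as np

def ssa_smooth(signal, window=10, n_components=2):
    """Basic 1-D singular spectrum analysis (SSA) reconstruction."""
    n = len(signal)
    k = n - window + 1
    # Trajectory (Hankel) matrix: each column is a lagged window of the signal.
    traj = np.column_stack([signal[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    # Keep only the leading components, assumed to carry the signal.
    approx = u[:, :n_components] @ np.diag(s[:n_components]) @ vt[:n_components]
    # Reconstruct by averaging along anti-diagonals (Hankelisation).
    rec = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            rec[i + j] += approx[i, j]
            counts[i + j] += 1
    return rec / counts

# Usage: denoise one noisy spectral profile.
rng = np.random.default_rng(1)
x = np.linspace(0, 4 * np.pi, 200)
spectrum = np.sin(x) + 0.3 * rng.normal(size=x.size)
smoothed = ssa_smooth(spectrum, window=20, n_components=2)
```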

    Analysis and Denoising of Hyperspectral Remote Sensing Image in the Curvelet Domain

    A new denoising algorithm is proposed based on the characteristics of hyperspectral remote sensing images (HRSI) in the curvelet domain. First, each band of the HRSI is transformed into the curvelet domain, yielding sets of subband images for the different wavelengths. The detail subband images at the same scale and direction across wavelengths are then stacked to form new 3-D datacubes in the curvelet domain. An analysis of these 3-D datacubes shows that each of them retains strong spectral correlation. Exploiting this correlation, multiple linear regression is applied to the new 3-D datacubes in the curvelet domain. Both simulated and real data experiments are performed. On simulated data, the proposed algorithm is superior to the compared algorithms from the references in terms of SNR, and per-band MSE and MSSIM further confirm its superiority. On real data, the proposed algorithm effectively removes the common spotty and stripe noise while preserving fine features during the denoising process.
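
    The stacking idea is easiest to see in code. The sketch below substitutes a 2-D wavelet detail subband (via PyWavelets) for the curvelet detail subband the paper actually uses, since the stacking structure is the same: the subband at one fixed scale and orientation is extracted from every band and stacked into a new 3-D datacube, whose adjacent slices turn out to be strongly correlated. The wavelet substitution, parameter choices, and names are all illustrative; the multiple-linear-regression step (as in the earlier sketch) would then be applied slice by slice.

```python
import numpy as np
import pywt

def stack_detail_subband(cube, wavelet="db2"):
    """Extract the horizontal detail subband of every band and stack them
    into a new 3-D datacube (a wavelet stand-in for one curvelet subband)."""
    subbands = [pywt.dwt2(cube[:, :, b], wavelet)[1][0]   # cH of band b
                for b in range(cube.shape[2])]
    return np.stack(subbands, axis=2)                     # (rows', cols', bands)

def adjacent_band_correlation(stack):
    """Correlation between each subband slice and the next one,
    illustrating the strong spectral correlation the method exploits."""
    flat = stack.reshape(-1, stack.shape[2])
    return np.array([np.corrcoef(flat[:, b], flat[:, b + 1])[0, 1]
                     for b in range(stack.shape[2] - 1)])

# Usage on a toy datacube with spectrally correlated bands plus noise.
rng = np.random.default_rng(2)
cube = (np.cumsum(rng.normal(size=(64, 64, 16)), axis=2)
        + 0.3 * rng.normal(size=(64, 64, 16)))
stack = stack_detail_subband(cube)
print(adjacent_band_correlation(stack))
```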

    Image Processing Using Sensor Noise and Human Visual System Models

    Because digital images are subject to noise in the device that captures them and to the human visual system (HVS) that observes them, it is important to consider accurate models for noise and the HVS in the design of image processing methods. In this thesis, CMOS image sensor noise is characterized, chromatic adaptation theories are reviewed, and new image processing algorithms that address these noise and HVS models are presented. First, a method for removing additive, multiplicative, and mixed noise from an image is developed. An image patch from an ideal image is modeled as a linear combination of image patches from the noisy image. This image model is fit to the image data in the total least squares (TLS) sense, because TLS allows for uncertainties in the measured data. The image quality of the output demonstrates the effectiveness of the TLS algorithms and their improvement over existing methods. Second, we develop a novel technique that systematically combines demosaicing and denoising into a single operation. We first design a filter that optimally estimates a pixel value from a noisy single-color image. With additional constraints, we show that the same filter coefficients are appropriate for demosaicing noisy sensor data. The proposed technique can combine many existing denoising algorithms with the demosaicing operation. The algorithm is tested with pseudo-random noise and with noisy raw sensor data from a real digital camera; the proposed method suppresses CMOS image sensor noise and interpolates the missing pixel components more effectively than treating demosaicing and denoising independently. Third, the problem of adjusting color so that the digital camera output matches the scene observed by the photographer's eye is called white balance. While most existing white-balance algorithms combine the von Kries coefficient law with an illuminant estimation technique, the coefficient law has been shown to be an inaccurate model. We instead formulate the problem using induced opponent response theory, the solution to which reduces to a single matrix multiplication. The experimental results verify that this approach yields more natural images than traditional methods, and its computational cost is virtually zero. (Texas Instruments, Agilent Technologies, Center for Electronic Imaging System)
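
    The core numerical tool behind the first contribution is a total least squares fit, which, unlike ordinary least squares, admits errors in the regressor matrix as well as in the observations. Below is a minimal, generic TLS solver based on the classical SVD construction applied to the augmented matrix; it is not the thesis' patch-based denoiser, only the kind of fit it builds on, with illustrative data.

```python
import numpy as np

def tls_solve(A, b):
    """Total least squares solution of A x ~= b via the SVD of the
    augmented matrix [A | b]; both A and b may be noisy."""
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, vt = np.linalg.svd(C)
    v = vt.T
    # Classical closed form: scale the last right singular vector.
    return -v[:n, n] / v[n, n]

# Usage: recover coefficients when both the regressors (columns of A)
# and the target values b are contaminated by noise.
rng = np.random.default_rng(3)
x_true = np.array([0.6, 0.3, 0.1])
A_clean = rng.normal(size=(200, 3))
b_clean = A_clean @ x_true
A_noisy = A_clean + 0.05 * rng.normal(size=A_clean.shape)
b_noisy = b_clean + 0.05 * rng.normal(size=b_clean.shape)
print("TLS estimate:", tls_solve(A_noisy, b_noisy))
print("OLS estimate:", np.linalg.lstsq(A_noisy, b_noisy, rcond=None)[0])
```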

    Function-valued Mappings and SSIM-based Optimization in Imaging

    In a few words, this thesis is concerned with two alternative approaches to imaging, namely Function-valued Mappings (FVMs) and Structural Similarity Index Measure (SSIM)-based optimization. Briefly, an FVM is a mathematical object that assigns to each element in its domain a function belonging to a given function space. The advantage of this representation is that the infinite dimensionality of the range of FVMs allows us to give a more accurate description of complex datasets such as hyperspectral images and diffusion magnetic resonance images, something that cannot be done with the classical representation of such datasets as vector-valued functions. For instance, a hyperspectral image can be described as an FVM that assigns to each point in a spatial domain a spectral function belonging to the function space L2(R), that is, the space of functions whose energy is finite. Moreover, we present a Fourier transform and a new class of fractal transforms for FVMs to analyze and process hyperspectral images. Regarding SSIM-based optimization, we introduce a general framework for solving optimization problems that involve the SSIM as a fidelity measure. This framework offers the option of carrying out SSIM-based imaging tasks that are usually addressed with classical Euclidean-based methods. In the literature, SSIM-based approaches have been proposed to address the limitations of Euclidean-based metrics as measures of visual quality. These methods perform better than their Euclidean counterparts since the SSIM is a better model of the human visual system; however, they tend to be developed for particular applications. With the general framework presented in this thesis, rather than focusing on particular imaging tasks, we introduce a set of novel algorithms capable of carrying out a wide range of SSIM-based imaging applications. Moreover, such a framework allows us to include the SSIM as a fidelity term in optimization problems in which it had not been included before.
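
    Since the SSIM is central to the second part of the thesis, here is a minimal global SSIM computation between two images using the standard constants from the original SSIM formulation; the framework described in the thesis goes further and optimises objectives built from this quantity, which is not shown here.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Global structural similarity index between two images
    (a single window covering the whole image; no local sliding window)."""
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den

# Usage: SSIM drops as noise is added to a reference image.
rng = np.random.default_rng(4)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.1 * rng.normal(size=ref.shape), 0.0, 1.0)
print(ssim_global(ref, ref))    # 1.0 for identical images
print(ssim_global(ref, noisy))  # < 1.0
```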

    Ray Tracing Gems

    This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.
    What you'll learn:
    - The latest ray tracing techniques for developing real-time applications in multiple domains
    - Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR)
    - How to implement high-performance graphics for interactive visualizations, games, simulations, and more
    Who this book is for:
    - Developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing
    - Students looking to learn about best practices in these areas
    - Enthusiasts who want to understand and experiment with their new GPU

    Multiresolution models in image restoration and reconstruction with medical and other applications


    Weak Gravitational Lensing by Large-Scale Structures: A Tool for Constraining Cosmology

    There is now very strong evidence that our Universe is undergoing a period of accelerated expansion, as if it were under the influence of a gravitationally repulsive “dark energy” component. Furthermore, most of the mass of the Universe seems to be in the form of non-luminous matter, the so-called “dark matter”. Together, these “dark” components, whose nature remains unknown today, represent around 96% of the matter-energy budget of the Universe. Unraveling the true nature of dark energy and dark matter has thus become one of the primary goals of present-day cosmology. Weak gravitational lensing, or weak lensing for short, is the effect whereby light emitted by distant galaxies is slightly deflected by the tidal gravitational fields of intervening foreground structures. Because it relies only on the physics of gravity, weak lensing has the unique ability to probe the distribution of mass in a direct and unbiased way. This technique is at present routinely used to study dark matter, typical applications being the mass reconstruction of galaxy clusters and the study of the properties of the dark halos surrounding galaxies. Another, more recent application of weak lensing, on which we focus in this thesis, is the analysis of the cosmological lensing signal induced by large-scale structures, the so-called “cosmic shear”. This signal can be used to measure the growth of structures and the expansion history of the Universe, which makes it particularly relevant to the study of dark energy. Of all weak lensing effects, the cosmic shear is the most subtle, and its detection requires the accurate analysis of the shapes of millions of distant, faint galaxies in the near infrared. So far, the main factor limiting cosmic shear measurement accuracy has been the relatively small sky areas covered. The next generation of wide-field, multicolor surveys will, however, overcome this hurdle by covering a much larger portion of the sky with improved image quality. The resulting statistical errors will then become subdominant compared to systematic errors, which will instead become the main source of uncertainty. In fact, uncovering key properties of dark energy will only be achievable if these systematics are well understood and reduced to the required level. The major sources of uncertainty reside in the shape measurement algorithm used, the convolution of the original image by the instrumental and possibly atmospheric point spread function (PSF), the pixelation effect caused by the integration of light falling on the detector pixels, and the degradation caused by various sources of noise. Measuring the cosmic shear thus entails solving the difficult inverse problem of recovering the shear signal from blurred, pixelated, and noisy galaxy images while keeping errors within the limits demanded by future weak lensing surveys. Reaching this goal is not without challenges: the best available shear measurement methods would need a tenfold improvement in accuracy to match the requirements of a space mission like ESA's Euclid, scheduled for the end of this decade. Significant progress has nevertheless been made in the last few years, with substantial contributions from initiatives such as the GREAT (GRavitational lEnsing Accuracy Testing) challenges. The main objective of these open competitions is to foster the development of new and more accurate shear measurement methods.
    We start this work with a quick overview of modern cosmology: its fundamental tenets, its achievements, and the challenges it faces today. We then review the theory of weak gravitational lensing and explain how cosmic shear observations can be used to place constraints on cosmology. The last part of this thesis focuses on the practical challenges associated with the accurate measurement of the cosmic shear. After a review of the subject, we present our main contributions in this area: the development of the gfit shear measurement method, together with new algorithms for point spread function (PSF) interpolation and image denoising. The gfit method emerged as one of the top performers in the GREAT10 Galaxy Challenge. It essentially consists in fitting two-dimensional elliptical Sérsic light profiles to observed galaxy images in order to produce estimates of the shear power spectrum. PSF correction is automatic, and an efficient shape-preserving denoising algorithm can optionally be applied prior to fitting the data. PSF interpolation is also an important issue in shear measurement because the PSF is only known at star positions, while PSF correction has to be performed at any position on the sky. We developed innovative PSF interpolation algorithms for the GREAT10 Star Challenge, a competition dedicated to the PSF interpolation problem. Our participation was very successful: one of our interpolation methods won the Star Challenge, while the remaining four achieved the next highest scores in the competition. Finally, we participated in the development of a wavelet-based, shape-preserving denoising method particularly well suited to weak lensing analysis.
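
    To give a concrete sense of the forward model behind a method like gfit, the sketch below evaluates a two-dimensional elliptical Sérsic light profile on a pixel grid; fitting such a model to an observed, PSF-convolved galaxy image is the kind of step such a method performs. The simple approximation used for b_n and all parameter values are illustrative, not those of gfit itself.

```python
import numpy as np

def sersic_profile(shape, flux_e=1.0, r_e=5.0, n=1.5, q=0.7, theta=0.3,
                   x0=None, y0=None):
    """Elliptical Sersic surface-brightness profile on a pixel grid.

    flux_e : surface brightness at the effective radius r_e
    n      : Sersic index;  q : axis ratio;  theta : position angle (rad)
    """
    ny, nx = shape
    x0 = (nx - 1) / 2 if x0 is None else x0
    y0 = (ny - 1) / 2 if y0 is None else y0
    y, x = np.mgrid[0:ny, 0:nx]
    # Rotate into the galaxy frame and apply the axis ratio.
    xp = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yp = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    r = np.hypot(xp, yp / q)
    b_n = 2.0 * n - 1.0 / 3.0          # common approximation to b_n
    return flux_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

# Usage: a noisy "observed" galaxy image one could fit such a model to.
rng = np.random.default_rng(5)
model = sersic_profile((48, 48))
observed = model + 0.05 * rng.normal(size=model.shape)
```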