42,674 research outputs found

    Image resolution enhancement using dual-tree complex wavelet transform

    In this letter, a complex wavelet-domain image resolution enhancement algorithm based on the estimation of wavelet coefficients is proposed. The method uses a forward and inverse dual-tree complex wavelet transform (DT-CWT) to construct a high-resolution (HR) image from a given low-resolution (LR) image. The HR image is reconstructed from the LR image, together with a set of wavelet coefficients, using the inverse DT-CWT. The set of wavelet coefficients is estimated from the DT-CWT decomposition of a rough estimate of the HR image. Results on very high-resolution QuickBird data are presented and discussed, with comparisons against state-of-the-art resolution enhancement methods.
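The reconstruction idea above can be sketched in a few lines. As a stand-in for the DT-CWT (whose implementation is more involved), this sketch uses a single-level separable orthonormal Haar transform; the function names and the factor-2 scaling of the approximation band are assumptions of this illustration, not the authors' implementation.

```python
import numpy as np

def dwt2_haar(x):
    # single-level separable orthonormal Haar transform (rows, then columns)
    def step(v):
        return (v[0::2] + v[1::2]) / np.sqrt(2), (v[0::2] - v[1::2]) / np.sqrt(2)
    lo, hi = step(x)
    ll, lh = step(lo.T)
    hl, hh = step(hi.T)
    return ll.T, lh.T, hl.T, hh.T

def idwt2_haar(ll, lh, hl, hh):
    # inverse of dwt2_haar: undo the column step, then the row step
    def istep(a, d):
        out = np.empty((a.shape[0] * 2, a.shape[1]))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        return out
    lo = istep(ll.T, lh.T).T
    hi = istep(hl.T, hh.T).T
    return istep(lo, hi)

def enhance(lr):
    # rough HR estimate by nearest-neighbour upsampling
    rough = np.kron(lr, np.ones((2, 2)))
    # estimate the detail (wavelet) coefficients from the rough HR image
    _, lh, hl, hh = dwt2_haar(rough)
    # the LR image supplies the approximation band of the HR reconstruction
    # (the factor 2 matches the orthonormal Haar scaling of a 2x2 average)
    return idwt2_haar(2.0 * lr, lh, hl, hh)
```

With the true DT-CWT the detail bands are complex and nearly shift-invariant, which is what makes the estimated coefficients useful; the Haar stand-in only illustrates the data flow.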

    Wavelet Based Semi-blind Channel Estimation For Multiband OFDM

    This paper introduces an expectation-maximization (EM) algorithm within a wavelet-domain Bayesian framework for semi-blind channel estimation in multiband-OFDM-based UWB communications. A prior distribution is chosen for the wavelet coefficients of the unknown channel impulse response to model the sparseness of the wavelet representation. Under maximum a posteriori estimation, this prior yields a thresholding rule within the EM algorithm. We focus in particular on reducing the number of estimated parameters by iteratively discarding "insignificant" wavelet coefficients from the estimation process. Simulation results using UWB channels drawn from both models and measurements show that, under sparsity conditions, the proposed algorithm outperforms pilot-based channel estimation in terms of mean square error and bit error rate, and improves estimation accuracy with less computational complexity than traditional semi-blind methods.
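A minimal sketch of the iterative discard step, assuming a soft-thresholding MAP rule and simple active-set bookkeeping (the actual EM updates for the channel model are more involved; `em_threshold_sketch` and its parameters are hypothetical names for this illustration):

```python
import numpy as np

def em_threshold_sketch(y, n_iter=10, lam=0.1):
    """Iteratively re-estimate wavelet coefficients, discarding
    'insignificant' ones via a MAP thresholding step in each iteration."""
    w = y.copy()
    active = np.ones_like(w, dtype=bool)
    for _ in range(n_iter):
        # MAP step surrogate: soft-threshold the active coefficients
        w[active] = np.sign(y[active]) * np.maximum(np.abs(y[active]) - lam, 0.0)
        # discard coefficients whose estimate fell to zero
        active &= np.abs(w) > 0
        w[~active] = 0.0
    return w, active
```

Removing coefficients from the active set is what reduces the parameter count, and hence the per-iteration cost, as the algorithm proceeds.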

    Hyperanalytic denoising

    A new threshold rule for the estimation of a deterministic image immersed in noise is proposed. The full estimation procedure is based on a separable wavelet decomposition of the observed image, and the estimation is improved by introducing the new threshold to estimate the decomposition coefficients. The observed wavelet coefficients are thresholded using the magnitudes of wavelet transforms of a small number of "replicates" of the image. The "replicates" are calculated by extending the image into a vector-valued hyperanalytic signal. More than one hyperanalytic signal may be chosen, and either the hypercomplex or the Riesz transform is used to calculate this object. The deterministic and stochastic properties of the observed wavelet coefficients of the hyperanalytic signal, at a fixed scale and position index, are determined. A "universal" threshold is calculated for the proposed procedure. An expression for the risk of an individual coefficient is derived. The risk is calculated explicitly when the "universal" threshold is used and is shown to be less than the risk of "universal" hard thresholding, under certain conditions. The proposed method is implemented and the derived theoretical risk reductions are substantiated.
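The thresholding rule can be illustrated schematically: the decision statistic aggregates the observed coefficient with its "replicates", and a universal-style threshold sigma*sqrt(2 log n) is applied to that magnitude. The function below is a hedged sketch of this idea, not the paper's exact estimator:

```python
import numpy as np

def hyperanalytic_threshold(w, replicates, sigma, n):
    """Keep an observed coefficient only if the aggregate magnitude across
    the coefficient and its replicates exceeds a universal-style threshold.

    w          : observed wavelet coefficients (1-D array)
    replicates : array of shape (n_replicates, len(w)) with the
                 corresponding coefficients of the replicate images
    """
    mag = np.sqrt(w**2 + np.sum(replicates**2, axis=0))
    thresh = sigma * np.sqrt(2.0 * np.log(n))
    # hard-threshold the observed coefficient based on the joint magnitude
    return np.where(mag > thresh, w, 0.0)
```

Using the joint magnitude rather than |w| alone is what lets the replicate information rescue coefficients whose observed value happens to be small.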

    Wavelet Domain Image Separation

    In this paper, we consider the problem of blind signal and image separation using a sparse representation of the images in the wavelet domain. We work in a Bayesian estimation framework, exploiting the fact that the distribution of the wavelet coefficients of real-world images can naturally be modeled by an exponential power probability density function. The Bayesian approach, which has been used with success in blind source separation, also makes it possible to include any prior information we may have on the mixing matrix elements as well as on the hyperparameters (parameters of the prior laws of the noise and the sources). We consider two cases: first, the case where the wavelet coefficients are assumed to be i.i.d., and second, the case where the correlation between the coefficients of two adjacent scales is modeled by a first-order Markov chain. This paper reports only on the first case; results for the second case will be reported in the near future. The estimation computations are done via a Markov chain Monte Carlo (MCMC) procedure. Simulations illustrate the performance of the proposed method. Keywords: blind source separation, wavelets, Bayesian estimation, MCMC Metropolis-Hastings algorithm. Comment: Presented at MaxEnt2002, the 22nd International Workshop on Bayesian and Maximum Entropy Methods (Aug. 3-9, 2002, Moscow, Idaho, USA). To appear in Proceedings of the American Institute of Physics.

    Large scale reduction principle and application to hypothesis testing

    Consider a non-linear function G(X_t), where X_t is a stationary Gaussian sequence with long-range dependence. The usual reduction principle states that the partial sums of G(X_t) behave asymptotically like the partial sums of the first term in the expansion of G in Hermite polynomials. In the context of wavelet estimation of the long-range dependence parameter, one replaces the partial sums of G(X_t) by the wavelet scalogram, namely the partial sums of squares of the wavelet coefficients. Is there a reduction principle in the wavelet setting; that is, is the asymptotic behavior of the scalogram for G(X_t) the same as that for the first term in the expansion of G in Hermite polynomials? The answer is negative in general. This paper provides a minimal growth condition on the scales of the wavelet coefficients which ensures that the reduction principle also holds for the scalogram. The results are applied to testing the hypothesis that the long-range dependence parameter takes a specific value.
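The wavelet scalogram referred to above is simply the per-scale sum of squared wavelet coefficients. A minimal Haar-based sketch (assuming dyadic input length; the helper names are illustrative, not the paper's notation):

```python
import numpy as np

def haar_coeffs(x, level):
    """Haar detail coefficients of x at dyadic scale `level` (1 = finest)."""
    for _ in range(level - 1):
        x = (x[0::2] + x[1::2]) / np.sqrt(2)   # coarsen to the next scale
    return (x[0::2] - x[1::2]) / np.sqrt(2)    # detail at this scale

def scalogram(x, levels):
    """Per-scale sums of squared wavelet coefficients."""
    return np.array([np.sum(haar_coeffs(x, j)**2) for j in range(1, levels + 1)])
```

For a long-range dependent input, the scaling of these per-scale energies across `levels` is what the wavelet estimator of the dependence parameter is built from.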

    Nonparametric estimation over shrinking neighborhoods: Superefficiency and adaptation

    A theory of superefficiency and adaptation is developed under flexible performance measures which give a multiresolution view of risk and bridge the gap between pointwise and global estimation. This theory provides a useful benchmark for the evaluation of spatially adaptive estimators and shows that the possible degree of superefficiency for minimax rate optimal estimators critically depends on the size of the neighborhood over which the risk is measured. Wavelet procedures are given which adapt rate optimally for given shrinking neighborhoods including the extreme cases of mean squared error at a point and mean integrated squared error over the whole interval. These adaptive procedures are based on a new wavelet block thresholding scheme which combines both the commonly used horizontal blocking of wavelet coefficients (at the same resolution level) and vertical blocking of coefficients (across different resolution levels). Comment: Published at http://dx.doi.org/10.1214/009053604000000832 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
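The horizontal blocking idea can be sketched with a James-Stein-style shrinkage rule applied to blocks of coefficients at one resolution level (the paper's scheme, which also blocks vertically across levels, is more elaborate; the parameter `lam` and the shrinkage form here are assumptions of this illustration):

```python
import numpy as np

def block_threshold(coeffs, block_size, lam, sigma):
    """Shrink each block of coefficients by a factor depending on its
    aggregate energy; low-energy blocks are killed entirely."""
    out = coeffs.copy()
    for s in range(0, len(coeffs), block_size):
        blk = coeffs[s:s + block_size]
        energy = np.sum(blk**2)
        # James-Stein-style shrinkage factor, clipped at zero
        shrink = max(0.0, 1.0 - lam * block_size * sigma**2 / energy) if energy > 0 else 0.0
        out[s:s + block_size] = shrink * blk
    return out
```

Deciding keep-or-kill at the block level pools information across neighbouring coefficients, which is what gives block thresholding its adaptation advantage over coefficient-by-coefficient rules.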

    Robust Estimation and Wavelet Thresholding in Partial Linear Models

    This paper is concerned with a semiparametric partially linear regression model with unknown regression coefficients, an unknown nonparametric function for the non-linear component, and unobservable Gaussian distributed random errors. We present a wavelet thresholding based estimation procedure to estimate the components of the partial linear model by establishing a connection between an l_1-penalty based wavelet estimator of the nonparametric component and Huber's M-estimation of a standard linear model with outliers. Some general results on the large sample properties of the estimates of both the parametric and the nonparametric part of the model are established. Simulations and a real example are used to illustrate the general results and to compare the proposed methodology with other methods available in the recent literature.
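The connection between the l_1-penalised wavelet estimator and Huber's M-estimation can be seen through the soft-thresholding operator: the residual y - soft(y, lam) is exactly Huber's clipped score clip(y, -lam, lam). A minimal sketch of this identity:

```python
import numpy as np

def soft_threshold(y, lam):
    """Closed-form minimiser of 0.5*(y - w)**2 + lam*|w|,
    i.e. the l_1-penalised fit to a single wavelet coefficient."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```

Because the residual of the l_1 fit coincides with Huber's psi function, outlier-robust estimation of the linear part and sparse estimation of the wavelet part can be handled by the same machinery.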

    Improved thresholding and quantization techniques for image compression

    In recent decades, digital images have become increasingly important. Because many modern applications use image graphics extensively, they burden both the storage and transmission process. Despite technological advances in storage and transmission, the demands placed on storage and bandwidth capacities still exceed their availability. Moreover, the compression process involves eliminating some data, which degrades image quality. To overcome this problem, improved thresholding and quantization techniques for image compression are proposed. First, the wavelet coefficients obtained from the Discrete Wavelet Transform (DWT) are thresholded by the proposed Standard Deviation-Based Wavelet Coefficients Threshold Estimation Algorithm, which estimates the best threshold value for each detail subband. The algorithm exploits the large number of near-zero coefficients in the detail subbands. For different images, the distribution of wavelet coefficients in each subband is substantially different, so calculating the standard deviation of each subband yields a better threshold value. Next, the retained wavelet coefficients are processed by the proposed Minimizing Median Quantization Error Algorithm, which exploits the high occurrence of zero coefficients produced by the thresholding step by re-allocating the zero and non-zero coefficients into different groups for quantization. A quantization error minimization mechanism is then employed by calculating the median quantization error in each quantization interval class. Compared with existing algorithms, the proposed compression algorithm shows a twofold increase in compression ratio, produces higher image quality with PSNR values above 40 dB, and achieves better bit saving with smooth control at bit rates above 4 bpp. Thus, the proposed algorithm provides an alternative technique for compressing digital images.
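A hedged sketch of a standard-deviation-based subband threshold in the spirit described above (the constant `k` and the hard-thresholding form are assumptions of this illustration, not the paper's exact rule):

```python
import numpy as np

def stddev_threshold(subband, k=1.0):
    """Zero out detail coefficients whose magnitude falls below k times
    the subband's own standard deviation; larger coefficients survive."""
    t = k * np.std(subband)
    return np.where(np.abs(subband) >= t, subband, 0.0)
```

Because each subband supplies its own standard deviation, the threshold adapts per image and per subband, which is the property the abstract attributes to the proposed algorithm.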