
    MDL Denoising Revisited

    We refine and extend an earlier MDL denoising criterion for wavelet-based denoising. We start by showing that the denoising problem can be reformulated as a clustering problem, where the goal is to obtain separate clusters for informative and non-informative wavelet coefficients. This suggests two refinements: adding a code length for the model index, and extending the model to account for subband-dependent coefficient distributions. A third refinement is the derivation of a soft-thresholding rule inspired by predictive universal coding with weighted mixtures. We propose a practical method incorporating all three refinements, which is shown to achieve good performance and robustness in denoising both artificial and natural signals.
    Comment: Submitted to IEEE Transactions on Information Theory, June 200
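    In its basic form, the soft-thresholding rule mentioned above reduces to the standard shrinkage operator applied to wavelet coefficients. A minimal numpy sketch (the function name is ours, not from the paper):

```python
import numpy as np

def soft_threshold(coeffs, lam):
    """Classic soft thresholding: shrink each coefficient toward zero by lam,
    zeroing anything whose magnitude is below lam."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)
```

    Coefficients surviving the threshold are presumed informative; the rest are treated as noise.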

    Weighted Thresholding and Nonlinear Approximation

    We present a new method for performing nonlinear approximation with redundant dictionaries. The method constructs an m-term approximation of the signal by thresholding with respect to a weighted version of its canonical expansion coefficients, thereby accounting for dependency between the coefficients. The main result is an associated strong Jackson embedding, which provides an upper bound on the corresponding reconstruction error. To complement the theoretical results, we compare the proposed method to the pure greedy method and the Windowed-Group Lasso by denoising music signals with elements from a Gabor dictionary.
    Comment: 22 pages, 3 figures
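    The core selection step, keeping the m terms whose weighted coefficient magnitudes are largest, can be sketched as follows (function name and the form of the weights are our illustration, not the paper's construction):

```python
import numpy as np

def weighted_m_term(coeffs, weights, m):
    """Keep the m coefficients with the largest weighted magnitude; zero the rest.
    The weights are meant to encode dependency between coefficients
    (e.g. local neighbourhood energy in a Gabor frame)."""
    scores = weights * np.abs(coeffs)
    keep = np.argsort(scores)[-m:]      # indices of the m largest scores
    out = np.zeros_like(coeffs)
    out[keep] = coeffs[keep]
    return out
```

    With uniform weights this reduces to plain m-term thresholding of the canonical coefficients.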

    Adaptive Image Denoising by Targeted Databases

    We propose a data-dependent denoising procedure to restore noisy images. Unlike existing denoising algorithms, which search for patches in either the noisy image itself or a generic database, the new algorithm finds patches in a database that contains only relevant patches. We formulate the denoising problem as an optimal filter design problem and make two contributions. First, we determine the basis functions of the denoising filter by solving a group sparsity minimization problem. The optimization formulation generalizes existing denoising algorithms and offers a systematic analysis of performance. Improvement methods are proposed to enhance the patch search process. Second, we determine the spectral coefficients of the denoising filter by considering a localized Bayesian prior. The localized prior leverages the similarity of the targeted database, alleviates the intensive Bayesian computation, and links the new method to classical linear minimum mean squared error estimation. We demonstrate applications of the proposed method in a variety of scenarios, including text images, multiview images, and face images. Experimental results show the superiority of the new algorithm over existing methods.
    Comment: 15 pages, 13 figures, 2 tables, journal
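    The classical linear MMSE estimator that the method connects to has a closed form: given a noisy patch y = x + n with prior mean mu, prior covariance cov, and noise variance sigma2, the estimate is mu + cov (cov + sigma2 I)^-1 (y - mu). A generic sketch (not the paper's localized variant):

```python
import numpy as np

def lmmse_denoise(y, mu, cov, sigma2):
    """Linear MMSE estimate of a clean patch x from noisy y = x + n,
    with prior mean mu, prior covariance cov, and noise variance sigma2."""
    d = len(y)
    gain = cov @ np.linalg.inv(cov + sigma2 * np.eye(d))
    return mu + gain @ (y - mu)
```

    The paper's contribution lies in building mu and cov from a targeted database of relevant patches rather than from a generic prior.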

    Improvement of BM3D Algorithm and Employment to Satellite and CFA Images Denoising

    This paper proposes a new procedure to improve the performance of the block matching and 3-D filtering (BM3D) image denoising algorithm. It is demonstrated that better performance than the original BM3D can be achieved across a variety of noise levels. The method adapts the BM3D parameter values to the noise level and removes the prefiltering step used at high noise levels; as a result, the peak signal-to-noise ratio (PSNR) and visual quality improve, while BM3D's complexity and processing time are reduced. This improved BM3D algorithm is then extended and used to denoise satellite and color filter array (CFA) images. Results show improved performance compared with current methods for denoising satellite and CFA images. In particular, the algorithm is compared with the Adaptive PCA algorithm and achieves superior performance for denoising CFA images in terms of PSNR and visual quality, while the processing time decreases significantly.
    Comment: 11 pages, 7 figures
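    PSNR, the quality metric used throughout this comparison, is defined as 10 log10(MAX^2 / MSE), where MAX is the peak pixel value. A short numpy helper (our illustration):

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    a denoised estimate; higher is better."""
    mse = np.mean((clean.astype(float) - denoised.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```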

    SHAH: SHape-Adaptive Haar wavelets for image processing

    We propose the SHAH (SHape-Adaptive Haar) transform for images, which results in an orthonormal, adaptive decomposition of the image into Haar-wavelet-like components, arranged hierarchically according to decreasing importance, whose shapes reflect the features present in the image. The decomposition is as sparse as it can be for piecewise-constant images. It is performed via a stepwise bottom-up algorithm with quadratic computational complexity; however, nearly linear variants also exist. SHAH is rapidly invertible. We show how to use SHAH for image denoising: having performed the SHAH transform, the coefficients are hard- or soft-thresholded, and the inverse transform is taken. The SHAH image denoising algorithm compares favourably to the state of the art for piecewise-constant images. A clear asset of the methodology is its very general scope: it can be used with any images, or more generally with any data that can be represented as graphs or networks.
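    For orientation, the Haar-wavelet-like components SHAH builds on generalize the classical one-level Haar step: orthonormal averages and differences of adjacent pairs, exactly invertible. A sketch of the non-adaptive version (the shape-adaptive construction in the paper is considerably richer):

```python
import numpy as np

def haar_step(x):
    """One level of the classical orthonormal Haar transform:
    scaled sums and differences of adjacent sample pairs."""
    pairs = x.reshape(-1, 2)
    avg = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    diff = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return avg, diff

def haar_step_inverse(avg, diff):
    """Exact inverse of haar_step (rapid invertibility)."""
    a = (avg + diff) / np.sqrt(2.0)
    b = (avg - diff) / np.sqrt(2.0)
    return np.stack([a, b], axis=1).ravel()
```

    Denoising then amounts to hard- or soft-thresholding the detail coefficients before inverting.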

    Compressive Imaging via Approximate Message Passing with Image Denoising

    We consider compressive imaging problems, where images are reconstructed from a reduced number of linear measurements. Our objective is to improve over existing compressive imaging algorithms in terms of both reconstruction error and runtime. To pursue our objective, we propose compressive imaging algorithms that employ the approximate message passing (AMP) framework. AMP is an iterative signal reconstruction algorithm that performs scalar denoising at each iteration; for AMP to reconstruct the original input signal well, a good denoiser must be used. We apply two wavelet-based image denoisers within AMP. The first denoiser is the "amplitude-scale-invariant Bayes estimator" (ABE), and the second is an adaptive Wiener filter; we call our AMP-based algorithms for compressive imaging AMP-ABE and AMP-Wiener. Numerical results show that both AMP-ABE and AMP-Wiener significantly improve over the state of the art in terms of runtime. In terms of reconstruction quality, AMP-Wiener offers lower mean square error (MSE) than existing compressive imaging algorithms. In contrast, AMP-ABE has higher MSE, because ABE does not denoise as well as the adaptive Wiener filter.
    Comment: 15 pages; 2 tables; 7 figures; to appear in IEEE Trans. Signal Process.
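    The AMP iteration alternates a matched-filter step, a scalar denoiser, and the Onsager correction to the residual. A generic numpy sketch with a pluggable denoiser (a toy version, not the paper's AMP-ABE or AMP-Wiener implementation):

```python
import numpy as np

def amp(y, A, denoise, denoise_deriv, iters=15):
    """Generic AMP loop for y = A x + noise.
    denoise(r, sigma) is a scalar denoiser applied elementwise to the
    pseudo-data r; denoise_deriv is its derivative, used in the Onsager term."""
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        r = x + A.T @ z                     # pseudo-data seen by the denoiser
        sigma = np.sqrt(np.mean(z ** 2))    # effective noise-level estimate
        x_new = denoise(r, sigma)
        onsager = z * np.mean(denoise_deriv(r, sigma)) * (N / M)
        z = y - A @ x_new + onsager         # residual with Onsager correction
        x = x_new
    return x
```

    The Onsager term is what keeps the pseudo-data r looking like the true signal plus Gaussian noise, which is the property that lets a good scalar denoiser drive the whole reconstruction.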