25 research outputs found

    Compactly Supported Tensor Product Complex Tight Framelets with Directionality

    Although tensor product real-valued wavelets have been successfully applied to many high-dimensional problems, they can only capture edge singularities well along the coordinate axis directions. As an alternative to and improvement on tensor product real-valued wavelets and the dual-tree complex wavelet transform, tensor product complex tight framelets with increasing directionality have recently been introduced in [8] and applied to image denoising in [13]. Despite several desirable properties, the directional tensor product complex tight framelets constructed in [8,13] are bandlimited and do not have compact support in the space/time domain. Since compactly supported wavelets and framelets are of great interest and importance in both theory and applications, it has remained an open problem whether there exist compactly supported tensor product complex tight framelets with directionality. In this paper, we resolve this question by proving a theoretical result on the directionality of tight framelets and by introducing an algorithm to construct compactly supported complex tight framelets with directionality. Our examples show that compactly supported complex tight framelets with directionality can be easily derived from any given eligible low-pass filters and refinable functions. Several examples of compactly supported tensor product complex tight framelets with directionality are presented.
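
    As background for the tensor product construction mentioned in the abstract, the sketch below uses assumed notation (not the notation of [8,13]): a 2D tensor product filter bank is obtained from all products of a 1D low-pass filter a with high-pass filters b_1,...,b_s, and in the complex case directionality comes from filters whose frequency support concentrates on one side of the spectrum.

```latex
% Standard 2D tensor product construction (notation is an assumption,
% not taken from [8,13]). Given a 1D tight framelet filter bank
% {a; b_1, ..., b_s}, define the 2D filters as outer products
\[
  (u \otimes v)(k_1, k_2) \;=\; u(k_1)\, v(k_2), \qquad k_1, k_2 \in \mathbb{Z},
\]
% and take the 2D bank
\[
  \{\, a \otimes a \,\} \;\cup\;
  \{\, u \otimes v \;:\; u, v \in \{a, b_1, \dots, b_s\},\ (u,v) \neq (a,a) \,\}.
\]
% The tight frame property carries over to 2D because the 1D
% partition-of-unity identity
\[
  |\widehat{a}(\xi)|^2 + \sum_{\ell=1}^{s} |\widehat{b_\ell}(\xi)|^2 = 1
\]
% (together with the companion aliasing condition) factors coordinate-wise
% in the 2D frequency domain.
```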

    Coupling BM3D with directional wavelet packets for image denoising

    The paper presents an image denoising algorithm that combines a method based on directional quasi-analytic wavelet packets (qWPs) with the popular BM3D algorithm. The qWPs and their corresponding transforms are designed in [1]. The qWP-based denoising algorithm (qWPdn) applies adaptive localized soft thresholding to the transform coefficients using the bivariate shrinkage methodology. The combined method consists of several iterations of the qWPdn and BM3D algorithms, where the output of one algorithm updates the input to the other (cross-boosting). The qWPdn and BM3D methods complement each other: the ability of qWPdn to capture edges and fine texture patterns is coupled with BM3D's exploitation of the sparsity of real images and the self-similarity of their patches. The obtained results are competitive with the best state-of-the-art algorithms. We compare the performance of the combined methodology with that of the cptTP-CTF6 and DAS-2 algorithms, which use directional frames, and with the BM3D algorithm. In the overwhelming majority of the experiments, the combined algorithm outperformed these methods.
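
    The sketch below illustrates the cross-boosting scheme described in the abstract, under stated assumptions: the function names qwp_denoise and bm3d_denoise and the blending rule are placeholders for illustration, not the authors' implementation or API; only the bivariate shrinkage rule follows the standard Sendur-Selesnick form.

```python
# Hedged sketch of the cross-boosting iteration: alternate a qWP-based
# shrinkage denoiser with BM3D, each output feeding the other's next input.
# All names and the blending rule are assumptions, not the authors' code.
import numpy as np


def bivariate_shrink(y, y_parent, sigma_n, sigma):
    """Sendur-Selesnick bivariate shrinkage rule (standard form), shown as
    one concrete instance of 'adaptive localized soft thresholding'."""
    r = np.sqrt(y ** 2 + y_parent ** 2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / np.maximum(sigma, 1e-12), 0.0)
    return gain / np.maximum(r, 1e-12) * y


def qwp_denoise(noisy, sigma_n):
    """Placeholder for qWPdn: transform to directional qWP coefficients,
    apply bivariate_shrink subband-wise, and invert the transform."""
    raise NotImplementedError("directional qWP transform not included here")


def bm3d_denoise(noisy, sigma_n):
    """Placeholder for BM3D (a public implementation, e.g. the PyPI
    'bm3d' package, could be dropped in here)."""
    raise NotImplementedError("BM3D not included here")


def cross_boost(noisy, sigma_n, n_iter=3, mix=0.5):
    """Alternate the two denoisers; `mix` blends the original noisy image
    with the other method's latest estimate (the paper's exact update
    rule may differ from this illustrative choice)."""
    est_qwp, est_bm3d = noisy, noisy
    for _ in range(n_iter):
        est_qwp = qwp_denoise(mix * noisy + (1.0 - mix) * est_bm3d, sigma_n)
        est_bm3d = bm3d_denoise(mix * noisy + (1.0 - mix) * est_qwp, sigma_n)
    return 0.5 * (est_qwp + est_bm3d)
```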