    Improvement of BM3D Algorithm and Employment to Satellite and CFA Images Denoising

    This paper proposes a new procedure to improve the performance of the block-matching and 3-D filtering (BM3D) image denoising algorithm. It is demonstrated that better performance than that of the original BM3D algorithm can be achieved across a variety of noise levels. The method adapts the BM3D parameter values to the noise level and removes the prefiltering step used at high noise levels; as a result, the peak signal-to-noise ratio (PSNR) and visual quality improve while the complexity and processing time of BM3D are reduced. The improved BM3D algorithm is then extended to denoise satellite and color filter array (CFA) images. Results show better performance than current methods for denoising satellite and CFA images. In particular, the algorithm is compared, in terms of PSNR and visual quality, with the adaptive PCA algorithm, which has shown superior performance for denoising CFA images, and the processing time is also reduced significantly. Comment: 11 pages, 7 figures
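
    As a rough illustration of the noise-adaptive parameter idea described above, the sketch below selects BM3D-style parameters from an estimated noise level; the breakpoints and values are hypothetical placeholders, not the settings used in the paper.

```python
# Illustrative sketch only: noise-adaptive choice of BM3D-style parameters.
# The breakpoints and values below are assumptions for demonstration,
# not the tuned settings reported in the paper.

def select_bm3d_params(sigma):
    """Return a parameter set for a BM3D-style denoiser given noise std sigma."""
    if sigma <= 20:                     # low noise: small blocks, tight matching
        params = dict(block_size=8, search_window=39,
                      max_matches=16, hard_threshold=2.7)
    elif sigma <= 40:                   # medium noise: more matched blocks
        params = dict(block_size=8, search_window=39,
                      max_matches=32, hard_threshold=2.8)
    else:                               # high noise: bigger blocks and windows
        params = dict(block_size=12, search_window=51,
                      max_matches=32, hard_threshold=2.9)
    # Following the idea in the abstract, skip the coarse prefiltering step
    # that reference BM3D normally enables at high noise levels.
    params["use_prefiltering"] = False
    return params

# Example: params = select_bm3d_params(sigma=35)
```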

    Content adaptive wavelet based method for joint denoising of depth and luminance images

    In this paper we present a new method for the joint denoising of depth and luminance images produced by a time-of-flight camera. We assume that the sequence does not contain the outlier points that can be present in depth images. Our method first estimates the noise and signal covariance matrices and then performs vector denoising. The luminance image is segmented into similar contexts using the k-means algorithm, and these contexts are used to compute the covariance matrices. Denoising results are compared with ground-truth images obtained by averaging multiple frames of a still scene.
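
    A rough pixel-domain sketch of the context-adaptive vector denoising idea is given below; the authors work in the wavelet domain and estimate the noise covariance from the data, whereas here the noise covariance, the number of contexts and the linear MMSE shrinkage are assumed simplifications.

```python
# Sketch of context-adaptive joint (depth, luminance) denoising.
# Simplified illustration: pixel domain, assumed noise covariance and
# cluster count, not the authors' wavelet-domain implementation.
import numpy as np
from sklearn.cluster import KMeans

def joint_wiener_denoise(depth, luminance, noise_cov, n_contexts=8):
    """Denoise stacked (depth, luminance) vectors with per-context Wiener filters."""
    h, w = depth.shape
    # Stack each pixel into a 2-vector y = x + n.
    y = np.stack([depth.ravel(), luminance.ravel()], axis=1).astype(float)

    # Segment the luminance image into "contexts" with k-means on intensity.
    labels = KMeans(n_clusters=n_contexts, n_init=10).fit_predict(
        luminance.reshape(-1, 1))

    x_hat = np.empty_like(y)
    for c in range(n_contexts):
        idx = labels == c
        yc = y[idx]
        # Observed covariance = signal covariance + noise covariance.
        cov_y = np.cov(yc, rowvar=False)
        cov_x = cov_y - noise_cov
        # Keep the estimated signal covariance positive semi-definite.
        eigval, eigvec = np.linalg.eigh(cov_x)
        cov_x = (eigvec * np.clip(eigval, 0, None)) @ eigvec.T
        # Linear MMSE (Wiener) estimate around the context mean.
        mu = yc.mean(axis=0)
        W = cov_x @ np.linalg.pinv(cov_x + noise_cov)
        x_hat[idx] = mu + (yc - mu) @ W.T

    return x_hat[:, 0].reshape(h, w), x_hat[:, 1].reshape(h, w)
```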

    Poisson noise removal in multivariate count data

    The Multi-scale Variance Stabilization Transform (MSVST) has recently been proposed for 2D Poisson data denoising [1]. In this work, we present an extension of the MSVST with the wavelet transform to multivariate data (each pixel is vector-valued), where the vector dimension may be wavelength, energy, or time. Such data can be viewed naively as 3D data in which the third dimension is time, wavelength or energy (e.g. hyperspectral imaging). But this naive analysis using a 3D MSVST would be awkward, as the data dimensions have different physical meanings. A more appropriate approach is to use a wavelet transform in which the time or energy scale is not connected to the spatial scale. We show that our multivalued extension of the MSVST can be used to approximately Gaussianize and stabilize the variance of a sequence of independent Poisson random vectors. This approach is shown to be fast and very well adapted to extremely low-count situations. We use a hypothesis-testing framework in the wavelet domain to denoise the Gaussianized and stabilized coefficients, and then apply an iterative reconstruction algorithm to recover the estimated vector field of intensities underlying the Poisson data. Our approach is illustrated for the detection and characterization of astrophysical sources of high-energy gamma rays, using realistic simulated observations. We show that the multivariate MSVST permits efficient estimation across the time/energy dimension and immediate recovery of spectral properties.
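
    For orientation, a minimal sketch of the stabilize-threshold-invert pipeline is shown below, using the classical Anscombe transform as a per-pixel stand-in for the MSVST (which instead couples the stabilizer with each wavelet scale); the wavelet, decomposition level and threshold are assumed values.

```python
# Minimal sketch of a variance-stabilize -> wavelet threshold -> invert loop
# for 2D Poisson data. Uses the per-pixel Anscombe transform as a simplified
# stand-in for the MSVST; wavelet and threshold choices are assumptions.
import numpy as np
import pywt

def stabilized_wavelet_denoise(counts, wavelet="db2", level=3, k=3.0):
    """Denoise a 2D Poisson count image via VST + wavelet hard thresholding."""
    # Anscombe transform: stabilized data is approximately Gaussian, unit variance.
    z = 2.0 * np.sqrt(counts + 3.0 / 8.0)

    coeffs = pywt.wavedec2(z, wavelet, level=level)
    # Hard-threshold the detail coefficients at k times the (unit) noise std.
    thresholded = [coeffs[0]]
    for details in coeffs[1:]:
        thresholded.append(tuple(np.where(np.abs(d) > k, d, 0.0) for d in details))
    z_hat = pywt.waverec2(thresholded, wavelet)

    # Simple algebraic inverse of the Anscombe transform.
    return np.maximum((z_hat / 2.0) ** 2 - 3.0 / 8.0, 0.0)
```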

    Wavelets and Field Forecast Verification


    CYCLOP: A stereo color image quality assessment metric

    In this work, a reduced-reference (RR) perceptual quality metric for color stereoscopic images is presented. Given a reference stereo pair and its distorted version, we first compute the disparity maps of both the reference and the distorted stereoscopic images. To this end, we define a method for color image disparity estimation based on structure tensor properties and eigenvalue/eigenvector analysis. We then compute the cyclopean images of both the reference and the distorted pairs. Next, a multispectral wavelet decomposition is applied to the two cyclopean color images in order to describe the different channels of the human visual system (HVS). Contrast sensitivity function (CSF) filtering is then performed to obtain the same visual sensitivity information in the original and the distorted cyclopean images. Based on the properties of the HVS, rational sensitivity thresholding is subsequently performed to obtain the sensitivity coefficients of the cyclopean images. Finally, RR stereo color image quality assessment (SCIQA) is performed by comparing the sensitivity coefficients of the cyclopean images and studying the coherence between the disparity maps of the reference and the distorted pairs. Experiments performed on color stereoscopic images indicate that the objective scores obtained by the proposed metric agree well with subjective assessment scores.
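
    The sketch below shows only the color structure tensor and its per-pixel eigenvalues, the building block on which the disparity estimation step relies; the smoothing scale is an assumed parameter and the remaining stages of the metric (cyclopean image synthesis, CSF filtering, sensitivity thresholding) are not shown.

```python
# Color structure tensor with per-pixel eigenvalue analysis, as a building
# block for disparity estimation. The smoothing scale sigma is an assumed
# parameter; the rest of the quality metric is not implemented here.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def color_structure_tensor(img, sigma=1.5):
    """Return the 2x2 structure tensor field (Jxx, Jxy, Jyy) and its eigenvalues."""
    jxx = jxy = jyy = 0.0
    for c in range(img.shape[2]):
        gx = sobel(img[..., c], axis=1)   # horizontal gradient of channel c
        gy = sobel(img[..., c], axis=0)   # vertical gradient of channel c
        jxx = jxx + gx * gx
        jxy = jxy + gx * gy
        jyy = jyy + gy * gy
    # Local averaging of the tensor components.
    jxx, jxy, jyy = (gaussian_filter(j, sigma) for j in (jxx, jxy, jyy))
    # Eigenvalues of [[jxx, jxy], [jxy, jyy]] per pixel (closed form).
    trace = jxx + jyy
    delta = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    lam1, lam2 = (trace + delta) / 2.0, (trace - delta) / 2.0
    return (jxx, jxy, jyy), (lam1, lam2)
```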

    Joint Total Variation ESTATICS for Robust Multi-Parameter Mapping

    Quantitative magnetic resonance imaging (qMRI) derives tissue-specific parameters -- such as the apparent transverse relaxation rate R2*, the longitudinal relaxation rate R1 and the magnetisation transfer saturation -- that can be compared across sites and scanners and carry important information about the underlying microstructure. The multi-parameter mapping (MPM) protocol takes advantage of multi-echo acquisitions with variable flip angles to extract these parameters in a clinically acceptable scan time. In this context, ESTATICS performs a joint loglinear fit of multiple echo series to extract R2* and multiple extrapolated intercepts, thereby improving robustness to motion and decreasing the variance of the estimators. In this paper, we extend this model in two ways: (1) by introducing a joint total variation (JTV) prior on the intercepts and decay, and (2) by deriving a nonlinear maximum a posteriori estimate. We evaluated the proposed algorithm by predicting left-out echoes in a rich single-subject dataset. In this validation, we outperformed other state-of-the-art methods and additionally showed that the proposed approach greatly reduces the variance of the estimated maps without introducing bias. Comment: 11 pages, 2 figures, 1 table, conference paper, accepted at MICCAI 2020
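
    As a concrete reference point, the ordinary least-squares version of the ESTATICS log-linear fit mentioned above can be sketched as follows; the JTV prior and the nonlinear MAP estimation that constitute the paper's contribution are not included, and the variable names are ours.

```python
# Sketch of the ESTATICS log-linear model: every contrast c shares one decay
# rate R2* and has its own intercept, so log S_c(TE) = b_c - R2* * TE can be
# fit jointly by ordinary least squares (per voxel). The paper's JTV prior
# and nonlinear MAP fit are not shown.
import numpy as np

def estatics_loglinear_fit(echo_times, signals):
    """Jointly fit a shared R2* and per-contrast log-intercepts.

    echo_times: list of 1D arrays, echo times (TE) for each contrast
    signals:    list of 1D arrays, echo magnitudes for each contrast
    Returns (r2_star, log_intercepts) with log extrapolated S(TE=0) per contrast.
    """
    n_contrasts = len(signals)
    rows, rhs = [], []
    for c, (tes, s) in enumerate(zip(echo_times, signals)):
        for te, val in zip(tes, s):
            row = np.zeros(n_contrasts + 1)
            row[c] = 1.0          # intercept of contrast c
            row[-1] = -te         # shared decay column (coefficient = R2*)
            rows.append(row)
            rhs.append(np.log(val))
    A, y = np.asarray(rows), np.asarray(rhs)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[-1], beta[:n_contrasts]
```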

    Blind Source Separation: the Sparsity Revolution

    Over the last few years, the development of multi-channel sensors has motivated interest in methods for the coherent processing of multivariate data. Some specific issues have already been addressed, as testified by the wide literature on the so-called blind source separation (BSS) problem. In this context, as clearly emphasized by previous work, it is fundamental that the sources to be retrieved present some quantitatively measurable diversity. Recently, sparsity and morphological diversity have emerged as a novel and effective source of diversity for BSS. We give here some essential insights into the use of sparsity in source separation and outline the essential role of morphological diversity as a source of contrast between the sources. This paper overviews a sparsity-based BSS method coined Generalized Morphological Component Analysis (GMCA) that takes advantage of both morphological diversity and sparsity, using recent sparse overcomplete or redundant signal representations. GMCA is a fast and efficient blind source separation method. In remote sensing applications, the specificity of hyperspectral data should be accounted for; we extend the proposed GMCA framework to deal with hyperspectral data. More generally, GMCA provides a basis for multivariate data analysis across a wide range of classical multivariate data restoration problems. Numerical results are given for color image denoising and inpainting. Finally, GMCA is applied to simulated ESA/Planck data, where it is shown to give effective astrophysical component separation.
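
    A toy sketch of the alternating GMCA-style update is given below: sources are re-estimated by least squares and thresholded in a sparse domain (an orthogonal DCT is assumed here purely for illustration), then the mixing matrix is re-estimated, with the threshold decreasing over iterations. It illustrates the principle only, not the authors' implementation.

```python
# Toy GMCA-style blind source separation: alternate a thresholded
# least-squares source update (sparsified with an assumed orthogonal DCT)
# and a least-squares mixing-matrix update, with a decreasing threshold.
import numpy as np
from scipy.fft import dct, idct

def gmca(X, n_sources, n_iter=50):
    """Blind separation X ~ A @ S for X of shape (n_channels, n_samples)."""
    rng = np.random.default_rng(0)
    A = rng.standard_normal((X.shape[0], n_sources))
    A /= np.linalg.norm(A, axis=0)

    for it in range(n_iter):
        # Least-squares source estimate, then sparsify in the DCT domain.
        S = np.linalg.pinv(A) @ X
        alpha = dct(S, norm="ortho", axis=1)
        # Threshold decreasing over the iterations (soft thresholding).
        lam = np.percentile(np.abs(alpha), 100 * (1 - (it + 1) / n_iter),
                            axis=1, keepdims=True)
        alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - lam, 0.0)
        S = idct(alpha, norm="ortho", axis=1)

        # Update the mixing matrix by least squares and renormalize columns.
        A = X @ np.linalg.pinv(S)
        A /= np.linalg.norm(A, axis=0) + 1e-12
    return A, S
```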

    Convergence rates and source conditions for Tikhonov regularization with sparsity constraints

    This paper addresses regularization by sparsity constraints by means of weighted ℓ^p penalties for 0 ≤ p ≤ 2. For 1 ≤ p ≤ 2, special attention is paid to convergence rates in norm and to source conditions. As the main result it is proven that one obtains a convergence rate in norm of √δ for 1 ≤ p ≤ 2 as soon as the unknown solution is sparse. The case p = 1 needs a special technique in which not only Bregman distances but also a so-called Bregman-Taylor distance has to be employed. For p < 1 only preliminary results are shown. These results indicate that, differently from p ≥ 1, the regularizing properties depend on the interplay of the operator and the basis of sparsity. A counterexample for p = 0 shows that regularization need not happen.
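
    For context, the weighted sparsity-penalized Tikhonov functional the abstract refers to has the standard form sketched below (the notation, with operator A, noisy data y^δ, synthesis system (φ_k) and weights w_k, is generic and assumed here, not quoted from the paper).

```latex
% Generic weighted \ell^p Tikhonov functional (notation assumed):
% minimize over x the data discrepancy plus a weighted sparsity penalty.
\[
  x_\alpha^\delta \;\in\; \operatorname*{arg\,min}_{x}\;
  \|A x - y^\delta\|^2
  \;+\; \alpha \sum_{k} w_k \,\bigl|\langle x, \varphi_k \rangle\bigr|^{p},
  \qquad 0 \le p \le 2,\quad w_k \ge w_0 > 0 .
\]
% With a parameter choice \alpha \sim \delta and a sparse exact solution
% x^\dagger, the stated rate is \|x_\alpha^\delta - x^\dagger\| = O(\sqrt{\delta})
% for 1 \le p \le 2.
```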

    A New Denoising System for SONAR Images
