
    Regularization Methods Based on the $L_q$-Likelihood for Linear Models with Heavy-Tailed Errors

    We propose regularization methods for linear models based on the $L_q$-likelihood, which is a generalization of the log-likelihood using a power function. Some heavy-tailed distributions are known as $q$-normal distributions. We find that the proposed methods for linear models with $q$-normal errors coincide with the regularization methods applied to the normal linear model. The proposed methods work well and efficiently, and can be computed using existing packages. We examine the proposed methods in numerical experiments, showing that they perform well even when the error is heavy-tailed.
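
    A minimal sketch (in Python, not the authors' code) of the abstract's point that these regularizers can be computed with existing packages: a Student-t error distribution is one example of a heavy-tailed $q$-normal distribution, so an ordinary lasso from scikit-learn plays the role of the regularized fit. The sparsity pattern, degrees of freedom, and penalty weight below are illustrative assumptions.

    ```python
    # Sketch: L1-regularized fit of a linear model with heavy-tailed errors.
    # Student-t errors stand in for a q-normal distribution (an assumption for
    # illustration); the regularized estimate is computed with scikit-learn.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 200, 10
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:3] = [2.0, -1.5, 1.0]            # sparse true coefficients
    errors = rng.standard_t(df=3, size=n)  # heavy-tailed error distribution
    y = X @ beta + errors

    model = Lasso(alpha=0.1).fit(X, y)     # standard lasso from an existing package
    print(np.round(model.coef_, 2))        # nonzero entries recover the sparse signal
    ```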

    Alpha-stable low-rank plus residual decomposition for speech enhancement

    In this study, we propose a novel probabilistic model for separating clean speech signals from noisy mixtures by decomposing the mixture spectrograms into a structured speech part and a more flexible residual part. The main novelty in our model is that it uses a family of heavy-tailed distributions, the so-called α-stable distributions, to model the residual signal. We develop an expectation-maximization algorithm for parameter estimation and a Monte Carlo scheme for posterior estimation of the clean speech. Our experiments show that the proposed method outperforms relevant factorization-based algorithms by a significant margin.
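
    A minimal sketch (not the paper's model or its EM/Monte Carlo algorithms) of the generative idea in the abstract: a mixture spectrogram built from a structured low-rank speech part plus a heavy-tailed α-stable residual. The rank, stability index, and scale values are illustrative assumptions; the paper's actual model operates on complex STFT coefficients.

    ```python
    # Sketch: simulate "low-rank speech + alpha-stable residual" per the abstract.
    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(0)
    F, T, K = 64, 100, 5                   # freq bins, time frames, low rank (assumed)

    # Structured speech part: a rank-K nonnegative spectrogram W @ H.
    W = rng.gamma(shape=2.0, scale=1.0, size=(F, K))
    H = rng.gamma(shape=2.0, scale=1.0, size=(K, T))
    speech = W @ H

    # Residual part: symmetric alpha-stable noise (alpha < 2 gives heavy tails).
    residual = levy_stable.rvs(alpha=1.5, beta=0.0, scale=0.1,
                               size=(F, T), random_state=rng)

    mixture = speech + residual            # observed noisy spectrogram
    print(mixture.shape, float(np.abs(residual).max()))
    ```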