
    Power-Weighted Divergences for Relative Attenuation and Delay Estimation

    Power-weighted estimators have recently been proposed for relative attenuation and delay estimation in blind source separation. They originate in the observation that speech is approximately windowed-disjoint orthogonal (WDO) in the time-frequency (TF) domain; weighting schemes that exploit the WDO of TF representations of speech have been reported to improve mixing-parameter estimation. We show that power-weighted relative attenuation and delay estimators can be derived from a particular case of a weighted Bregman divergence. We then propose a wider class of estimators, which we tune to give better parameter estimates for speech.
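
    To make the weighting concrete, here is a minimal sketch in the spirit of DUET-style power-weighted estimators, assuming a two-channel anechoic mixture with a single dominant source; the symmetric-attenuation form and the exponents p, q follow the common DUET convention and are illustrative, not necessarily the estimators proposed in this paper.

    ```python
    import numpy as np

    def power_weighted_estimates(X1, X2, omega, p=1.0, q=0.0, eps=1e-12):
        """DUET-style power-weighted attenuation/delay estimates (sketch).

        X1, X2 : STFTs of the two mixtures, shape (n_freq, n_frames).
        omega  : angular frequencies of the STFT bins (nonzero, DC excluded).
        """
        R = X2 / (X1 + eps)                    # inter-channel ratio per TF bin
        a = np.abs(R) + eps
        alpha = a - 1.0 / a                    # symmetric attenuation
        delta = -np.angle(R) / omega[:, None]  # relative delay per bin
        w = (np.abs(X1) * np.abs(X2)) ** p * np.abs(omega[:, None]) ** q
        # Weighted means approximate the weighted-histogram peak when a
        # single source dominates; multiple sources need peak-picking.
        return np.sum(w * alpha) / np.sum(w), np.sum(w * delta) / np.sum(w)
    ```

    Raising p emphasizes high-energy TF bins, where the WDO assumption makes single-source dominance most plausible, which is the intuition behind power weighting.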

    The Minimum S-Divergence Estimator under Continuous Models: The Basu-Lindsay Approach

    Robust inference based on the minimization of statistical divergences has proved to be a useful alternative to classical maximum-likelihood techniques. Recently, Ghosh et al. (2013) proposed a general class of divergence measures for robust statistical inference, named the S-divergence family, and Ghosh (2014) discussed the asymptotic properties of the corresponding estimators under discrete models. In the present paper, we develop the asymptotic properties of the proposed minimum S-divergence estimators under continuous models. Here we use the Basu-Lindsay (1994) approach of smoothing the model densities, which, unlike previous approaches, avoids much of the complication of kernel bandwidth selection. Extensive simulation studies and real-data examples illustrate both the efficiency and the robustness of the resulting estimators.
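
    The S-divergence family contains the density power divergence (DPD) as its λ = 0 member. As a rough illustration of minimum-divergence estimation, the sketch below fits a normal model to contaminated data by minimizing the DPD objective numerically; the Basu-Lindsay smoothing of the model density is omitted for brevity, and the tuning value α = 0.5 is illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def dpd_objective(theta, data, alpha=0.5):
        """DPD(g, f_theta) up to a theta-free constant:
        integral of f^(1+alpha) minus (1 + 1/alpha) * mean of f(x_i)^alpha."""
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)              # keeps sigma positive
        grid = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 2001)
        f = norm.pdf(grid, mu, sigma)
        integral = np.sum(f ** (1 + alpha)) * (grid[1] - grid[0])
        empirical = np.mean(norm.pdf(data, mu, sigma) ** alpha)
        return integral - (1 + 1 / alpha) * empirical

    # Robustness demo: 5% gross outliers should barely move the estimate.
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(0, 1, 95), rng.normal(10, 1, 5)])
    res = minimize(dpd_objective, x0=[np.median(data), 0.0], args=(data,))
    print(res.x[0], np.exp(res.x[1]))  # location near 0, scale near 1
    ```

    Unlike the sample mean, which the outliers pull toward 0.5, the minimum-DPD location estimate down-weights observations with small model density and stays close to 0.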

    Learning the Proximity Operator in Unfolded ADMM for Phase Retrieval

    This paper considers the phase retrieval (PR) problem, which aims to reconstruct a signal from phaseless measurements such as magnitude or power spectrograms. PR is generally handled as a minimization problem involving a quadratic loss. Recent works have considered alternative discrepancy measures, such as Bregman divergences, but it remains challenging to tailor the optimal loss to a given setting. In this paper, we propose a novel strategy to automatically learn the optimal metric for PR. We unfold a recently introduced ADMM algorithm into a neural network, and we emphasize that the information about the loss used to formulate the PR problem is conveyed by the proximity operator involved in the ADMM updates. We therefore replace this proximity operator with trainable activation functions: learning these in a supervised setting is then equivalent to learning an optimal metric for PR. Experiments conducted on speech signals show that our approach outperforms the baseline ADMM while using a light and interpretable neural architecture.
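
    A minimal real-valued sketch of the idea: each unfolded layer runs one ADMM iteration for min_x d(|Ax|, b), with the prox of d replaced by a small trainable network. The ProxNet architecture, the pseudo-inverse x-update, and the real-valued simplification (the sign plays the role of the phase) are assumptions for illustration, not the paper's exact design.

    ```python
    import torch
    import torch.nn as nn

    class ProxNet(nn.Module):
        """Hypothetical trainable stand-in for the proximity operator:
        a per-component MLP on the current and measured magnitudes."""
        def __init__(self, hidden=16):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1), nn.Softplus())

        def forward(self, mag, b):
            return self.net(torch.stack([mag, b], dim=-1)).squeeze(-1)

    class UnfoldedADMM(nn.Module):
        """ADMM unfolded into n_layers iterations, one ProxNet per layer."""
        def __init__(self, A, n_layers=10):
            super().__init__()
            self.register_buffer("A", A)
            self.register_buffer("Apinv", torch.linalg.pinv(A))
            self.layers = nn.ModuleList(ProxNet() for _ in range(n_layers))

        def forward(self, b):
            x = self.Apinv @ b                   # crude magnitude-based init
            u = torch.zeros_like(b)              # scaled dual variable
            for prox in self.layers:
                v = self.A @ x + u
                z = v.sign() * prox(v.abs(), b)  # learned prox on magnitudes
                x = self.Apinv @ (z - u)         # least-squares x-update
                u = u + self.A @ x - z           # dual update
            return x
    ```

    Training would compare the network output against ground-truth signals over a dataset of (b, x) pairs, e.g. with an MSE loss; the learned prox then implicitly encodes the discrepancy measure best suited to the data.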