Sparse Bayesian learning with uncertainty models and multiple dictionaries
Sparse Bayesian learning (SBL) has emerged as a fast and competitive method
to perform sparse processing. The SBL algorithm, which is developed using a
Bayesian framework, approximately solves a non-convex optimization problem
using fixed point updates. It provides performance comparable to the convex
optimization techniques used in sparse processing while being significantly
faster. We propose a signal model which accounts for dictionary mismatch
and the presence of errors in the weight vector at low signal-to-noise ratios.
A fixed point update equation is derived which incorporates the statistics of
mismatch and weight errors. We also process observations from multiple
dictionaries. Noise variances are estimated using stochastic maximum
likelihood. The derived update equations are studied quantitatively using
beamforming simulations applied to direction-of-arrival (DoA) estimation. Performance of
SBL using single- and multi-frequency observations, and in the presence of
aliasing, is evaluated. SwellEx-96 experimental data qualitatively demonstrates
the advantages of SBL.

Comment: 11 pages, 8 figures
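The fixed-point update mentioned in the abstract can be illustrated with a minimal sketch of a standard multi-snapshot SBL iteration for DoA estimation. This is a simplification under assumed conditions (known, fixed noise variance; no dictionary mismatch or weight-error statistics, which are the paper's contributions); the function name `sbl_doa` and the angular grid are illustrative, not taken from the paper.

```python
import numpy as np

def sbl_doa(Y, A, sigma2=1e-4, n_iter=200, tol=1e-8):
    """Minimal SBL fixed-point sketch for DoA (illustrative, not the
    paper's mismatch-aware algorithm).

    Y : (N, L) array of L sensor snapshots from an N-element array.
    A : (N, M) dictionary of steering vectors on an M-point angle grid.
    Returns gamma, the (M,) vector of source-power hyperparameters;
    peaks in gamma indicate estimated DoAs.
    """
    N, L = Y.shape
    M = A.shape[1]
    Sy = Y @ Y.conj().T / L                      # sample covariance
    gamma = np.ones(M)
    for _ in range(n_iter):
        # Model covariance: sigma^2 I + A diag(gamma) A^H
        Sigma = sigma2 * np.eye(N) + (A * gamma) @ A.conj().T
        B = np.linalg.solve(Sigma, A)            # Sigma^{-1} A, column-wise
        # Fixed-point ratio: (a_m^H S^-1 Sy S^-1 a_m) / (a_m^H S^-1 a_m)
        num = np.real(np.sum(B.conj() * (Sy @ B), axis=0))
        den = np.real(np.sum(A.conj() * B, axis=0))
        gamma_new = gamma * num / np.maximum(den, 1e-12)
        if np.max(np.abs(gamma_new - gamma)) < tol:
            gamma = gamma_new
            break
        gamma = gamma_new
    return gamma
```

In practice the paper estimates the noise variance via stochastic maximum likelihood and augments the update with mismatch and weight-error statistics; here `sigma2` is simply held fixed to keep the sketch short.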