Hyperparameter Estimation for Sparse Bayesian Learning Models
Sparse Bayesian Learning (SBL) models are extensively used in signal
processing and machine learning for promoting sparsity through hierarchical
priors. The hyperparameters in SBL models are crucial to model performance, but they are often difficult to estimate because the associated objective function is non-convex and high-dimensional. This paper
presents a comprehensive framework for hyperparameter estimation in SBL models,
encompassing well-known algorithms such as the expectation-maximization (EM),
MacKay, and convex bounding (CB) algorithms. These algorithms are unified within an alternating minimization and linearization (AML) paradigm, each distinguished by its particular linearized surrogate function.
Additionally, a novel algorithm within the AML framework is introduced, showing
enhanced efficiency, especially at low signal-to-noise ratios. This is further
improved by a new alternating minimization and quadratic approximation (AMQ)
paradigm, which includes a proximal regularization term. The paper
substantiates these advancements with a thorough convergence analysis and numerical experiments demonstrating the algorithms' effectiveness across a range of noise conditions and signal-to-noise ratios.
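
For orientation, below is a minimal sketch of the classical EM hyperparameter update for SBL in a Gaussian linear model, one of the baseline algorithms the abstract says the paper interprets within its AML framework. The function name sbl_em, the fixed noise precision beta, the initialization, and the stopping tolerance are illustrative assumptions; this is the standard EM baseline, not the paper's new AML or AMQ algorithms.

```python
import numpy as np

def sbl_em(Phi, y, beta, n_iters=200, tol=1e-6):
    """Classical EM update for SBL hyperparameters (illustrative sketch).

    Model: y = Phi @ w + noise, noise ~ N(0, (1/beta) I),
           w_i ~ N(0, gamma_i), where the prior variances gamma_i >= 0
           are the hyperparameters to estimate.
    """
    n, m = Phi.shape
    gamma = np.ones(m)  # illustrative initialization of prior variances
    mu = np.zeros(m)
    for _ in range(n_iters):
        # E-step: Gaussian posterior over w given the current gamma,
        # Sigma = (beta * Phi^T Phi + diag(1/gamma))^{-1}, mu = beta * Sigma Phi^T y
        Sigma = np.linalg.inv(beta * (Phi.T @ Phi) + np.diag(1.0 / gamma))
        mu = beta * Sigma @ Phi.T @ y
        # M-step (EM update): gamma_i <- E[w_i^2] = mu_i^2 + Sigma_ii.
        # (The MacKay variant instead uses the fixed point
        #  gamma_i <- mu_i^2 / (1 - Sigma_ii / gamma_i).)
        gamma_new = np.maximum(mu**2 + np.diag(Sigma), 1e-10)  # floor for stability
        if np.max(np.abs(gamma_new - gamma)) < tol:
            gamma = gamma_new
            break
        gamma = gamma_new
    return gamma, mu
```

Small prior variances gamma_i shrink the corresponding posterior means toward zero, which is how the hierarchical prior promotes sparsity; each iteration costs one m-by-m matrix inverse.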