    Identification of nonlinear sparse networks using sparse Bayesian learning

    © 2017 IEEE. This paper considers a parametric approach to infer sparse networks described by nonlinear ARX models, with linear ARX treated as a special case. The proposed method infers both the Boolean structure and the internal dynamics of the network. It considers classes of nonlinear systems that can be written as weighted (unknown) sums of nonlinear functions chosen from a fixed basis dictionary. Due to the sparse topology, the coefficients of most groups are zero. Moreover, only a few nonlinear terms in the nonzero groups contribute to the internal dynamics. The identification problem must therefore estimate parameter vectors that are both group- and element-sparse. The proposed method combines Sparse Bayesian Learning (SBL) and Group Sparse Bayesian Learning (GSBL) to impose both kinds of sparsity. Simulations indicate that our method outperforms SBL and GSBL when these are applied alone. A linear ring-structured network also illustrates that the proposed method improves on the kernel-based approach.
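
    To make the setup concrete, below is a minimal, self-contained Python sketch of the general idea: lagged signals are expanded through a fixed basis dictionary (here, linear and quadratic terms), and an EM-style SBL loop maintains both group-level and element-level prior variances. The coupling rule gamma[g] * alpha[j] and all names are illustrative assumptions, not the paper's exact algorithm.

    import numpy as np

    def posterior(Phi, y, prior_var, noise_var):
        # Gaussian posterior of w for y = Phi @ w + noise, w ~ N(0, diag(prior_var))
        Sigma = np.linalg.inv(Phi.T @ Phi / noise_var + np.diag(1.0 / prior_var))
        mu = Sigma @ Phi.T @ y / noise_var
        return mu, Sigma

    def sbl_group_element(Phi, y, groups, noise_var=1e-2, n_iter=100, floor=1e-10):
        # Toy EM-style SBL with both group- and element-level hyperparameters.
        # Prior variance of coefficient j in group g is modeled as gamma[g] * alpha[j];
        # this coupling is a hypothetical illustration, not the paper's exact rule.
        p = Phi.shape[1]
        alpha = np.ones(p)                  # element-level prior variances
        gamma = np.ones(len(groups))        # group-level prior variances
        for _ in range(n_iter):
            prior_var = np.empty(p)
            for g, idx in enumerate(groups):
                prior_var[idx] = gamma[g] * alpha[idx]
            mu, Sigma = posterior(Phi, y, np.maximum(prior_var, floor), noise_var)
            m2 = mu**2 + np.diag(Sigma)     # posterior second moments E[w_j^2]
            for g, idx in enumerate(groups):
                alpha[idx] = m2[idx] / max(gamma[g], floor)                      # element update
                gamma[g] = np.mean(m2[idx] / np.maximum(alpha[idx], floor))      # group update
        return mu

    # Demo: y(t) = 0.5*y(t-1) - 0.3*y(t-2)^2 + e(t); the dictionary holds linear
    # and quadratic terms of each lag, grouped by lag (one group per candidate link).
    rng = np.random.default_rng(0)
    T = 300
    y = np.zeros(T)
    for t in range(2, T):
        y[t] = 0.5 * y[t-1] - 0.3 * y[t-2]**2 + 0.1 * rng.standard_normal()
    Phi = np.column_stack([y[1:-1], y[1:-1]**2, y[:-2], y[:-2]**2])
    groups = [np.array([0, 1]), np.array([2, 3])]
    w_hat = sbl_group_element(Phi, y[2:], groups)
    print(np.round(w_hat, 3))   # weights on inactive terms should shrink toward zero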

    Speech enhancement based on hidden Markov model using sparse code shrinkage

    This paper presents a new hidden Markov model-based (HMM-based) speech enhancement framework built on independent component analysis (ICA). We propose analytical procedures for training the clean-speech and noise models via the Baum–Welch re-estimation algorithm, and present a maximum a posteriori (MAP) estimator based on a Laplace–Gaussian combination (for clean speech and noise, respectively) within the HMM framework, termed sparse code shrinkage-HMM (SCS-HMM). The proposed method is evaluated on the TIMIT database in the presence of three noise types at three SNR levels, in terms of PESQ and SNR, and compared with the autoregressive HMM (AR-HMM) and with HMM-based speech enhancement using discrete cosine transform (DCT) coefficients with Laplace and Gaussian distributions (LaGa-HMMDCT). The results confirm the superiority of SCS-HMM over LaGa-HMMDCT in the presence of non-stationary noise, and show that SCS-HMM also outperforms AR-HMM in the presence of white noise according to the PESQ measure.
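
    As background on the shrinkage step, here is a minimal Python sketch of the classical sparse-code-shrinkage nonlinearity: the MAP estimator for a Laplacian-distributed coefficient observed in Gaussian noise, which reduces to soft thresholding. The paper's HMM state machinery and training procedure are omitted; the function and variable names are illustrative.

    import numpy as np

    def laplace_map_shrinkage(u, noise_var, speech_std):
        # MAP estimate of s from u = s + n, with s ~ Laplace (std = speech_std)
        # and n ~ N(0, noise_var): soft thresholding at sqrt(2)*noise_var/speech_std.
        thresh = np.sqrt(2.0) * noise_var / speech_std
        return np.sign(u) * np.maximum(np.abs(u) - thresh, 0.0)

    # Toy usage on synthetic ICA-domain coefficients; in SCS-HMM the shrinkage
    # would be applied per HMM state with state-dependent statistics (the state
    # handling is omitted here).
    rng = np.random.default_rng(1)
    s = rng.laplace(scale=1.0 / np.sqrt(2.0), size=8)   # unit-variance Laplace source
    u = s + rng.normal(scale=0.3, size=s.shape)         # noisy observation
    print(np.round(laplace_map_shrinkage(u, noise_var=0.3**2, speech_std=1.0), 3))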

    Hyperparameter Estimation for Sparse Bayesian Learning Models

    Sparse Bayesian Learning (SBL) models are extensively used in signal processing and machine learning to promote sparsity through hierarchical priors. The hyperparameters in SBL models are crucial to the model's performance, but they are often difficult to estimate because the associated objective function is non-convex and high-dimensional. This paper presents a comprehensive framework for hyperparameter estimation in SBL models, encompassing well-known algorithms such as the expectation-maximization (EM), MacKay, and convex bounding (CB) algorithms. These algorithms are cohesively interpreted within an alternating minimization and linearization (AML) paradigm, distinguished by their unique linearized surrogate functions. Additionally, a novel algorithm within the AML framework is introduced, showing enhanced efficiency, especially at low signal-to-noise ratios. This is further improved by a new alternating minimization and quadratic approximation (AMQ) paradigm, which includes a proximal regularization term. The paper substantiates these advancements with thorough convergence analysis and numerical experiments demonstrating the algorithms' effectiveness under various noise conditions and signal-to-noise ratios.
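
    For orientation, the sketch below implements the two classical fixed-point updates the abstract names, EM and MacKay, in the standard prior-variance parameterization of SBL; the paper's AML and AMQ algorithms are not reproduced here, and the demo setup is illustrative.

    import numpy as np

    def sbl_fit(Phi, y, noise_var, n_iter=200, update="mackay", floor=1e-12):
        # Classic SBL hyperparameter estimation: gamma_i is the prior variance
        # of coefficient i; both rules iterate to a fixed point of the evidence.
        p = Phi.shape[1]
        gamma = np.ones(p)
        for _ in range(n_iter):
            g = np.maximum(gamma, floor)
            Sigma = np.linalg.inv(Phi.T @ Phi / noise_var + np.diag(1.0 / g))
            mu = Sigma @ Phi.T @ y / noise_var
            if update == "em":                            # EM: gamma_i <- E[w_i^2]
                gamma = mu**2 + np.diag(Sigma)
            else:                                         # MacKay fixed point
                q = np.maximum(1.0 - np.diag(Sigma) / g, floor)
                gamma = mu**2 / q
        return gamma, mu

    # Toy demo: sparse weights in an overcomplete dictionary.
    rng = np.random.default_rng(2)
    Phi = rng.standard_normal((50, 100))
    w = np.zeros(100)
    w[[3, 40, 77]] = [1.5, -2.0, 1.0]
    y = Phi @ w + 0.05 * rng.standard_normal(50)
    gamma, mu = sbl_fit(Phi, y, noise_var=0.05**2)
    # Indices with large gamma should concentrate near the true support {3, 40, 77}.
    print(np.flatnonzero(gamma > 1e-2 * gamma.max()))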