12 research outputs found
Sparse Bayesian Learning with Diagonal Quasi-Newton Method for Large Scale Classification
Sparse Bayesian Learning (SBL) constructs an extremely sparse probabilistic
model with very competitive generalization. However, SBL needs to invert a
large covariance matrix at a cost of O(M^3) (where M is the feature dimension)
to update the regularization priors, which hinders its practical use. Three
issues arise in SBL: 1) inverting the covariance matrix may yield singular
solutions in some cases, preventing SBL from converging; 2) SBL scales poorly
to problems with high-dimensional feature spaces or large sample sizes; 3) SBL
easily runs out of memory on large-scale data. This paper addresses these
issues with a newly proposed diagonal quasi-Newton (DQN) method for SBL, termed
DQN-SBL, in which the inversion of the large covariance matrix is avoided
entirely, reducing both the time complexity and the memory cost to O(M).
DQN-SBL is thoroughly evaluated on non-linear classification and linear feature
selection using benchmark datasets of various sizes. Experimental results
verify that DQN-SBL attains competitive generalization with a very sparse model
and scales well to large-scale problems.
Comment: 11 pages, 5 figures.
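
For context, the O(M^3) bottleneck the paper targets is the posterior covariance inversion in classical SBL. The sketch below shows the standard SBL fixed-point updates for regression (Tipping-style relevance vector machine updates, with the noise precision held fixed for simplicity); it is not the paper's DQN-SBL algorithm, only an illustration of the inversion step that DQN-SBL eliminates.

```python
# Minimal sketch of classical SBL evidence maximization for regression.
# The np.linalg.inv call below is the O(M^3) step that DQN-SBL avoids.
import numpy as np

def sbl_regression(Phi, t, n_iter=50, beta=100.0):
    """Classic SBL fixed-point updates; beta is a fixed noise precision."""
    M = Phi.shape[1]
    alpha = np.ones(M)                    # per-weight precision priors
    for _ in range(n_iter):
        # Posterior covariance: the big-matrix inversion the paper removes.
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t     # posterior mean of the weights
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = gamma / np.maximum(mu**2, 1e-12)  # MacKay fixed-point update
        alpha = np.minimum(alpha, 1e6)    # huge alpha => weight pruned to 0
    return mu, alpha

# Toy usage: sparse weights recovered from noisy linear measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 50))
w_true = np.zeros(50); w_true[[3, 17, 42]] = [1.5, -2.0, 0.8]
t = Phi @ w_true + 0.01 * rng.standard_normal(100)
mu, alpha = sbl_regression(Phi, t)
print(np.flatnonzero(np.abs(mu) > 0.1))   # indices of surviving weights
```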
Vector Approximate Message Passing based Channel Estimation for MIMO-OFDM Underwater Acoustic Communications
Accurate channel estimation is critical to the performance of orthogonal
frequency-division multiplexing (OFDM) underwater acoustic (UWA)
communications, especially under multiple-input multiple-output (MIMO)
scenarios. In this paper, we explore Vector Approximate Message Passing (VAMP)
coupled with Expectation Maximization (EM) to perform channel estimation (CE)
for MIMO-OFDM UWA communications. The EM-VAMP-CE scheme is developed by placing
a Bernoulli-Gaussian (BG) prior distribution on the channel impulse response,
with the hyperparameters of the BG prior learned via the EM algorithm. The
performance of EM-VAMP-CE is evaluated on both synthesized data and real data
collected in two at-sea UWA communication experiments. It is shown that
EM-VAMP-CE achieves a better performance-complexity tradeoff than existing
channel estimation methods.
Comment: IEEE Journal of Oceanic Engineering (date of submission: 2022-06-25).
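
To illustrate the role of the BG prior, here is a minimal sketch of the scalar MMSE denoiser a VAMP-style estimator would apply at each iteration to a pseudo-measurement of the channel taps. The hyperparameters lam and sig2x stand in for the quantities the paper learns via EM; this is a real-valued toy, not the authors' EM-VAMP-CE implementation.

```python
# MMSE denoiser for a Bernoulli-Gaussian prior: x is zero with prob 1-lam,
# else Gaussian with variance sig2x; r = x + noise of variance v.
import numpy as np

def bg_denoiser(r, v, lam=0.1, sig2x=1.0):
    # Likelihoods of r under "tap active" vs "tap zero" (1/sqrt(2*pi) cancels).
    p_on  = lam * np.exp(-0.5 * r**2 / (sig2x + v)) / np.sqrt(sig2x + v)
    p_off = (1 - lam) * np.exp(-0.5 * r**2 / v) / np.sqrt(v)
    pi = p_on / (p_on + p_off)             # posterior activity probability
    return pi * r * sig2x / (sig2x + v)    # Gaussian shrinkage, gated by pi

# Toy usage on a sparse channel impulse response observed in noise.
rng = np.random.default_rng(1)
h = np.zeros(64); h[[2, 9, 30]] = rng.standard_normal(3)
r = h + 0.1 * rng.standard_normal(64)      # pseudo-measurement, v = 0.01
print(np.round(bg_denoiser(r, v=0.01, lam=3/64), 3))
```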
Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem
In this paper, we develop a Bayesian evidence maximization framework to solve
the sparse non-negative least squares (S-NNLS) problem. We introduce a family
of probability densities referred to as the Rectified Gaussian Scale Mixture
(R-GSM) to model the sparsity-enforcing prior distribution for the solution.
The R-GSM prior encompasses a variety of heavy-tailed densities, such as the
rectified Laplacian and rectified Student-t distributions, with a proper choice
of the mixing density. We utilize the hierarchical representation induced by
the R-GSM prior and develop an evidence maximization framework based on the
Expectation-Maximization (EM) algorithm. Using the EM-based method, we estimate
the hyperparameters and obtain a point estimate for the solution. We refer to
the proposed method as rectified sparse Bayesian learning (R-SBL). We provide
four R-SBL variants that offer a range of options for computational complexity
and the quality of the E-step computation: Markov chain Monte Carlo EM, linear
minimum mean-square-error estimation, approximate message passing, and a
diagonal approximation. Using numerical experiments, we show that the proposed
R-SBL method outperforms existing S-NNLS solvers in terms of both signal and
support recovery, and is also very robust to the structure of the design
matrix.
Comment: Under review by IEEE Transactions on Signal Processing.
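
The hierarchy behind the R-GSM is easy to see by sampling: draw a variance from the mixing density, then draw a zero-mean non-negative (rectified) Gaussian with that variance. The sketch below uses an exponential mixing density, under which the marginal is a rectified Laplacian; the mixing rate is an illustrative choice, not a value from the paper.

```python
# Sampling from a Rectified Gaussian Scale Mixture (R-GSM) hierarchy.
import numpy as np

def rgsm_samples(n, mixing_rate=1.0, rng=None):
    rng = rng or np.random.default_rng()
    gamma = rng.exponential(scale=1.0 / mixing_rate, size=n)  # mixing density
    z = rng.standard_normal(n)
    # For a zero-mean Gaussian, truncating to x >= 0 and folding coincide.
    return np.abs(z) * np.sqrt(gamma)

x = rgsm_samples(100_000)
print(x.min() >= 0.0, x.mean(), (x > 2).mean())  # non-negative, heavy-tailed
```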
High-dimensional macroeconomic forecasting using message passing algorithms
This paper proposes two distinct contributions to econometric analysis of
large information sets and structural instabilities. First, it treats a
regression model with time-varying coefficients, stochastic volatility and
exogenous predictors as an equivalent high-dimensional static regression
problem with thousands of covariates. Inference in this specification proceeds
using Bayesian hierarchical priors that shrink the high-dimensional vector of
coefficients either towards zero or time-invariance. Second, it introduces the
frameworks of factor graphs and message passing as a means of designing
efficient Bayesian estimation algorithms. In particular, a Generalized
Approximate Message Passing (GAMP) algorithm is derived that has low
algorithmic complexity and is trivially parallelizable. The result is a
comprehensive methodology that can be used to estimate time-varying parameter
regressions with an arbitrarily large number of exogenous predictors. In a
forecasting exercise for U.S. price inflation, this methodology is shown to
work very well.
Comment: 89 pages; to appear in Journal of Business and Economic Statistics.
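
The static reformulation the paper builds on can be made concrete: with random-walk coefficients, y_t = x_t' beta_t can be rewritten as one regression of y on the stacked coefficient increments, so shrinking increments to zero recovers time-invariance. The sketch below constructs that design matrix; variable names are illustrative, and the paper's GAMP estimator itself is not reproduced.

```python
# Rewrite a time-varying parameter (TVP) regression as one static regression.
import numpy as np

def tvp_to_static(X):
    """X: (T, k) predictors. Returns W of shape (T, T*k) with y = W @ theta,
    where theta stacks the increments of beta_t (beta_t = sum of increments)."""
    T, k = X.shape
    W = np.zeros((T, T * k))
    for t in range(T):
        for s in range(t + 1):             # beta_t accumulates increments 1..t
            W[t, s * k:(s + 1) * k] = X[t]
    return W

# Toy check: a coefficient path with one structural break is matched exactly.
T, k = 6, 2
rng = np.random.default_rng(2)
X = rng.standard_normal((T, k))
beta = np.tile([1.0, -0.5], (T, 1)); beta[3:] += [0.0, 1.0]   # break at t = 3
theta = np.diff(np.vstack([np.zeros(k), beta]), axis=0).ravel()
print(np.allclose(tvp_to_static(X) @ theta, np.sum(X * beta, axis=1)))  # True
```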