Bayesian Pursuit Algorithms
This paper addresses the sparse representation (SR) problem within a general
Bayesian framework. We show that the Lagrangian formulation of the standard SR
problem, i.e., $\min_x \|y - Dx\|_2^2 + \lambda \|x\|_0$, can
be regarded as a limit case of a general maximum a posteriori (MAP) problem
involving Bernoulli-Gaussian variables. We then propose different tractable
implementations of this MAP problem that we refer to as "Bayesian pursuit
algorithms". The Bayesian algorithms are shown to have strong connections with
several well-known pursuit algorithms of the literature (e.g., MP, OMP, StOMP,
CoSaMP, SP) and generalize them in several respects. In particular, i) they
allow for atom deselection; ii) they can include any prior information about
the probability of occurrence of each atom within the selection process; iii)
they can encompass the estimation of unknown model parameters into their
recursions.
Compressive Sensing: Performance Comparison Of Sparse Recovery Algorithms
Spectrum sensing is an important process in cognitive radio. A number of
sensing techniques that have been proposed suffer from high processing time,
hardware cost and computational complexity. To address these problems,
compressive sensing has been proposed to decrease the processing time and
expedite the scanning process of the radio spectrum. Selection of a suitable
sparse recovery algorithm is necessary to achieve this goal. A number of sparse
recovery algorithms have been proposed. This paper surveys sparse recovery
algorithms, classifies them into categories, and compares their performance. For
the comparison, we used several metrics such as recovery error, recovery time,
covariance, and phase transition diagram. The results show that techniques
in the Greedy category are faster, techniques in the Convex and Relaxation
category perform better in terms of recovery error, and Bayesian-based
techniques are observed to have an advantageous balance of small recovery
error and a short recovery time. Comment: CCWC 2017, Las Vegas, US
Deep Learning: A Bayesian Perspective
Deep learning is a form of machine learning for nonlinear high dimensional
pattern matching and prediction. By taking a Bayesian probabilistic
perspective, we provide a number of insights into more efficient algorithms for
optimisation and hyper-parameter tuning. Traditional high-dimensional data
reduction techniques, such as principal component analysis (PCA), partial least
squares (PLS), reduced rank regression (RRR), projection pursuit regression
(PPR) are all shown to be shallow learners. Their deep learning counterparts
exploit multiple deep layers of data reduction which provide predictive
performance gains. Stochastic gradient descent (SGD) training optimisation and
Dropout (DO) regularization provide estimation and variable selection. Bayesian
regularization is central to finding weights and connections in networks to
optimize the predictive bias-variance trade-off. To illustrate our methodology,
we provide an analysis of international bookings on Airbnb. Finally, we
conclude with directions for future research.
Bayesian Hypothesis Testing for Sparse Representation
In this paper, we propose a Bayesian Hypothesis Testing Algorithm (BHTA) for
sparse representation. It uses the Bayesian framework to determine active atoms
in the sparse representation of a signal.
The Bayesian hypothesis test, based on three assumptions, determines the
active atoms from the correlations and leads to the activity measure proposed
in the Iterative Detection Estimation (IDE) algorithm. In fact, IDE uses an
arbitrary decreasing sequence of thresholds, while the proposed algorithm is
based on a sequence derived from hypothesis testing. The Bayesian hypothesis
testing framework thus leads to an improved version of the IDE algorithm.
The simulations show that the hard version of our suggested algorithm
achieves one of the best results in terms of estimation accuracy among the
algorithms implemented in our simulations, while it has the greatest
complexity in terms of simulation time.
Application of Compressive Sensing Techniques in Distributed Sensor Networks: A Survey
In this survey paper, our goal is to discuss recent advances of compressive
sensing (CS) based solutions in wireless sensor networks (WSNs) including the
main ongoing/recent research efforts, challenges and research trends in this
area. In WSNs, CS based techniques are well motivated by not only the sparsity
prior observed in different forms but also by the requirement of efficient
in-network processing in terms of transmit power and communication bandwidth
even with nonsparse signals. In order to apply CS in a variety of WSN
applications efficiently, there are several factors to be considered beyond the
standard CS framework. We start the discussion with a brief introduction to the
theory of CS and then describe the motivational factors behind the potential
use of CS in WSN applications. Then, we identify three main areas along which
the standard CS framework is extended so that CS can be efficiently applied to
solve a variety of problems specific to WSNs. In particular, we emphasize
the significance of extending the CS framework to (i) take communication
constraints into account while designing projection matrices and reconstruction
algorithms for signal reconstruction in centralized as well as decentralized
settings, (ii) solve a variety of inference problems such as detection,
classification and parameter estimation, with compressed data without signal
reconstruction and (iii) take practical communication aspects such as
measurement quantization, physical layer secrecy constraints, and imperfect
channel conditions into account. Finally, open research issues and challenges
are discussed in order to provide perspectives for future research directions.
Scaling Multidimensional Inference for Structured Gaussian Processes
Exact Gaussian Process (GP) regression has O(N^3) runtime for data size N,
making it intractable for large N. Many algorithms for improving GP scaling
approximate the covariance with lower rank matrices. Other work has exploited
structure inherent in particular covariance functions, including GPs with
implied Markov structure, and equispaced inputs (both enable O(N) runtime).
However, these GP advances have not been extended to the multidimensional input
setting, despite the preponderance of multidimensional applications. This paper
introduces and tests novel extensions of structured GPs to multidimensional
inputs. We present new methods for additive GPs, showing a novel connection
between the classic backfitting method and the Bayesian framework. To achieve
optimal accuracy-complexity tradeoff, we extend this model with a novel variant
of projection pursuit regression. Our primary result -- projection pursuit
Gaussian Process Regression -- shows orders of magnitude speedup while
preserving high accuracy. The natural second and third steps include
non-Gaussian observations and higher dimensional equispaced grid methods. We
introduce novel techniques to address both of these necessary directions. We
thoroughly illustrate the power of these three advances on several datasets,
achieving performance close to the naive full GP at orders of magnitude less
cost. Comment: 14 pages
Two-Dimensional Pattern-Coupled Sparse Bayesian Learning via Generalized Approximate Message Passing
We consider the problem of recovering two-dimensional (2-D) block-sparse
signals with \emph{unknown} cluster patterns. Two-dimensional block-sparse
patterns arise naturally in many practical applications such as foreground
detection and inverse synthetic aperture radar imaging. To exploit the
block-sparse structure, we introduce a 2-D pattern-coupled hierarchical
Gaussian prior model to characterize the statistical pattern dependencies among
neighboring coefficients. Unlike the conventional hierarchical Gaussian prior
model where each coefficient is associated independently with a unique
hyperparameter, the pattern-coupled prior for each coefficient not only
involves its own hyperparameter, but also its immediate neighboring
hyperparameters. Thus the sparsity patterns of neighboring coefficients are
related to each other and the hierarchical model has the potential to encourage
2-D structured-sparse solutions. An expectation-maximization (EM) strategy is
employed to obtain the maximum a posteriori (MAP) estimate of the
hyperparameters, along with the posterior distribution of the sparse signal. In
addition, the generalized approximate message passing (GAMP) algorithm is
embedded into the EM framework to efficiently compute an approximation of the
posterior distribution of hidden variables, which results in a significant
reduction in computational complexity. Numerical results are provided to
illustrate the effectiveness of the proposed algorithm.
Bayesian Compressive Sensing Using Normal Product Priors
In this paper, we introduce a new sparsity-promoting prior, namely, the
"normal product" prior, and develop an efficient algorithm for sparse signal
recovery under the Bayesian framework. The normal product distribution is the
distribution of a product of two normally distributed variables with zero means
and possibly different variances. Like other sparsity-encouraging distributions
such as the Student's t-distribution, the normal product distribution has a
sharp peak at the origin, which makes it a suitable prior to encourage sparse
solutions. A two-stage normal product-based hierarchical model is proposed. We
resort to the variational Bayesian (VB) method to perform the inference.
Simulations are conducted to illustrate the effectiveness of our proposed
algorithm as compared with other state-of-the-art compressed sensing
algorithms.
Decomposition into Low-rank plus Additive Matrices for Background/Foreground Separation: A Review for a Comparative Evaluation with a Large-Scale Dataset
Recent research on problem formulations based on decomposition into low-rank
plus sparse matrices shows a suitable framework to separate moving objects from
the background. The most representative problem formulation is the Robust
Principal Component Analysis (RPCA) solved via Principal Component Pursuit
(PCP), which decomposes a data matrix into a low-rank matrix and a sparse matrix.
However, similar robust implicit or explicit decompositions can be made in the
following problem formulations: Robust Non-negative Matrix Factorization
(RNMF), Robust Matrix Completion (RMC), Robust Subspace Recovery (RSR), Robust
Subspace Tracking (RST) and Robust Low-Rank Minimization (RLRM). The main goal
of these similar problem formulations is to obtain explicitly or implicitly a
decomposition into low-rank matrix plus additive matrices. In this context,
this work aims to initiate a rigorous and comprehensive review of the similar
problem formulations in robust subspace learning and tracking based on
decomposition into low-rank plus additive matrices for testing and ranking
existing algorithms for background/foreground separation. For this, we first
provide a preliminary review of the recent developments in the different
problem formulations which allows us to define a unified view that we called
Decomposition into Low-rank plus Additive Matrices (DLAM). Then, we examine
carefully each method in each robust subspace learning/tracking frameworks with
their decomposition, their loss functions, their optimization problem and their
solvers. Furthermore, we investigate if incremental algorithms and real-time
implementations can be achieved for background/foreground separation. Finally,
experimental results on a large-scale dataset called Background Models
Challenge (BMC 2012) show the comparative performance of 32 different robust
subspace learning/tracking methods. Comment: 121 pages, 5 figures, submitted
to Computer Science Review, November 201
Bayesian Identification of Fixations, Saccades, and Smooth Pursuits
Smooth pursuit eye movements provide meaningful insights and information on
a subject's behavior and health and may, in particular situations, disturb the
performance of typical fixation/saccade classification algorithms. Thus, an
automatic and efficient algorithm to identify these eye movements is paramount
for eye-tracking research involving dynamic stimuli. In this paper, we propose
the Bayesian Decision Theory Identification (I-BDT) algorithm, a novel
algorithm for ternary classification of eye movements that is able to reliably
separate fixations, saccades, and smooth pursuits in an online fashion, even
for low-resolution eye trackers. The proposed algorithm is evaluated on four
datasets with distinct mixtures of eye movements, including fixations,
saccades, as well as straight and circular smooth pursuits; data was collected
with a sample rate of 30 Hz from six subjects, totaling 24 evaluation datasets.
The algorithm exhibits high and consistent performance across all datasets and
movements relative to a manual annotation by a domain expert (recall: \mu =
91.42%, \sigma = 9.52%; precision: \mu = 95.60%, \sigma = 5.29%; specificity:
\mu = 95.41%, \sigma = 7.02%) and displays a significant improvement when
compared to I-VDT, a state-of-the-art algorithm (recall: \mu = 87.67%, \sigma
= 14.73%; precision: \mu = 89.57%, \sigma = 8.05%; specificity: \mu = 92.10%,
\sigma = 11.21%). For algorithm implementation and annotated datasets, please
contact the first author. Comment: 8 pages