Bayesian Methods in Tensor Analysis
Tensors, also known as multidimensional arrays, are useful data structures in
machine learning and statistics. In recent years, Bayesian methods have emerged
as a popular direction for analyzing tensor-valued data since they provide a
convenient way to introduce sparsity into the model and conduct uncertainty
quantification. In this article, we provide an overview of frequentist and
Bayesian methods for solving tensor completion and regression problems, with a
focus on Bayesian methods. We review common Bayesian tensor approaches
including model formulation, prior assignment, posterior computation, and
theoretical properties. We also discuss potential future directions in this
field.
Comment: 32 pages, 8 figures, 2 tables
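As a concrete illustration of the completion problem surveyed above, the sketch below fits a low-rank CP (PARAFAC) factorization to the observed cells of a partially observed three-way tensor by penalized gradient descent; the quadratic penalty corresponds to a MAP estimate under independent Gaussian priors on the factor matrices. The function names, rank, step size, and iteration count are illustrative assumptions, not taken from the paper.

import numpy as np

def cp_reconstruct(A, B, C):
    # [[A, B, C]]_{ijk} = sum_r A[i,r] * B[j,r] * C[k,r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def cp_complete(Y, mask, rank=3, lam=1e-2, lr=0.02, iters=3000, seed=0):
    """Fill in the unobserved cells of Y (mask == 1 marks observed entries)."""
    rng = np.random.default_rng(seed)
    I, J, K = Y.shape
    A = 0.1 * rng.standard_normal((I, rank))
    B = 0.1 * rng.standard_normal((J, rank))
    C = 0.1 * rng.standard_normal((K, rank))
    for _ in range(iters):
        R = mask * (cp_reconstruct(A, B, C) - Y)  # residual on observed cells only
        # gradients of 0.5*||R||^2 + 0.5*lam*(||A||^2 + ||B||^2 + ||C||^2)
        gA = np.einsum('ijk,jr,kr->ir', R, B, C) + lam * A
        gB = np.einsum('ijk,ir,kr->jr', R, A, C) + lam * B
        gC = np.einsum('ijk,ir,jr->kr', R, A, B) + lam * C
        A -= lr * gA; B -= lr * gB; C -= lr * gC
    return cp_reconstruct(A, B, C)

# toy check: rank-2 ground truth, roughly 30% of cells observed
rng = np.random.default_rng(1)
Y = cp_reconstruct(rng.standard_normal((8, 2)),
                   rng.standard_normal((9, 2)),
                   rng.standard_normal((10, 2)))
mask = (rng.random(Y.shape) < 0.3).astype(float)
Yhat = cp_complete(Y, mask, rank=2)

A fully Bayesian treatment of the kind the article reviews would replace the point estimate with posterior sampling over the factors, which also yields the uncertainty quantification mentioned above.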
Tensor Analysis and Fusion of Multimodal Brain Images
Current high-throughput data acquisition technologies probe dynamical systems
with different imaging modalities, generating massive data sets at different
spatial and temporal resolutions, posing challenging problems in multimodal data
fusion. A case in point is the attempt to parse out the brain structures and
networks that underpin human cognitive processes by analysis of different
neuroimaging modalities (functional MRI, EEG, NIRS etc.). We emphasize that the
multimodal, multi-scale nature of neuroimaging data is well reflected by a
multi-way (tensor) structure where the underlying processes can be summarized
by a relatively small number of components or "atoms". We introduce
Markov-Penrose diagrams, an integration of Bayesian DAG and tensor network
notation, to analyze these models. These diagrams not only clarify
matrix and tensor EEG and fMRI time/frequency analysis and inverse problems,
but also help understand multimodal fusion via Multiway Partial Least Squares
and Coupled Matrix-Tensor Factorization. We show here, for the first time, that
Granger causal analysis of brain networks is a tensor regression problem, thus
allowing the atomic decomposition of brain networks. Analysis of EEG and fMRI
recordings shows the potential of the methods and suggests their use in other
scientific domains.
Comment: 23 pages, 15 figures, submitted to Proceedings of the IEEE
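To make the fusion step concrete, Coupled Matrix-Tensor Factorization of the kind referenced above is commonly written as a joint least-squares objective in which the two modalities share one factor matrix. The sketch below is the standard generic formulation, not notation lifted from this paper; the assignment of \mathcal{X} to an EEG space x time x frequency tensor and Y to an fMRI matrix coupled through the spatial mode is an illustrative assumption.

\min_{A,B,C,D}\; \Bigl\| \mathcal{X} - \sum_{r=1}^{R} a_r \circ b_r \circ c_r \Bigr\|_F^2 \;+\; \Bigl\| Y - A D^{\top} \Bigr\|_F^2

Here \circ denotes the outer product, a_r, b_r, c_r are the columns of the CP factor matrices A, B, C, and D holds the loadings of the coupled matrix Y in the shared mode A. Multiway Partial Least Squares couples the modalities through a shared mode in an analogous way.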
Bayesian model selection for electromagnetic kaon production on the nucleon
We present the results of a Bayesian analysis of a Regge model to describe
the background contribution for K+ Lambda and K+ Sigma0 photoproduction. The
model is based on the exchange of K+(494) and K*+(892) trajectories in the
t-channel. We utilise the Bayesian evidence Z to determine the best model
variant for each channel. The Bayesian evidence integrals were calculated using
the Nested Sampling algorithm. For different prior widths, we find decisive
Bayesian evidence (\Delta ln Z ~ 24) for a K+ Lambda photoproduction Regge
model with a positive vector coupling and a negative tensor coupling constant
for the K*+(892) trajectory, and a rotating phase factor for both trajectories.
Using the chi^2 minimisation method, one could not draw this conclusion from
the same dataset. For the K+ Sigma0 photoproduction Regge model, on the other
hand, the difference between the evidence integrals is insufficient to pinpoint
one model variant.
Comment: 13 pages, 4 figures
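The role of the prior width in such evidence comparisons can be illustrated with a deliberately simple toy problem (not the Regge model itself): the evidence for a model with a free parameter is the likelihood averaged over that parameter's prior, so widening the prior dilutes Z even when the best fit is unchanged. Everything below, including the Gaussian toy model and the variable names, is an illustrative assumption.

import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(1)
y = rng.normal(0.8, 1.0, size=20)        # toy data with true mean 0.8

def log_like(mu):
    # Gaussian likelihood with known unit variance
    return stats.norm.logpdf(y, loc=mu, scale=1.0).sum()

# M0: mu fixed at 0, no free parameter, so the evidence is just the likelihood
lnZ0 = log_like(0.0)

# M1: mu free with a zero-mean Gaussian prior of width w:
#     Z1 = \int L(mu) N(mu | 0, w^2) dmu
for w in (1.0, 10.0, 100.0):
    integrand = lambda mu: np.exp(log_like(mu)) * stats.norm.pdf(mu, 0.0, w)
    # the likelihood is negligible outside [-10, 10], so a fixed range suffices
    Z1, _ = integrate.quad(integrand, -10.0, 10.0, limit=200)
    print(f"prior width {w:6.1f}:  Delta ln Z = {np.log(Z1) - lnZ0:+.2f}")

Because the likelihood mass sits near mu = 0.8, the free-parameter model is clearly favoured for a moderate prior width but pays a growing Occam penalty as w increases; checking that a verdict survives across prior widths, as the analysis above does, guards against exactly this sensitivity. Nested Sampling performs the same integral by stochastic exploration when quadrature is infeasible in higher dimensions.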
Bayesian factorizations of big sparse tensors
It has become routine to collect data that are structured as multiway arrays
(tensors). There is an enormous literature on low rank and sparse matrix
factorizations, but limited consideration of extensions to the tensor case in
statistics. The most common low rank tensor factorization relies on parallel
factor analysis (PARAFAC), which expresses a rank k tensor as a sum of k rank
one tensors. When observations are only available for a tiny subset of the
cells of a big tensor, the low rank assumption is not sufficient and PARAFAC
has poor performance. We induce an additional layer of dimension reduction by
allowing the effective rank to vary across dimensions of the table. For
concreteness, we focus on a contingency table application. Taking a Bayesian
approach, we place priors on terms in the factorization and develop an
efficient Gibbs sampler for posterior computation. Theory is provided showing
posterior concentration rates in high-dimensional settings, and the methods are
shown to have excellent performance in simulations and several real data
applications.
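A minimal sketch of the kind of Gibbs sampler described above, for the special case of a conditionally independent latent-class (PARAFAC) form of a three-way contingency table, pi[c1,c2,c3] = sum_h nu[h] * prod_j lam[j][h, c_j], with conjugate Dirichlet priors on all probability vectors. The rank H, the hyperparameters a and b, and all names are illustrative assumptions, not the paper's implementation (which additionally lets the effective rank vary across dimensions).

import numpy as np

def gibbs_parafac(X, H=5, a=1.0, b=1.0, n_iter=500, seed=0):
    """X: (n, 3) integer array, one categorical observation per row."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    d = [int(X[:, j].max()) + 1 for j in range(p)]
    nu = np.full(H, 1.0 / H)                                  # class weights
    lam = [np.full((H, d[j]), 1.0 / d[j]) for j in range(p)]  # per-mode class profiles
    for _ in range(n_iter):
        # 1. sample each observation's latent class z_i given nu and lam
        w = np.tile(nu, (n, 1))
        for j in range(p):
            w *= lam[j][:, X[:, j]].T
        w /= w.sum(axis=1, keepdims=True)
        z = (w.cumsum(axis=1) > rng.random((n, 1))).argmax(axis=1)
        # 2. conjugate Dirichlet update for the class weights
        nu = rng.dirichlet(a + np.bincount(z, minlength=H))
        # 3. conjugate Dirichlet update for each mode's class profiles
        for j in range(p):
            for h in range(H):
                counts = np.bincount(X[z == h, j], minlength=d[j])
                lam[j][h] = rng.dirichlet(b + counts)
    return nu, lam

# toy usage: three categorical variables driven by two latent classes
rng = np.random.default_rng(1)
z_true = rng.integers(0, 2, size=500)
probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])  # class-specific cell probabilities
X = np.stack([np.array([rng.choice(3, p=probs[c]) for c in z_true])
              for _ in range(3)], axis=1)
nu, lam = gibbs_parafac(X, H=2)

Every conditional is conjugate, which is what makes the Gibbs sweep efficient even when only a tiny fraction of the table's cells carry observations.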