The Matrix Ridge Approximation: Algorithms and Applications
We are concerned with an approximation problem for a symmetric positive
semidefinite matrix, motivated by a class of nonlinear machine learning
methods. We discuss an approximation approach that we call the matrix
ridge approximation. In particular, we define the matrix ridge approximation
as an incomplete matrix factorization plus a ridge term. Moreover, we present
probabilistic interpretations using a normal latent variable model and a
Wishart model for this approximation approach. The idea behind the latent
variable model in turn leads us to an efficient EM iterative method for
handling the matrix ridge approximation problem. Finally, we illustrate the
applications of the approximation approach in multivariate data analysis.
Empirical studies in spectral clustering and Gaussian process regression show
that the matrix ridge approximation with the EM iteration is potentially
useful.
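
The core object is the approximation S ≈ ZZ^T + δI_n for a symmetric positive semidefinite matrix S, with Z an n×q factor (q ≪ n) and δ > 0 the ridge term. Below is a minimal NumPy sketch of an EM-style fit, assuming the ridge term plays the role of the isotropic noise variance in a probabilistic-PCA-like latent variable model; the function name, rank q, and iteration count are illustrative, and the paper's own EM updates may differ in detail.

    import numpy as np

    def ridge_approx_em(S, q, n_iter=100, seed=0):
        # Sketch: fit S ≈ Z @ Z.T + delta * I via probabilistic-PCA-style EM updates
        # (hypothetical implementation; not the paper's exact iteration).
        n = S.shape[0]
        rng = np.random.default_rng(seed)
        Z = rng.standard_normal((n, q))        # initial n x q factor
        delta = np.trace(S) / n                # initial ridge term
        for _ in range(n_iter):
            M = Z.T @ Z + delta * np.eye(q)    # q x q auxiliary matrix
            Minv = np.linalg.inv(M)
            SZ = S @ Z
            Z = SZ @ np.linalg.inv(delta * np.eye(q) + Minv @ Z.T @ SZ)
            delta = np.trace(S - SZ @ Minv @ Z.T) / n
        return Z, delta

    # usage (hypothetical): Z, delta = ridge_approx_em(K, q=20)
    # error: np.linalg.norm(K - (Z @ Z.T + delta * np.eye(len(K))), 'fro')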
Bayesian Updating, Model Class Selection and Robust Stochastic Predictions of Structural Response
A fundamental issue when predicting structural response by using mathematical models is how to treat both modeling and excitation uncertainty. A general framework for this is presented which uses probability as a multi-valued
conditional logic for quantitative plausible reasoning in the presence of uncertainty due to incomplete information. The
fundamental probability models that represent the structure’s uncertain behavior are specified by the choice of a stochastic
system model class: a set of input-output probability models for the structure and a prior probability distribution over this set
that quantifies the relative plausibility of each model. A model class can be constructed from a parameterized deterministic
structural model by stochastic embedding utilizing Jaynes’ Principle of Maximum Information Entropy. Robust predictive
analyses use the entire model class with the probabilistic predictions of each model being weighted by its prior probability, or if
structural response data is available, by its posterior probability from Bayes’ Theorem for the model class. Additional robustness
to modeling uncertainty comes from combining the robust predictions of each model class in a set of competing candidates
weighted by the prior or posterior probability of the model class, the latter being computed from Bayes' Theorem. This higher-level application of Bayes' Theorem automatically applies a quantitative Ockham razor that penalizes the data-fit of more
complex model classes that extract more information from the data. Robust predictive analyses involve integrals over high-dimensional spaces that usually must be evaluated numerically. Published applications have used Laplace's method of
asymptotic approximation or Markov Chain Monte Carlo algorithms.
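
As a toy illustration of the higher-level use of Bayes' Theorem for model class selection, the sketch below converts log evidences log p(D | M_j) and prior plausibilities P(M_j) into posterior probabilities over candidate model classes; the numbers are invented for illustration, and the evidence computation itself (Laplace asymptotics or MCMC) is not shown.

    import numpy as np

    # Hypothetical log evidences log p(D | M_j) for three candidate model classes
    # and an equal prior over them; the evidence already embodies the Ockham
    # penalty against over-parameterized classes.
    log_evidence = np.array([-120.4, -118.9, -119.6])
    prior = np.array([1/3, 1/3, 1/3])

    log_post = log_evidence + np.log(prior)
    log_post -= log_post.max()                     # stabilize the exponentials
    posterior = np.exp(log_post) / np.exp(log_post).sum()
    print(posterior)   # weights for combining the robust predictions of each class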
Probabilistic Constraint Logic Programming
This paper addresses two central problems for probabilistic processing
models: parameter estimation from incomplete data and efficient retrieval of
most probable analyses. These questions have been answered satisfactorily only
for probabilistic regular and context-free models. We address these problems
for a more expressive probabilistic constraint logic programming model. We
present a log-linear probability model for probabilistic constraint logic
programming. On top of this model we define an algorithm to estimate the
parameters and to select the properties of log-linear models from incomplete
data. This algorithm is an extension of the improved iterative scaling
algorithm of Della-Pietra, Della-Pietra, and Lafferty (1995). Our algorithm
applies to log-linear models in general and is accompanied by suitable
approximation methods when applied to large data spaces. Furthermore, we
present an approach for searching for most probable analyses of the
probabilistic constraint logic programming model. This method can be applied to
the ambiguity resolution problem in natural language processing applications.
Comment: 35 pages, uses sfbart.cl
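
To make the log-linear model concrete, the sketch below scores a small set of candidate analyses under p(y | x) ∝ exp(λ · f(x, y)) and returns the most probable one; the feature values and weights are hypothetical, and the parameter-estimation step (the extended iterative scaling algorithm) is not reproduced here.

    import numpy as np

    def log_linear_probs(features, weights):
        # features: one row of feature values f(x, y) per candidate analysis y
        scores = features @ weights        # unnormalized log scores lambda . f(x, y)
        scores -= scores.max()             # numerical stability before exponentiating
        p = np.exp(scores)
        return p / p.sum()

    features = np.array([[1.0, 0.0, 2.0],  # hypothetical feature vectors
                         [0.0, 1.0, 1.0],
                         [1.0, 1.0, 0.0]])
    weights = np.array([0.5, -0.2, 0.8])   # hypothetical log-linear parameters lambda
    p = log_linear_probs(features, weights)
    best = int(np.argmax(p))               # index of the most probable analysis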
Bayesian Robust Tensor Factorization for Incomplete Multiway Data
We propose a generative model for robust tensor factorization in the presence
of both missing data and outliers. The objective is to explicitly infer the
underlying low-CP-rank tensor capturing the global information and a sparse
tensor capturing the local information (also considered as outliers), thus
providing the robust predictive distribution over missing entries. The
low-CP-rank tensor is modeled by multilinear interactions between multiple
latent factors on which the column sparsity is enforced by a hierarchical
prior, while the sparse tensor is modeled by a hierarchical view of the Student-t
distribution that associates an individual hyperparameter with each element
independently. For model learning, we develop an efficient closed-form
variational inference under a fully Bayesian treatment, which can effectively
prevent the overfitting problem and scales linearly with data size. In contrast
to existing related works, our method can perform model selection automatically
and implicitly without need of tuning parameters. More specifically, it can
discover the ground-truth CP rank and automatically adapt the sparsity-inducing
priors to various types of outliers. In addition, the tradeoff between
the low-rank approximation and the sparse representation can be optimized in
the sense of maximum model evidence. The extensive experiments and comparisons
with many state-of-the-art algorithms on both synthetic and real-world datasets
demonstrate the superiority of our method from several perspectives.
Comment: in IEEE Transactions on Neural Networks and Learning Systems, 201
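
The generative model can be illustrated by simulating from it: a low-CP-rank tensor built from latent factor matrices, plus a sparse outlier tensor and dense noise. The dimensions, CP rank, and outlier rate below are hypothetical, and the fully Bayesian variational inference of the factors is not shown.

    import numpy as np

    rng = np.random.default_rng(0)
    I, J, K, R = 20, 25, 30, 3                  # tensor dimensions and CP rank (illustrative)
    A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))

    # low-CP-rank part: sum_r a_r outer b_r outer c_r (global structure)
    low_rank = np.einsum('ir,jr,kr->ijk', A, B, C)

    # sparse outlier part: a few large entries, the rest exactly zero (local structure)
    sparse = np.zeros((I, J, K))
    mask = rng.random((I, J, K)) < 0.02
    sparse[mask] = 10.0 * rng.standard_normal(mask.sum())

    Y = low_rank + sparse + 0.1 * rng.standard_normal((I, J, K))   # observed tensor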