A matrix product algorithm for stochastic dynamics on networks, applied to non-equilibrium Glauber dynamics
We introduce and apply a novel, efficient method for the precise simulation of
stochastic dynamical processes on locally tree-like graphs. Networks with
cycles are treated in the framework of the cavity method. Such models
correspond, for example, to spin-glass systems, Boolean networks, neural
networks, or other technological, biological, and social networks. Building
upon ideas from quantum many-body theory, the new approach is based on a matrix
product approximation of the so-called edge messages -- conditional
probabilities of vertex variable trajectories. Computation costs and accuracy
can be tuned by controlling the matrix dimensions of the matrix product edge
messages (MPEM) in truncations. In contrast to Monte Carlo simulations, the
algorithm exhibits better error scaling and works both for single instances
and in the thermodynamic limit. We employ it to examine prototypical
non-equilibrium Glauber dynamics in the kinetic Ising model. Because of the
absence of cancellation effects, observables with small expectation values can
be evaluated accurately, allowing for the study of decay processes and temporal
correlations.
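The truncation step at the heart of the MPEM approach can be illustrated with a minimal sketch: compressing a matrix by keeping only its dominant singular components, with the kept dimension trading accuracy against cost. The names (`compress`, `chi`) and the toy data are illustrative assumptions, not the paper's actual message representation:

```python
import numpy as np

def compress(message, chi):
    """Truncate a matrix to rank `chi` via SVD, as MPEM truncations
    bound the matrix dimensions of the edge messages."""
    U, s, Vt = np.linalg.svd(message, full_matrices=False)
    return U[:, :chi] * s[:chi] @ Vt[:chi, :]

rng = np.random.default_rng(0)
# A low-rank "message" plus tiny noise: truncation should lose little.
M = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))
M += 1e-8 * rng.standard_normal((64, 64))
M_approx = compress(M, chi=4)
err = np.linalg.norm(M - M_approx) / np.linalg.norm(M)
```

Raising `chi` tightens the approximation at higher cost, which is the tuning knob the abstract describes.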
A Distributed and Incremental SVD Algorithm for Agglomerative Data Analysis on Large Networks
In this paper, we show that the SVD of a matrix can be constructed
efficiently via a hierarchical approach. Our algorithm is proven to recover the
singular values and left singular vectors if the rank of the input matrix
is known. Further, the hierarchical algorithm can be used to recover the
largest singular values and left singular vectors with bounded error. We also
show that the proposed method is stable with respect to roundoff errors or
corruption of the original matrix entries. Numerical experiments validate the
proposed algorithms and the parallel cost analysis.
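The merge step of such a hierarchical scheme can be sketched as follows. For a column split A = [A1 A2], one has A Aᵀ = U1 S1² U1ᵀ + U2 S2² U2ᵀ = B Bᵀ with the small proxy B = [U1 S1, U2 S2], so the singular values and left singular vectors of A can be recovered from the block SVDs alone. Function names here are illustrative, not the paper's:

```python
import numpy as np

def merge_svd(U1, s1, U2, s2, rank):
    """Combine SVDs of two column blocks [A1 A2] into the singular
    values and left singular vectors of the full matrix."""
    B = np.hstack([U1 * s1, U2 * s2])   # small proxy with the same A A^T
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    return U[:, :rank], s[:rank]

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank 3
A1, A2 = A[:, :20], A[:, 20:]
U1, s1, _ = np.linalg.svd(A1, full_matrices=False)
U2, s2, _ = np.linalg.svd(A2, full_matrices=False)
U, s = merge_svd(U1, s1, U2, s2, rank=3)
s_direct = np.linalg.svd(A, compute_uv=False)[:3]
```

Applying the merge recursively over a tree of blocks gives the distributed, incremental construction; note that only the left factors and singular values are recovered, matching the abstract's claim.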
Online Matrix Completion Through Nuclear Norm Regularisation
The main goal of this paper is to propose a novel method for performing
matrix completion online. Motivated by a wide variety of applications, ranging from
the design of recommender systems to sensor network localization through
seismic data reconstruction, we consider the matrix completion problem when
entries of the matrix of interest are observed gradually. Precisely, we place
ourselves in the situation where the predictive rule should be refined
incrementally, rather than recomputed from scratch each time the sample of
observed entries increases. The extension of existing matrix completion methods
to the sequential prediction context is indeed a major issue in the Big Data
era, and yet little addressed in the literature. The algorithm promoted in this
article builds upon the Soft Impute approach introduced in Mazumder et al.
(2010). The major novelty essentially arises from the use of a randomised
technique for both computing and updating the Singular Value Decomposition
(SVD) involved in the algorithm. Though disarmingly simple, the proposed
method turns out to be very efficient while requiring reduced computation.
Several numerical experiments based on real datasets illustrating its
performance are displayed, together with preliminary results giving it a
theoretical basis.
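The batch Soft-Impute iteration the article builds on can be sketched as: fill the missing entries with the current estimate, compute an SVD, and soft-threshold the singular values. This is a minimal sketch of the Mazumder et al. (2010) loop with an exact SVD; the article's online variant swaps in a randomised, incrementally updated SVD, and `lam` and `n_iter` are illustrative parameters:

```python
import numpy as np

def soft_impute(X, mask, lam, n_iter=200):
    """Soft-Impute: alternate imputation of missing entries with
    soft-thresholding of the singular value spectrum."""
    Z = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        filled = np.where(mask, X, Z)        # keep observed, impute the rest
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)          # soft-threshold the spectrum
        Z = (U * s) @ Vt
    return Z

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 30))  # rank 4
mask = rng.random(A.shape) < 0.7                                 # ~70% observed
Z = soft_impute(A, mask, lam=0.5)
rel_err = np.linalg.norm((Z - A)[~mask]) / np.linalg.norm(A[~mask])
```

The nuclear-norm penalty `lam` controls the rank of the recovered matrix; the cost of the exact SVD at each step is what motivates the randomised replacement in the sequential setting.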
An Efficient, Memory-Saving Approach for the Loewner Framework
The Loewner framework is one of the most successful data-driven model order
reduction techniques. If N is the cardinality of a given data set, the
so-called Loewner and shifted Loewner matrices L and L_s can be defined by
solely relying on information encoded in the considered data set, and they
play a crucial role in the computation of the sought rational model
approximation. In particular, the singular value decomposition of a linear
combination of L and L_s provides the tools needed to construct accurate
models which fulfill important approximation properties with respect to the
original data set. However, for highly-sampled data sets, the dense nature of
L and L_s leads to numerical difficulties, namely the failure to allocate
these matrices in certain memory-limited environments or excessive
computational costs. Even though they do not possess any sparsity pattern,
the Loewner and shifted Loewner matrices are extremely structured and, in
this paper, we show how to fully exploit their Cauchy-like structure to
reduce the cost of computing accurate rational models while avoiding the
explicit allocation of L and L_s. In particular, the use of the
hierarchically semiseparable format allows us to remarkably lower both the
computational cost and the memory requirements of the Loewner framework,
obtaining a novel scheme whose costs scale with [Formula: see text].
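A minimal sketch of the dense construction the paper avoids: given left data (mu_i, v_i) and right data (lam_j, w_j), the standard Loewner entries are L[i,j] = (v_i - w_j)/(mu_i - lam_j) and Ls[i,j] = (mu_i v_i - lam_j w_j)/(mu_i - lam_j), and the singular value decay of L reveals the order of the underlying rational model. The toy transfer function and point sets below are illustrative assumptions:

```python
import numpy as np

def loewner_matrices(mu, v, lam, w):
    """Dense Loewner L and shifted Loewner Ls from left data (mu, v)
    and right data (lam, w); both are Cauchy-like in mu and lam."""
    D = mu[:, None] - lam[None, :]
    L = (v[:, None] - w[None, :]) / D
    Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / D
    return L, Ls

H = lambda s: 1.0 / (s + 1.0) + 2.0 / (s + 3.0)   # order-2 rational function
mu = np.linspace(0.1, 1.0, 10)                    # left interpolation points
lam = np.linspace(1.1, 2.0, 10)                   # right interpolation points
L, Ls = loewner_matrices(mu, H(mu), lam, H(lam))
s = np.linalg.svd(L, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-10 * s[0]))    # matches the model order
```

For N samples this dense construction costs O(N^2) storage, which is exactly the bottleneck the hierarchically semiseparable representation removes.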
A literature survey of low-rank tensor approximation techniques
During the last years, low-rank tensor approximation has been established as
a new tool in scientific computing to address large-scale linear and
multilinear algebra problems, which would be intractable by classical
techniques. This survey attempts to give a literature overview of current
developments in this area, with an emphasis on function-related tensors
- …
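One of the basic formats such a survey covers can be shown in a minimal sketch: the truncated higher-order SVD (HOSVD), which compresses a tensor by taking an SVD of each mode unfolding and contracting the tensor with the resulting factor matrices. The ranks and the synthetic low-rank test tensor are illustrative assumptions:

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated HOSVD: the SVD of each mode unfolding yields a factor
    matrix; the core is T contracted with the factors' transposes."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        moved = np.moveaxis(core, mode, 0)
        core = np.moveaxis(np.tensordot(U.T, moved, axes=1), 0, mode)
    return core, factors

rng = np.random.default_rng(3)
# Exactly low multilinear-rank tensor in Tucker form, rank (2, 2, 2).
G = rng.standard_normal((2, 2, 2))
Us = [rng.standard_normal((8, 2)) for _ in range(3)]
T = np.einsum('abc,ia,jb,kc->ijk', G, *Us)
core, factors = hosvd(T, ranks=(2, 2, 2))
T_hat = np.einsum('abc,ia,jb,kc->ijk', core, *factors)
err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

Storing the small core plus three tall factors instead of the full tensor is the storage saving that makes such low-rank formats attractive for large-scale problems.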