Randomized Dynamic Mode Decomposition
This paper presents a randomized algorithm for computing the near-optimal
low-rank dynamic mode decomposition (DMD). Randomized algorithms are emerging
techniques to compute low-rank matrix approximations at a fraction of the cost
of deterministic algorithms, easing the computational challenges arising in the
area of 'big data'. The idea is to derive a small matrix from the
high-dimensional data, which is then used to efficiently compute the dynamic
modes and eigenvalues. The algorithm is presented in a modular probabilistic
framework, and the approximation quality can be controlled via oversampling and
power iterations. The effectiveness of the resulting randomized DMD algorithm
is demonstrated on several benchmark examples of increasing complexity,
providing an accurate and efficient approach to extract spatiotemporal coherent
structures from big data in a framework that scales with the intrinsic rank of
the data, rather than the ambient measurement dimension. For this work we
assume that the dynamics of the problem under consideration evolve on a
low-dimensional subspace that is well characterized by a rapidly decaying
singular value spectrum.
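The core idea lends itself to a short sketch. Below is a minimal NumPy illustration of a randomized DMD in the spirit of this abstract; the function name, defaults (oversampling of 10, two power iterations), and the snapshot-pair interface are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def randomized_dmd(X, Y, rank, oversample=10, n_power=2, seed=0):
    """Hedged sketch of a randomized DMD for snapshot pairs (X, Y).

    X, Y : (m, n) arrays of paired snapshots (columns of Y are the
           columns of X advanced one step in time).
    rank : target rank r of the decomposition.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    l = rank + oversample                       # sketch size

    # Derive a small matrix: sample the range of X with a random test
    # matrix, optionally sharpened by power iterations (for brevity,
    # re-orthonormalization between iterations is omitted).
    Z = X @ rng.standard_normal((n, l))
    for _ in range(n_power):
        Z = X @ (X.T @ Z)
    Q, _ = np.linalg.qr(Z)                      # orthonormal basis, m x l

    # Run the deterministic DMD steps on the compressed snapshots.
    Xs, Ys = Q.T @ X, Q.T @ Y
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    A_tilde = U.T @ Ys @ Vt.T / s               # r x r reduced operator
    eigvals, W = np.linalg.eig(A_tilde)         # DMD eigenvalues

    # Lift the dynamic modes back to the ambient dimension.
    modes = Q @ ((Ys @ Vt.T / s) @ W)
    return eigvals, modes
```

The cost of every step after the sketch depends on l = r + p rather than on the measurement dimension m, which is the sense in which the method scales with the intrinsic rank of the data.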
Decay properties of spectral projectors with applications to electronic structure
Motivated by applications in quantum chemistry and solid state physics, we
apply general results from approximation theory and matrix analysis to the
study of the decay properties of spectral projectors associated with large and
sparse Hermitian matrices. Our theory leads to a rigorous proof of the
exponential off-diagonal decay ("nearsightedness") for the density matrix of
gapped systems at zero electronic temperature in both orthogonal and
non-orthogonal representations, thus providing a firm theoretical basis for the
possibility of linear scaling methods in electronic structure calculations for
non-metallic systems. We further discuss the case of density matrices for
metallic systems at positive electronic temperature. A few other possible
applications are also discussed.
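The claimed exponential off-diagonal decay is easy to observe numerically. The following self-contained NumPy demo is an illustrative assumption, not taken from the paper: it builds a gapped dimerized tight-binding chain, forms the zero-temperature density matrix as a spectral projector onto the occupied states, and prints how fast its entries fall off away from the diagonal.

```python
import numpy as np

# Hypothetical gapped system: a dimerized 1D tight-binding chain with
# alternating hoppings t1 != t2, which has a spectral gap at half filling.
n = 200
t1, t2 = 1.0, 0.5
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = t1 if i % 2 == 0 else t2

# Zero-temperature density matrix = spectral projector onto the occupied
# (lowest n/2) eigenstates.
w, V = np.linalg.eigh(H)
occ = V[:, : n // 2]
P = occ @ occ.T

# "Nearsightedness": |P_ij| should fall off exponentially in |i - j|.
i0 = n // 2
for d in (1, 5, 10, 20, 40):
    print(f"|P[{i0},{i0 + d}]| = {abs(P[i0, i0 + d]):.2e}")
```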
CECM: A continuous empirical cubature method with application to the dimensional hyperreduction of parameterized finite element models
We present the Continuous Empirical Cubature Method (CECM), a novel algorithm
for empirically devising efficient integration rules. The CECM aims to improve
existing cubature methods by producing rules that are close to optimal,
featuring far fewer points than the number of functions to integrate.
The CECM consists of a two-stage strategy. First, a point-selection strategy
is applied to obtain an initial approximation to the cubature rule,
featuring as many points as functions to integrate. The second stage consists
of a sparsification strategy in which, alongside the indices and corresponding
weights, the spatial coordinates of the points are also treated as design
variables. The positions of the initially selected points are adjusted so as
to drive their associated weights to zero, and in this way the minimum number
of points is achieved.
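As a rough illustration of the first stage, a discrete point-selection step can be phrased as a nonnegative least-squares fit of the weights, which naturally returns about as many points as there are functions. The sketch below (using SciPy's nnls; the helper name and tolerance are assumptions, and the continuous second stage, which needs gradients of the integrand basis with respect to the point coordinates, is omitted) is in the spirit of the paper's stage one rather than its exact algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def initial_cubature(J, b):
    """Stage-one style discrete selection (hypothetical helper).

    J : (n_funcs, n_candidates) array, J[k, g] = f_k(x_g), the k-th
        function evaluated at the g-th candidate point.
    b : (n_funcs,) array of exact integrals of the f_k.

    Nonnegative least squares returns a weight vector with at most
    n_funcs nonzeros, so the points with w > 0 form an initial rule
    with (about) as many points as functions to integrate.
    """
    w, residual = nnls(J, b)
    selected = np.flatnonzero(w > 1e-12)
    return selected, w[selected], residual

# Example: integrate the monomials 1, x, x^2, x^3 over [0, 1] using a
# grid of 200 candidate points.
x = np.linspace(0.0, 1.0, 200)
J = np.vstack([x**k for k in range(4)])
b = np.array([1.0 / (k + 1) for k in range(4)])
pts, w, res = initial_cubature(J, b)
print(len(pts), "points selected, residual", res)
```

The continuous second stage would then treat the coordinates of these selected points as design variables and move them so that weights can be driven to zero, which is what distinguishes the CECM from purely discrete cubature selection.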
Although originally conceived within the framework of hyper-reduced order
models (HROMs), we present the method's formulation in terms of generic
vector-valued functions, thereby accentuating its versatility across various
problem domains. To demonstrate the extensive applicability of the method, we
conduct numerical validations using univariate and multivariate Lagrange
polynomials. In these cases, we show the method's capacity to recover the
optimal Gaussian quadrature rule. We also assess the method on an arbitrary
exponential-sinusoidal function in a 3D domain, and finally consider an example
of the application of the method to the hyperreduction of a multiscale finite
element model, showcasing notable computational performance gains.
A secondary contribution of the current paper is the Sequential Randomized
SVD (SRSVD) approach for computing the Singular Value Decomposition (SVD) in a
column-partitioned format. The SRSVD is particularly advantageous when matrix
sizes approach memory limitations.
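A minimal sketch of the column-partitioned idea, assuming a two-pass randomized scheme over a list of column blocks (the paper's exact SRSVD update may differ): only one block and a small sketch matrix need to be in memory at a time.

```python
import numpy as np

def sequential_randomized_svd(blocks, rank, oversample=10, seed=0):
    """Two-pass randomized SVD of A = [A_1 | A_2 | ...] processed one
    column block at a time, so the full matrix is never held in memory
    (in practice the blocks would be streamed from disk).
    """
    rng = np.random.default_rng(seed)
    l = rank + oversample
    m = blocks[0].shape[0]

    # Pass 1: accumulate the range sketch Y = A @ Omega block by block.
    Y = np.zeros((m, l))
    for A_i in blocks:
        Y += A_i @ rng.standard_normal((A_i.shape[1], l))
    Q, _ = np.linalg.qr(Y)                        # basis for range(A)

    # Pass 2: project each block onto the basis and SVD the small result.
    B = np.hstack([Q.T @ A_i for A_i in blocks])  # l x n, n = total cols
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :rank], s[:rank], Vt[:rank]
```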
Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond
In this and a set of companion whitepapers, the USQCD Collaboration lays out
a program of science and computing for lattice gauge theory. These whitepapers
describe how calculations using lattice QCD (and other gauge theories) can aid
the interpretation of ongoing and upcoming experiments in particle and nuclear
physics, as well as inspire new ones.
O(N) methods in electronic structure calculations
Linear scaling methods, or O(N) methods, have computational and memory
requirements which scale linearly with the number of atoms in the system, N, in
contrast to standard approaches which scale with the cube of the number of
atoms. These methods, which rely on the short-ranged nature of electronic
structure, will allow accurate, ab initio simulations of systems of
unprecedented size. The theory behind the locality of electronic structure is
described and related to physical properties of systems to be modelled, along
with a survey of recent developments in real-space methods which are important
for efficient use of high performance computers. The linear scaling methods
proposed to date can be divided into seven different areas, and the
applicability, efficiency and advantages of the methods proposed in these areas
are then discussed. The applications of linear scaling methods, as well as the
implementations available as computer programs, are considered. Finally, the
prospects for and the challenges facing linear scaling methods are discussed.
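The locality argument behind these methods admits a small illustration: once the density matrix decays exponentially, truncating it to a fixed bandwidth W gives O(N·W), i.e. linear, storage and matrix-vector cost. The sketch below is a hedged illustration of this scaling argument only, not of any specific O(N) method from the survey.

```python
import numpy as np
from scipy.sparse import diags

def truncate_to_band(P, W):
    """Keep only entries of the density matrix P within W of the
    diagonal.  Storage drops from O(N^2) dense to O(N * W) sparse, so
    for a fixed truncation width (justified by the exponential decay
    of P in gapped systems) memory scales linearly with N."""
    offsets = list(range(-W, W + 1))
    return diags([np.diagonal(P, k).copy() for k in offsets], offsets)

# Toy matrix with exponential off-diagonal decay, mimicking a localized
# density matrix (illustrative only).
N, W = 1000, 20
idx = np.arange(N)
P = np.exp(-0.5 * np.abs(np.subtract.outer(idx, idx)))
P_band = truncate_to_band(P, W)
print(f"dense entries: {P.size}, banded entries: {P_band.nnz}")
```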