Tensor Numerical Methods in Quantum Chemistry: from Hartree-Fock Energy to Excited States
We review the recent successes of the grid-based tensor numerical methods and
discuss their prospects in real-space electronic structure calculations. These
methods, based on low-rank representations of multidimensional functions
and integral operators, have led to an entirely grid-based tensor-structured 3D
Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core
Hamiltonian and two-electron integrals (TEI) in $O(n\log n)$ complexity in the
univariate grid size $n$, using rank-structured approximations of the basis
functions, electron densities and convolution integral operators, all
represented on 3D Cartesian grids. The algorithm for calculating the TEI tensor
in the form of a Cholesky decomposition is based on multiple factorizations
using an algebraic 1D ``density fitting`` scheme. The basis functions are not
restricted to separable Gaussians, since analytical integration is substituted
by high-precision tensor-structured numerical quadratures. The tensor approaches to
post-Hartree-Fock calculations for the MP2 energy correction and for the
Bethe-Salpeter excited states, based on using low-rank factorizations and the
reduced basis method, were recently introduced. Another direction is related to
the recent attempts to develop a tensor-based Hartree-Fock numerical scheme for
finite lattice-structured systems, where one of the numerical challenges is the
summation of electrostatic potentials of a large number of nuclei. The 3D
grid-based tensor method for calculating a potential sum on an $L\times L\times L$
lattice manifests linear, $O(L)$, computational work instead of the usual
$O(L^3)$ scaling of the Ewald-type approaches.
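The storage savings behind such low-rank tensor methods can be made concrete with a minimal NumPy sketch (illustrative only, not the authors' solver; the grid size and Gaussian are arbitrary choices): a separable function on an $n\times n\times n$ Cartesian grid is an exact rank-1 canonical tensor, so $n^3$ values collapse to three length-$n$ factor vectors.

```python
import numpy as np

# Illustrative sketch: exp(-|x|^2) is separable, so on an n x n x n Cartesian
# grid it equals the outer product of three identical 1D factor vectors,
# cutting storage from n^3 entries to 3n.
n = 64
x = np.linspace(-5.0, 5.0, n)
g = np.exp(-x**2)                                  # 1D factor of length n

# Full 3D grid representation (what tensor methods avoid storing):
full = np.exp(-(x[:, None, None]**2 + x[None, :, None]**2 + x[None, None, :]**2))
# Rank-1 canonical representation (what they store instead):
rank1 = np.einsum('i,j,k->ijk', g, g, g)           # outer product of factors

assert np.allclose(full, rank1)
print(full.size, 3 * n)                            # 262144 entries vs 192
```

Non-separable functions are handled by approximating them with a small sum of such rank-1 terms, which is what makes the overall complexity scale with $n$ rather than $n^3$.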
Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets has highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike that of matrix methods, is guaranteed under very mild and natural
conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real world case studies illuminating the benefits of the tensor
framework, as efficient and promising tools for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
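The Tucker/HOSVD model mentioned in the abstract above can be sketched with plain NumPy (an illustrative toy, not the paper's code; the tensor sizes and multilinear ranks are arbitrary choices): each mode unfolding is factored by an SVD, and the resulting orthonormal factor matrices compress the tensor to a small core.

```python
import numpy as np

def unfold(X, mode):
    """Matricize X along `mode`: rows index that mode, columns the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd(X, ranks):
    """Truncated higher-order SVD: per-mode SVD factors plus a core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        factors.append(U[:, :r])
    G = X
    for mode, U in enumerate(factors):   # contract U^T along each mode
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, mode, 0), axes=1), 0, mode)
    return G, factors

# Build a tensor of exact multilinear rank (2, 2, 2) and recover it exactly.
rng = np.random.default_rng(0)
core = rng.standard_normal((2, 2, 2))
A, B, C = (rng.standard_normal((8, 2)) for _ in range(3))
X = np.einsum('abc,ia,jb,kc->ijk', core, A, B, C)

G, (U1, U2, U3) = hosvd(X, (2, 2, 2))
Xhat = np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)
assert np.allclose(X, Xhat)            # exact at the true multilinear rank
```

When the truncation ranks match the tensor's true multilinear rank, as here, the reconstruction is exact; in general HOSVD gives a quasi-optimal low-rank approximation.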
Efficient Quantum Transforms
Quantum mechanics requires the operation of quantum computers to be unitary,
and thus makes it important to have general techniques for developing fast
quantum algorithms for computing unitary transforms. A quantum routine for
computing a generalized Kronecker product is given. Applications include
re-development of the networks for computing the Walsh-Hadamard and the quantum
Fourier transform. New networks for two wavelet transforms are given. Quantum
computation of Fourier transforms for non-Abelian groups is defined. A slightly
relaxed definition is shown to simplify the analysis and the networks that
compute the transforms. Efficient networks for computing such transforms for a
class of metacyclic groups are introduced. A novel network for computing a
Fourier transform for a group used in quantum error-correction is also given.
Comment: 30 pages, LaTeX2e, 7 figures included
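The Kronecker-product structure the abstract above exploits can be illustrated numerically (a hedged sketch, not the paper's quantum networks; qubit count and test state are arbitrary): the $n$-qubit Walsh-Hadamard transform is the $n$-fold Kronecker product of the single-qubit Hadamard gate, which is why a generic Kronecker-product routine immediately yields its circuit.

```python
import numpy as np

# Single-qubit Hadamard gate.
H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

def walsh_hadamard(n_qubits):
    """n-fold Kronecker product of H1: the n-qubit Walsh-Hadamard transform."""
    H = np.array([[1.0]])
    for _ in range(n_qubits):
        H = np.kron(H, H1)
    return H

W = walsh_hadamard(3)                              # 8 x 8 matrix
assert np.allclose(W @ W.conj().T, np.eye(8))      # unitary, as QM requires

# Applied to |000>, it yields the uniform superposition over all basis states.
state = np.zeros(8); state[0] = 1.0
assert np.allclose(W @ state, np.full(8, 1 / np.sqrt(8)))
```

The same factorized structure is what lets a quantum computer apply the transform with one gate per qubit, rather than a dense $2^n \times 2^n$ matrix multiply.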
Filters and Matrix Factorization
We give a number of explicit matrix algorithms for analysis/synthesis
in multi-phase filtering, i.e., the operation on discrete-time signals which
allows a separation into frequency-band components, one for each of N band
ranges, starting with low-pass, followed by the corresponding filtering in
the other band ranges. If there are N bands, the individual filters are
combined into a single matrix action: a representation of the combined
operation on all N bands by an N x N matrix whose entries are periodic
functions, or their extensions to functions of a complex variable. Hence our setting entails a fixed N x N
matrix over a prescribed algebra of functions of a complex variable. In the
case of polynomial filters, the factorizations will always be finite. A novelty
here is that we allow for a wide family of non-polynomial filter-banks.
Working modulo N in the time domain, our approach also allows for
a natural matrix-representation of both down-sampling and up-sampling.
The implementation merges the combined operation of input, filtering,
down-sampling, transmission, up-sampling, an action by dual filters,
and synthesis into a single matrix operation. Hence our matrix factorizations
break down the global filtering process into elementary steps.
To accomplish this, we offer a number of adapted matrix factorization algorithms,
such that each factor in our product representation implements
in a succession of steps the filtering across pairs of frequency-bands; and so
it is of practical significance in implementing signal processing, including
filtering of digitized images. Our matrix factorizations are especially useful
in the case of processing a fixed but large number of bands.
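The N x N matrix viewpoint described above can be sketched for the simplest case, N = 2 with Haar filters (an illustrative choice, not the paper's filter banks): the polyphase matrix is constant, so filtering plus down-sampling becomes one 2x2 matrix acting on the even/odd signal phases, and synthesis is the inverse matrix.

```python
import numpy as np

# Polyphase (analysis) matrix of the two-band Haar filter bank.
E = np.array([[1.0,  1.0],                 # low-pass row
              [1.0, -1.0]]) / np.sqrt(2.0) # high-pass row
R = E.T                                    # E is orthogonal: synthesis matrix

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])        # even-length test signal
phases = np.stack([x[0::2], x[1::2]])               # 2 x (len(x)/2) phases

subbands = E @ phases        # analysis: down-sampled low/high band signals
rec_phases = R @ subbands    # dual-filter action and synthesis
x_rec = np.empty_like(x)
x_rec[0::2], x_rec[1::2] = rec_phases               # up-sample by interleaving

assert np.allclose(x, x_rec)  # perfect reconstruction via matrix factorization
```

For larger N or longer filters the entries of E become periodic functions (polynomials in the delay variable) rather than scalars, which is exactly the setting the factorization algorithms address.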
Sample Complexity of Dictionary Learning and other Matrix Factorizations
Many modern tools in machine learning and signal processing, such as sparse
dictionary learning, principal component analysis (PCA), non-negative matrix
factorization (NMF), k-means clustering, etc., rely on the factorization of a
matrix obtained by concatenating high-dimensional vectors from a training
collection. While the idealized task would be to optimize the expected quality
of the factors over the underlying distribution of training vectors, it is
achieved in practice by minimizing an empirical average over the considered
collection. The focus of this paper is to provide sample complexity estimates
to uniformly control how much the empirical average deviates from the expected
cost function. Standard arguments imply that the performance of the empirical
predictor also exhibits such guarantees. The level of genericity of the approach
encompasses several possible constraints on the factors (tensor product
structure, shift-invariance, sparsity \ldots), thus providing a unified
perspective on the sample complexity of several widely used matrix
factorization schemes. The derived generalization bounds behave proportionally to
$\sqrt{\log(n)/n}$ w.r.t.\ the number of samples $n$ for the considered matrix
factorization techniques.
Comment: to appear
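The gap the abstract controls, between empirical average and expected cost for a fixed factorization, can be illustrated with a small Monte Carlo experiment (a hypothetical example, not from the paper; the Gaussian data, fixed k-means centroids, and sample sizes are all arbitrary choices): the deviation shrinks as the sample grows, at roughly the $1/\sqrt{n}$ rate behind such bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
centroids = np.array([[-1.0, 0.0], [1.0, 0.0]])    # a fixed "factorization"

def cost(X):
    # k-means cost of X under the fixed centroids: empirical average of the
    # minimum squared distance to a centroid.
    d2 = ((X[:, None, :] - centroids[None, :, :])**2).sum(-1)
    return d2.min(axis=1).mean()

# Monte Carlo proxy for the expected cost under the data distribution.
expected = cost(rng.standard_normal((1_000_000, 2)))

def mean_abs_deviation(n, trials=200):
    """Average |empirical cost - expected cost| over repeated n-sample draws."""
    return np.mean([abs(cost(rng.standard_normal((n, 2))) - expected)
                    for _ in range(trials)])

small, large = mean_abs_deviation(100), mean_abs_deviation(10_000)
assert large < small          # deviation shrinks as the sample grows
print(small / large)          # close to sqrt(10000/100) = 10 in expectation
```

The paper's contribution is to make such deviations uniform over all admissible factorizations at once, which is what turns the pointwise rate into a sample complexity guarantee for the learned factors.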