Sparse tensor product wavelet approximation of singular functions
On product domains, sparse-grid approximation yields optimal, dimension-independent convergence rates when the function being approximated has L^2-bounded mixed derivatives of a sufficiently high order. We show that the solution of Poisson's equation on the n-dimensional hypercube with Dirichlet boundary conditions and smooth right-hand side generally does not satisfy this condition. As suggested by P.-A. Nitsche in [Constr. Approx., 21(1) (2005), pp. 63--81], the regularity conditions can be relaxed to corresponding ones in weighted L^2 spaces when the sparse-grid approach is combined with local refinement of the set of one-dimensional wavelet indices towards the end points. In this paper, we prove that for general smooth right-hand sides, the solution of Poisson's problem satisfies these relaxed regularity conditions in any space dimension. Furthermore, since we remove log-factors from the energy-error estimates of Nitsche's work, we show that in any space dimension, locally refined sparse-grid approximation yields the optimal, dimension-independent convergence rate.
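To make the sparse-grid construction concrete, here is a minimal Python sketch, with names and the level budget chosen by us for illustration, of the standard sparse level-index set and of how many tensor-product wavelets it activates compared with a full grid; the locally refined sets of the paper would additionally enlarge the one-dimensional index sets towards the end points.

```python
from itertools import product

def sparse_levels(dim, L):
    """Sparse-grid (hyperbolic-cross) level set: all multi-indices
    l = (l_1, ..., l_dim) of 1D wavelet levels with l_1 + ... + l_dim <= L.
    A full tensor-product grid would instead take max_i l_i <= L."""
    return [l for l in product(range(L + 1), repeat=dim) if sum(l) <= L]

def count_wavelets(levels):
    # Each 1D level l_i carries ~2^{l_i} wavelets, so a level block l
    # contributes ~2^{|l|_1} tensor-product basis functions.
    return sum(2 ** sum(l) for l in levels)

for d in (2, 3, 4):
    print(f"dim={d}: sparse ~{count_wavelets(sparse_levels(d, 8))}, "
          f"full tensor product ~{(2 ** 8) ** d}")
```

The point of the count is the familiar one: the sparse set grows like 2^L times a power of L, essentially independently of the dimension, whereas the full grid grows like 2^{dim*L}.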
Adaptive Low-Rank Methods for Problems on Sobolev Spaces with Error Control in $L_2$
Low-rank tensor methods for the approximate solution of second-order elliptic
partial differential equations in high dimensions have recently attracted
significant attention. A critical issue is to rigorously bound the error of
such approximations, not with respect to a fixed finite dimensional discrete
background problem, but with respect to the exact solution of the continuous
problem. While the energy norm offers a natural error measure corresponding to
the underlying operator considered as an isomorphism from the energy space onto
its dual, this norm requires a careful treatment in its interplay with the
tensor structure of the problem. In this paper we build on our previous work on
energy norm-convergent subspace-based tensor schemes, contriving, however, a
modified formulation which now enforces convergence only in $L_2$. In order to
still be able to exploit the mapping properties of elliptic operators, a
crucial ingredient of our approach is the development and analysis of a
suitable asymmetric preconditioning scheme. We provide estimates for the
computational complexity of the resulting method in terms of the solution error
and study the practical performance of the scheme in numerical experiments. In
both regards, we find that controlling solution errors in this weaker norm
leads to substantial simplifications and to a reduction of the actual numerical
work required for a given error tolerance.
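A schematic matrix-case illustration of the simplification that an $L_2$ error target affords: in the $\ell_2$/Frobenius setting, the truncation rank of a low-rank approximation can be read off directly from the singular values, with no operator-dependent norm entering the decision. The sketch below is ours, not the paper's scheme.

```python
import numpy as np

def truncate_to_tolerance(A, tol):
    """Lowest-rank A_r with ||A - A_r||_F <= tol, read off the SVD tail.
    With an l2 target, no operator-dependent (energy) norm enters the
    truncation decision -- the simplification discussed above."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # errs[r] = ||A - A_r||_F for the rank-r truncation; errs[-1] = 0.
    errs = np.append(np.sqrt(np.cumsum(s[::-1] ** 2))[::-1], 0.0)
    r = int(np.argmax(errs <= tol))
    return (U[:, :r] * s[:r]) @ Vt[:r], r

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 12)) @ rng.standard_normal((12, 200))
A_r, r = truncate_to_tolerance(A, tol=1e-8)
print(r, np.linalg.norm(A - A_r))   # rank <= 12, error within tolerance
```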
A literature survey of low-rank tensor approximation techniques
During the last years, low-rank tensor approximation has been established as
a new tool in scientific computing to address large-scale linear and
multilinear algebra problems, which would be intractable by classical
techniques. This survey attempts to give a literature overview of current
developments in this area, with an emphasis on function-related tensors.
Tensor Numerical Methods in Quantum Chemistry: from Hartree-Fock Energy to Excited States
We review the recent successes of grid-based tensor numerical methods and
discuss their prospects in real-space electronic structure calculations. These
methods, based on the low-rank representation of the multidimensional functions
and integral operators, have led to an entirely grid-based tensor-structured 3D
Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core
Hamiltonian and two-electron integrals (TEI) in $O(n \log n)$ complexity using
the rank-structured approximation of basis functions, electron densities and
convolution integral operators all represented on 3D
Cartesian grids. The algorithm for calculating the TEI tensor in the form of a
Cholesky decomposition is based on multiple factorizations using an algebraic 1D
"density fitting" scheme. The basis functions are not restricted to separable
Gaussians, since the analytical integration is replaced by high-precision
tensor-structured numerical quadratures. The tensor approaches to
post-Hartree-Fock calculations for the MP2 energy correction and for the
Bethe-Salpeter excited states, based on low-rank factorizations and the
reduced basis method, were recently introduced. Another direction is related to
the recent attempts to develop a tensor-based Hartree-Fock numerical scheme for
finite lattice-structured systems, where one of the numerical challenges is the
summation of electrostatic potentials of a large number of nuclei. The 3D
grid-based tensor method for calculating a potential sum on an $L \times L \times L$
lattice manifests computational work linear in $L$, i.e. $O(L)$, instead of the
usual $O(L^3 \log L)$ scaling of the Ewald-type approaches.
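The Cholesky-based TEI factorization mentioned above exploits the fact that the TEI matrix is symmetric positive semidefinite, so a pivoted (incomplete) Cholesky factorization delivers a low-rank factor with a computable stopping criterion. The following sketch shows that core device on a generic positive semidefinite matrix; it is our illustration, not the paper's grid-based 1D density-fitting algorithm.

```python
import numpy as np

def pivoted_cholesky(B, tol=1e-8):
    """Low-rank pivoted Cholesky factorization B ~ L @ L.T of a
    symmetric positive semidefinite matrix B, stopped once the largest
    remaining diagonal (Schur complement) entry falls below tol."""
    d = np.diag(B).astype(float).copy()   # current residual diagonal
    n = B.shape[0]
    L = np.zeros((n, 0))
    while L.shape[1] < n and d.max() > tol:
        p = int(np.argmax(d))             # greedy pivot selection
        col = (B[:, p] - L @ L[p]) / np.sqrt(d[p])
        L = np.column_stack([L, col])
        d -= col ** 2
    return L

# Verify on a synthetic rank-8 PSD matrix (a stand-in for a TEI matrix):
rng = np.random.default_rng(1)
G = rng.standard_normal((100, 8))
B = G @ G.T
L = pivoted_cholesky(B, tol=1e-10)
print(L.shape[1], np.linalg.norm(B - L @ L.T))   # rank 8, tiny residual
```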
Compressive Space-Time Galerkin Discretizations of Parabolic Partial Differential Equations
We study linear parabolic initial-value problems in a space-time variational
formulation based on fractional calculus. This formulation uses "time
derivatives of order one half" on the bi-infinite time axis. We show that for
linear, parabolic initial-boundary value problems, the
corresponding bilinear form admits an inf-sup condition with sparse tensor
product trial and test function spaces. We deduce optimality of compressive,
space-time Galerkin discretizations, where stability of Galerkin approximations
is implied by the well-posedness of the parabolic operator equation. The
variational setting adopted here admits more general Riesz bases than previous
work; in particular, no stability in negative order Sobolev spaces on the
spatial or temporal domains is required of the Riesz bases accommodated by the
present formulation. The trial and test spaces are based on Sobolev spaces of
equal order with respect to the temporal variable. Sparse tensor products
of multi-level decompositions of the spatial and temporal spaces in Galerkin
discretizations lead to large, non-symmetric linear systems of equations. We
prove that their condition numbers are uniformly bounded with respect to the
discretization level. In terms of the total number of degrees of freedom, the
convergence orders equal, up to logarithmic terms, those of best $N$-term
approximations of solutions of the corresponding elliptic problems.
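For orientation, the type of bilinear form behind "time derivatives of order one half" can be sketched in the style of Fontes-type fractional formulations; the Hilbert transform $\mathcal{H}_t$ in time and the Fourier symbol conventions below are our assumptions, not notation taken from the paper:

```latex
% Sketch of a Fontes-type fractional space-time form (notation assumed):
\[
  b(u,v) \;=\; \int_{\mathbb{R}}\!\int_{\Omega}
      \partial_t^{1/2}u \;\, \partial_t^{1/2}(\mathcal{H}_t v)\,\mathrm{d}x\,\mathrm{d}t
  \;+\; \int_{\mathbb{R}}\!\int_{\Omega}
      \nabla_x u \cdot \nabla_x v \,\mathrm{d}x\,\mathrm{d}t,
\]
\[
  \widehat{\partial_t^{1/2} w}(\xi) \;=\; (\mathrm{i}\xi)^{1/2}\,\widehat{w}(\xi),
  \qquad
  \widehat{\mathcal{H}_t w}(\xi) \;=\; -\mathrm{i}\,\operatorname{sgn}(\xi)\,\widehat{w}(\xi).
\]
```

Since both arguments carry the same temporal order $1/2$, trial and test spaces of equal temporal order arise naturally in such a formulation.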
Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets has highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike the matrix methods, is guaranteed under very mild and natural
conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real world case studies illuminating the benefits of the tensor
framework, as efficient and promising tools for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
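As a concrete entry point to the Tucker model discussed above, the following is a minimal numpy implementation of the truncated higher-order SVD (HOSVD), which computes a Tucker decomposition from SVDs of the tensor unfoldings; function names and the test ranks are our own illustrative choices.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-`mode` fibers become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold` for a tensor with the given full shape."""
    rest = [shape[i] for i in range(len(shape)) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def mode_product(T, U, mode):
    """Mode-`mode` product T x_mode U: contract T's mode with U's rows."""
    shape = list(T.shape)
    shape[mode] = U.shape[0]
    return fold(U @ unfold(T, mode), mode, shape)

def hosvd(T, ranks):
    """Truncated HOSVD: Tucker factors from SVDs of the unfoldings."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)   # project onto dominant subspaces
    return core, factors

def reconstruct(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = mode_product(T, U, m)
    return T

# A 20 x 30 x 40 tensor of exact multilinear rank (3, 4, 5) is
# recovered to machine precision with matching truncation ranks:
rng = np.random.default_rng(0)
T = reconstruct(rng.standard_normal((3, 4, 5)),
                [rng.standard_normal((20, 3)),
                 rng.standard_normal((30, 4)),
                 rng.standard_normal((40, 5))])
core, factors = hosvd(T, ranks=(3, 4, 5))
print(np.linalg.norm(T - reconstruct(core, factors)) / np.linalg.norm(T))
```

With smaller truncation ranks the same routine returns a quasi-optimal compressed Tucker approximation rather than an exact recovery.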