
    Nonlinear tensor product approximation of functions

    We are interested in approximation of a multivariate function f(x_1,…,x_d) by linear combinations of products u^1(x_1)⋯u^d(x_d) of univariate functions u^i(x_i), i=1,…,d. In the case d=2 this is the classical problem of bilinear approximation. In the case of approximation in the L_2 space, the bilinear approximation problem is closely related to the singular value decomposition (also called the Schmidt expansion) of the corresponding integral operator with kernel f(x_1,x_2). There are known results on the rate of decay of errors of best bilinear approximation in L_p under different smoothness assumptions on f. The problem of multilinear approximation (nonlinear tensor product approximation) in the case d≥3 is more difficult and much less studied than the bilinear approximation problem. We present results on best multilinear approximation in L_p under a mixed smoothness assumption on f.
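    The connection to the singular value decomposition can be made concrete on a discretized kernel. The following is a minimal numerical sketch (not from the paper; the grid and example kernel are illustrative assumptions): for a sampled kernel f(x_1,x_2), the best rank-n bilinear approximation in the discrete L_2 (Frobenius) norm is the truncated SVD.

```python
# Sketch: best rank-n bilinear approximation via truncated SVD
# (Schmidt expansion) of a sampled kernel. The grid and the example
# kernel are illustrative assumptions.
import numpy as np

x1 = np.linspace(0.0, 1.0, 200)
x2 = np.linspace(0.0, 1.0, 200)
F = np.exp(-np.abs(x1[:, None] - x2[None, :]))  # samples of f(x1, x2)

# Schmidt expansion of the discretized kernel.
U, s, Vt = np.linalg.svd(F, full_matrices=False)

def bilinear_approx(rank):
    """Sum of `rank` products u^1(x1) * u^2(x2): the best rank-`rank`
    approximation in the Frobenius (discrete L2) norm."""
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# The approximation error decays as the rank grows.
errors = [np.linalg.norm(F - bilinear_approx(r)) for r in (1, 5, 10)]
```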

    A continuous analogue of the tensor-train decomposition

    We develop new approximation algorithms and data structures for representing and computing with multivariate functions using the functional tensor-train (FT), a continuous extension of the tensor-train (TT) decomposition. The FT represents functions using a tensor-train ansatz by replacing the three-dimensional TT cores with univariate matrix-valued functions. The main contribution of this paper is a framework to compute the FT that employs adaptive approximations of univariate fibers, and that is not tied to any tensorized discretization. The algorithm can be coupled with any univariate linear or nonlinear approximation procedure. We demonstrate that this approach can generate multivariate function approximations that are several orders of magnitude more accurate, for the same cost, than those based on the conventional approach of compressing the coefficient tensor of a tensor-product basis. Our approach is in the spirit of other continuous computation packages such as Chebfun, and yields an algorithm which requires the computation of "continuous" matrix factorizations such as the LU and QR decompositions of vector-valued functions. To support these developments, we describe continuous versions of an approximate maximum-volume cross approximation algorithm and of a rounding algorithm that re-approximates an FT by one of lower ranks. We demonstrate that our technique improves accuracy and robustness, compared to TT and quantics-TT approaches with fixed parameterizations, of high-dimensional integration, differentiation, and approximation of functions with local features such as discontinuities and other nonlinearities.
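    The discrete TT decomposition that the FT extends can be sketched with the standard TT-SVD algorithm: sequential reshapes and truncated SVDs of a tensor of function samples. This is a minimal illustration, not the paper's FT algorithm; the test function, grid size, and truncation tolerance are assumptions.

```python
# Sketch: discrete TT-SVD for a 3D tensor of function samples.
# Test function, grid size, and tolerance are illustrative assumptions.
import numpy as np

n = 20
x = np.linspace(0.0, 1.0, n)
X1, X2, X3 = np.meshgrid(x, x, x, indexing="ij")
T = np.sin(X1 + X2 + X3)  # samples of f(x1, x2, x3)

def tt_svd(tensor, eps=1e-10):
    """Factor `tensor` into TT cores G_k of shape (r_{k-1}, n_k, r_k)
    by sequential reshapes and truncated SVDs."""
    shape = tensor.shape
    cores, r, M = [], 1, np.asarray(tensor)
    for k in range(len(shape) - 1):
        M = M.reshape(r * shape[k], -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))  # drop tiny modes
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        M = s[:rank, None] * Vt[:rank]
        r = rank
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    A = cores[0]
    for G in cores[1:]:
        A = np.tensordot(A, G, axes=(A.ndim - 1, 0))
    return A[0, ..., 0]

cores = tt_svd(T)
ranks = [G.shape[2] for G in cores[:-1]]  # TT ranks (low for this f)
```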

    Bivariate Segment Approximation

    In this note we state some problems on approximation by univariate splines with free knots, bivariate segment approximation, and tensor product splines with variable knot lines. There is a vast literature on approximation and interpolation by univariate splines with fixed knots (see e.g. the books of de Boor [1], Braess [2], DeVore & Lorentz [4], Powell [20], Schumaker [21], Nürnberger [13] and the book of Chui [3] on multivariate splines). On the other hand, numerical examples show that, in general, the error is much smaller if variable knots are used for the approximation of functions instead of fixed knots. This is true for univariate splines as well as for bivariate splines. But approximation by splines with free knots leads to rather difficult nonlinear problems.
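    The effect of knot placement noted above can be illustrated with fixed-knot least-squares splines: clustering knots near a sharp local feature mimics what a free-knot optimizer would find. This is a minimal sketch using scipy's LSQUnivariateSpline; the test function and both knot placements are illustrative assumptions, not a free-knot algorithm.

```python
# Sketch: uniform vs. clustered interior knots for a least-squares
# cubic spline fit of a function with a steep transition. Test
# function and knot placements are illustrative assumptions.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

x = np.linspace(0.0, 1.0, 400)
y = np.tanh(50.0 * (x - 0.5))  # steep transition near x = 0.5

# Nine interior knots each: spread uniformly vs. clustered at the feature.
uniform_knots = np.linspace(0.1, 0.9, 9)
clustered_knots = np.array([0.15, 0.3, 0.44, 0.47, 0.5, 0.53, 0.56, 0.7, 0.85])

err_uniform = np.max(np.abs(LSQUnivariateSpline(x, y, uniform_knots)(x) - y))
err_clustered = np.max(np.abs(LSQUnivariateSpline(x, y, clustered_knots)(x) - y))
# Clustering knots near the transition gives a markedly smaller error.
```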

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing to address large-scale linear and multilinear algebra problems which would be intractable by classical techniques. This survey attempts to give a literature overview of current developments in this area, with an emphasis on function-related tensors.