
    Simultaneous Approximation of a Multivariate Function and its Derivatives by Multilinear Splines

    In this paper we consider the approximation of a function by its interpolating multilinear spline and the approximation of its derivatives by the derivatives of the corresponding spline. We derive formulas for the uniform approximation error on classes of functions with moduli of continuity bounded above by certain majorants.
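
    A minimal sketch of the idea in Python, using a bilinear (two-variable multilinear) interpolating spline on a uniform grid; the test function, the grid, and the use of SciPy's RegularGridInterpolator are illustrative choices, not the paper's construction. The interpolant approximates f, and a small difference quotient of the interpolant approximates the partial derivative f_x.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Hypothetical smooth test function and its x-derivative.
        f  = lambda x, y: np.sin(x) * np.cos(y)
        fx = lambda x, y: np.cos(x) * np.cos(y)

        # Uniform grid and the interpolating bilinear spline through the grid values.
        xs = np.linspace(0.0, np.pi, 33)
        ys = np.linspace(0.0, np.pi, 33)
        F = f(*np.meshgrid(xs, ys, indexing="ij"))
        spline = RegularGridInterpolator((xs, ys), F, method="linear")

        # Evaluate the spline and a difference quotient of the spline (which
        # approximates the spline's x-derivative) at random interior points.
        pts = np.random.default_rng(0).uniform(0.1, np.pi - 0.1, size=(1000, 2))
        h = 1e-4
        dspline_dx = (spline(pts + [h, 0.0]) - spline(pts - [h, 0.0])) / (2 * h)

        err_f  = np.max(np.abs(spline(pts) - f(pts[:, 0], pts[:, 1])))
        err_fx = np.max(np.abs(dspline_dx - fx(pts[:, 0], pts[:, 1])))
        print(f"uniform error in f:   {err_f:.2e}")
        print(f"uniform error in f_x: {err_fx:.2e}")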

    Low-rank multi-parametric covariance identification

    We propose a differential geometric construction for families of low-rank covariance matrices, via interpolation on low-rank matrix manifolds. In contrast with standard parametric covariance classes, these families offer significant flexibility for problem-specific tailoring via the choice of "anchor" matrices for the interpolation. Moreover, their low rank facilitates computational tractability in high dimensions and with limited data. We employ these covariance families for both interpolation and identification, where the latter problem comprises selecting the most representative member of the covariance family given a data set. In this setting, standard procedures such as maximum likelihood estimation are nontrivial because the covariance family is rank-deficient; we resolve this issue by casting the identification problem as distance minimization. We demonstrate the power of these differential geometric families for interpolation and identification in a practical application: wind field covariance approximation for unmanned aerial vehicle navigation.
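
    A minimal sketch of the general idea (not the paper's manifold construction): a one-parameter family of rank-r covariances obtained by interpolating two hypothetical "anchor" factors, with identification cast as Frobenius-distance minimization over the family parameter.

        import numpy as np

        rng = np.random.default_rng(0)
        n, r = 50, 3

        # Hypothetical anchor factors A0, A1 defining anchor covariances C_i = A_i A_i^T.
        A0 = rng.standard_normal((n, r))
        A1 = rng.standard_normal((n, r))

        # Align A1 with A0 by an orthogonal Procrustes rotation, so that interpolating
        # the factors is not distorted by the arbitrary basis of the rank-r factor.
        U, _, Vt = np.linalg.svd(A1.T @ A0)
        A1 = A1 @ (U @ Vt)

        def cov_member(t):
            """Rank-<=r covariance obtained by interpolating the aligned factors."""
            A_t = (1.0 - t) * A0 + t * A1
            return A_t @ A_t.T

        # "Data": samples whose covariance is an intermediate member of the family.
        A_true = 0.7 * A0 + 0.3 * A1
        X = rng.standard_normal((200, r)) @ A_true.T
        C_emp = X.T @ X / X.shape[0]

        # Identification as distance minimization over the family parameter.
        ts = np.linspace(0.0, 1.0, 101)
        t_hat = min(ts, key=lambda t: np.linalg.norm(cov_member(t) - C_emp, "fro"))
        print(f"identified parameter: {t_hat:.2f} (samples were drawn at t = 0.30)")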

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
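
    A minimal sketch of one of the central tools, the tensor-train decomposition, computed by sequential truncated SVDs (the classical TT-SVD scheme); the test tensor and the rank cap are illustrative choices, not taken from the monograph.

        import numpy as np

        def tt_svd(T, max_rank):
            """Decompose a dense tensor into TT cores, with ranks capped at max_rank."""
            dims = T.shape
            cores, r_prev = [], 1
            M = T.reshape(dims[0], -1)
            for k in range(len(dims) - 1):
                U, s, Vt = np.linalg.svd(M, full_matrices=False)
                r = int(min(max_rank, np.sum(s > 1e-12)))
                cores.append(U[:, :r].reshape(r_prev, dims[k], r))
                M = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
                r_prev = r
            cores.append(M.reshape(r_prev, dims[-1], 1))
            return cores

        def tt_to_full(cores):
            """Contract the TT cores back into a dense tensor (to check the error)."""
            out = cores[0]
            for core in cores[1:]:
                out = np.tensordot(out, core, axes=([-1], [0]))
            return out[0, ..., 0]

        # A tensor with small TT ranks: T[i, j, k] = sin(i) + cos(j) + k.
        i, j, k = np.meshgrid(np.arange(10), np.arange(11), np.arange(12), indexing="ij")
        T = np.sin(i) + np.cos(j) + k
        cores = tt_svd(T, max_rank=4)
        err = np.linalg.norm(tt_to_full(cores) - T) / np.linalg.norm(T)
        print("TT ranks:", [c.shape[0] for c in cores[1:]], "relative error:", f"{err:.1e}")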

    Improvements to NESTLE: Cross Section Interpolation and N-Group Extension

    The NESTLE program is a few-group neutron diffusion reactor core simulator code utilizing the nodal expansion method (NEM). This thesis presents two improvements made to NESTLE regarding cross-section interpolation and multigroup capability. To quickly and accurately obtain cross sections from lattice physics input data, a new cross section interpolation routine was developed utilizing multidimensional radial basis function interpolation with thin plate splines. Testing showed that, for existing NESTLE lattice physics input, accuracy was retained but not improved and processing time was longer. However, the new interpolation routine was shown to allow much greater flexibility in the case matrix of the lattice physics input file. This allows for much more detailed modeling of cross section variation at the expense of computation time. The existing capability of NESTLE to use two or four neutron energy groups in the NEM calculation was supplemented with a new routine that allows the use of an arbitrary number of neutron energy groups by calling existing, widely used linear algebra libraries. This represents a significant expansion of NESTLE's capability to model a broader range of reactor types beyond typical light water reactors (LWRs). Testing revealed that the new NEM routines retained the accuracy and speed of the existing routines for two and four energy groups, while calculations with other numbers of energy groups had adequate accuracy and speed for practical use.
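
    A minimal sketch of the interpolation ingredient, thin plate spline (radial basis function) interpolation over a two-variable case matrix, using SciPy's RBFInterpolator; the state variables and the toy cross-section values are illustrative, not NESTLE lattice physics data.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(1)

        # Scattered case-matrix points: (burnup, moderator density), arbitrary units.
        pts = rng.uniform([0.0, 0.6], [60.0, 1.0], size=(40, 2))

        # Toy "cross section" evaluated at the case-matrix points.
        sigma = 1.5 + 0.01 * pts[:, 0] - 0.8 * (pts[:, 1] - 0.8) ** 2

        # Thin plate spline interpolant over the scattered points.
        interp = RBFInterpolator(pts, sigma, kernel="thin_plate_spline")

        # Query the interpolant at a new state point.
        query = np.array([[30.0, 0.72]])
        print(f"interpolated cross section: {interp(query)[0]:.4f}")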

    Wavelet Decompositions of Nonrefinable Shift Invariant Spaces

    The motivation for this work is a recently constructed family of generators of shift invariant spaces with certain optimal approximation properties, but which are not refinable in the classical sense. We try to see whether, once the classical refinability requirement is removed, it is still possible to construct meaningful wavelet decompositions of dilates of the shift invariant space that are well suited for applications.

    Sampling—50 Years After Shannon

    This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we re-interpret Shannon's sampling procedure as an orthogonal projection onto the subspace of bandlimited functions. We then extend the standard sampling paradigm for a representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler (and possibly more realistic) interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) pre-filters that are not necessarily ideal lowpass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary, e.g., non-bandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multi-wavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned.
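
    A minimal sketch contrasting classical sinc (Shannon) reconstruction with reconstruction in a shift-invariant space generated by a linear B-spline; the test signal and sampling step are illustrative choices.

        import numpy as np

        T = 0.5                                     # sampling step
        n = np.arange(-20, 21)                      # sample indices
        f = lambda t: np.cos(2 * np.pi * 0.3 * t)   # bandlimited below 1/(2T) = 1 Hz
        samples = f(n * T)

        t = np.linspace(-5.0, 5.0, 1001)

        # Shannon reconstruction (truncated series): sum_k f(kT) sinc((t - kT)/T).
        sinc_rec = np.sum(samples[:, None] * np.sinc((t[None, :] - n[:, None] * T) / T), axis=0)

        # Shift-invariant space with a linear B-spline (hat) generator:
        # sum_k c_k beta((t - kT)/T); taking c_k = f(kT) gives the piecewise-linear
        # interpolant of the samples.
        hat = lambda x: np.maximum(1.0 - np.abs(x), 0.0)
        spline_rec = np.sum(samples[:, None] * hat((t[None, :] - n[:, None] * T) / T), axis=0)

        print(f"max sinc reconstruction error:   {np.max(np.abs(sinc_rec - f(t))):.2e}")
        print(f"max spline reconstruction error: {np.max(np.abs(spline_rec - f(t))):.2e}")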