
    Matrix-free weighted quadrature for a computationally efficient isogeometric k-method

    The k-method is the isogeometric method based on splines (or NURBS, etc.) with maximum regularity. When implemented following the paradigms of classical finite element methods, the computational resources required by the k-method are prohibitive even for moderate degree. In order to address this issue, we propose a matrix-free strategy combined with weighted quadrature, an ad-hoc strategy to compute the integrals of the Galerkin system. Matrix-free weighted quadrature (MF-WQ) speeds up matrix operations and, perhaps even more importantly, greatly reduces memory consumption. Our strategy also requires an efficient preconditioner for the iterative solution of the linear system. In this work we deal with an elliptic model problem and adopt a preconditioner based on the Fast Diagonalization method, an old idea for solving Sylvester-like equations. Our numerical tests show that the isogeometric solver based on MF-WQ is faster than standard approaches (where the main cost is the matrix formation by standard Gaussian quadrature) even for low degree. The main achievement, however, is that with MF-WQ the k-method becomes orders of magnitude faster as the degree increases, for a given target accuracy. We are therefore able to show the superiority, in terms of computational efficiency, of the high-degree k-method with respect to low-degree isogeometric discretizations. What we present here is applicable to more complex and realistic differential problems, but its effectiveness will depend on the preconditioning stage, which is, as always, problem-dependent. This situation is typical of modern high-order methods: the overall performance is mainly determined by the quality of the preconditioner.
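
    As a rough illustration of the Fast Diagonalization idea mentioned in this abstract, here is a minimal NumPy sketch for a Sylvester-like system A X + X B = C with symmetric coefficients (an assumption matching the elliptic model problem; this is not the authors' matrix-free IGA code):

    ```python
    import numpy as np

    def fd_solve(A, B, C):
        # Fast Diagonalization for A X + X B = C with A, B symmetric.
        # Diagonalize both coefficients once; in the eigenbasis the system
        # decouples entrywise: Y_ij = (U^T C V)_ij / (lam_i + mu_j).
        lam, U = np.linalg.eigh(A)   # A = U diag(lam) U^T
        mu, V = np.linalg.eigh(B)    # B = V diag(mu) V^T
        Y = (U.T @ C @ V) / (lam[:, None] + mu[None, :])
        return U @ Y @ V.T           # X = U Y V^T
    ```

    Used as a preconditioner, one such solve replaces an expensive sparse factorization; the two eigendecompositions are computed only once and involve the small univariate factors rather than the full tensor-product system.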

    Computing Large-Scale Matrix and Tensor Decomposition with Structured Factors: A Unified Nonconvex Optimization Perspective

    This article aims at offering a comprehensive tutorial on the computational aspects of structured matrix and tensor factorization. Unlike existing tutorials that mainly focus on {\it algorithmic procedures} for a small set of problems, e.g., nonnegativity- or sparsity-constrained factorization, we take a {\it top-down} approach: we start with general optimization theory (e.g., inexact and accelerated block coordinate descent, stochastic optimization, and Gauss-Newton methods) that covers a wide range of factorization problems with diverse constraints and regularization terms of engineering interest. Then, we go `under the hood' to showcase specific algorithm design under these introduced principles. We pay particular attention to recent algorithmic developments in structured tensor and matrix factorization (e.g., random sketching, adaptive-step-size stochastic optimization, and structure-exploiting second-order algorithms), which are the state of the art, yet much less touched upon in the literature compared to {\it block coordinate descent} (BCD)-based methods. We expect the article to have educational value in the field of structured factorization and hope to stimulate more research in this important and exciting direction.
    Comment: Final version; to appear in IEEE Signal Processing Magazine; title revised to comply with the journal's rules.
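
    To make the BCD baseline concrete, here is a minimal, hypothetical sketch of alternating block updates for nonnegative matrix factorization using the classical multiplicative rules (one simple member of the BCD family the article discusses, not one of its proposed methods):

    ```python
    import numpy as np

    def nmf_alternating(X, r, iters=200, eps=1e-12, seed=0):
        # Factor X ~ W H with W, H >= 0 by cycling over the two blocks:
        # fix W and update H, then fix H and update W (multiplicative rules).
        rng = np.random.default_rng(seed)
        m, n = X.shape
        W = rng.random((m, r))
        H = rng.random((r, n))
        for _ in range(iters):
            H *= (W.T @ X) / (W.T @ W @ H + eps)   # block 1: update H
            W *= (X @ H.T) / (W @ H @ H.T + eps)   # block 2: update W
        return W, H
    ```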

    Structure-Preserving Model Reduction of Physical Network Systems

    This paper considers physical network systems where the energy storage is naturally associated with the nodes of the graph, while the edges of the graph correspond to static couplings. The first sections deal with the linear case, covering examples such as mass-damper and hydraulic systems, which have a structure similar to symmetric consensus dynamics. The last section is concerned with a specific class of nonlinear physical network systems, namely detailed-balanced chemical reaction networks governed by mass action kinetics. In both cases, linear and nonlinear, the structure of the dynamics is similar, being based on a weighted Laplacian matrix together with an energy function capturing the energy storage at the nodes. We discuss two methods for structure-preserving model reduction. The first is clustering: aggregating the nodes of the underlying graph to obtain a reduced graph. The second is based on neglecting the energy storage at some of the nodes and subsequently eliminating those nodes (called Kron reduction).
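
    As a small illustration of the Kron reduction mentioned above, here is a sketch (assuming a connected graph, so that the eliminated principal block of the Laplacian is nonsingular):

    ```python
    import numpy as np

    def kron_reduce(L, keep):
        # Eliminate the nodes not in `keep` from the weighted Laplacian L
        # via the Schur complement; the result is again a weighted
        # Laplacian on the retained nodes.
        keep = np.asarray(keep)
        drop = np.setdiff1d(np.arange(L.shape[0]), keep)
        L_kk = L[np.ix_(keep, keep)]
        L_kd = L[np.ix_(keep, drop)]
        L_dd = L[np.ix_(drop, drop)]
        return L_kk - L_kd @ np.linalg.solve(L_dd, L_kd.T)
    ```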

    Exponential integrators: tensor structured problems and applications

    The solution of stiff systems of Ordinary Differential Equations (ODEs), which typically arise after spatial discretization of many important evolutionary Partial Differential Equations (PDEs), is a topic of wide interest in numerical analysis. A prominent way to numerically integrate such systems is to use exponential integrators. In general, these schemes do not require the solution of (non)linear systems but rather the action of the matrix exponential and of some specific exponential-like functions (known in the literature as phi-functions). In this PhD thesis we present efficient tensor-based tools to approximate such actions, both from a theoretical and from a practical point of view, when the problem has an underlying Kronecker sum structure. Moreover, we investigate the application of exponential integrators to compute numerical solutions of important equations in various fields, such as plasma physics, mean-field optimal control, and computational chemistry. Throughout, we provide several numerical examples and perform extensive simulations, also exploiting modern hardware architectures such as multi-core Central Processing Units (CPUs) and Graphics Processing Units (GPUs). Overall, the results show the effectiveness and superiority of the different approaches proposed.
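
    A minimal sketch of why a Kronecker sum structure helps here (illustrative, not the thesis implementation): since A ⊗ I and I ⊗ B commute, exp(A ⊕ B) = exp(A) ⊗ exp(B), so the action on vec(X) reduces to two small matrix products instead of one huge exponential:

    ```python
    import numpy as np
    from scipy.linalg import expm

    def expm_kron_sum_action(A, B, X):
        # Action of exp(A ⊕ B) on vec(X), with A ⊕ B = A ⊗ I + I ⊗ B and
        # column-major vec: (exp(A) ⊗ exp(B)) vec(X) = vec(exp(B) X exp(A)^T).
        return expm(B) @ X @ expm(A).T
    ```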

    Nonlinear Systems

    The editors of this book have incorporated contributions from a diverse group of leading researchers in the field of nonlinear systems. To enrich the scope of the content, this book contains a valuable selection of works on fractional differential equations. The book aims to provide an overview of the current knowledge on nonlinear systems and some aspects of fractional calculus. The main subject areas are divided into theoretical and applied sections. Nonlinear systems are of interest to researchers in mathematics, applied mathematics, and physics, as well as graduate students who study these systems with reference to their theory and application. This book is also an ideal complement to the specific literature on engineering, biology, health science, and other applied science areas. The opportunity given by IntechOpen to offer this book under the open access system contributes to disseminating the field of nonlinear systems to a wide range of researchers.

    Numerical solution of large-scale linear matrix equations

    We are interested in the numerical solution of large-scale linear matrix equations. In particular, due to their occurrence in many applications, we study the so-called Sylvester and Lyapunov equations. A characteristic aspect of the large-scale setting is that, although the data are sparse, the solution is in general dense, so that storing it may be unfeasible. Therefore, it is necessary that the solution allow for a memory-saving approximation that can be cheaply stored. An extensive literature treats the case of the aforementioned equations with low-rank right-hand side. This assumption, together with certain hypotheses on the spectral distribution of the matrix coefficients, is a sufficient condition for proving a fast decay in the singular values of the solution. This decay motivates the search for a low-rank approximation, so that only low-rank matrices are actually computed and stored, remarkably reducing the storage demand. This is the task of the so-called low-rank methods, and a large amount of work in this direction has been carried out in recent years. Projection methods have been shown to be among the most effective low-rank methods, and in the first part of this thesis we propose some computational enhancements of the classical algorithms. The case of equations whose right-hand side is not necessarily of low rank has not been deeply analyzed so far, and efficient methods are still lacking in the literature. In this thesis we aim to contribute significantly to this open problem by introducing solution methods for this kind of equation. In particular, we address the case when the coefficient matrices and the right-hand side are banded, and we further generalize this structure by considering quasiseparable data. In the last part of the thesis we study large-scale generalized Sylvester equations and, under some assumptions on the coefficient matrices, propose novel approximation spaces for their solution by projection.
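
    To illustrate the projection idea for a Lyapunov equation with rank-one right-hand side, A X + X A^T + b b^T = 0, here is a minimal Galerkin sketch over a polynomial Krylov space (the thesis's enhanced algorithms and richer approximation spaces go well beyond this snippet):

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def lyap_galerkin(A, b, m=20):
        # Arnoldi basis V for K_m(A, b); the projected small equation
        # (V^T A V) Y + Y (V^T A V)^T + beta^2 e1 e1^T = 0 is solved
        # densely, and the solution is stored in factored form X ~ V Y V^T.
        n = b.size
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        beta = np.linalg.norm(b)
        V[:, 0] = b / beta
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):                 # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
        Am = H[:m, :m]                             # = V_m^T A V_m
        e1 = np.zeros(m)
        e1[0] = beta
        Y = solve_continuous_lyapunov(Am, -np.outer(e1, e1))
        return V[:, :m], Y                         # keep only low-rank factors
    ```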

    7. Minisymposium on Gauss-type Quadrature Rules: Theory and Applications


    Graph-based Semi-supervised Learning: Algorithms and Applications.

    Graph-based semi-supervised learning has attracted many researchers and is an important part of semi-supervised learning. Graph construction and semi-supervised embedding are the two main steps in graph-based semi-supervised learning algorithms. In this thesis, we propose two graph construction algorithms and two semi-supervised embedding algorithms. The main work of this thesis is summarized as follows (a small illustrative sketch of the self-representation idea follows this abstract):

    1. A new graph construction algorithm named Graph construction based on self-representativeness and Laplacian smoothness (SRLS) and several variants are proposed. Research shows that the coefficients obtained by data representation algorithms reflect the similarity between data samples and can be considered a measurement of that similarity. This kind of measurement can be used for the weights of the edges between data samples in graph construction. Each column of the coefficient matrix obtained by data self-representation algorithms can be regarded as a new representation of the original data. The new representations should share common features with the original data samples. Thus, if two data samples are close to each other in the original space, the corresponding representations should be highly similar. This constraint is called Laplacian smoothness. The SRLS graph is based on l2-norm-minimized data self-representation and Laplacian smoothness. Since the representation matrix obtained by l2 minimization is dense, a two-phase SRLS method (TPSRLS) is proposed to increase the sparsity of the graph matrix. By extending the linear space to a Hilbert space, two kernelized versions of SRLS are proposed. Besides, a direct solution to the kernelized SRLS algorithm is also introduced.

    2. A new sparse graph construction algorithm named Sparse graph with Laplacian smoothness (SGLS) and several variants are proposed. The SGLS graph algorithm is based on sparse representation and uses Laplacian smoothness as a constraint. A kernelized version of the SGLS algorithm and a direct solution to the kernelized SGLS algorithm are also proposed.

    3. SPP is a successful unsupervised learning method. To extend SPP to a semi-supervised embedding method, we introduce the idea of in-class constraints from CGE into SPP and propose a new semi-supervised method for data embedding named Constrained Sparsity Preserving Embedding (CSPE).

    4. The weakness of CSPE is that it cannot handle newly arriving samples, which means a cascaded regression must be performed after the non-linear mapping is obtained by CSPE over the whole training set. Inspired by FME, we add a regression term to the objective function to obtain an approximate linear projection simultaneously while the non-linear embedding is estimated, and propose Flexible Constrained Sparsity Preserving Embedding (FCSPE).

    Extensive experiments on several datasets (including facial images, handwritten digit images, and object images) show that the proposed algorithms improve on the state-of-the-art results.
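
    As promised above, a small hypothetical sketch of the l2-norm self-representation idea underlying SRLS (the Laplacian-smoothness term and the two-phase sparsification are omitted; the function name and regularization parameter are illustrative, not from the thesis):

    ```python
    import numpy as np

    def self_representation_graph(X, lam=0.1):
        # Each sample (a column of X) is coded as a linear combination of
        # all samples by ridge regression: min ||X - X C||_F^2 + lam ||C||_F^2,
        # whose solution solves (X^T X + lam I) C = X^T X. The magnitudes
        # |C|, symmetrized with the diagonal zeroed, serve as edge weights.
        n = X.shape[1]
        G = X.T @ X
        C = np.linalg.solve(G + lam * np.eye(n), G)
        np.fill_diagonal(C, 0.0)   # simplification: drop self-loops post hoc
        W = np.abs(C)
        return (W + W.T) / 2
    ```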

    Geometric methods on low-rank matrix and tensor manifolds

    In this chapter we present numerical methods for low-rank matrix and tensor problems that explicitly make use of the geometry of rank-constrained matrix and tensor spaces. We focus on two types of problems. The first is optimization problems, such as matrix and tensor completion, solving linear systems, and eigenvalue problems. Such problems can be solved by numerical optimization on manifolds, via so-called Riemannian optimization methods. We explain the basic elements of differential geometry needed to apply such methods efficiently to rank-constrained matrix and tensor spaces. The second type of problem is ordinary differential equations defined on matrix and tensor spaces. We show how their solution can be approximated by the dynamical low-rank principle, and discuss several numerical integrators that rely in an essential way on geometric properties characteristic of sets of low-rank matrices and tensors.
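
    As a small illustration of the geometric toolbox described above, here is a sketch of the orthogonal projection onto the tangent space of the fixed-rank manifold at X = U S V^T, a building block of both Riemannian optimization and dynamical low-rank integrators (assuming U and V have orthonormal columns):

    ```python
    import numpy as np

    def tangent_project(U, V, Z):
        # P_T(Z) = U U^T Z + Z V V^T - U U^T Z V V^T at X = U S V^T.
        UtZ = U.T @ Z
        return U @ UtZ + (Z @ V) @ V.T - U @ (UtZ @ V) @ V.T
    ```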

    Information Geometry

    This Special Issue of the journal Entropy, titled “Information Geometry I”, contains a collection of 17 papers concerning the foundations and applications of information geometry. Based on a geometrical interpretation of probability, information geometry has become a rich mathematical field employing the methods of differential geometry. It has numerous applications to data science, physics, and neuroscience. Presenting original research, yet written in an accessible, tutorial style, this collection of papers will be useful for scientists who are new to the field, while providing an excellent reference for the more experienced researcher. Several papers are written by authorities in the field, and topics cover the foundations of information geometry, as well as applications to statistics, Bayesian inference, machine learning, complex systems, physics, and neuroscience.