42 research outputs found

    Greedy low-rank algorithm for spatial connectome regression

    Get PDF
    Recovering brain connectivity from tract tracing data is an important computational problem in the neurosciences. Mesoscopic connectome reconstruction was previously formulated as a structured matrix regression problem (Harris et al., 2016), but existing techniques do not scale to the whole-brain setting. The corresponding matrix equation is challenging to solve due to its large scale, ill-conditioning, and a general form that lacks a convergent splitting. We propose a greedy low-rank algorithm for the connectome reconstruction problem in very high dimensions. The algorithm approximates the solution by a sequence of rank-one updates that exploit the sparse and positive definite problem structure. The algorithm was described previously (Kressner and Sirković, 2015) but had never been applied to this connectome problem, which posed a number of challenges: we had to design judicious stopping criteria and employ efficient solvers for the three main sub-problems of the algorithm, including a GPU implementation that alleviates the main bottleneck for large datasets. The performance of the method is evaluated on three examples: an artificial "toy" dataset and two whole-cortex instances using data from the Allen Mouse Brain Connectivity Atlas. We find that the method is significantly faster than previous methods and that moderate ranks offer a good approximation. This speedup allows for the estimation of increasingly large-scale connectomes across taxa as these data become available from tracing experiments. The data and code are available online.
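
    As a rough illustration of the greedy rank-one idea described above, the sketch below applies it to a generic Sylvester-type equation A X + X B = C with symmetric positive definite coefficients, a simplified stand-in for the paper's structured regression problem; each alternating update minimizes the associated energy functional for one rank-one term at a time. The function name, tolerances, and random initialization are illustrative assumptions, not the authors' GPU implementation.

```python
# Minimal sketch (illustrative, not the paper's implementation) of a greedy
# rank-one update scheme for A X + X B = C with SPD A, B. Each sweep fits
# one term u v^T by alternating minimization of the energy functional on the
# current residual, then deflates the residual.
import numpy as np

def greedy_low_rank(A, B, C, max_rank=20, tol=1e-8, als_iters=10):
    n, m = C.shape
    U, V = np.zeros((n, 0)), np.zeros((m, 0))
    R = C.copy()                              # residual C - A X - X B
    for _ in range(max_rank):
        u, v = np.random.randn(n), np.random.randn(m)
        for _ in range(als_iters):
            # with v fixed, the optimal u solves ((v.v) A + (v'Bv) I) u = R v
            u = np.linalg.solve((v @ v) * A + (v @ B @ v) * np.eye(n), R @ v)
            # with u fixed, the optimal v solves ((u.u) B + (u'Au) I) v = R' u
            v = np.linalg.solve((u @ u) * B + (u @ A @ u) * np.eye(m), R.T @ u)
        U, V = np.column_stack([U, u]), np.column_stack([V, v])
        R = R - np.outer(A @ u, v) - np.outer(u, B @ v)   # deflate residual
        if np.linalg.norm(R) <= tol * np.linalg.norm(C):
            break
    return U, V                               # X is approximated by U @ V.T
```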

    On the Convergence of Krylov Methods with Low-Rank Truncations

    No full text

    Optimality properties of Galerkin and Petrov-Galerkin methods for linear matrix equations

    Get PDF
    Galerkin and Petrov-Galerkin methods are among the most successful solution procedures in numerical analysis. Their popularity is mainly due to the optimality properties of their approximate solution. We show that these features carry over to the (Petrov-)Galerkin methods applied to the solution of linear matrix equations. Some novel considerations about the use of Galerkin and Petrov-Galerkin schemes in the numerical treatment of general linear matrix equations are expounded, and the use of constrained minimization techniques in the Petrov-Galerkin framework is proposed.
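
    A minimal sketch of the Galerkin condition discussed above, for a Sylvester equation A X + X B = C: with orthonormal bases V and W of the trial spaces, the residual is imposed to be orthogonal to the approximation space, which reduces to a small projected Sylvester equation. The bases here are generic placeholders (in practice Krylov-type spaces are used), so this illustrates the framework rather than the paper's specific schemes.

```python
# Sketch of a Galerkin projection for A X + X B = C. V and W must have
# orthonormal columns; the Galerkin condition V^T (A X + X B - C) W = 0
# with X = V Y W^T reduces to a small dense Sylvester equation for Y.
import numpy as np
from scipy.linalg import solve_sylvester

def galerkin_sylvester(A, B, C, V, W):
    Ap = V.T @ A @ V          # projected coefficient matrices
    Bp = W.T @ B @ W
    Cp = V.T @ C @ W          # projected right-hand side
    Y = solve_sylvester(Ap, Bp, Cp)
    return V, Y, W            # X is approximated by V @ Y @ W.T
```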

    Solving rank structured Sylvester and Lyapunov equations

    Full text link
    We consider the problem of efficiently solving Sylvester and Lyapunov equations of medium and large scale in the case of rank-structured data, i.e., when the coefficient matrices and the right-hand side have low-rank off-diagonal blocks. This comprises problems with banded data, recently studied by Haber and Verhaegen in "Sparse solution of the Lyapunov equation for large-scale interconnected systems", Automatica, 2016, and by Palitta and Simoncini in "Numerical methods for large-scale Lyapunov equations with symmetric banded data", SISC, 2018, which often arise in the discretization of elliptic PDEs. We show that, under suitable assumptions, the quasiseparable structure is guaranteed to be numerically present in the solution, and explicit novel estimates of the numerical rank of the off-diagonal blocks are provided. Efficient solution schemes that rely on the technology of hierarchical matrices are described, and several numerical experiments confirm the applicability and efficiency of the approaches. We develop a MATLAB toolbox that allows easy replication of the experiments and a ready-to-use interface for the solvers. The performance of the different approaches is compared, and we show that the new methods described are efficient on several classes of relevant problems.
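
    The following small experiment illustrates (but does not reproduce) the structural claim above: for a Lyapunov equation with a banded, negative definite coefficient and a banded right-hand side, the off-diagonal blocks of the dense solution have rapidly decaying singular values, i.e., low numerical rank. The problem size and truncation tolerance are arbitrary choices, and the paper's hierarchical-matrix solvers are not used here.

```python
# Illustration of low numerical rank in the off-diagonal blocks of the
# solution of a Lyapunov equation A X + X A^T = -C with banded data.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 400
A = -(2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))  # tridiagonal, negative definite
C = np.eye(n)                                            # banded right-hand side
X = solve_continuous_lyapunov(A, -C)                     # dense solution of A X + X A^T = -C
s = np.linalg.svd(X[: n // 2, n // 2 :], compute_uv=False)
print("numerical rank of off-diagonal block:",
      int(np.sum(s > 1e-8 * s[0])))
```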

    Matrix Equation Techniques for Certain Evolutionary Partial Differential Equations

    No full text
    We show that the discrete operator stemming from the time-space discretization of evolutionary partial differential equations can be represented in terms of a single Sylvester matrix equation. A novel solution strategy that combines projection techniques with the full exploitation of the entry-wise structure of the involved coefficient matrices is proposed. The resulting scheme is able to efficiently solve problems with a tremendous number of degrees of freedom while maintaining a low storage demand, as illustrated in several numerical examples.
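
    As a toy instance of the formulation above, the sketch below writes an all-at-once backward Euler / finite difference discretization of the 1D heat equation as a single Sylvester equation (I - tau*A) U - U S = F, where the k-th column of U holds the spatial unknowns at time step k and S shifts in time. The discretization sizes and data are illustrative, and the dense solver stands in for the structured projection scheme proposed in the paper.

```python
# Toy example: all-at-once time-space discretization of u_t = u_xx written
# as one Sylvester equation (I - tau*A) U - U S = F.
import numpy as np
from scipy.linalg import solve_sylvester

nx, nt, tau = 100, 50, 1e-3
h = 1.0 / (nx + 1)
A = (np.eye(nx, k=1) - 2 * np.eye(nx) + np.eye(nx, k=-1)) / h**2  # 1D Laplacian
S = np.eye(nt, k=1)                         # time shift: (U @ S)[:, k] = U[:, k-1]
u0 = np.sin(np.pi * np.linspace(h, 1 - h, nx))
F = np.zeros((nx, nt))
F[:, 0] = u0                                # initial condition enters the right-hand side
U = solve_sylvester(np.eye(nx) - tau * A, -S, F)   # columns of U are the time steps
```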

    A Galerkin Method for Large-scale Autonomous Differential Riccati Equations based on the Loewner Partial Order

    Get PDF

    Numerical solution of large-scale linear matrix equations

    Get PDF
    We are interested in the numerical solution of large-scale linear matrix equations. In particular, due to their occurrence in many applications, we study the so-called Sylvester and Lyapunov equations. A characteristic aspect of the large-scale setting is that although the data are sparse, the solution is in general dense, so that storing it may be unfeasible. Therefore, it is necessary that the solution allows for a memory-saving approximation that can be cheaply stored. An extensive literature treats the case of the aforementioned equations with low-rank right-hand side. This assumption, together with certain hypotheses on the spectral distribution of the matrix coefficients, is a sufficient condition for proving a fast decay in the singular values of the solution. This decay motivates the search for a low-rank approximation, so that only low-rank matrices are actually computed and stored, remarkably reducing the storage demand. This is the task of the so-called low-rank methods, and a large amount of work in this direction has been carried out in recent years. Projection methods have been shown to be among the most effective low-rank methods, and in the first part of this thesis we propose some computational enhancements of the classical algorithms. The case of equations with a not necessarily low-rank right-hand side has not been deeply analyzed so far, and efficient methods are still lacking in the literature. In this thesis we aim to significantly contribute to this open problem by introducing solution methods for this kind of equations. In particular, we address the case when the coefficient matrices and the right-hand side are banded, and we further generalize this structure by considering quasiseparable data. In the last part of the thesis we study large-scale generalized Sylvester equations and, under some assumptions on the coefficient matrices, propose novel approximation spaces for their solution by projection.
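
    A minimal sketch of the kind of low-rank projection method surveyed in the thesis, for a Lyapunov equation A X + X A^T + b b^T = 0 with rank-one right-hand side: build an orthonormal Krylov basis, solve the small projected equation, and keep the approximate solution in factored form. The single-pass Gram-Schmidt loop and the fixed subspace dimension are simplifications for illustration, not the enhanced algorithms proposed in the thesis.

```python
# Sketch of a Galerkin projection onto the Krylov subspace K_m(A, b) for the
# Lyapunov equation A X + X A^T + b b^T = 0 with low-rank right-hand side.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def krylov_lyapunov(A, b, m=30):
    n = b.shape[0]
    V = (b / np.linalg.norm(b)).reshape(n, 1)
    for _ in range(m - 1):                    # orthonormal Krylov basis (single-pass GS)
        w = A @ V[:, -1]
        w -= V @ (V.T @ w)
        V = np.column_stack([V, w / np.linalg.norm(w)])
    Ap = V.T @ A @ V                          # projected coefficient
    bp = V.T @ b
    Y = solve_continuous_lyapunov(Ap, -np.outer(bp, bp))  # small dense equation
    return V, Y                               # X is approximated by V @ Y @ V.T
```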