    Greedy low-rank algorithm for spatial connectome regression

    Recovering brain connectivity from tract-tracing data is an important computational problem in the neurosciences. Mesoscopic connectome reconstruction was previously formulated as a structured matrix regression problem (Harris et al., 2016), but existing techniques do not scale to the whole-brain setting. The corresponding matrix equation is challenging to solve due to its large scale, ill-conditioning, and a general form that lacks a convergent splitting. We propose a greedy low-rank algorithm for the connectome reconstruction problem in very high dimensions. The algorithm approximates the solution by a sequence of rank-one updates that exploit the sparse and positive definite problem structure. This algorithm was described previously (Kressner and Sirković, 2015) but had never been implemented for this connectome problem, which posed a number of challenges: we had to design judicious stopping criteria and employ efficient solvers for the three main sub-problems of the algorithm, including an efficient GPU implementation that alleviates the main bottleneck for large datasets. The performance of the method is evaluated on three examples: an artificial "toy" dataset and two whole-cortex instances using data from the Allen Mouse Brain Connectivity Atlas. We find that the method is significantly faster than previous methods and that moderate ranks offer a good approximation. This speedup allows for the estimation of increasingly large-scale connectomes across taxa as such data become available from tracing experiments. The data and code are available online.
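
    As a concrete illustration of the rank-one update idea, here is a minimal sketch of a greedy alternating least-squares scheme in the spirit of Kressner and Sirković (2015), for a Sylvester-type equation $AX + XB = C$ with symmetric positive definite $A$ and $B$. This is a generic sketch of the technique, not the paper's implementation; all names and tolerances are chosen for the example.

        import numpy as np

        def greedy_rank_one(A, B, C, max_rank=20, als_sweeps=10, tol=1e-8):
            """Approximate the solution of A X + X B = C (A, B SPD) by a
            greedily built sum of rank-one terms, X ~= U @ V.T."""
            m, n = C.shape
            U, V = np.zeros((m, 0)), np.zeros((n, 0))
            R = C.copy()                      # current residual C - A X - X B
            nrm0 = np.linalg.norm(R)
            for _ in range(max_rank):
                v = np.random.randn(n)        # random start for the ALS sweep
                for _ in range(als_sweeps):
                    v = v / np.linalg.norm(v)
                    # energy-norm ALS step for u with v fixed:
                    # [(v'v) A + (v'Bv) I] u = R v
                    u = np.linalg.solve(A + (v @ B @ v) * np.eye(m), R @ v)
                    u = u / np.linalg.norm(u)
                    # symmetric step for v with u fixed
                    v = np.linalg.solve(B + (u @ A @ u) * np.eye(n), R.T @ u)
                U = np.column_stack([U, u])
                V = np.column_stack([V, v])
                # subtract the new rank-one term u v^T from the residual
                R = R - np.outer(A @ u, v) - np.outer(u, B @ v)
                if np.linalg.norm(R) <= tol * nrm0:
                    break
            return U, V                       # X ~= U @ V.T

    The algorithm evaluated in the paper layers further machinery on top of each rank-one step, such as the stopping criteria, sparse sub-problem solvers, and GPU implementation mentioned above.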

    Optimality Properties of Galerkin and Petrov-Galerkin Methods for Linear Matrix Equations

    Galerkin and Petrov–Galerkin methods are some of the most successful solution procedures in numerical analysis. Their popularity is mainly due to the optimality properties of their approximate solution. We show that these features carry over to the (Petrov–)Galerkin methods applied for the solution of linear matrix equations. Some novel considerations about the use of Galerkin and Petrov–Galerkin schemes in the numerical treatment of general linear matrix equations are expounded, and the use of constrained minimization techniques in the Petrov–Galerkin framework is proposed.
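
    To make the Galerkin condition concrete, here is a minimal sketch for the special case of the Lyapunov equation $AX + XA^T = BB^T$: approximate $X \approx VYV^T$ with an orthonormal basis $V$ and require the residual to be orthogonal to the trial space, which yields a small projected equation. The block Krylov trial space and all names are assumptions made for this example (and the projected equation is assumed solvable); the paper treats general linear matrix equations and Petrov–Galerkin (two-space) variants as well.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        def galerkin_lyapunov(A, B, k=10):
            """Galerkin approximation X ~= V Y V^T for A X + X A^T = B B^T,
            with B a tall-skinny n x p matrix (low-rank right-hand side)."""
            # orthonormal basis of the block Krylov space span{B, AB, ..., A^(k-1)B}
            blocks, W = [B], B
            for _ in range(k - 1):
                W = A @ W
                blocks.append(W)
            V, _ = np.linalg.qr(np.hstack(blocks))
            # Galerkin condition V^T (A X + X A^T - B B^T) V = 0 gives the
            # small projected equation (V^T A V) Y + Y (V^T A V)^T = (V^T B)(V^T B)^T
            Ak = V.T @ A @ V
            Bk = V.T @ B
            Y = solve_continuous_lyapunov(Ak, Bk @ Bk.T)
            return V, Y                       # X ~= V @ Y @ V.T

    The point of the paper is that such projected solutions inherit optimality properties analogous to those of classical Galerkin and Petrov–Galerkin methods.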

    On the Convergence of Krylov Methods with Low-Rank Truncations

    The Sherman–Morrison–Woodbury formula for generalized linear matrix equations and applications

    We discuss the use of a matrix-oriented approach for numerically solving the dense matrix equation $AX + XA^T + M_1XN_1 + \cdots + M_\ell X N_\ell = F$, with $\ell \ge 1$ and $M_i$, $N_i$, $i = 1, \dots, \ell$, of low rank. The approach relies on the Sherman–Morrison–Woodbury formula, formally defined in the vectorized form of the problem but applied in the matrix setting. This allows one to solve medium-size dense problems with computational costs and memory requirements dramatically lower than with a Kronecker formulation. Application problems leading to medium-size equations of this form are illustrated, and the performance of the matrix-oriented method is reported. The application of the procedure as the core step in the solution of large-scale problems is also shown. In addition, a new explicit method for linear tensor equations is proposed that uses the discussed matrix equation procedure as a key building block.
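
    To make the matrix-oriented use of the formula concrete, here is a minimal sketch for the smallest nontrivial case: $\ell = 1$ with rank-one coefficients $M = pq^T$ and $N = st^T$. In vectorized form the correction term $N^T \otimes M = (t \otimes p)(s \otimes q)^T$ has rank one, so the Sherman–Morrison–Woodbury formula reduces the problem to two solves with the base Lyapunov operator $Z \mapsto AZ + ZA^T$. The function and variable names are chosen for the example; the paper's procedure handles general $\ell$ and general low ranks.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        def smw_rank_one(A, p, q, s, t, F):
            """Solve A X + X A^T + (p q^T) X (s t^T) = F via the SMW formula,
            using only solves with the Lyapunov operator Z -> A Z + Z A^T."""
            lyap = lambda G: solve_continuous_lyapunov(A, G)
            Y = lyap(F)                  # L0^{-1} F
            Z = lyap(np.outer(p, t))     # L0^{-1} u, where u = vec(p t^T) = t kron p
            # SMW in vec form: x = y - L0^{-1}u * (w'y) / (1 + w' L0^{-1}u),
            # with w = vec(q s^T) = s kron q, so that w' vec(Y) = q^T Y s
            alpha = (q @ Y @ s) / (1.0 + q @ Z @ s)
            return Y - alpha * Z

    For $\ell > 1$ or higher ranks, $u$ and $w$ become tall-skinny blocks and the scalar denominator becomes a small square linear system, but the structure is identical.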

    Truncated low-rank methods for solving general linear matrix equations

    This work is concerned with the numerical solution of large-scale linear matrix equations $A_1XB_1^T + \cdots + A_KXB_K^T = C$. The most straightforward approach computes $X \in \mathbb{R}^{m \times n}$ from the solution of an $mn \times mn$ linear system, typically limiting the feasible values of $m, n$ to a few hundred at most. Our new approach exploits the fact that $X$ can often be well approximated by a low-rank matrix. It combines greedy low-rank techniques with Galerkin projection and preconditioned gradients. In turn, only linear systems of size $m \times m$ and $n \times n$ need to be solved. Moreover, these linear systems inherit the sparsity of the coefficient matrices, which makes it possible to address linear matrix equations as large as $m = n = O(10^5)$. Numerical experiments demonstrate that the proposed methods perform well for generalized Lyapunov equations. Even for the case of standard Lyapunov equations, our methods can be advantageous, as we do not need to assume that $C$ has low rank.
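
    For contrast, here is a minimal sketch of the straightforward Kronecker approach the abstract mentions, which forms the full $mn \times mn$ system and is exactly what restricts $m, n$ to a few hundred; the names are chosen for the example.

        import numpy as np

        def solve_via_kronecker(As, Bs, C):
            """Naive solve of sum_k A_k X B_k^T = C by assembling the full
            mn x mn system; O((mn)^3) work, feasible only for small m, n."""
            m, n = C.shape
            # vec(A X B^T) = (B kron A) vec(X), with column-major vec
            L = sum(np.kron(B, A) for A, B in zip(As, Bs))
            x = np.linalg.solve(L, C.reshape(-1, order="F"))
            return x.reshape(m, n, order="F")

    The low-rank approach described above avoids assembling the Kronecker matrix entirely, solving only $m \times m$ and $n \times n$ systems at each step.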