
    On the ordering of sparse linear systems

    In this paper we consider algorithms for transforming an n × n sparse matrix A into another matrix B such that Gaussian elimination applied to B takes time asymptotically less than n^3. These algorithms take the sparse matrix A as input and return a pair of permutation matrices P, Q such that B = PAQ has a small bandwidth or some other desirable form. We study the average effectiveness of these algorithms using random matrices with Θ(n) nonzero elements. We prove that with high probability these algorithms cannot reduce the asymptotic cost of the standard Gaussian elimination algorithm. We also study the effectiveness of these algorithms for ordering very sparse matrices. We show that there exist matrices with 3n nonzeros for which reordering rows and columns does not reduce the asymptotic cost of Gaussian elimination. We also prove that every matrix with at most two nonzeros in each row and in each column can be transformed into a banded matrix with bandwidth five.
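
    As a rough illustration of the kind of reordering these algorithms perform (not the specific algorithms analyzed in the paper), the sketch below applies SciPy's reverse Cuthill-McKee permutation to a random sparse matrix with Θ(n) nonzeros and compares a naive O(n·b^2) banded-elimination cost estimate before and after; the matrix size and density are arbitrary choices for the example.

```python
# Minimal sketch (not the paper's algorithms): reorder a random sparse matrix
# with reverse Cuthill-McKee and compare a rough O(n * b^2) banded Gaussian
# elimination cost before and after the symmetric permutation.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(M):
    """Half-bandwidth: max |i - j| over the nonzero pattern of M."""
    coo = M.tocoo()
    return int(np.max(np.abs(coo.row - coo.col))) if coo.nnz else 0

n = 500
rng = np.random.default_rng(0)
# Random sparse matrix with Theta(n) nonzeros, symmetrized so RCM applies.
A = sp.random(n, n, density=3.0 / n, random_state=rng, format="csr")
A = (A + A.T + sp.eye(n)).tocsr()

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm, :][:, perm]          # B = P A P^T for a symmetric permutation P

for name, M in [("original", A), ("reordered", B)]:
    b = bandwidth(M)
    print(f"{name:9s}  bandwidth={b:4d}  ~banded GE cost n*b^2 = {n * b * b:.2e}")
```

    On a random pattern like this one the bandwidth typically remains Θ(n) even after reordering, which is consistent with the paper's negative result for random matrices.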

    Sparse Gröbner Bases: the Unmixed Case

    Toric (or sparse) elimination theory is a framework developed over the last decades to exploit monomial structures in systems of Laurent polynomials. Roughly speaking, this amounts to computing in a semigroup algebra, i.e. an algebra generated by a subset of Laurent monomials. In order to solve sparse systems symbolically, we introduce sparse Gröbner bases, an analog of classical Gröbner bases for semigroup algebras, and we propose sparse variants of the F_5 and FGLM algorithms to compute them. Our prototype "proof-of-concept" implementation shows large speed-ups (more than 100 for some examples) compared to optimized (classical) Gröbner basis software. Moreover, in the case where the generating subset of monomials corresponds to the points with integer coordinates in a normal lattice polytope P ⊂ R^n, and under regularity assumptions, we prove complexity bounds which depend on the combinatorial properties of P. These bounds yield new estimates on the complexity of solving 0-dimensional systems where all polynomials share the same Newton polytope (unmixed case). For instance, we generalize the bound min(n_1, n_2) + 1 on the maximal degree in a Gröbner basis of a 0-dimensional bilinear system with blocks of variables of sizes (n_1, n_2) to the multilinear case: ∑ n_i − max(n_i) + 1. We also propose a variant of Fröberg's conjecture which allows us to estimate the complexity of solving overdetermined sparse systems. Comment: 20 pages, Corollary 6.1 has been corrected, ISSAC 2014, Kobe, Japan (2014).
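
    For a concrete feel for the bilinear degree bound quoted above, here is a small experiment using classical Gröbner bases in SymPy (not the sparse F_5/FGLM variants the paper introduces): for a generic 0-dimensional affine bilinear system with blocks of sizes (n_1, n_2) = (2, 2), the maximal total degree in a grevlex basis is expected to stay at or below min(n_1, n_2) + 1 = 3. The random coefficients and system below are purely illustrative.

```python
# Illustrative check of the bilinear degree bound with a classical Groebner
# basis (SymPy), not the paper's sparse variants.
import random
from sympy import symbols, groebner, Poly

x1, x2, y1, y2 = symbols("x1 x2 y1 y2")
xs, ys = [x1, x2], [y1, y2]
random.seed(1)

def random_affine_bilinear():
    # generic bilinear part + linear terms + constant (hypothetical test data)
    bilinear = sum(random.randint(-5, 5) * xi * yj for xi in xs for yj in ys)
    linear = sum(random.randint(-5, 5) * v for v in xs + ys)
    return bilinear + linear + random.randint(-5, 5)

system = [random_affine_bilinear() for _ in range(4)]   # 4 equations, 4 unknowns
G = groebner(system, x1, x2, y1, y2, order="grevlex")
max_deg = max(Poly(g, x1, x2, y1, y2).total_degree() for g in G.exprs)
print("maximal total degree in the grevlex basis:", max_deg)   # expect <= 3
```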

    Iterative solutions to the steady state density matrix for optomechanical systems

    We present a sparse matrix permutation from graph theory that gives stable incomplete lower-upper (LU) preconditioners necessary for iterative solutions to the steady state density matrix for quantum optomechanical systems. This reordering is efficient, adding little overhead to the computation, and results in a marked reduction in both memory and runtime requirements compared to other solution methods, with performance gains increasing with system size. Either of these benchmarks can be tuned via the preconditioner accuracy and solution tolerance. This reordering optimizes the condition number of the approximate inverse, and is the only method found to be stable at large Hilbert space dimensions. This allows for steady state solutions to otherwise intractable quantum optomechanical systems. Comment: 10 pages, 5 figures.
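
    The sketch below is a generic version of this workflow, assuming a reverse Cuthill-McKee permutation stands in for the paper's specific reordering and an arbitrary sparse test matrix stands in for the optomechanical Liouvillian: reorder, build an incomplete LU preconditioner with SciPy's spilu, and solve with GMRES. The drop tolerance and fill factor play the role of the "preconditioner accuracy" knob.

```python
# Generic sketch of reorder -> incomplete LU -> iterative solve (not the
# paper's specific permutation; the test matrix is an arbitrary stand-in).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import spilu, gmres, LinearOperator

n = 2000
rng = np.random.default_rng(1)
A = sp.random(n, n, density=5.0 / n, random_state=rng, format="csc")
A = (A + sp.eye(n, format="csc") * 4.0).tocsc()   # keep it comfortably nonsingular
b = rng.standard_normal(n)

# Symmetric permutation from the sparsity pattern; helps keep the ILU factors sparse.
perm = reverse_cuthill_mckee((A + A.T).tocsr(), symmetric_mode=True)
Ap = A[perm, :][:, perm].tocsc()
bp = b[perm]

ilu = spilu(Ap, drop_tol=1e-4, fill_factor=10)    # accuracy / memory trade-off knobs
M = LinearOperator(Ap.shape, matvec=ilu.solve)

x_perm, info = gmres(Ap, bp, M=M, atol=1e-10)
x = np.empty_like(x_perm)
x[perm] = x_perm                                  # undo the permutation
print("gmres info:", info, " residual:", np.linalg.norm(A @ x - b))
```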

    Complexity Analysis and Efficient Measurement Selection Primitives for High-Rate Graph SLAM

    Sparsity has been widely recognized as crucial for efficient optimization in graph-based SLAM. Because the sparsity and structure of the SLAM graph reflect the set of incorporated measurements, many methods for sparsification have been proposed in hopes of reducing computation. These methods often focus narrowly on reducing edge count without regard for structure at a global level. Such structurally-naive techniques can fail to produce significant computational savings, even after aggressive pruning. In contrast, simple heuristics such as measurement decimation and keyframing are known empirically to produce significant computation reductions. To demonstrate why, we propose a quantitative metric called elimination complexity (EC) that bridges the existing analytic gap between graph structure and computation. EC quantifies the complexity of the primary computational bottleneck: the factorization step of a Gauss-Newton iteration. Using this metric, we show rigorously that decimation and keyframing impose favorable global structures and therefore achieve computation reductions on the order of r^2/9 and r^3, respectively, where r is the pruning rate. We additionally present numerical results showing EC provides a good approximation of computation in both batch and incremental (iSAM2) optimization, and demonstrate that pruning methods promoting globally efficient structure outperform those that do not. Comment: Pre-print accepted to ICRA 2018.
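
    The snippet below gives a rough, generic proxy for this idea (the paper's EC metric is its own construction and may be defined differently): it simulates greedy min-degree variable elimination on a toy pose-graph, charges d^2 per eliminated variable of degree d, and compares a densely loop-closed graph against a decimated one. The graph sizes, loop-closure pattern, and pruning rate r are all illustrative.

```python
# Rough proxy for factorization cost via simulated variable elimination
# (not the paper's EC definition): sum of (degree at elimination)^2 under
# a greedy min-degree ordering, with fill-in tracked explicitly.
import heapq
from collections import defaultdict

def elimination_cost(edges, n):
    adj = defaultdict(set)
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    heap = [(len(adj[v]), v) for v in range(n)]
    heapq.heapify(heap)
    eliminated, cost = set(), 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in eliminated or d != len(adj[v]):
            continue                              # stale heap entry
        nbrs = [u for u in adj[v] if u not in eliminated]
        cost += len(nbrs) ** 2                    # proxy cost of this elimination
        for a in nbrs:                            # connect neighbours (fill-in)
            for b in nbrs:
                if a != b:
                    adj[a].add(b)
            adj[a].discard(v)
            heapq.heappush(heap, (len(adj[a]), a))
        eliminated.add(v)
    return cost

n, r = 400, 5                                     # poses, decimation rate (illustrative)
chain = [(i, i + 1) for i in range(n - 1)]
closures = [(i, i + 20) for i in range(0, n - 20, 2)]    # dense loop closures
kept = [(i, j) for (i, j) in closures if i % r == 0]     # keep every r-th closure

print("full     :", elimination_cost(chain + closures, n))
print("decimated:", elimination_cost(chain + kept, n))
```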