    Matching-based preprocessing algorithms to the solution of saddle-point problems in large-scale nonconvex interior-point optimization

    Interior-point methods are among the most efficient approaches for solving large-scale nonlinear programming problems. At the core of these methods, highly ill-conditioned symmetric saddle-point problems have to be solved. We present combinatorial methods to preprocess these matrices in order to establish more favorable numerical properties for the subsequent factorization. Our approach is based on symmetric weighted matchings and is used in a sparse direct LDL^T factorization method where the pivoting is restricted to static supernode data structures. In addition, we dynamically expand the supernode data structure in cases where additional fill-in helps to select better numerical pivot elements. This technique can be seen as an alternative to the more traditional threshold pivoting techniques. We demonstrate the competitiveness of this approach within an interior-point method on a large set of test problems from the CUTE and COPS sets, as well as large optimal control problems based on partial differential equations. The largest nonlinear optimization problem solved has more than 12 million variables and 6 million constraints.
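
    A minimal Python sketch of the matching idea (not the authors' implementation): a maximum-product bipartite matching, computed here with SciPy's min_weight_full_bipartite_matching on MC64-style logarithmic costs, yields a row permutation that moves large entries onto the diagonal before factorization. The function name max_product_row_permutation is illustrative, and the symmetrization of the matching into 1x1 and 2x2 pivot blocks for the LDL^T factorization, the step the paper actually relies on, is omitted here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import min_weight_full_bipartite_matching

def max_product_row_permutation(A):
    """Row permutation p maximizing prod_j |A[p[j], j]| for a square sparse A
    that admits a perfect matching (i.e., is structurally nonsingular)."""
    A = sp.csc_matrix(A)
    absval = np.abs(A.data)
    cols = np.repeat(np.arange(A.shape[1]), np.diff(A.indptr))
    colmax = np.zeros(A.shape[1])
    np.maximum.at(colmax, cols, absval)          # max |a_ij| per column
    # Maximizing the product of moduli == minimizing the sum of
    # log(colmax_j) - log|a_ij| over a full matching; the +1 shift keeps every
    # stored cost strictly positive, so no entry collapses into an ambiguous
    # explicit zero in the sparse cost matrix.
    cost = sp.csc_matrix((1.0 + np.log(colmax[cols]) - np.log(absval),
                          A.indices.copy(), A.indptr.copy()), shape=A.shape)
    rows, matched_cols = min_weight_full_bipartite_matching(cost.tocsr())
    perm = np.empty(A.shape[0], dtype=int)
    perm[matched_cols] = rows                    # column j is matched to row perm[j]
    return perm                                  # A[perm, :] has a large diagonal
```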

    Preordering saddle-point systems for sparse LDL^T factorization without pivoting

    This paper focuses on efficiently solving large sparse symmetric indefinite systems of linear equations in saddle-point form using a fill-reducing ordering technique with a direct solver. Row and column permutations partition the saddle-point matrix into a block structure constituting a priori pivots of order 1 and 2. The partitioned matrix is compressed by treating each nonzero block as a single entry, and a fill-reducing ordering is applied to the corresponding compressed graph. It is shown that, provided the saddle-point matrix satisfies certain criteria, a block LDL^T factorization can be computed using the resulting pivot sequence without modification. Numerical results for a range of problems from practical applications using a modern sparse direct solver are presented to illustrate the effectiveness of the approach.
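
    The compression step can be illustrated with a short Python sketch, assuming the a priori 1x1 and 2x2 pivot blocks (the selection of which is the paper's actual contribution) are already given. SciPy's reverse_cuthill_mckee stands in for the fill-reducing orderings (minimum degree, nested dissection) a production sparse direct solver would use, and compressed_ordering / pivot_blocks are illustrative names.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def compressed_ordering(K, pivot_blocks):
    """K: symmetric sparse saddle-point matrix.
    pivot_blocks: list of 1- or 2-tuples of indices covering each index of K
    exactly once; each tuple is an a priori pivot of order 1 or 2."""
    n = K.shape[0]
    group = np.empty(n, dtype=int)
    for g, block in enumerate(pivot_blocks):
        for idx in block:
            group[idx] = g
    # Compress: blocks g and h are adjacent iff any of their members are.
    K = sp.coo_matrix(K)
    rows, cols = group[K.row], group[K.col]
    m = len(pivot_blocks)
    C = sp.coo_matrix((np.ones_like(rows, dtype=float), (rows, cols)),
                      shape=(m, m)).tocsr()
    order = reverse_cuthill_mckee(C, symmetric_mode=True)
    # Expand: list the members of each compressed node in the chosen order,
    # so every 2x2 pivot stays contiguous in the final elimination order.
    return [int(idx) for g in order for idx in pivot_blocks[g]]
```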

    BootCMatch: A software package for bootstrap AMG based on graph weighted matching

    This article has two main objectives: one is to describe some extensions of an adaptive Algebraic Multigrid (AMG) method of the form previously proposed by the first and third authors, and the second is to present a new software framework, named BootCMatch, which implements all the components needed to build and apply the described adaptive AMG both as a stand-alone solver and as a preconditioner in a Krylov method. The adaptive AMG presented is meant to handle general symmetric positive definite (SPD) sparse linear systems without assuming any a priori information about the problem or its origin; the goal of adaptivity is to achieve a method with a prescribed convergence rate. The presented method exploits a general coarsening process based on aggregation of unknowns, obtained by a maximum weight matching in the adjacency graph of the system matrix. More specifically, a maximum product matching is employed to define an effective smoother subspace (complementary to the coarse space), a process referred to as compatible relaxation, at every level of the recursive two-level hierarchical AMG process. Results on a large variety of test cases and comparisons with related work demonstrate the reliability and efficiency of the method and of the software.
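
    A hedged Python illustration of matching-based aggregation (a simplification, not BootCMatch itself): a greedy maximum-weight matching on |a_ij| pairs up unknowns into aggregates, from which a piecewise-constant prolongation and a Galerkin coarse operator are built. BootCMatch instead uses exact or approximate maximum product matchings with edge weights derived from a smooth test vector; the names matching_aggregates and coarse_operator are illustrative.

```python
import numpy as np
import scipy.sparse as sp

def matching_aggregates(A):
    """Pairwise aggregates of an SPD sparse matrix from a greedy matching."""
    A = sp.coo_matrix(A)
    mask = A.row < A.col                       # strictly upper off-diagonal edges
    edges = sorted(zip(np.abs(A.data[mask]), A.row[mask], A.col[mask]),
                   reverse=True)               # heaviest edges first
    matched = np.full(A.shape[0], -1, dtype=int)
    aggregates = []
    for _, i, j in edges:
        if matched[i] < 0 and matched[j] < 0:  # both endpoints still free
            matched[i] = matched[j] = len(aggregates)
            aggregates.append([int(i), int(j)])
    for i in range(A.shape[0]):                # leftover nodes become singletons
        if matched[i] < 0:
            matched[i] = len(aggregates)
            aggregates.append([i])
    return matched, aggregates

def coarse_operator(A, matched):
    """Piecewise-constant prolongation P and Galerkin coarse matrix P^T A P."""
    A = sp.csr_matrix(A)
    n, nc = A.shape[0], matched.max() + 1
    P = sp.csr_matrix((np.ones(n), (np.arange(n), matched)), shape=(n, nc))
    return P, (P.T @ A @ P).tocsr()
```

    Applied recursively, P^T A P defines the next coarser level; the compatible-relaxation smoother construction described above is not reproduced in this sketch.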

    Combinatorial problems in solving linear systems

    42 pages, available as LIP research report RR-2009-15. Numerical linear algebra and combinatorial optimization are vast subjects, as is their interaction. In virtually all cases there should be a notion of sparsity for a combinatorial problem to arise. Sparse matrices therefore form the basis of the interaction of these two seemingly disparate subjects. As the core of many of today's numerical linear algebra computations consists of the solution of sparse linear systems by direct or iterative methods, we survey some combinatorial problems, ideas, and algorithms relating to these computations. On the direct methods side, we discuss issues such as matrix ordering; bipartite matching and matrix scaling for better pivoting; and task assignment and scheduling for parallel multifrontal solvers. On the iterative methods side, we discuss preconditioning techniques, including incomplete factorization preconditioners, support graph preconditioners, and algebraic multigrid. In a separate part, we discuss the block triangular form of sparse matrices.
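
    The survey's separate part on the block triangular form can be made concrete with a hedged Python sketch built from SciPy routines (block_triangular_form is an illustrative name): a maximum bipartite matching first permutes a structurally zero-free diagonal into place, the strongly connected components of the resulting directed graph give the diagonal blocks, and a topological sort of the condensation orders the blocks.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import maximum_bipartite_matching, connected_components

def block_triangular_form(A):
    """Row/column permutations putting a structurally nonsingular square
    sparse matrix into block upper triangular form."""
    A = sp.csr_matrix(A)
    # Step 1: matching -> row permutation with a structurally zero-free diagonal.
    match = maximum_bipartite_matching(A, perm_type='row')
    if (match < 0).any():
        raise ValueError("matrix is structurally singular")
    B = A[match, :]
    # Step 2: strongly connected components of B's directed graph are the blocks.
    ncomp, labels = connected_components(B, directed=True, connection='strong')
    # Step 3: topologically order the blocks (Kahn's algorithm on the
    # condensation) so every off-diagonal nonzero lands above the diagonal.
    C = sp.coo_matrix(B)
    src, dst = labels[C.row], labels[C.col]
    keep = src != dst
    cond = sp.csr_matrix((np.ones(keep.sum()), (src[keep], dst[keep])),
                         shape=(ncomp, ncomp))
    cond.data[:] = 1.0                          # duplicate edges were summed
    indeg = np.asarray(cond.sum(axis=0)).ravel().astype(int)
    ready = [c for c in range(ncomp) if indeg[c] == 0]
    topo = []
    while ready:
        c = ready.pop()
        topo.append(c)
        for d in cond.indices[cond.indptr[c]:cond.indptr[c + 1]]:
            indeg[d] -= 1
            if indeg[d] == 0:
                ready.append(int(d))
    col_perm = np.concatenate([np.flatnonzero(labels == c) for c in topo])
    row_perm = match[col_perm]
    return row_perm, col_perm    # A[row_perm][:, col_perm] is block upper triangular
```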

    A max-plus approach to incomplete Cholesky factorization preconditioners

    We present a new method for constructing incomplete Cholesky factorization preconditioners for use in solving large sparse symmetric positive-definite linear systems. This method uses max-plus algebra to predict the positions of the largest entries in the Cholesky factor and then uses these positions as the sparsity pattern for the preconditioner. Our method builds on the max-plus incomplete LU factorization preconditioner recently proposed in [J. Hook and F. Tisseur, Incomplete LU preconditioner based on max-plus approximation of LU factorization, MIMS EPrint 2016.47, Manchester, 2016] but applied to symmetric positive-definite matrices, which comprise an important special case for the method and its application. An attractive feature of our approach is that the sparsity pattern of each column of the preconditioner can be computed in parallel. Numerical comparisons are made with other incomplete Cholesky factorization preconditioners using problems from a range of practical applications. We demonstrate that the new preconditioner can outperform traditional level-based preconditioners and offer a parallel alternative to a serial limited-memory-based approach.
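
    A toy Python sketch of what "max-plus prediction" means, under simplifying assumptions and not the paper's algorithm: the Cholesky update a_ij <- a_ij - a_ik a_kj / a_kk is replaced by its max-plus analogue on log-magnitudes, and positions whose predicted magnitude exceeds a drop tolerance form the pattern of the incomplete factor. The actual method builds on Hungarian scaling and computes each column's pattern independently (hence in parallel), which this dense loop does not attempt; maxplus_pattern and the relative drop criterion are illustrative.

```python
import numpy as np

def maxplus_pattern(A, droptol=1e-2):
    """Predicted sparsity pattern of an incomplete Cholesky factor of a dense
    SPD array A (toy sizes only)."""
    n = A.shape[0]
    with np.errstate(divide='ignore'):           # log(0) -> -inf marks non-edges
        V = np.log(np.abs(np.asarray(A, dtype=float)))
    for k in range(n - 1):
        # Max-plus surrogate of the Schur-complement update: addition becomes
        # max, multiplication/division become +/- on log-magnitudes.
        col = V[k + 1:, k]
        row = V[k, k + 1:]
        V[k + 1:, k + 1:] = np.maximum(V[k + 1:, k + 1:],
                                       col[:, None] + row[None, :] - V[k, k])
    # Keep lower-triangular positions that are predicted to be large relative
    # to the corresponding diagonal entries; always keep the diagonal itself.
    rel = V - 0.5 * (np.diag(V)[:, None] + np.diag(V)[None, :])
    return np.tril(rel >= np.log(droptol)) | np.eye(n, dtype=bool)
```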

    Sparse Recovery of Positive Signals with Minimal Expansion

    We investigate the sparse recovery problem of reconstructing a high-dimensional non-negative sparse vector from lower-dimensional linear measurements. While much work has focused on dense measurement matrices, sparse measurement schemes are crucial in applications such as DNA microarrays and sensor networks, where dense measurements are not practically feasible. One possible construction uses the adjacency matrices of expander graphs, which often leads to recovery algorithms much more efficient than ℓ1 minimization. However, to date, constructions based on expanders have required very high expansion coefficients, which can potentially make the construction of such graphs difficult and the size of the recoverable sets small. In this paper, we construct sparse measurement matrices for the recovery of non-negative vectors, using perturbations of the adjacency matrix of an expander graph with much smaller expansion coefficient. We present a necessary and sufficient condition for ℓ1 optimization to successfully recover the unknown vector and obtain expressions for the recovery threshold. For certain classes of measurement matrices, this necessary and sufficient condition is further equivalent to the existence of a "unique" vector in the constraint set, which opens the door to alternative algorithms to ℓ1 minimization. We further show that the minimal expansion we use is necessary for any graph for which sparse recovery is possible, and that our construction is therefore tight. We also present a novel recovery algorithm that exploits expansion and is much faster than ℓ1 optimization. Finally, we demonstrate through theoretical bounds, as well as simulation, that our method is robust to noise and approximate sparsity. Comment: 25 pages, submitted for publication.
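
    A hedged Python illustration, not the paper's construction: a non-negative sparse vector is measured with the 0/1 adjacency matrix of a random bipartite graph of small left degree and recovered by ℓ1 minimization, which over the non-negative orthant reduces to a single linear program. The random matrix stands in for the perturbed expander adjacency matrices constructed in the paper, and the dimensions n, m, d, k are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, d, k = 200, 80, 8, 10      # signal length, measurements, left degree, sparsity

# Each signal coordinate (left node) is connected to d random measurements.
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 10.0, size=k)
y = A @ x_true

# l1 minimization over x >= 0 is the LP:  min 1^T x  s.t.  A x = y, x >= 0.
res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=(0, None), method="highs")
print("max recovery error:", np.abs(res.x - x_true).max())
```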

    Perturbation-Tolerant Structural Controllability for Linear Systems

    This paper proposes a novel notion named perturbation-tolerant structural controllability (PTSC) to study controllability preservation for a structured linear system under structured perturbations. To be precise, we consider a structured system whose entries can be classified into three categories: fixed zero entries, unknown generic entries whose values are fixed but unknown, and perturbed entries that can take arbitrary complex values. Such a system is PTSC if, for almost all values of the unknown generic entries in the parameter space, the corresponding controllable system realizations preserve controllability under arbitrary complex-valued perturbations whose zero/nonzero structure is prescribed by the perturbed entries. We prove that genericity holds for this notion; that is, depending on the structure of the structured system, for almost all of its controllable realizations, either there exists a structured perturbation prescribed by the perturbed entries whose addition makes the resulting system uncontrollable, or no such perturbation exists. We give a decomposition-based necessary and sufficient condition for PTSC of a single-input linear system, whose verification has polynomial time complexity, and then present some intuitive graph-theoretic conditions for PTSC. For the multi-input case, we provide some necessary conditions for PTSC. As an application, our results can serve as feasibility conditions for the conventional structured controllability radius problems from a generic viewpoint. Comment: Fix some typos. arXiv admin note: substantial text overlap with arXiv:2103.1190
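
    For orientation, a minimal Python sketch of the baseline notion the paper builds on, not of PTSC itself: structural controllability is a generic property, so it can be probed by assigning random values to the unknown generic entries and checking the Kalman rank condition; if one random realization is controllable, almost all are. Verifying PTSC would additionally require checking whether a perturbation confined to the prescribed entries can destroy controllability, which this sketch does not do; generically_controllable is an illustrative helper.

```python
import numpy as np

def generically_controllable(A_pattern, B_pattern, trials=5, rng=None):
    """A_pattern, B_pattern: 0/1 arrays marking the unknown generic entries."""
    rng = np.random.default_rng() if rng is None else rng
    n = A_pattern.shape[0]
    for _ in range(trials):
        A = A_pattern * rng.standard_normal(A_pattern.shape)
        B = B_pattern * rng.standard_normal(B_pattern.shape)
        # Kalman controllability matrix [B, AB, ..., A^{n-1} B].
        blocks, M = [B], B
        for _ in range(n - 1):
            M = A @ M
            blocks.append(M)
        if np.linalg.matrix_rank(np.hstack(blocks)) == n:
            return True          # controllable for some, hence almost all, values
    return False
```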

    Adaptive AMG with Coarsening Based on Compatible Weighted Matching
