
    An orthogonally based pivoting transformation of matrices and some applications

    In this paper we discuss the power of a pivoting transformation introduced by Castillo, Cobo, Jubete, and Pruneda [Orthogonal Sets and Polar Methods in Linear Algebra: Applications to Matrix Calculations, Systems of Equations and Inequalities, and Linear Programming, John Wiley, New York, 1999] and its multiple applications. The meaning of each sequential tableau appearing during the pivoting process is interpreted. It is shown that each tableau of the process corresponds to the inverse of a row-modified matrix and contains the generators of the linear subspace orthogonal to a set of vectors and its complement. This transformation, which is based on the orthogonality concept, allows us to solve many problems of linear algebra, such as calculating the inverse and the determinant of a matrix, updating the inverse or the determinant of a matrix after changing a row (column), determining the rank of a matrix, determining whether or not a set of vectors is linearly independent, obtaining the intersection of two linear subspaces, solving systems of linear equations, etc. When the process is applied to inverting a matrix and calculating its determinant, not only is the inverse of the final matrix obtained, but also the inverses and the determinants of all its block main diagonal matrices, all without extra computations.
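
    One of the applications listed above, updating the inverse and the determinant after a row change, can also be done with the classical Sherman-Morrison identity together with the matrix determinant lemma. The NumPy sketch below shows that standard rank-one update; it is a stand-in for illustration, not the authors' orthogonal pivoting transformation, and the function name is ours.

```python
import numpy as np

def replace_row(A, A_inv, det_A, i, new_row):
    """Update the inverse and determinant of A after replacing row i
    with new_row, via the Sherman-Morrison identity:
    (A + e_i v^T)^{-1} = A^{-1} - (A^{-1} e_i)(v^T A^{-1}) / (1 + v^T A^{-1} e_i),
    and the matrix determinant lemma for the determinant."""
    v = new_row - A[i]            # the change is the rank-one term e_i v^T
    Ainv_ei = A_inv[:, i]         # A^{-1} e_i  (column i of the inverse)
    vT_Ainv = v @ A_inv           # v^T A^{-1}
    denom = 1.0 + vT_Ainv[i]      # 1 + v^T A^{-1} e_i (zero => singular update)
    new_inv = A_inv - np.outer(Ainv_ei, vT_Ainv) / denom
    new_det = det_A * denom       # det(A + e_i v^T) = det(A) (1 + v^T A^{-1} e_i)
    return new_inv, new_det

# quick check against direct recomputation
A = np.random.rand(5, 5) + 5 * np.eye(5)
A_inv, det_A = np.linalg.inv(A), np.linalg.det(A)
B = A.copy(); B[2] = np.random.rand(5)
inv2, det2 = replace_row(A, A_inv, det_A, 2, B[2])
assert np.allclose(inv2, np.linalg.inv(B)) and np.isclose(det2, np.linalg.det(B))
```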

    A distributed-memory package for dense Hierarchically Semi-Separable matrix computations using randomization

    We present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable representations (HSS). Such matrices appear in many applications, e.g., finite element methods, boundary element methods, etc. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. This work is part of a more global effort, the STRUMPACK (STRUctured Matrices PACKage) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.
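
    As a hedged illustration of the randomized sampling idea, the sketch below applies the standard randomized range finder (Halko, Martinsson, and Tropp) to a single numerically low-rank block; STRUMPACK's actual adaptive HSS compression and its API differ, and the fixed-rank interface here is our simplification.

```python
import numpy as np

def randomized_low_rank(A, rank, oversample=10, rng=None):
    """Randomized range finder: A ~= Q @ B with Q orthonormal.
    Sampling Y = A @ Omega captures the dominant column space of A with high
    probability when A has low numerical rank (as HSS off-diagonal blocks do)."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((A.shape[1], rank + oversample))  # random probes
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for the sampled range
    B = Q.T @ A                      # small factor; A ~= Q @ B
    return Q, B

# a numerically low-rank block: smooth kernel on two well-separated point sets
x, y = np.linspace(0, 1, 300), np.linspace(2, 3, 300)
K = 1.0 / np.abs(x[:, None] - y[None, :])
Q, B = randomized_low_rank(K, rank=12)
print(np.linalg.norm(K - Q @ B) / np.linalg.norm(K))  # small relative error
```

    An adaptive variant along the lines of the paper's mechanism would keep appending random probe columns until an estimate of the residual norm drops below a tolerance, instead of fixing the rank in advance.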

    Fast linear algebra is stable

    In an earlier paper, we showed that a large class of fast recursive matrix multiplication algorithms is stable in a normwise sense, and that in fact if multiplication of $n$-by-$n$ matrices can be done by any algorithm in $O(n^{\omega + \eta})$ operations for any $\eta > 0$, then it can be done stably in $O(n^{\omega + \eta})$ operations for any $\eta > 0$. Here we extend this result to show that essentially all standard linear algebra operations, including LU decomposition, QR decomposition, linear equation solving, matrix inversion, solving least squares problems, (generalized) eigenvalue problems and the singular value decomposition, can also be done stably (in a normwise sense) in $O(n^{\omega + \eta})$ operations. Comment: 26 pages; final version; to appear in Numerische Mathematik.
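
    A concrete member of the class of fast recursive multiplication algorithms covered by this result is Strassen's method, which runs in $O(n^{\log_2 7}) \approx O(n^{2.81})$ operations. The sketch below is a minimal NumPy version, assuming square power-of-two sizes and using a cutoff below which it falls back to conventional multiplication.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's recursive multiply with 7 half-size products per level.
    Assumes square matrices whose size is a power of two; below the cutoff
    it falls back to the conventional (BLAS) product."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
assert np.allclose(strassen(A, B), A @ B)   # normwise agreement with BLAS
```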

    Two dimensional search algorithms for linear programming

    Linear programming is one of the most important classes of optimization problems. These mathematical models have been used by academics and practitioners to solve numerous real world applications. Quickly solving linear programs impacts decision makers from both the public and private sectors. Substantial research has been performed to solve this class of problems faster, and the vast majority of the solution techniques can be categorized as one dimensional search algorithms. That is, these methods successively move from one solution to another solution by solving a one dimensional subspace linear program at each iteration. This dissertation proposes novel algorithms that move between solutions by repeatedly solving a two dimensional subspace linear program. Computational experiments demonstrate the potential of these newly developed algorithms and show an average improvement of nearly 25% in solution time when compared to the corresponding one dimensional search version.

    This dissertation's research creates the core concept of these two dimensional search algorithms, which is a fast technique to determine an optimal basis and an optimal solution to linear programs with only two variables. This method, called the slope algorithm, compares the slope formed by the objective function with the slope formed by each constraint to determine a pair of constraints that intersect at an optimal basis and an optimal solution. The slope algorithm is implemented within a simplex framework to perform two dimensional searches, resulting in the double pivot simplex method. Unlike the well-known simplex method, the double pivot simplex method simultaneously pivots up to two basic variables with two nonbasic variables at each iteration. The theoretical computational complexity of the double pivot simplex method is identical to that of the simplex method. Computational results show that this new algorithm reduces the number of pivots needed to solve benchmark instances by approximately 40% when compared to the classical implementation of the simplex method, and by 20% when compared to the primal simplex implementation of CPLEX, a high performance mathematical programming solver. Solution times of some random linear programs are also improved by nearly 25% on average.

    This dissertation also presents a novel technique, called the ratio algorithm, to find an optimal basis and an optimal solution to linear programs with only two constraints. When the ratio algorithm is implemented within a simplex framework to perform two dimensional searches, it results in the double pivot dual simplex method. In this case, the double pivot dual simplex method behaves similarly to the dual simplex method, but two variables are exchanged at every step.

    Two dimensional searches are also implemented within an interior point framework. This dissertation creates a set of four two dimensional search interior point algorithms derived from primal and dual affine scaling and logarithmic barrier search directions. Each iteration of these techniques quickly solves a two dimensional subspace linear program formed by the intersection of two search directions and the feasible region of the linear program. Search directions are derived by orthogonally partitioning the objective function vector, which allows these novel methods to improve the objective function value at each step by at least as much as the corresponding one dimensional search version. Computational experiments performed on benchmark linear programs demonstrate that these two dimensional search interior point algorithms improve the average solution time by approximately 12% and the average number of iterations by 15%.

    In conclusion, this dissertation provides a change of paradigm in linear programming optimization algorithms. Implementing two dimensional searches within both a simplex and an interior point framework typically reduces the computational time and the number of iterations needed to solve linear programs. Furthermore, this dissertation sets the stage for future research topics in multidimensional search algorithms to solve not only linear programs but also other critical classes of optimization problems. Consequently, this dissertation's research can become one of the first steps to change how commercial and open source mathematical programming software will solve optimization problems.
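
    As a rough illustration of the two-variable subproblem at the heart of the slope algorithm, the sketch below maximizes c.x subject to Ax <= b in two dimensions by brute-force enumeration of constraint intersections. This is a stand-in of our own, not the slope algorithm itself, which locates the optimal constraint pair far more cheaply by comparing slopes.

```python
import itertools
import numpy as np

def solve_2d_lp(c, A, b, tol=1e-9):
    """Maximize c @ x subject to A @ x <= b for x in R^2 by enumerating
    the vertices where pairs of constraint lines intersect, keeping the
    best feasible one. O(m^3) in the number of constraints m, so only a
    didactic baseline for the two-dimensional subproblem."""
    best_x, best_val = None, -np.inf
    for i, j in itertools.combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < tol:
            continue                          # parallel constraints: no vertex
        x = np.linalg.solve(M, b[[i, j]])     # intersection of the two lines
        if np.all(A @ x <= b + tol):          # feasibility against all constraints
            val = c @ x
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# maximize 3x + 2y s.t. x + y <= 4, x <= 3, y <= 3, x >= 0, y >= 0
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 3.0, 3.0, 0.0, 0.0])
print(solve_2d_lp(c, A, b))   # optimum at (3, 1) with value 11
```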

    CAUCHY-LIKE PRECONDITIONERS FOR 2-DIMENSIONAL ILL-POSED PROBLEMS

    Ill-conditioned matrices with block Toeplitz, Toeplitz block (BTTB) structure arise from the discretization of certain ill-posed problems in signal and image processing. We use a preconditioned conjugate gradient algorithm to compute a regularized solution to this linear system given noisy data. Our preconditioner is a Cauchy-like block diagonal approximation to an orthogonal transformation of the BTTB matrix. We show the preconditioner has desirable properties when the kernel of the ill-posed problem is smooth: the largest singular values of the preconditioned matrix are clustered around one, the smallest singular values remain small, and the subspaces corresponding to the largest and smallest singular values, respectively, remain unmixed. For a system involving $np$ variables, the preconditioned algorithm costs only $O(np(\lg n + \lg p))$ operations per iteration. We demonstrate the effectiveness of the preconditioner on three examples.
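
    The outer iteration here is a standard preconditioned conjugate gradient method for symmetric positive definite systems. The sketch below is a generic PCG loop in which prec_solve is a placeholder callable standing in for the application of the paper's Cauchy-like block diagonal preconditioner, whose construction is the paper's actual contribution.

```python
import numpy as np

def pcg(matvec, b, prec_solve, tol=1e-8, maxiter=200):
    """Preconditioned conjugate gradient for SPD systems A x = b.
    matvec(x) applies the system matrix (e.g., a fast BTTB product);
    prec_solve(r) applies M^{-1} for the chosen preconditioner M."""
    x = np.zeros_like(b)
    r = b - matvec(x)                 # initial residual
    z = prec_solve(r)                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = prec_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # update search direction
        rz = rz_new
    return x

# toy example: diagonal SPD system with a Jacobi-style preconditioner
d = np.linspace(1.0, 100.0, 200)
b = np.ones(200)
x = pcg(lambda v: d * v, b, lambda r: r / d)
assert np.allclose(d * x, b)
```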

    Ideology and existence of 50%-majority equilibria in multidimensional spatial voting models

    When aggregating individual preferences through the majority rule in an n-dimensional spatial voting model, the `worst-case' scenario is a social choice configuration where no political equilibrium exists unless a super majority rate as high as 1-1/n is adopted. In this paper we assume that a lower d-dimensional (d smaller than n) linear map spans the possible candidates' platforms. These d `ideological' dimensions imply some linkages between the n political issues. We randomize over these linkages and show that a 50%-majority equilibrium almost surely exists in the above worst-case scenario when n grows to infinity. Moreover, the equilibrium is the mean voter. The speed of convergence (toward 50%) of the super majority rate guaranteeing existence of equilibrium is computed for d=1 and 2.
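
    The claim invites a quick Monte Carlo check. The sketch below is our own construction, not the paper's model or proof: it assumes Gaussian ideal points and Euclidean preferences, confines platforms to the span of a random d-by-n linkage map, and estimates how often the mean voter's platform survives a 50%-majority challenge from a random opponent.

```python
import numpy as np

def mean_voter_survival_rate(n_issues=50, d=2, n_voters=501, trials=500, seed=0):
    """Rough Monte Carlo illustration: candidates' platforms are confined to
    the span of a random d x n linkage map L; voters have Euclidean
    preferences over all n issues. Returns the estimated fraction of random
    challengers that fail to beat the mean voter's platform by 50% majority."""
    rng = np.random.default_rng(seed)
    L = rng.standard_normal((d, n_issues))              # random ideological linkages
    ideals = rng.standard_normal((n_voters, n_issues))  # voters' ideal points
    # mean voter's platform: projection of the mean ideal point onto span(L)
    m = ideals.mean(axis=0)
    mean_platform = L.T @ np.linalg.solve(L @ L.T, L @ m)
    survived = 0
    for _ in range(trials):
        challenger = rng.standard_normal(d) @ L         # platform in span(L)
        # each voter votes for the platform closer to their ideal point
        closer = (np.linalg.norm(ideals - challenger, axis=1)
                  < np.linalg.norm(ideals - mean_platform, axis=1))
        if closer.sum() <= n_voters / 2:                # challenger misses 50%
            survived += 1
    return survived / trials

print(mean_voter_survival_rate())
```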