
    Foundational Factorization Algorithms for the Efficient Roundoff-Error-Free Solution of Optimization Problems

    LU and Cholesky factorizations play a central role in solving linear and mixed-integer programs. In many documented cases, the roundoff errors accrued during the construction and implementation of these factorizations cause the misclassification of suboptimal solutions as optimal and of infeasible problems as feasible, and vice versa. Such erroneous outputs bring the reliability of optimization solvers into question; it is therefore imperative to eliminate these roundoff errors altogether, and to do so efficiently to ensure practicality. First, this work introduces two roundoff-error-free (REF) factorizations constructed exclusively in integer arithmetic: the REF LU and Cholesky factorizations. Additionally, it develops supplementary integer-preserving substitution algorithms, thereby providing a complete tool set for solving systems of linear equations (SLEs) exactly and efficiently. An inherent property of the REF factorization algorithms is that their entries' bit-length (i.e., the number of bits required for expression) is bounded polynomially. Unlike the exact rational arithmetic methods used in practice, however, the algorithms presented herein do not require any greatest common divisor operations to guarantee this pivotal property. Second, this work derives various useful theoretical results and details computational tests demonstrating that the REF factorization framework is considerably superior to the rational-arithmetic LU factorization approach in computational performance and storage requirements. This is significant because the latter approach is the solution-validation tool of choice in state-of-the-art exact linear programming solvers, owing to its ability to handle both numerically difficult and intricate problems. An additional theoretical contribution and further computational tests also demonstrate the superiority of the featured framework over Q-matrices, an alternative integer-preserving approach relying on the basis adjunct matrix. Third, this work develops special algorithms for updating the REF factorizations. This is necessary because applying the traditional update approach to the REF factorizations is inefficient in terms of entry growth and computational effort; these inefficiencies virtually wipe out all the computational savings commonly expected of factorization updates. Hence, the current work develops REF update algorithms that differ significantly from their traditional counterparts. The featured REF updates are column/row addition, deletion, and replacement.
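
The key property claimed above, integer-only arithmetic with exact divisions and hence no GCD computations, is shared by the classical Bareiss fraction-free elimination. The minimal Python sketch below illustrates that idea only; it is not the REF LU/Cholesky algorithm of this work, and it assumes nonzero leading principal minors (no pivoting).

```python
def bareiss(M):
    """Fraction-free (Bareiss) elimination on an integer matrix.
    Every intermediate entry stays an integer: each division below is exact
    (by Sylvester's identity), so no GCD or rational arithmetic is needed.
    Assumes all leading principal minors are nonzero (no pivoting)."""
    A = [row[:] for row in M]
    n = len(A)
    prev = 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev  # exact division
            A[i][k] = 0
        prev = A[k][k]
    return A  # A[n-1][n-1] equals det(M)

M = [[2, 1, 1], [1, 3, 1], [1, 1, 4]]
print(bareiss(M)[-1][-1])  # 17 == det(M)
```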

    Fast and accurate con-eigenvalue algorithm for optimal rational approximations

    The need to compute small con-eigenvalues and the associated con-eigenvectors of positive-definite Cauchy matrices naturally arises when constructing rational approximations with a (near) optimally small L^∞ error. Specifically, given a rational function with n poles in the unit disk, a rational approximation with m ≪ n poles in the unit disk may be obtained from the m-th con-eigenvector of an n × n Cauchy matrix, where the associated con-eigenvalue λ_m > 0 gives the approximation error in the L^∞ norm. Unfortunately, standard algorithms do not accurately compute small con-eigenvalues (and the associated con-eigenvectors) and, in particular, yield few or no correct digits for con-eigenvalues smaller than the machine roundoff. We develop a fast and accurate algorithm for computing con-eigenvalues and con-eigenvectors of positive-definite Cauchy matrices, yielding even the tiniest con-eigenvalues with high relative accuracy. The algorithm computes the m-th con-eigenvalue in O(m²n) operations and, since the con-eigenvalues of positive-definite Cauchy matrices decay exponentially fast, we obtain (near) optimal rational approximations in O(n (log δ⁻¹)²) operations, where δ is the approximation error in the L^∞ norm. We derive error bounds demonstrating high relative accuracy of the computed con-eigenvalues and the high accuracy of the unit con-eigenvectors. We also provide examples of using the algorithm to compute (near) optimal rational approximations of functions with singularities and sharp transitions, where approximation errors close to machine precision are obtained. Finally, we present numerical tests on random (complex-valued) Cauchy matrices to show that the algorithm computes all the con-eigenvalues and con-eigenvectors with nearly full precision.
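
For context, a con-eigenpair of C is a pair (λ, v) with C·conj(v) = λ·v and λ ≥ 0. The sketch below computes con-eigenpairs of a Hermitian positive-definite Cauchy-type matrix the naive way, through the eigendecomposition of conj(C)·C; it only illustrates the definition and, unlike the paper's algorithm, cannot resolve con-eigenvalues below machine roundoff with any relative accuracy. The matrix used here, C_ij = 1/(1 − conj(z_i) z_j) for points z in the unit disk, is just one convenient positive-definite example, not the specific matrix of the paper.

```python
import numpy as np

def con_eig_naive(C):
    """Con-eigenpairs (lam >= 0, v) with C @ conj(v) = lam * v, computed via the
    eigendecomposition of conj(C) @ C (whose eigenvalues are lam**2).
    Accuracy is limited to the absolute level ~eps*||C||, so tiny con-eigenvalues
    come out with few or no correct digits, as the abstract notes."""
    mu, Y = np.linalg.eig(np.conj(C) @ C)
    lams, vecs = [], []
    for m, y in zip(mu.real, Y.T):
        lam = np.sqrt(max(m, 0.0))
        w = C @ y + lam * np.conj(y)              # standard con-eigenvector construction
        if np.linalg.norm(w) < 1e-12 * np.linalg.norm(C):
            w = 1j * np.conj(y)                   # degenerate case C @ y = -lam * conj(y)
        lams.append(lam)
        vecs.append(w / np.linalg.norm(w))
    order = np.argsort(lams)[::-1]
    return np.asarray(lams)[order], np.asarray(vecs)[order]

rng = np.random.default_rng(0)
z = 0.9 * np.sqrt(rng.random(8)) * np.exp(2j * np.pi * rng.random(8))  # points in the unit disk
C = 1.0 / (1.0 - np.conj(z)[:, None] * z[None, :])                     # Hermitian positive definite
lam, V = con_eig_naive(C)
print(max(np.linalg.norm(C @ np.conj(v) - l * v) for l, v in zip(lam, V)))  # small residuals
```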

    Error Analysis of the Cholesky QR-Based Block Orthogonalization Process for the One-Sided Block Jacobi SVD Algorithm

    The one-sided block Jacobi method (OSBJ) has attracted attention as a fast and accurate algorithm for the singular value decomposition (SVD). The computational kernel of OSBJ is the orthogonalization of a pair of column blocks, which amounts to computing the SVD of this block pair. Hari proposes three methods for this partial SVD, and we found through numerical experiments that the variant named "V2", which is based on the Cholesky QR method, is the fastest and achieves satisfactory accuracy. While this is good news from a practical viewpoint, it seems surprising given the well-known instability of the Cholesky QR method. In this paper, we perform a detailed error analysis of the V2 variant and explain why and when it can be used to compute the partial SVD accurately. Thus, our results provide theoretical support for using the V2 variant safely in the OSBJ method.
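
For reference, the Cholesky QR step underlying the "V2" variant orthogonalizes a block by forming its Gram matrix and the Cholesky factor of that Gram matrix. The sketch below shows only this textbook kernel (not Hari's V2 variant or its use inside OSBJ), and it exhibits the instability mentioned above: orthogonality degrades once the squared condition number of the block approaches 1/eps.

```python
import numpy as np

def cholesky_qr(A):
    """Orthogonalize the columns of A via Cholesky QR: A = Q @ R, where R is the
    upper-triangular Cholesky factor of the Gram matrix A^T A and Q = A @ inv(R).
    Loses orthogonality (or breaks down) when cond(A)^2 approaches 1/eps."""
    G = A.T @ A
    R = np.linalg.cholesky(G).T        # upper-triangular Cholesky factor of G
    Q = np.linalg.solve(R.T, A.T).T    # Q = A @ inv(R)
    return Q, R

A = np.random.default_rng(0).standard_normal((1000, 50))
Q, R = cholesky_qr(A)
print(np.linalg.norm(Q.T @ Q - np.eye(50)), np.linalg.norm(A - Q @ R))
```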

    Rigorous techniques for continuous constraint satisfaction problems

    This thesis contributes rigorous techniques for solving continuous constraint satisfaction problems, i.e., finding one or all points (called feasible points) that satisfy a given family of equations and/or inequalities (called constraints). Many real-world problems are continuous constraint satisfaction problems. New and existing state-of-the-art methods are presented and integrated into GloptLab, a new, easy-to-use testing and development platform for solving quadratic constraint satisfaction problems. The basic solution principle is branch-and-prune combined with filtering. Filtering techniques tighten a box, the Cartesian product of the intervals defined by the bounds on the variables. To avoid losing feasible points, all error estimation is carried out rigorously using interval arithmetic and directed rounding, so that every calculation remains valid even though it is performed in floating-point arithmetic only.
The thesis discusses the mathematical background, algorithms, and tests for constraint propagation, strictly convex enclosures, linear relaxations, the computation, correct use, and verification of approximately feasible points, optimal scaling, and other auxiliary techniques. In particular:
- Constraint propagation is based on a sequence of steps, each using a single constraint only. Traditional techniques are extended by special quadratic constraint propagation methods that use new techniques for eliminating bilinear entries and finding optimal enclosures for separable quadratic expressions (a minimal single-constraint propagation step is sketched below).
- A quadratic inequality constraint with a positive definite Hessian defines an ellipsoid. A rounding-error-controlled version of the Cholesky factorization is used to transform a strictly convex quadratic constraint into a norm inequality, for which the interval hull is easy to compute analytically.
- Different methods for computing linear relaxations are discussed, combined, and extended. Partially improved existing methods, as well as new approaches for rigorously enclosing the solution set of linear systems of inequalities, are presented.
- Numerous examples show that the above methods complement each other and how to create solution strategies that solve constraint satisfaction problems globally and efficiently.
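
As an illustration of the kind of single-constraint filtering step described above, the following sketch tightens a box for the constraint x² + y² ≤ 1 with outward rounding. It is a minimal example, not GloptLab code; math.nextafter is used here as a cheap surrogate for the directed-rounding modes a rigorous implementation would use.

```python
import math

def up(x):    # push an already-rounded upper bound outward (toward +inf)
    return math.nextafter(x, math.inf)

def down(x):  # push an already-rounded lower bound outward (toward -inf)
    return math.nextafter(x, -math.inf)

def square_enclosure(lo, hi):
    """Rigorous enclosure of {t*t : t in [lo, hi]}."""
    hi2 = up(max(lo * lo, hi * hi))
    lo2 = 0.0 if lo <= 0.0 <= hi else down(min(lo * lo, hi * hi))
    return lo2, hi2

def propagate_disc(xlo, xhi, ylo, yhi):
    """One propagation step for x^2 + y^2 <= 1:
    tighten the bounds on x using the current bounds on y."""
    y2lo, _ = square_enclosure(ylo, yhi)
    rhs = up(1.0 - y2lo)          # x^2 <= 1 - y^2
    if rhs < 0.0:
        return None               # the box contains no feasible point
    r = up(math.sqrt(rhs))
    return max(xlo, -r), min(xhi, r), ylo, yhi

print(propagate_disc(-2.0, 2.0, 0.6, 0.8))   # x is tightened to roughly [-0.8, 0.8]
```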

    On Algorithmic Variants of Parallel Gaussian Elimination: Comparison of Implementations in Terms of Performance and Numerical Properties

    Gaussian elimination is a canonical linear algebra procedure for solving linear systems of equations. In the last few years, the algorithm has received a lot of attention in an attempt to improve its parallel performance. This article surveys recent developments in parallel implementations of Gaussian elimination. Five different flavors are investigated. Three of them are based on different pivoting strategies: partial pivoting, incremental pivoting, and tournament pivoting. The fourth replaces pivoting with the Random Butterfly Transformation, and finally, an implementation without pivoting is used as a performance baseline. The technique of iterative refinement is applied to recover numerical accuracy when necessary. All parallel implementations are produced using dynamic, superscalar runtime scheduling and a tile matrix layout. Results on two multi-socket multicore systems are presented, and performance and numerical accuracy are analyzed.
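
Of the techniques listed, iterative refinement is the simplest to state: after an LU solve, the residual is used to correct the solution, which is how accuracy lost to weaker (or no) pivoting can often be recovered. Below is a minimal dense sketch with SciPy, not the paper's tiled, runtime-scheduled implementation.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_with_refinement(A, b, steps=3):
    """Solve A x = b by LU with partial pivoting, then apply a few steps of
    iterative refinement: r = b - A x, solve A d = r, x <- x + d."""
    lu_piv = lu_factor(A)
    x = lu_solve(lu_piv, b)
    for _ in range(steps):
        r = b - A @ x                  # residual (ideally accumulated in higher precision)
        x = x + lu_solve(lu_piv, r)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 300))
b = rng.standard_normal(300)
x = solve_with_refinement(A, b)
print(np.linalg.norm(A @ x - b))
```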

    An algorithm for the solution of dynamic linear programs

    The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active-set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion resembles the time-varying Riccati equation of multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations relative to a dense-matrix LP code, while ensuring numerical stability. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop linear-quadratic optimally controlled systems, if it carries over to strictly linear cost functions, implies that the savings due to reduced factor-update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance and sheds light on the efficacy of the proposed pseudo-constraint relaxation scheme.
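
The factorization named above is a QL (rather than QR) factorization, a natural choice when elimination proceeds from the last rows and columns of a staircase matrix. As a point of reference only, and assuming a dense matrix with at least as many rows as columns, a QL factorization can be obtained from a standard QR factorization by reversing rows and columns; the stagewise, staircase-exploiting recursion of the abstract is not shown here.

```python
import numpy as np

def ql(A):
    """QL factorization A = Q @ L (L lower triangular), obtained by applying QR
    to the row- and column-reversed matrix and flipping the factors back."""
    Qf, Rf = np.linalg.qr(A[::-1, ::-1])
    return Qf[::-1, ::-1], Rf[::-1, ::-1]

A = np.random.default_rng(1).standard_normal((6, 4))
Q, L = ql(A)
print(np.allclose(A, Q @ L), np.allclose(L, np.tril(L)), np.allclose(Q.T @ Q, np.eye(4)))
```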

    On Updating Preconditioners for the Iterative Solution of Linear Systems

    The main topic of this thesis is updating preconditioners for solving large sparse linear systems Ax=b with Krylov iterative methods. Two interesting types of problems are considered. The first is the iterative solution of non-singular, non-symmetric linear systems in which the coefficient matrix A has a skew-symmetric part of low rank, or can be well approximated by a skew-symmetric low-rank matrix. Such systems arise from the discretization of PDEs with certain Neumann boundary conditions, the discretization of integral equations, and path-following methods; examples include the Bratu problem and Love's integral equation. The second type of linear systems considered are least squares (LS) problems, solved via the equivalent system of normal equations. More precisely, we consider the solution of modified and rank-deficient LS problems. By a modified LS problem it is understood that the set of linear relations is updated with new information, a new variable is added, or, conversely, some information or a variable is removed from the set.
Rank-deficient LS problems have a coefficient matrix without full rank, which makes computing an incomplete factorization of the normal equations difficult. LS problems arise in many large-scale applications in science and engineering, for instance neural networks, linear programming, exploration seismology, and image processing. Incomplete LU factorizations, or incomplete Cholesky factorizations when the matrix is symmetric positive definite, are the preconditioners typically used with iterative methods. The main contribution of this thesis is the development of a technique for updating preconditioners by bordering. It consists in computing an approximate decomposition of an equivalent augmented linear system, which is then used as a preconditioner for the original problem. The theoretical study and the numerical results presented in this thesis show the performance of the proposed preconditioning technique and its competitiveness with other methods available in the literature for computing preconditioners for the problems studied.
Guerrero Flores, DJ. (2018). On Updating Preconditioners for the Iterative Solution of Linear Systems [Unpublished doctoral thesis]. Universitat Politècnica de València.
https://doi.org/10.4995/Thesis/10251/10492
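
To make the setting concrete, the sketch below solves a sparse least squares problem through its normal equations with an incomplete-factorization preconditioner. It uses SciPy's spilu and GMRES simply because they are readily available; the thesis's contribution, updating the preconditioner by bordering an augmented system, is not reproduced here, and the matrix is a synthetic example.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

rng = np.random.default_rng(0)
# Synthetic sparse LS problem  min ||A x - b||  with guaranteed full column rank
A = sp.vstack([sp.random(500, 80, density=0.05, random_state=0, format="csr"),
               sp.identity(80, format="csr")]).tocsr()
b = rng.standard_normal(A.shape[0])

N = (A.T @ A).tocsc()                            # normal equations matrix A^T A
rhs = A.T @ b
ilu = spilu(N, drop_tol=1e-4, fill_factor=10)    # incomplete LU as the preconditioner
M = LinearOperator(N.shape, matvec=ilu.solve)
x, info = gmres(N, rhs, M=M)
print(info, np.linalg.norm(A.T @ (A @ x - b)))   # normal-equations residual
```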