
    Improved Primal Simplex: A More General Theoretical Framework and an Extended Experimental Analysis

    In this article, we propose a general framework for an algorithm derived from the primal simplex that guarantees a strict improvement in the objective after each iteration. Our approach relies on the identification of compatible variables that ensure a nondegenerate iteration if pivoted into the basis. The problem of finding a strict improvement in the objective function is proved to be equivalent to two smaller problems focusing, respectively, on compatible and incompatible variables. We then show that the improved primal simplex (IPS) of Elhallaoui et al. is a particular implementation of this generic theoretical framework. The resulting new description of IPS naturally emphasizes what should be considered as necessary adaptations of the framework versus specific implementation choices. This provides original insight into IPS that allows for the identification of weaknesses and potential alternative choices that would extend the efficiency of the method to a wider set of problems. We perform experimental tests on an extended collection of data sets, including instances of Mittelmann's benchmark for linear programming. The results confirm the excellent potential of IPS and highlight some of its limits, while showing a path toward an improved implementation of the generic algorithm.
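
The following is a minimal, illustrative sketch (not code from the paper) of the compatibility test that underlies IPS-type methods: a nonbasic column is treated as compatible when its basis-transformed column vanishes in every row whose basic variable is degenerate, so pivoting it into the basis cannot be blocked by a zero step. The function name and tolerance are assumptions made for the example.

```python
import numpy as np

def is_compatible(B, a_j, x_B, tol=1e-9):
    """Sketch: a nonbasic column a_j is 'compatible' with basis B when the
    updated column B^{-1} a_j is zero in every row whose basic variable is
    degenerate (x_B[i] == 0).  Entering such a variable admits a strictly
    positive step, hence a nondegenerate, strictly improving pivot."""
    d = np.linalg.solve(B, a_j)            # updated column B^{-1} a_j
    degenerate_rows = np.abs(x_B) <= tol   # rows of degenerate basic variables
    return bool(np.all(np.abs(d[degenerate_rows]) <= tol))

# Toy example: a 3x3 basis whose second basic variable is degenerate.
B = np.diag([1.0, 2.0, 1.0])
x_B = np.array([3.0, 0.0, 5.0])
print(is_compatible(B, np.array([1.0, 0.0, 2.0]), x_B))  # True  (zero in the degenerate row)
print(is_compatible(B, np.array([0.0, 1.0, 0.0]), x_B))  # False (hits the degenerate row)
```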

    The positive edge pricing rule for the dual simplex

    In this article, we develop the two-dimensional positive edge criterion for the dual simplex. This work extends a similar pricing rule implemented by Towhidi et al. [24] to reduce the negative effects of degeneracy in the primal simplex. In the dual simplex, degeneracy occurs when nonbasic variables have a zero reduced cost, and it may lead to pivots that do not improve the objective value. We analyze dual degeneracy to characterize a particular set of dual compatible variables such that, if any of them is selected to leave the basis, the pivot will be nondegenerate. The dual positive edge rule can be used to modify any pivot selection rule so as to prioritize compatible variables. The expected effect is to reduce the number of pivots during the solution of degenerate problems with the dual simplex. For the experiments, we implement the positive edge rule within the dual simplex of the COIN-OR LP solver and combine it with both the dual Dantzig and the dual steepest edge criteria. We test our implementation on 62 instances from four well-known benchmarks for linear programming. The results show that the dual positive edge rule significantly improves on the classical pricing rules.
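
A hedged sketch of how a dual pricing rule might prioritize compatible leaving rows, in the spirit of the positive edge modification described above. The scoring formula and the names (`positive_edge_pick`, `se_weights`) are illustrative assumptions for this example, not the COIN-OR CLP API.

```python
import numpy as np

def positive_edge_pick(basic_values, se_weights, compatible):
    """Sketch of a 'positive edge'-style modification of dual pricing:
    among candidate leaving rows (primal-infeasible basic variables),
    prefer rows flagged as compatible (expected to yield a nondegenerate
    dual pivot), falling back to the plain score when none qualifies."""
    infeasible = basic_values < 0                       # dual simplex leaving candidates
    scores = np.where(infeasible,
                      basic_values**2 / se_weights,     # dual steepest-edge-style score
                      -np.inf)
    if np.any(infeasible & compatible):
        scores = np.where(compatible, scores, -np.inf)  # restrict to compatible rows
    return int(np.argmax(scores))

basic_values = np.array([-2.0, -1.0, 3.0, -4.0])
se_weights   = np.array([ 1.0,  1.0, 1.0,  8.0])
compatible   = np.array([False, True, True, False])
print(positive_edge_pick(basic_values, se_weights, compatible))  # -> row 1
```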

    Time-Varying Semidefinite Programming: Path Following a Burer-Monteiro Factorization

    We present an online algorithm for time-varying semidefinite programs (TV-SDPs), based on tracking the solution trajectory of a low-rank matrix factorization, also known as the Burer-Monteiro factorization, in a path-following procedure in which a predictor-corrector algorithm solves a sequence of linearized systems. This requires the introduction of a horizontal space constraint to ensure the local injectivity of the low-rank factorization. The method produces a sequence of approximate solutions for the original TV-SDP problem, which we show stay close to the optimal solution path if properly initialized. Numerical experiments for a time-varying max-cut SDP relaxation demonstrate the computational advantages of the proposed method for tracking TV-SDPs in terms of runtime compared to off-the-shelf interior point methods.
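
As an illustration of tracking a Burer-Monteiro factorization of a time-varying max-cut-type SDP, here is a simplified warm-started corrector that takes a few projected-gradient steps after each change of the cost matrix. It omits the paper's predictor step and horizontal space constraint; all names and parameters are assumptions for the sketch.

```python
import numpy as np

def corrector(Y, C, steps=50, lr=0.05):
    """A few projected-gradient ascent steps on  max <C, Y Y^T>  subject to
    every row of Y having unit norm -- the Burer-Monteiro factorization of a
    max-cut-type SDP relaxation.  This is only a simplified corrector for
    tracking a slowly varying C(t), not the paper's predictor-corrector."""
    for _ in range(steps):
        Y = Y + lr * (2.0 * C @ Y)                       # Euclidean gradient step
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)    # retract rows to the unit sphere
    return Y

rng = np.random.default_rng(0)
n, r = 20, 4
A = rng.standard_normal((n, n)); C0 = (A + A.T) / 2      # base symmetric cost
B = rng.standard_normal((n, n)); C1 = 0.2 * (B + B.T)    # slow drift of the cost
Y = rng.standard_normal((n, r))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)

for t in np.linspace(0.0, 1.0, 11):
    Y = corrector(Y, C0 + t * C1)                        # warm start from previous Y
    print(f"t={t:.1f}  <C(t), YY^T> = {np.trace((C0 + t * C1) @ Y @ Y.T):.3f}")
```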

    Approximate quantum error correction for generalized amplitude damping errors

    We present analytic estimates of the performance of various approximate quantum error correction schemes for the generalized amplitude damping (GAD) qubit channel. Specifically, we consider both stabilizer and nonadditive quantum codes. The performance of such error-correcting schemes is quantified by means of the entanglement fidelity as a function of the damping probability and the nonzero environmental temperature. The recovery scheme employed throughout our work applies, in principle, to arbitrary quantum codes and is the analogue of the perfect Knill-Laflamme recovery scheme adapted to the approximate quantum error correction framework for the GAD error model. We also analytically recover and/or clarify some previously known numerical results in the limiting case of vanishing environmental temperature, that is, the well-known traditional amplitude damping channel. In addition, our study suggests that degenerate stabilizer codes and self-complementary nonadditive codes are especially suitable for the error correction of the GAD noise model. Finally, comparing the properly normalized entanglement fidelities of the best-performing stabilizer and nonadditive codes of the same length, we show that nonadditive codes outperform stabilizer codes not only in terms of encoded dimension but also in terms of entanglement fidelity.
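
For concreteness, the sketch below builds the standard Kraus operators of the GAD channel and evaluates the entanglement fidelity of a single unencoded qubit, the kind of baseline against which coded schemes are compared; it does not implement the paper's codes or recovery maps, and the parameter names are illustrative.

```python
import numpy as np

def gad_kraus(gamma, p):
    """Standard Kraus operators of the generalized amplitude damping channel:
    gamma is the damping probability, p encodes the environment temperature
    (p = 1 recovers the usual zero-temperature amplitude damping channel)."""
    s, c = np.sqrt(gamma), np.sqrt(1.0 - gamma)
    A0 = np.sqrt(p)       * np.array([[1.0, 0.0], [0.0, c]])
    A1 = np.sqrt(p)       * np.array([[0.0, s],   [0.0, 0.0]])
    A2 = np.sqrt(1.0 - p) * np.array([[c, 0.0],   [0.0, 1.0]])
    A3 = np.sqrt(1.0 - p) * np.array([[0.0, 0.0], [s, 0.0]])
    return [A0, A1, A2, A3]

def entanglement_fidelity(kraus, rho):
    """Standard formula F_e(rho, N) = sum_k |Tr(rho K_k)|^2."""
    return sum(abs(np.trace(rho @ K))**2 for K in kraus)

# Baseline: a single unencoded qubit (rho = I/2) sent through the GAD channel.
rho = np.eye(2) / 2
for gamma in (0.0, 0.1, 0.3):
    F = entanglement_fidelity(gad_kraus(gamma, p=0.75), rho)
    print(f"gamma={gamma:.1f}  F_e={F:.4f}")
```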

    Role of Subgradients in Variational Analysis of Polyhedral Functions

    Understanding the role that subgradients play in various second-order variational analysis constructions can help us uncover new properties of important classes of functions in variational analysis. Focusing mainly on the behavior of the second subderivative and subgradient proto-derivative of polyhedral functions, that is, functions with polyhedral epigraphs, we demonstrate that choosing the underlying subgradient, utilized in the definitions of these concepts, from the relative interior of the subdifferential of polyhedral functions ensures stronger second-order variational properties such as strict twice epi-differentiability and strict subgradient proto-differentiability. This allows us to characterize continuous differentiability of the proximal mapping and twice continuous differentiability of the Moreau envelope of polyhedral functions. We close the paper by proving the equivalence of metric regularity and strong metric regularity of a class of generalized equations at their nondegenerate solutions.
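
A small worked example of the objects discussed above, for the polyhedral function f(x) = |x|: its proximal mapping is soft-thresholding and its Moreau envelope is the Huber function, which is continuously differentiable even though f itself is not. This is an independent illustration, not code accompanying the paper.

```python
import numpy as np

def prox_abs(x, lam=1.0):
    """Proximal mapping of f = |.| (a polyhedral function): soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_abs(x, lam=1.0):
    """Moreau envelope e_lam f(x) = min_y |y| + (x - y)^2 / (2 lam).
    For f = |.| this is the Huber function: C^1 although f is nonsmooth at 0."""
    p = prox_abs(x, lam)
    return np.abs(p) + (x - p)**2 / (2.0 * lam)

xs = np.linspace(-3, 3, 7)
print(prox_abs(xs))    # piecewise-linear soft-thresholding
print(moreau_abs(xs))  # quadratic near 0, linear in the tails (Huber)
```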

    Constructing packings in Grassmannian manifolds via alternating projection

    This paper describes a numerical method for finding good packings in Grassmannian manifolds equipped with various metrics. This investigation also encompasses packing in projective spaces. In each case, producing a good packing is equivalent to constructing a matrix that has certain structural and spectral properties. By alternately enforcing the structural condition and then the spectral condition, it is often possible to reach a matrix that satisfies both. One may then extract a packing from this matrix. This approach is both powerful and versatile. In cases where experiments have been performed, the alternating projection method yields packings that compete with the best packings recorded. It also extends to problems that have not been studied numerically. For example, it can be used to produce packings of subspaces in real and complex Grassmannian spaces equipped with the Fubini-Study distance; these packings are valuable in wireless communications. One can prove that some of the novel configurations constructed by the algorithm have packing diameters that are nearly optimal.
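
A hedged sketch of the alternating projection idea in the rank-one (projective space) case: alternate between pushing the Gram matrix toward the structural set (unit diagonal, off-diagonal magnitudes at most a target coherence mu) and projecting onto matrices that are positive semidefinite with rank at most the ambient dimension, then extract unit vectors. The iteration count and coherence target are assumptions, not the paper's exact procedure or parameters.

```python
import numpy as np

def pack_projective(n_vectors, dim, mu, iters=500, seed=0):
    """Sketch of alternating projection for packings in complex projective
    space: alternate between (i) the structural set (Hermitian, unit diagonal,
    off-diagonal magnitudes at most mu) and (ii) the spectral set (PSD with
    rank at most dim), then extract unit vectors from the Gram matrix."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((dim, n_vectors)) + 1j * rng.standard_normal((dim, n_vectors))
    X /= np.linalg.norm(X, axis=0)
    G = X.conj().T @ X                                    # starting Gram matrix
    off = ~np.eye(n_vectors, dtype=bool)
    for _ in range(iters):
        # (i) structural projection: unit diagonal, clip off-diagonal magnitudes
        H = G.copy()
        np.fill_diagonal(H, 1.0)
        mags = np.maximum(np.abs(H[off]), 1e-12)
        H[off] = np.where(mags > mu, H[off] * (mu / mags), H[off])
        # (ii) spectral projection: keep the dim largest nonnegative eigenvalues
        w, V = np.linalg.eigh((H + H.conj().T) / 2)
        w = np.clip(w, 0.0, None)
        w[:-dim] = 0.0
        G = (V * w) @ V.conj().T
    # extract unit vectors and report the achieved coherence of the packing
    w, V = np.linalg.eigh(G)
    w = np.clip(w, 0.0, None)
    X = (V[:, -dim:] * np.sqrt(w[-dim:])).conj().T
    X /= np.linalg.norm(X, axis=0)
    coherence = (np.abs(X.conj().T @ X) - np.eye(n_vectors)).max()
    return X, coherence

# 6 lines in CP^2; for reference, the Welch bound for (6, 3) is sqrt(0.2) ~ 0.447.
X, coh = pack_projective(n_vectors=6, dim=3, mu=0.45)
print(f"max |<x_i, x_j>| = {coh:.3f}")
```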

    Barycentric Subspace Analysis on Manifolds

    This paper investigates the generalization of Principal Component Analysis (PCA) to Riemannian manifolds. We first propose a new and general type of family of subspaces in manifolds that we call barycentric subspaces. They are implicitly defined as the locus of points which are weighted means of k+1 reference points. As this definition relies on points and not on tangent vectors, it can also be extended to geodesic spaces which are not Riemannian. For instance, in stratified spaces, it naturally allows principal subspaces that span several strata, which is impossible in previous generalizations of PCA. We show that barycentric subspaces locally define a submanifold of dimension k which generalizes geodesic subspaces. Second, we rephrase PCA in Euclidean spaces as an optimization on flags of linear subspaces (a hierarchy of properly embedded linear subspaces of increasing dimension). We show that Euclidean PCA minimizes the Accumulated Unexplained Variance (AUV) over all the subspaces of the flag. Barycentric subspaces are naturally nested, allowing the construction of hierarchically nested subspaces. Optimizing the AUV criterion to optimally approximate data points with flags of affine spans in Riemannian manifolds leads to a particularly appealing generalization of PCA on manifolds, called Barycentric Subspace Analysis (BSA). To appear in the Annals of Statistics (Institute of Mathematical Statistics).
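
To make the notion of a weighted mean of reference points concrete, here is a small sketch that computes a weighted Fréchet mean on the unit sphere by a fixed-point iteration with log and exp maps; sweeping the weights over the k+1 reference points traces out (an approximation of) the corresponding barycentric subspace. The function names and iteration scheme are assumptions made for illustration.

```python
import numpy as np

def log_map(p, q):
    """Log map on the unit sphere: tangent vector at p pointing toward q."""
    v = q - np.dot(p, q) * p
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return np.zeros_like(p)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * v / nv

def exp_map(p, v):
    """Exp map on the unit sphere."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * v / nv

def weighted_mean(points, weights, iters=100):
    """Weighted Frechet mean on the sphere via a fixed-point iteration.
    Varying the weights over k+1 reference points sweeps out points of the
    corresponding barycentric subspace (illustration only)."""
    x = points[0].copy()
    for _ in range(iters):
        g = sum(w * log_map(x, p) for w, p in zip(weights, points))
        x = exp_map(x, g)
    return x

# Three reference points on S^2 (k = 2); one weighted mean in their span.
refs = [np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]),
        np.array([0.0, 0.0, 1.0])]
print(weighted_mean(refs, weights=[0.5, 0.3, 0.2]))
```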