
    Efficient algorithms for the minimum cost perfect matching problem on general graphs

    Ankara: Department of Industrial Engineering and the Institute of Engineering and Sciences of Bilkent University, 1993. Thesis (Master's), Bilkent University, 1993. Includes bibliographical references.
    The minimum cost perfect matching problem is one of the rare combinatorial optimization problems for which polynomial time algorithms exist. Matching algorithms find applications in the Postman Problem, the Planar Multicommodity Flow Problem, heuristics for the well-known Traveling Salesman Problem, the Vehicle Scheduling Problem, the Graph Partitioning Problem, the Set Partitioning Problem, VLSI design, and elsewhere. In this thesis, after reviewing the existing primal-dual approaches in the literature, we present two efficient algorithms for the minimum cost perfect matching problem on general graphs. In both algorithms, we achieve drastic reductions in the total number of time-consuming operations such as scanning, updating dual variables, and updating reduced costs. Detailed computational analysis on randomly generated graphs has shown the proposed algorithms to be several times faster than other algorithms in the literature. Hence, we conjecture that employing the new algorithms in solution methods for the important problems stated above would speed them up significantly.
    Atamtürk, Alper. M.S.
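    To make the problem concrete, here is a minimal brute-force solver for minimum cost perfect matching on a small complete graph. It is exponential-time and purely illustrative; the thesis's contribution is efficient primal-dual (blossom-style) algorithms, which this sketch does not implement.

    ```python
    def min_cost_perfect_matching(n, cost):
        """Brute-force minimum cost perfect matching on K_n (n even).
        cost[i][j] is the weight of edge (i, j).  Exponential time --
        for illustration only, not a primal-dual blossom algorithm."""
        best = (float("inf"), None)

        def recurse(remaining, total, pairs):
            nonlocal best
            if not remaining:
                if total < best[0]:
                    best = (total, pairs)
                return
            u = remaining[0]
            for v in remaining[1:]:
                rest = [w for w in remaining if w not in (u, v)]
                recurse(rest, total + cost[u][v], pairs + [(u, v)])

        recurse(list(range(n)), 0, [])
        return best

    # 4-node example: the optimal matching pairs (0,1) and (2,3), cost 1 + 2 = 3
    cost = [[0, 1, 4, 5],
            [1, 0, 6, 7],
            [4, 6, 0, 2],
            [5, 7, 2, 0]]
    total, pairs = min_cost_perfect_matching(4, cost)
    ```

    On general graphs the same problem is solved in polynomial time by Edmonds-style blossom algorithms, whose dominant costs (scanning, dual updates) are exactly what the thesis reduces.
    
    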

    Power System State Estimation and Bad Data Detection by Means of Conic Relaxation

    This paper is concerned with the power system state estimation problem, which aims to find the unknown operating point of a power network from a set of available measurements. We design a penalized semidefinite programming (SDP) relaxation whose objective function consists of a surrogate for rank and an l1-norm penalty accounting for noise. Although the proposed method does not rely on initialization, its performance can be improved in the presence of an initial guess for the solution. First, a sufficient condition on the closeness of the initial guess to the true solution is derived to guarantee the success of the penalized SDP relaxation in the noiseless case. Second, we show that a limited number of incorrect measurements with arbitrary values have no effect on the recovery of the true solution. Furthermore, we develop a bound on the accuracy of the estimation in the case where a limited number of measurements are corrupted with arbitrarily large values and the remaining measurements are perturbed with modest noise. The proposed technique is demonstrated on a large-scale 1354-bus European system.
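    The SDP relaxation rests on the standard "lifting" trick: a quadratic measurement of the state vector x becomes linear in the lifted matrix X = xx^T, after which rank(X) = 1 is relaxed and penalized. The sketch below checks that identity numerically; the names x and M are illustrative, not the paper's notation for a specific network model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4)          # hypothetical state vector
    M = rng.standard_normal((4, 4))
    M = (M + M.T) / 2                   # symmetric measurement matrix
    X = np.outer(x, x)                  # rank-one lifted variable

    quadratic = x @ M @ x               # quadratic in x ...
    linear_in_X = np.trace(M @ X)       # ... but linear in X = x x^T
    ```

    Since the two values agree, the estimation problem is linear in X, and the nonconvexity is isolated in the rank-one condition that the penalized SDP relaxes.
    
    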

    On the convex hull of convex quadratic optimization problems with indicators

    We consider the convex quadratic optimization problem with indicator variables and arbitrary constraints on the indicators. We show that a convex hull description of the associated mixed-integer set in an extended space with a quadratic number of additional variables consists of a single positive semidefinite constraint (explicitly stated) and linear constraints. In particular, convexification of this class of problems reduces to describing a polyhedral set in an extended formulation. While the vertex representation of this polyhedral set is exponential and an explicit linear inequality description may not be readily available in general, we derive a compact mixed-integer linear formulation whose solutions coincide with the vertices of the polyhedral set. We also give descriptions in the original space of variables: we provide a description based on an infinite number of conic-quadratic inequalities, which are ``finitely generated.'' In particular, it is possible to characterize whether a given inequality is necessary to describe the convex hull. The new theory presented here unifies several previously established results, and paves the way toward utilizing polyhedral methods to analyze the convex hull of mixed-integer nonlinear sets.
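    A one-variable instance shows the flavor of such convex hull results. For the set {(x, z, t) : t >= x^2, x = 0 whenever z = 0, z in {0,1}}, the convex hull is described by the classical perspective inequality t*z >= x^2, which is strictly tighter than the naive relaxation t >= x^2 at fractional z. This tiny check is illustrative only; the paper treats the general multivariate case with arbitrary constraints on the indicators.

    ```python
    # Compare the naive and perspective lower bounds on t at a
    # fractional indicator value.
    x, z = 0.5, 0.25
    naive_bound = x**2           # t >= x^2 gives t >= 0.25
    perspective_bound = x**2 / z # t >= x^2 / z gives t >= 1.0 (tighter)
    ```

    The gap between the two bounds is exactly what stronger convex hull descriptions recover in continuous relaxations.
    
    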

    State-driven Implicit Modeling for Sparsity and Robustness in Neural Networks

    Implicit models are a general class of learning models that forgo the hierarchical layer structure typical in neural networks and instead define the internal states based on an ``equilibrium'' equation, offering competitive performance and reduced memory consumption. However, training such models usually relies on expensive implicit differentiation for backward propagation. In this work, we present a new approach to training implicit models, called State-driven Implicit Modeling (SIM), where we constrain the internal states and outputs to match those of a baseline model, circumventing costly backward computations. The training problem becomes convex by construction and can be solved in a parallel fashion, thanks to its decomposable structure. We demonstrate how the SIM approach can be applied to significantly improve sparsity (parameter reduction) and robustness of baseline models trained on the FashionMNIST and CIFAR-100 datasets.
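    The equilibrium equation at the heart of an implicit model can be sketched as x = phi(Ax + Bu): the state x is a fixed point rather than the output of stacked layers. Below is a minimal numerical sketch (the names A, B, phi, u are illustrative, not the paper's architecture): with spectral norm ||A|| < 1 and a 1-Lipschitz phi, fixed-point iteration contracts to a unique equilibrium, which SIM-style training matches against a baseline model's states instead of differentiating through.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5))
    A = 0.5 * A / np.linalg.norm(A, 2)   # force ||A|| = 0.5 < 1 (contraction)
    B = rng.standard_normal((5, 3))
    u = rng.standard_normal(3)           # input to the model
    phi = lambda v: np.maximum(v, 0.0)   # ReLU, 1-Lipschitz

    # Iterate x <- phi(A x + B u) to the equilibrium state.
    x = np.zeros(5)
    for _ in range(100):
        x = phi(A @ x + B @ u)
    residual = np.linalg.norm(x - phi(A @ x + B @ u))
    ```

    The residual is driven to numerical zero, confirming convergence to the equilibrium; avoiding differentiation through this iteration is exactly the cost SIM circumvents.
    
    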