
    Linear Superiorization for Infeasible Linear Programming

    Linear superiorization (abbreviated: LinSup) considers linear programming (LP) problems, wherein the constraints as well as the objective function are linear. It allows one to steer the iterates of a feasibility-seeking iterative process toward feasible points that have lower (not necessarily minimal) objective function values than the points the same feasibility-seeking iterative process would have reached without superiorization. Using a feasibility-seeking iterative process that converges even if the linear feasible set is empty, LinSup generates an iterative sequence that converges to a point minimizing a proximity function which measures the violation of the linear constraints. In addition, due to LinSup's repeated objective function reduction steps, such a point will most probably have a reduced objective function value. We present an exploratory experimental result that illustrates the behavior of LinSup on an infeasible LP problem.

    Comment: arXiv admin note: substantial text overlap with arXiv:1612.0653
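    The interplay the abstract describes — a feasibility-seeking process interleaved with objective-reduction perturbations — can be sketched in a few lines. This is a minimal illustration, not the authors' algorithm: it assumes constraints given as halfspaces a_i·x ≤ b_i, uses sequential orthogonal projections as the feasibility-seeking process, and hypothetical diminishing step sizes beta0·gamma^k for the superiorization steps.

    ```python
    import numpy as np

    def project_halfspace(x, a, b):
        """Project x onto {z : a.z <= b} (no-op if already inside)."""
        viol = a @ x - b
        if viol <= 0:
            return x
        return x - viol * a / (a @ a)

    def linsup(A, b, c, iters=200, beta0=1.0, gamma=0.99):
        """Superiorized feasibility seeking (illustrative sketch):
        an objective-reduction step followed by one sweep of
        halfspace projections, with diminishing perturbation sizes."""
        x = np.zeros(A.shape[1])
        d = -c / np.linalg.norm(c)      # objective-descent direction
        beta = beta0
        for _ in range(iters):
            x = x + beta * d            # superiorization perturbation
            beta *= gamma               # shrinking step sizes
            for ai, bi in zip(A, b):    # feasibility-seeking sweep
                x = project_halfspace(x, ai, bi)
        return x
    ```

    Setting beta0 = 0 recovers the plain feasibility-seeking process, which makes the comparison in the abstract easy to reproduce on a toy problem: both runs end feasible, but the superiorized run ends at a lower objective value.
    
    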

    Decoding by Linear Programming

    This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector $f \in \mathbb{R}^n$ from corrupted measurements $y = Af + e$. Here, $A$ is an $m \times n$ (coding) matrix and $e$ is an arbitrary and unknown vector of errors. Is it possible to recover $f$ exactly from the data $y$? We prove that under suitable conditions on the coding matrix $A$, the input $f$ is the unique solution to the $\ell_1$-minimization problem ($\|x\|_{\ell_1} := \sum_i |x_i|$) $\min_{g \in \mathbb{R}^n} \|y - Ag\|_{\ell_1}$ provided that the support of the vector of errors is not too large: $\|e\|_{\ell_0} := |\{i : e_i \neq 0\}| \le \rho \cdot m$ for some $\rho > 0$. In short, $f$ can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; $f$ is recovered exactly even in situations where a significant fraction of the output is corrupted.

    Comment: 22 pages, 4 figures, submitted
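    The recast as a linear program mentioned in the abstract uses the standard trick of splitting the $\ell_1$ norm into auxiliary variables: minimize $\sum_i t_i$ subject to $-t \le y - Ag \le t$. A small sketch with `scipy.optimize.linprog` (the solver choice is ours, not the paper's):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def l1_decode(A, y):
        """Recover f from y = A f + e by solving min_g ||y - A g||_1,
        recast as an LP in the stacked variables (g, t)."""
        m, n = A.shape
        # minimize sum(t)  subject to  -t <= y - A g <= t
        c = np.concatenate([np.zeros(n), np.ones(m)])
        I = np.eye(m)
        A_ub = np.block([[A, -I], [-A, -I]])
        b_ub = np.concatenate([y, -y])
        bounds = [(None, None)] * n + [(0, None)] * m
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[:n]
    ```

    On a well-conditioned random coding matrix with only a few corrupted entries, this typically returns $f$ to solver precision, matching the exact-recovery behavior the paper proves under its conditions on $A$.
    
    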

    Reoptimizations in linear programming

    Replacing a real process of interest with another that is more convenient for study is called modeling. After the replacement, the model is analyzed and the results obtained are extended back to the original process. Mathematical models, being more abstract, are also more general and therefore more important. Mathematical programming is known as the analysis of various aspects of economic activity with the help of mathematical models.

    Keywords: reoptimization, linear programming, mathematical model

    Solving large scale linear programming

    The interior point method (IPM) is now well established as a competitive technique for solving very large scale linear programming problems. The leading variant of the interior point method is the primal-dual predictor-corrector algorithm due to Mehrotra. The main computational steps of this algorithm are the repeated calculation and solution of a large sparse positive definite system of equations. We describe an implementation of the predictor-corrector IPM algorithm on MasPar, a massively parallel SIMD computer. At the heart of the implementation is a parallel Cholesky factorization algorithm for sparse matrices. Our implementation uses a new scheme for mapping the matrix onto the processor grid of the MasPar that results in a more efficient Cholesky factorization than previously suggested schemes. The IPM implementation uses the parallel unit of the MasPar to speed up the factorization and other computationally intensive parts of the IPM. An important part of this implementation is the judicious division of data and computation between the front-end computer, which runs the main IPM algorithm, and the parallel unit.
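    The "large sparse positive definite system" at the heart of such implementations is the normal-equations system $(A D A^T)\,\Delta y = r$ with a positive diagonal scaling $D$, solved by Cholesky factorization at every IPM iteration. A dense, single-processor sketch of that one step (the variable names and the diagonal $D = \mathrm{diag}(x/s)$ are illustrative assumptions, not this paper's notation):

    ```python
    import numpy as np

    def normal_equations_solve(A, x, s, rhs):
        """One core IPM linear-algebra step: solve (A D A^T) dy = rhs,
        with D = diag(x/s), via a Cholesky factorization M = L L^T."""
        D = np.diag(x / s)
        M = A @ D @ A.T              # sparse and positive definite in practice
        L = np.linalg.cholesky(M)
        z = np.linalg.solve(L, rhs)  # forward substitution (L is lower triangular)
        return np.linalg.solve(L.T, z)  # backward substitution
    ```

    In a real large-scale code, forming and factorizing M exploits sparsity and — as in this paper — parallelism; the dense version above only shows what is being computed.
    
    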

    Reformulations of mathematical programming problems as linear complementarity problems

    A family of complementarity problems is defined as extensions of the well-known Linear Complementarity Problem (LCP). These are: (i) the Second Linear Complementarity Problem (SLCP), an LCP extended by introducing further equality restrictions and unrestricted variables; (ii) the Minimum Linear Complementarity Problem (MLCP), an LCP with additional variables not required to be complementary and with a linear objective function which is to be minimized; (iii) the Second Minimum Linear Complementarity Problem (SMLCP), an MLCP in which the nonnegativity restriction on one of each pair of complementary variables is relaxed so that it is allowed to be unrestricted in value. A number of well-known mathematical programming problems, namely quadratic programming (convex, nonconvex, pseudoconvex nonconvex), bilinear programming, game theory, zero-one integer programming, the fixed charge problem, absolute value programming, and variable separable programming, are reformulated as members of this family of four complementarity problems.
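    The base case of the family is easy to make concrete: the LCP asks for $z \ge 0$ with $w = Mz + q \ge 0$ and $z^T w = 0$, and the KKT conditions of the convex QP $\min \tfrac12 x^T Q x + c^T x$ over $x \ge 0$ have exactly this form with $M = Q$, $q = c$. The sketch below solves a tiny LCP by enumerating complementary index sets — fine for toy sizes, whereas practical codes use pivoting methods such as Lemke's; the function name is ours.

    ```python
    import numpy as np
    from itertools import product

    def solve_lcp(M, q, tol=1e-9):
        """Find z >= 0 with w = M z + q >= 0 and z.w = 0 by brute-force
        enumeration of complementary bases (exponential in n; toy use only)."""
        n = len(q)
        for basis in product([0, 1], repeat=n):   # 1: z_i basic, 0: w_i basic
            z = np.zeros(n)
            B = [i for i in range(n) if basis[i]]
            if B:
                try:
                    z[B] = np.linalg.solve(M[np.ix_(B, B)], -q[B])
                except np.linalg.LinAlgError:
                    continue
            w = M @ z + q
            if (z >= -tol).all() and (w >= -tol).all():
                return z, w
        return None
    ```

    For the QP $\min x_1^2 + x_2^2 - 2x_1 + x_2$ over $x \ge 0$, the LCP with $M = \mathrm{diag}(2, 2)$, $q = (-2, 1)$ returns the optimum $x = (1, 0)$ with complementarity holding exactly.
    
    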