    Tropical complementarity problems and Nash equilibria

    Linear complementarity programming is a generalization of linear programming that encompasses the computation of Nash equilibria for bimatrix games. While the latter problem is PPAD-complete, we show that the tropical analogue of the complementarity problem associated with Nash equilibria can be solved in polynomial time. Moreover, we prove that the Lemke--Howson algorithm carries over to the tropical setting and performs a linear number of pivots in the worst case. A consequence of this result is a new class of (classical) bimatrix games for which Nash equilibria can be computed in polynomial time.
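
    The tropical analogue referred to above is stated over the max-plus (tropical) semiring, in which addition is replaced by max and multiplication by ordinary addition. The following is a minimal Python sketch of that arithmetic only, for orientation; it does not implement the tropical Lemke--Howson pivoting analyzed in the paper.

    # Max-plus (tropical) semiring: "addition" is max, "multiplication" is +,
    # and -inf plays the role of the tropical zero.
    NEG_INF = float("-inf")

    def trop_add(x, y):
        # Tropical addition.
        return max(x, y)

    def trop_mul(x, y):
        # Tropical multiplication; -inf is absorbing.
        return NEG_INF if NEG_INF in (x, y) else x + y

    def trop_matvec(A, v):
        # Tropical matrix-vector product: (A (x) v)_i = max_j (A[i][j] + v[j]).
        return [max(trop_mul(a, x) for a, x in zip(row, v)) for row in A]

    # Example: applying [[0, 3], [2, 1]] to [1, 0] tropically gives [3, 3].
    print(trop_matvec([[0, 3], [2, 1]], [1, 0]))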

    Polynomial Time Algorithm for Min-Ranks of Graphs with Simple Tree Structures

    The min-rank of a graph was introduced by Haemers (1978) to bound the Shannon capacity of a graph. This graph parameter has recently gained much more attention from the research community following the work of Bar-Yossef et al. (2006), who showed that the min-rank of a graph G characterizes the optimal scalar linear solution of an instance of the Index Coding with Side Information (ICSI) problem described by G. Peeters (1996) showed that computing the min-rank of a general graph is NP-hard, and very few families of graphs whose min-ranks can be found in polynomial time are known. In this work, we introduce a new family of graphs whose min-ranks can be computed efficiently. Specifically, we give a polynomial-time dynamic programming algorithm to compute the min-ranks of graphs having simple tree structures. Intuitively, such graphs are obtained by gluing together, in a tree-like structure, any set of graphs whose min-ranks can be determined in polynomial time. A polynomial-time algorithm to recognize such graphs is also proposed. Comment: Accepted by Algorithmica, 30 pages.
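
    For concreteness, the min-rank over GF(2) used in the index-coding setting is the smallest rank of a matrix with ones on the diagonal, zeros on non-edge positions, and arbitrary entries on edge positions. The sketch below simply enumerates all such matrices for a tiny graph; it only illustrates the definition, runs in exponential time, and is not the paper's polynomial-time dynamic program for tree-structured graphs.

    from itertools import product

    def gf2_rank(rows):
        # Rank over GF(2) of rows given as integer bitmasks (XOR linear basis).
        basis, rank = [], 0
        for row in rows:
            cur = row
            for b in basis:
                cur = min(cur, cur ^ b)
            if cur:
                basis.append(cur)
                rank += 1
        return rank

    def min_rank_gf2(n, edges):
        # Minimum GF(2) rank over all matrices fitting the graph: M[i][i] = 1,
        # M[i][j] = 0 for non-edges i != j, free entries on edge positions.
        free = sorted({(i, j) for i, j in edges} | {(j, i) for i, j in edges})
        best = n
        for bits in product((0, 1), repeat=len(free)):
            M = [[int(i == j) for j in range(n)] for i in range(n)]
            for (i, j), b in zip(free, bits):
                M[i][j] = b
            rows = [sum(v << k for k, v in enumerate(r)) for r in M]
            best = min(best, gf2_rank(rows))
        return best

    # Example: the 4-cycle has min-rank 2 over GF(2).
    print(min_rank_gf2(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))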

    An Algorithmic Theory of Integer Programming

    We study the general integer programming problem where the number of variables n is a variable part of the input. We consider two natural parameters of the constraint matrix A: its numeric measure a and its sparsity measure d. We show that integer programming can be solved in time g(a,d) poly(n,L), where g is some computable function of the parameters a and d, and L is the binary encoding length of the input. In particular, integer programming is fixed-parameter tractable parameterized by a and d, and is solvable in polynomial time for every fixed a and d. Our results also extend to nonlinear separable convex objective functions. Moreover, for linear objectives, we derive a strongly polynomial algorithm, that is, one with running time g(a,d) poly(n), independent of the rest of the input data. We obtain these results by developing an algorithmic framework based on the idea of iterative augmentation: starting from an initial feasible solution, we show how to quickly find augmenting steps which rapidly converge to an optimum. A central notion in this framework is the Graver basis of the matrix A, which constitutes a set of fundamental augmenting steps. The iterative augmentation idea is then enhanced via the use of other techniques such as new and improved bounds on the Graver basis, rapid solution of integer programs with bounded variables, proximity theorems and a new proximity-scaling algorithm, the notion of a reduced objective function, and others. As a consequence of our work, we advance the state of the art of solving block-structured integer programs. In particular, we develop near-linear time algorithms for n-fold, tree-fold, and 2-stage stochastic integer programs. We also discuss some of the many applications of these classes. Comment: Revision 2: strengthened dual treedepth lower bound; simplified proximity-scaling algorithm.
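
    The augmentation framework described above can be illustrated on a toy instance. In the sketch below the Graver basis is replaced by a naive enumeration of small integer vectors in the kernel of A, and every step has unit length; the paper's actual Graver-basis bounds, step-length choices, and proximity-scaling enhancements are not reproduced.

    from itertools import product

    def small_kernel_directions(A, bound=2):
        # Crude stand-in for the Graver basis: all nonzero integer vectors g
        # with entries in [-bound, bound] satisfying A g = 0.
        n = len(A[0])
        return [g for g in product(range(-bound, bound + 1), repeat=n)
                if any(g)
                and all(sum(a * x for a, x in zip(row, g)) == 0 for row in A)]

    def augment(A, c, x, lower, upper, bound=2):
        # Iterative augmentation: from a feasible x, repeatedly apply any
        # direction that keeps the bounds and strictly decreases c . x.
        dirs = small_kernel_directions(A, bound)
        improved = True
        while improved:
            improved = False
            for g in dirs:
                y = [xi + gi for xi, gi in zip(x, g)]
                if (all(l <= yi <= u for yi, l, u in zip(y, lower, upper))
                        and sum(ci * gi for ci, gi in zip(c, g)) < 0):
                    x, improved = y, True
                    break
        return x

    # Toy instance: minimize x0 + 2*x1 subject to x0 + x1 = 4, 0 <= x <= 4.
    # Starting from the feasible point [0, 4], augmentation reaches [4, 0].
    print(augment([[1, 1]], [1, 2], [0, 4], [0, 0], [4, 4]))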