
    Inapproximability of Combinatorial Optimization Problems

    We survey results on the hardness of approximating combinatorial optimization problems.

    Complexity of Discrete Energy Minimization Problems

    Discrete energy minimization is widely used in computer vision and machine learning for problems such as MAP inference in graphical models. The problem, in general, is notoriously intractable, and finding the globally optimal solution is known to be NP-hard. However, is it possible to approximate this problem with a reasonable ratio bound on the solution quality in polynomial time? We show in this paper that the answer is no. Specifically, we show that general energy minimization, even in the 2-label pairwise case, and planar energy minimization with three or more labels are exp-APX-complete. This finding rules out the existence of any approximation algorithm with a sub-exponential approximation ratio in the input size for these two problems, including constant-factor approximations. Moreover, we collect and review the computational complexity of several subclass problems and arrange them on a complexity scale consisting of three major complexity classes (PO, APX, and exp-APX), corresponding to problems that are solvable, approximable, and inapproximable in polynomial time, respectively. Problems in the first two complexity classes can serve as alternative tractable formulations to the inapproximable ones. This paper can help vision researchers select an appropriate model for an application or guide them in designing new algorithms. Comment: ECCV'16 accepted.
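
    For orientation, the energy being minimized in the pairwise case can be written in the standard form used throughout this literature (the notation below is a generic textbook convention assumed for illustration, not quoted from the paper):

        E(\mathbf{x}) \;=\; \sum_{i \in V} \theta_i(x_i) \;+\; \sum_{(i,j) \in E} \theta_{ij}(x_i, x_j),
        \qquad x_i \in \{1, \dots, L\},

    where the task is to compute \min_{\mathbf{x}} E(\mathbf{x}) over a graph (V, E); the 2-label pairwise case is L = 2, and MAP inference in a pairwise graphical model corresponds to taking the \theta terms as negative log-potentials.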

    Algorithms for Game Metrics

    Simulation and bisimulation metrics for stochastic systems provide a quantitative generalization of the classical simulation and bisimulation relations. These metrics capture the similarity of states with respect to quantitative specifications written in the quantitative μ-calculus and related probabilistic logics. We first show that the metrics provide a bound for the difference in long-run average and discounted average behavior across states, indicating that the metrics can be used both in system verification and in performance evaluation. For turn-based games and MDPs, we provide a polynomial-time algorithm for the computation of the one-step metric distance between states. The algorithm is based on linear programming; it improves on the previously known exponential-time algorithm based on a reduction to the theory of reals. We then present PSPACE algorithms for both the decision problem and the problem of approximating the metric distance between two states, matching the best known algorithms for Markov chains. For the bisimulation kernel of the metric, our algorithm works in time $O(n^4)$ for both turn-based games and MDPs, improving on the previously best known $O(n^9 \log n)$ time algorithm for MDPs. For a concurrent game G, we show that computing the exact distance between states is at least as hard as computing the value of concurrent reachability games and the square-root-sum problem in computational geometry. We show that checking whether the metric distance is bounded by a rational r can be done via a reduction to the theory of real closed fields, involving a formula with three quantifier alternations, yielding $|G|^{O(|G|^5)}$ time complexity, improving on the previously known reduction, which yielded $|G|^{O(|G|^7)}$ time complexity. These algorithms can be iterated to approximate the metrics using binary search. Comment: 27 pages. Full version of the paper accepted at FSTTCS 2008.
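
    For context only: the flavor of the one-step operator and its linear-programming ingredient can be seen in the classical bisimulation metric for MDPs in the style of Ferns, Panangaden, and Precup; the game metrics studied in this paper are defined through the quantitative μ-calculus and differ in detail, so the operator below is an analogy rather than the paper's definition:

        F(d)(s, t) \;=\; \max_{a} \Big( |r(s,a) - r(t,a)| \;+\; \gamma \, W_d\big(P(\cdot \mid s,a), P(\cdot \mid t,a)\big) \Big),

    where W_d is the Kantorovich (Wasserstein-1) distance with ground metric d, computable as a linear program, and the metric is the least fixed point of F.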

    A Newton-bracketing method for a simple conic optimization problem

    For the Lagrangian-DNN relaxation of quadratic optimization problems (QOPs), we propose a Newton-bracketing method to improve the performance of the bisection-projection method implemented in BBCPOP [to appear in ACM Trans. Softw., 2019]. The relaxation problem is converted into the problem of finding the largest zero $y^*$ of a continuously differentiable (except at $y^*$) convex function $g : \mathbb{R} \rightarrow \mathbb{R}$ such that $g(y) = 0$ if $y \leq y^*$ and $g(y) > 0$ otherwise. In theory, the method generates lower and upper bounds of $y^*$, both converging to $y^*$. Their convergence is quadratic if the right derivative of $g$ at $y^*$ is positive. Accurate computation of $g'(y)$ is necessary for the robustness of the method, but it is difficult to achieve in practice. As an alternative, we present a secant-bracketing method. We demonstrate that the method improves the quality of the lower bounds obtained by BBCPOP and SDPNAL+ for binary QOP instances from BIQMAC. Moreover, new lower bounds for the unknown optimal values of large-scale QAP instances from QAPLIB are reported. Comment: 19 pages, 2 figures.
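
    As a rough illustration of the bracketing idea only (this is not the BBCPOP/Lagrangian-DNN implementation; the toy function g, the scan step, and the tolerances are assumptions made for the example), a minimal Python sketch:

        # Newton-bracketing sketch for the largest zero y* of a convex function g
        # with g(y) = 0 for y <= y* and g(y) > 0 (increasing) for y > y*.
        def newton_bracketing(g, dg, y_ub, step=1.0, tol=1e-8, max_iter=100):
            """Return (lower bound, upper bound) bracketing the largest zero y* of g.

            For such a g the tangent at any y > y* underestimates g, so the zero of
            the tangent (the Newton step) never falls below y*: Newton iterates from
            the right form a decreasing sequence of valid upper bounds.  Any point
            where g evaluates to (numerically) zero is a valid lower bound.
            """
            assert g(y_ub) > tol, "y_ub must lie strictly to the right of y*"
            y_lb = y_ub - step
            while g(y_lb) > tol:          # crude scan left for an initial lower bound
                y_lb -= step
            for _ in range(max_iter):
                gy, dgy = g(y_ub), dg(y_ub)
                if gy <= tol or dgy <= 0.0 or y_ub - y_lb <= tol:
                    break
                y_next = y_ub - gy / dgy  # Newton step toward y* from the right
                if g(y_next) <= tol:      # stepped into the flat region: a lower bound
                    y_lb = max(y_lb, y_next)
                    break
                y_ub = y_next
            return y_lb, y_ub

        # Toy instance: g(y) = max(y - 2, 0)^2 has y* = 2 (the right derivative at y*
        # is 0 here, so convergence is linear rather than quadratic).
        g = lambda y: max(y - 2.0, 0.0) ** 2
        dg = lambda y: 2.0 * max(y - 2.0, 0.0)
        print(newton_bracketing(g, dg, y_ub=10.0))   # bounds tightly around 2.0

    The secant-bracketing alternative mentioned above would, in this sketch, amount to replacing the exact derivative dg with a secant slope through two recent iterates when g'(y) cannot be evaluated accurately.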

    Robust optimization with incremental recourse

    In this paper, we consider an adaptive approach to address optimization problems with uncertain cost parameters. Here, the decision maker selects an initial decision, observes the realization of the uncertain cost parameters, and is then permitted to modify the initial decision. We treat the uncertainty using the framework of robust optimization, in which the uncertain parameters lie within a given set. The decision maker optimizes so as to obtain the best cost guarantee under worst-case analysis. The recourse decision is "incremental"; that is, the decision maker is permitted to change the initial solution by a small fixed amount. We refer to the resulting problem as the robust incremental problem. We study robust incremental variants of several optimization problems. We show that the robust incremental counterpart of a linear program is itself a linear program if the uncertainty set is polyhedral, and hence is solvable in polynomial time. We establish NP-hardness of robust incremental linear programming for the case of a discrete uncertainty set. We show that the robust incremental shortest path problem is NP-complete when costs are chosen from a polyhedral uncertainty set, even in the case that only one new arc may be added to the initial path. We also address the complexity of several special cases of the robust incremental shortest path problem and the robust incremental minimum spanning tree problem.
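
    Schematically (the notation, the norm, and the budget k below are placeholders for illustration, not taken from the paper), the robust incremental problem has a min-max-min structure: an initial decision x, a worst-case cost c drawn from the uncertainty set U, and a recourse decision y within a small distance of x:

        \min_{x \in X} \;\; \max_{c \in U} \;\; \min_{y \in X,\; \|y - x\| \le k} \; c^{\top} y

    Under this reading, the tractability result says that for a linear program with polyhedral U the whole min-max-min collapses back into a single linear program, while a discrete U already makes the problem NP-hard.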

    Moment-Matching Polynomials

    We give a new framework for proving the existence of low-degree, polynomial approximators for Boolean functions with respect to broad classes of non-product distributions. Our proofs use techniques related to the classical moment problem and deviate significantly from known Fourier-based methods, which require the underlying distribution to have some product structure. Our main application is the first polynomial-time algorithm for agnostically learning any function of a constant number of halfspaces with respect to any log-concave distribution (for any constant accuracy parameter). This result was not known even for the case of learning the intersection of two halfspaces without noise. Additionally, we show that in the "smoothed-analysis" setting, the above results hold with respect to distributions that have sub-exponential tails, a property satisfied by many natural and well-studied distributions in machine learning. Given that our algorithms can be implemented using Support Vector Machines (SVMs) with a polynomial kernel, these results give a rigorous theoretical explanation as to why many kernel methods work so well in practice.
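
    Since the abstract notes that the algorithms can be implemented with SVMs and a polynomial kernel, here is a minimal sketch of that ingredient (the synthetic data, kernel degree, and regularization constant are arbitrary choices for the example, not parameters from the paper):

        # Polynomial-kernel SVM on a synthetic "intersection of two halfspaces" task
        # with Gaussian (hence log-concave) inputs.  Illustrative only.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 10))                    # log-concave input distribution
        w1, w2 = rng.normal(size=10), rng.normal(size=10)
        y = ((X @ w1 > 0) & (X @ w2 > 0)).astype(int)      # intersection of two halfspaces

        clf = SVC(kernel="poly", degree=4, coef0=1.0, C=1.0)  # low-degree polynomial kernel
        clf.fit(X[:800], y[:800])
        print("held-out accuracy:", clf.score(X[800:], y[800:]))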