
    A Lagrangian penalty function method for monotone variational inequalities

    A Lagrange-type penalty function method is proposed for a class of variational inequalities. The penalty function may take both positive and negative values. Each penalized subproblem is required to be solved only approximately. A condition under which the Lagrangian penalty function is exact, together with an estimate for the penalty coefficient, is given.
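    To illustrate the general penalty idea (a hedged sketch on a toy problem of my own choosing, not the paper's Lagrange-type construction or its variational-inequality setting): constraint violation is penalized with a coefficient that is increased until the penalty becomes exact, and each subproblem is only solved approximately.

    ```python
    # Hedged sketch of a generic exact (l1) penalty scheme with approximately
    # solved subproblems -- a toy stand-in, not the paper's method.
    # Problem: minimize f(x) = (x - 2)^2  subject to g(x) = x - 1 <= 0, so x* = 1.

    def f(x):
        return (x - 2.0) ** 2

    def g(x):
        return x - 1.0  # feasible iff g(x) <= 0

    def solve_subproblem(c, x0, step=1e-3, iters=20000):
        # approximate minimization of f(x) + c * max(0, g(x)) by subgradient descent
        x = x0
        for _ in range(iters):
            grad = 2.0 * (x - 2.0) + (c if g(x) > 0 else 0.0)
            x -= step * grad
        return x

    def penalty_method(c=1.0, growth=10.0, tol=1e-2, outer=6):
        x = 0.0
        for _ in range(outer):
            x = solve_subproblem(c, x)  # subproblem solved only approximately
            if g(x) <= tol:             # near-feasible: the l1 penalty is exact
                break
            c *= growth                 # coefficient too small -> increase it
        return x, c

    x_star, c_star = penalty_method()
    print(x_star, c_star)  # x_star close to the constrained optimum x* = 1
    ```

    For this toy problem the l1 penalty is exact once the coefficient exceeds the magnitude of the objective's slope at the solution, which is the kind of threshold the paper's penalty-coefficient estimate makes precise.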

    Exact Penalty Functions with Multidimensional Penalty Parameter and Adaptive Penalty Updates

    We present a general theory of exact penalty functions with vectorial (multidimensional) penalty parameters for optimization problems in infinite-dimensional spaces. In comparison with the scalar case, vectorial penalty parameters provide much more flexibility, allow one to adaptively and independently take into account the violation of each constraint during an optimization process, and often lead to better overall performance of an optimization method using an exact penalty function. We obtain sufficient conditions for the local and global exactness of penalty functions with vectorial penalty parameters and study the convergence of global exact penalty methods under several different penalty updating strategies. In particular, we present a new algorithmic approach to the analysis of the global exactness of penalty functions, which contains a novel characterisation of the global exactness property in terms of the behaviour of sequences generated by certain optimization methods. Comment: In the second version, a number of small mistakes found in the paper were corrected.
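    The adaptive, per-constraint idea can be sketched as follows (a hedged finite-dimensional toy, with problem data of my own invention, not the paper's infinite-dimensional theory or its specific updating strategies): each constraint gets its own penalty coefficient, and only the coefficients of constraints that are still violated are increased.

    ```python
    # Hedged sketch: exact l1 penalty with a vectorial penalty parameter and
    # adaptive, independent per-constraint updates -- toy data, not the paper's.
    # minimize f(x, y) = x^2 + y^2  s.t.  g1 = 1 - x <= 0,  g2 = 0.5 - y <= 0
    # so the constrained optimum is (x, y) = (1, 0.5).

    def violations(x, y):
        return [max(0.0, 1.0 - x), max(0.0, 0.5 - y)]

    def solve_subproblem(c, x, y, step=1e-3, iters=20000):
        # subgradient descent on f + c[0]*max(0, g1) + c[1]*max(0, g2)
        for _ in range(iters):
            gx = 2.0 * x - (c[0] if 1.0 - x > 0 else 0.0)
            gy = 2.0 * y - (c[1] if 0.5 - y > 0 else 0.0)
            x -= step * gx
            y -= step * gy
        return x, y

    def vector_penalty_method(tol=1e-2, growth=10.0, outer=6):
        c, x, y = [1.0, 1.0], 0.0, 0.0
        for _ in range(outer):
            x, y = solve_subproblem(c, x, y)
            v = violations(x, y)
            if max(v) <= tol:
                break
            for i in range(2):   # increase only the coefficients of the
                if v[i] > tol:   # violated constraints, independently
                    c[i] *= growth
        return (x, y), c

    (x_opt, y_opt), c = vector_penalty_method()
    print(x_opt, y_opt, c)
    ```

    On this instance only the first constraint is ever violated, so only its coefficient grows; the second stays at its initial value, which is exactly the independence a scalar penalty parameter cannot offer.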

    Scalable Semidefinite Relaxation for Maximum A Posterior Estimation

    Maximum a posteriori (MAP) inference over discrete Markov random fields is a fundamental task spanning a wide spectrum of real-world applications, and is known to be NP-hard for general graphs. In this paper, we propose a novel semidefinite relaxation formulation (referred to as SDR) to estimate the MAP assignment. Algorithmically, we develop an accelerated variant of the alternating direction method of multipliers (referred to as SDPAD-LR) that can effectively exploit the special structure of the new relaxation. Encouragingly, the proposed procedure allows solving SDR for large-scale problems, e.g., problems on a grid graph comprising hundreds of thousands of variables with multiple states per node. Compared with prior SDP solvers, SDPAD-LR attains comparable accuracy while exhibiting remarkably improved scalability, in contrast to the commonly held belief that semidefinite relaxation can only be applied to small-scale MRF problems. We have evaluated the performance of SDR on various benchmark datasets, including OPENGM2 and PIC, in terms of both solution quality and computation time. Experimental results demonstrate that for a broad class of problems, SDPAD-LR outperforms state-of-the-art algorithms in producing better MAP assignments in an efficient manner. Comment: Accepted to the International Conference on Machine Learning (ICML 2014).
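    For context, the underlying MAP problem can be stated on a tiny example (a hedged illustration with made-up potentials; this shows the combinatorial problem SDR relaxes, not the SDP relaxation or the SDPAD-LR solver themselves). Brute-force search is exponential in the number of nodes, which is why relaxations are needed at grid-graph scale.

    ```python
    # Hedged toy: MAP inference on a 4-node binary MRF over a cycle graph,
    # solved by exhaustive enumeration. All potentials are invented for
    # illustration; real instances (e.g. grids with 100k+ nodes) make this
    # enumeration infeasible, motivating relaxations like SDR.
    from itertools import product

    nodes = [0, 1, 2, 3]
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    # unary[i][s] = cost of assigning label s to node i (hypothetical numbers)
    unary = {0: [0.0, 1.2], 1: [0.5, 0.0], 2: [0.0, 0.8], 3: [0.3, 0.0]}
    smooth = 0.6  # Potts-style penalty for each pair of disagreeing neighbours

    def energy(x):
        e = sum(unary[i][x[i]] for i in nodes)
        e += sum(smooth for (i, j) in edges if x[i] != x[j])
        return e

    # MAP assignment = argmin of the energy over all 2^4 labelings
    x_map = min(product([0, 1], repeat=4), key=energy)
    print(x_map, energy(x_map))
    ```

    Here the smoothness term outweighs the unary preferences, so the MAP labeling is uniform even though half the nodes individually prefer the other label.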