
    Efficient Semidefinite Branch-and-Cut for MAP-MRF Inference

    We propose a Branch-and-Cut (B&C) method for solving general MAP-MRF inference problems. The core of our method is a very efficient bounding procedure, which combines scalable semidefinite programming (SDP) with a cutting-plane method for seeking violated constraints. To further speed up the computation, several strategies are exploited, including model reduction, warm starting and removal of inactive constraints. We analyze the performance of the proposed method under different settings and demonstrate that it either outperforms or performs on par with state-of-the-art approaches. In particular, when the connectivities are dense or when the relative magnitudes of the unary costs are low, we achieve the best reported results. Experiments show that the proposed algorithm achieves better approximations than state-of-the-art methods within a variety of time budgets on challenging non-submodular MAP-MRF inference problems. Comment: 21 pages.
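
    The bounding-within-branching structure described above can be illustrated with a small sketch. The following Python fragment shows a generic branch-and-bound loop for binary pairwise MRF energy minimization in which the paper's SDP-plus-cutting-plane bounding procedure is replaced by a trivial term-wise lower bound; the toy model and all names are illustrative assumptions, not the authors' implementation.

    # Sketch only: generic branch-and-bound with a trivial term-wise lower bound
    # standing in for the paper's SDP + cutting-plane bounding procedure.
    def energy(unary, pairwise, x):
        """Total energy of a complete labeling x (dict: node -> {0, 1})."""
        e = sum(unary[i][x[i]] for i in unary)
        e += sum(pairwise[(i, j)][x[i]][x[j]] for (i, j) in pairwise)
        return e

    def lower_bound(unary, pairwise, fixed):
        """Valid lower bound: minimize every term independently, respecting fixed labels."""
        lb = 0.0
        for i, costs in unary.items():
            lb += costs[fixed[i]] if i in fixed else min(costs)
        for (i, j), tab in pairwise.items():
            rows = [fixed[i]] if i in fixed else [0, 1]
            cols = [fixed[j]] if j in fixed else [0, 1]
            lb += min(tab[a][b] for a in rows for b in cols)
        return lb

    def branch_and_bound(unary, pairwise):
        nodes = sorted(unary)
        best_x, best_e = None, float("inf")
        stack = [{}]                            # partial assignments to explore
        while stack:
            fixed = stack.pop()
            if lower_bound(unary, pairwise, fixed) >= best_e:
                continue                        # prune: bound cannot beat incumbent
            if len(fixed) == len(nodes):        # leaf: complete labeling
                e = energy(unary, pairwise, fixed)
                if e < best_e:
                    best_x, best_e = dict(fixed), e
                continue
            i = nodes[len(fixed)]               # branch on the next free variable
            for label in (0, 1):
                stack.append({**fixed, i: label})
        return best_x, best_e

    # Toy 3-node chain with a non-submodular pairwise term on edge (1, 2).
    unary = {0: [0.0, 1.0], 1: [0.5, 0.0], 2: [1.0, 0.0]}
    pairwise = {(0, 1): [[0.0, 2.0], [2.0, 0.0]],
                (1, 2): [[1.5, 0.0], [0.0, 1.5]]}
    print(branch_and_bound(unary, pairwise))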

    Combinatorial persistency criteria for multicut and max-cut

    In combinatorial optimization, partial variable assignments are called persistent if they agree with some optimal solution. We propose persistency criteria for the multicut and max-cut problems, as well as fast combinatorial routines to verify them. The criteria we derive are based on mappings that improve feasible multicuts and cuts, respectively. Our elementary criteria can be checked enumeratively. The more advanced ones rely on fast algorithms for upper and lower bounds on the respective cut problems and on max-flow techniques for auxiliary min-cut problems. Our methods can be used as a preprocessing technique for reducing problem sizes or for computing partial optimality guarantees for solutions output by heuristic solvers. We show the efficacy of our methods on instances of both problems from computer vision, biomedical image analysis and statistical physics.
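
    To make the preprocessing use-case concrete, the sketch below fixes variables of a binary pairwise energy with a simple dominance-style check in the spirit of the elementary, enumeratively verifiable criteria mentioned above; it is not the paper's multicut/max-cut machinery, and the model and all names are illustrative assumptions.

    # Sketch only: a dominance-style elementary persistency check for a binary
    # pairwise energy, used to fix variables before calling any solver; this is
    # an assumed stand-in, not the paper's multicut/max-cut criteria.
    def persistent_label(i, unary, pairwise):
        """Return a label of node i used by some optimal solution, or None."""
        for a in (0, 1):
            b = 1 - a
            # Worst-case energy change when forcing x_i = a instead of x_i = b.
            delta = unary[i][a] - unary[i][b]
            for (p, q), tab in pairwise.items():
                if p == i:
                    delta += max(tab[a][c] - tab[b][c] for c in (0, 1))
                elif q == i:
                    delta += max(tab[c][a] - tab[c][b] for c in (0, 1))
            if delta <= 0:          # forcing x_i = a never increases the energy
                return a
        return None

    def fix_persistencies(unary, pairwise):
        """Preprocessing pass: all variables the elementary criterion can fix."""
        return {i: a for i in unary
                if (a := persistent_label(i, unary, pairwise)) is not None}

    unary = {0: [0.0, 5.0], 1: [1.0, 1.0], 2: [2.0, 0.0]}
    pairwise = {(0, 1): [[0.0, 1.0], [1.0, 0.0]],
                (1, 2): [[0.0, 1.0], [1.0, 0.0]]}
    print(fix_persistencies(unary, pairwise))   # fixes nodes 0 and 2, leaves node 1 free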

    Submodular relaxation for inference in Markov random fields

    In this paper we address the problem of finding the most probable state of a discrete Markov random field (MRF), also known as the MRF energy minimization problem. The task is NP-hard in general, and its practical importance motivates numerous approximate algorithms. We propose a submodular relaxation approach (SMR) based on a Lagrangian relaxation of the initial problem. Unlike the dual decomposition approach of Komodakis et al. (2011), SMR does not decompose the graph structure of the initial problem but constructs a submodular energy that is minimized within the Lagrangian relaxation. Our approach is applicable to both pairwise and high-order MRFs and can take into account global potentials of certain types. We study theoretical properties of the proposed approach and evaluate it experimentally. Comment: This paper is accepted for publication in IEEE Transactions on Pattern Analysis and Machine Intelligence.
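
    The Lagrangian-relaxation loop underlying this kind of approach can be sketched as follows: the one-label-per-node constraints on indicator variables are dualized, the inner problem is minimized (here by brute force rather than the submodular graph-cut solver SMR relies on), and the multipliers are updated by subgradient ascent on the dual. The tiny Potts model and all names below are illustrative assumptions.

    # Sketch only: dual (subgradient) ascent for a Lagrangian relaxation in which
    # the "exactly one label per node" constraints are dualized; brute force
    # replaces the submodular inner solver, and the model is an assumed toy.
    import itertools

    theta = {0: [0.0, 2.0], 1: [2.0, 0.0]}      # unary costs theta[i][v]
    potts = 1.5                                  # penalty when the two labels differ
    nodes, labels = [0, 1], [0, 1]

    def inner_min(lam):
        """Minimize the Lagrangian over all 0/1 indicator settings (brute force)."""
        best_y, best_val = None, float("inf")
        for bits in itertools.product((0, 1), repeat=len(nodes) * len(labels)):
            y = {(i, v): bits[i * len(labels) + v] for i in nodes for v in labels}
            val = sum(theta[i][v] * y[i, v] for i in nodes for v in labels)
            val += potts * sum(y[0, u] * y[1, v]
                               for u in labels for v in labels if u != v)
            val += sum(lam[i] * (1 - sum(y[i, v] for v in labels)) for i in nodes)
            if val < best_val:
                best_y, best_val = y, val
        return best_y, best_val

    lam = {i: 0.0 for i in nodes}
    for t in range(1, 51):
        y, dual_val = inner_min(lam)             # dual_val lower-bounds the optimum
        g = {i: 1 - sum(y[i, v] for v in labels) for i in nodes}   # constraint violation
        for i in nodes:
            lam[i] += (1.0 / t) * g[i]           # diminishing-step subgradient ascent
    print(dual_val, lam)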

    Barrier Frank-Wolfe for Marginal Inference

    We introduce a globally convergent algorithm for optimizing the tree-reweighted (TRW) variational objective over the marginal polytope. The algorithm is based on the conditional gradient method (Frank-Wolfe) and moves pseudomarginals within the marginal polytope through repeated maximum a posteriori (MAP) calls. This modular structure enables us to leverage black-box MAP solvers (both exact and approximate) for variational inference, and it obtains more accurate results than tree-reweighted algorithms that optimize over the local consistency relaxation. Theoretically, we bound the sub-optimality of the proposed algorithm despite the TRW objective having unbounded gradients at the boundary of the marginal polytope. Empirically, we demonstrate the increased quality of results found by tightening the relaxation over the marginal polytope as well as the spanning tree polytope on synthetic and real-world instances. Comment: 25 pages, 12 figures. To appear in Neural Information Processing Systems (NIPS) 2015. Corrected reference and cleaned up bibliography.
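
    A minimal sketch of the conditional-gradient mechanism follows: pseudomarginals are moved by repeatedly calling a MAP oracle on the linearized objective. To stay self-contained, the marginal polytope is shrunk to a single-node simplex and the TRW objective is replaced by a plain entropy-regularized linear objective, whose optimum has the closed form log-sum-exp(theta); none of this is the paper's actual setup, and all names are assumptions.

    # Sketch only: the Frank-Wolfe loop with a MAP oracle as the linear
    # maximization step, run on a single-node simplex with an entropy-regularized
    # objective instead of the TRW objective (an assumed toy, not the paper's setup).
    import math

    theta = [1.0, -0.5, 0.3]                     # toy unary potentials
    k = len(theta)

    def grad(mu):
        """Gradient of f(mu) = <theta, mu> + H(mu) in the simplex interior."""
        return [theta[v] - math.log(mu[v]) - 1.0 for v in range(k)]

    def map_oracle(g):
        """Linear maximization over the simplex: pick the single best label."""
        best = max(range(k), key=lambda v: g[v])
        return [1.0 if v == best else 0.0 for v in range(k)]

    mu = [1.0 / k] * k                           # start at the uniform (interior) point
    for t in range(200):
        s = map_oracle(grad(mu))                 # MAP call on the linearized objective
        gamma = 2.0 / (t + 3)                    # step size < 1 keeps mu in the interior
        mu = [(1 - gamma) * m + gamma * sv for m, sv in zip(mu, s)]

    value = sum(theta[v] * mu[v] - mu[v] * math.log(mu[v]) for v in range(k))
    print(value, math.log(sum(math.exp(tv) for tv in theta)))   # the two should roughly agree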

    GRMA: Generalized Range Move Algorithms for the efficient optimization of MRFs

    Markov Random Fields (MRFs) have become an important tool for many vision applications, and the optimization of MRFs is a problem of fundamental importance. Recently, Veksler and Kumar et al. proposed range move algorithms, which are among the most successful optimizers. Instead of considering only two labels, as in previous move-making algorithms, they explore a large search space over a range of labels in each iteration and significantly outperform previous move-making algorithms. However, two problems have greatly limited the applicability of range move algorithms: 1) they are limited in the energy functions they can handle (i.e., only truncated convex functions); 2) they tend to be very slow compared to other move-making algorithms (e.g., α-expansion and αβ-swap). In this paper, we propose two generalized range move algorithms (GRMA) for the efficient optimization of MRFs. To address the first problem, we extend the GRMAs to more general energy functions by restricting the chosen labels in each move so that the energy function is submodular on the chosen subset. Furthermore, we provide a feasible sufficient condition for choosing these subsets of labels. To address the second problem, we dynamically obtain the iterative moves by solving set cover problems. This greatly reduces the number of moves during the optimization. We also propose a fast graph construction method for the GRMAs. Experiments show that the GRMAs offer a great speedup over previous range move algorithms, while yielding competitive solutions.
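
    A hedged sketch of the range-move idea: in each move every node may keep its current label or switch to one in a chosen label subset, the best joint relabeling is computed (here by brute force on a tiny chain, whereas the paper uses a submodular graph-cut construction), and the move is accepted only if it lowers the energy. The label subsets and the toy truncated-linear model below are assumptions.

    # Sketch only: a range-move style loop in which each move lets every node keep
    # its label or take one from a chosen subset; brute force stands in for the
    # submodular graph-cut step, and the model/subsets are assumed toys.
    import itertools

    labels = [0, 1, 2, 3]
    unary = [[3, 0, 1, 4], [4, 2, 0, 3], [0, 3, 3, 1]]     # unary[i][l] on a 3-node chain

    def pairwise(a, b):
        return min(abs(a - b), 2)                          # truncated linear term

    def energy(x):
        return (sum(unary[i][x[i]] for i in range(len(x)))
                + sum(pairwise(x[i], x[i + 1]) for i in range(len(x) - 1)))

    def range_move(x, subset):
        """Best relabeling where each node keeps x[i] or takes a label in subset."""
        choices = [sorted({x[i], *subset}) for i in range(len(x))]
        return min(itertools.product(*choices), key=energy)

    x = [0, 0, 0]                                          # initial labeling
    for subset in [(0, 1, 2), (1, 2, 3), (0, 3)]:          # label "ranges" to sweep
        y = list(range_move(x, subset))
        if energy(y) < energy(x):                          # accept only improving moves
            x = y
    print(x, energy(x))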

    Structured Prediction Problem Archive

    Structured prediction problems are one of the fundamental tools in machine learning. In order to facilitate algorithm development for their numerical solution, we collect in one place a large number of datasets in easy-to-read formats for a diverse set of problem classes. We provide archival links to datasets, a description of the considered problems and problem formats, and a short summary of problem characteristics including size, number of instances, etc. For reference we also give a non-exhaustive selection of algorithms proposed in the literature for their solution. We hope that this central repository will make benchmarking and comparison to established works easier. We welcome submission of interesting new datasets and algorithms for inclusion in our archive.

    A Comparative Study of Modern Inference Techniques for Structured Discrete Energy Minimization Problems

    Szeliski et al. published an influential study in 2006 on energy minimization methods for Markov Random Fields (MRFs). This study provided valuable insights into choosing the best optimization technique for certain classes of problems. While these insights remain generally useful today, the phenomenal success of random field models means that the kinds of inference problems that have to be solved have changed significantly. Specifically, today's models often include higher-order interactions, flexible connectivity structures, large label spaces of different cardinalities, or learned energy tables. To reflect these changes, we provide a modernized and enlarged study. We present an empirical comparison of more than 27 state-of-the-art optimization techniques on a corpus of 2,453 energy minimization instances from diverse applications in computer vision. To ensure reproducibility, we evaluate all methods in the OpenGM 2 framework and report extensive results regarding runtime and solution quality. Key insights from our study agree with the results of Szeliski et al. for the types of models they studied. However, on new and challenging types of models our findings disagree and suggest that polyhedral methods and integer programming solvers are competitive in terms of runtime and solution quality over a large range of model types.