
    The Lazy Flipper: MAP Inference in Higher-Order Graphical Models by Depth-limited Exhaustive Search

    This article presents a new search algorithm for the NP-hard problem of optimizing functions of binary variables that decompose according to a graphical model. It can be applied to models of any order and structure. The main novelty is a technique to constrain the search space based on the topology of the model. When pursued to the full search depth, the algorithm is guaranteed to converge to a global optimum, passing through a series of monotonically improving local optima that are guaranteed to be optimal within a given and increasing Hamming distance. For a search depth of 1, it specializes to Iterated Conditional Modes. Between these extremes, a useful tradeoff between approximation quality and runtime is established. Experiments on models derived from both illustrative and real problems show that approximations found with limited search depth match or improve on those obtained by state-of-the-art methods based on message passing and linear programming.
    Comment: C++ source code available from http://hci.iwr.uni-heidelberg.de/software.ph

    Complexity of Discrete Energy Minimization Problems

    Discrete energy minimization is widely used in computer vision and machine learning for problems such as MAP inference in graphical models. The problem, in general, is notoriously intractable, and finding the globally optimal solution is known to be NP-hard. However, is it possible to approximate this problem with a reasonable ratio bound on the solution quality in polynomial time? We show in this paper that the answer is no. Specifically, we show that general energy minimization, even in the 2-label pairwise case, and planar energy minimization with three or more labels are exp-APX-complete. This finding rules out the existence of any approximation algorithm with a sub-exponential approximation ratio in the input size for these two problems, including constant-factor approximations. Moreover, we collect and review the computational complexity of several subclass problems and arrange them on a complexity scale consisting of three major complexity classes -- PO, APX, and exp-APX, corresponding to problems that are solvable, approximable, and inapproximable in polynomial time. Problems in the first two complexity classes can serve as alternative tractable formulations to the inapproximable ones. This paper can help vision researchers select an appropriate model for an application or guide them in designing new algorithms.
    Comment: ECCV'16 accepted

    Advances in Graph-Cut Optimization: Multi-Surface Models, Label Costs, and Hierarchical Costs

    Computer vision is full of problems that are elegantly expressed in terms of mathematical optimization, or energy minimization. This is particularly true of low-level inference problems such as cleaning up noisy signals, clustering and classifying data, or estimating 3D points from images. Energies let us state each problem as a clear, precise objective function. Minimizing the correct energy would, hypothetically, yield a good solution to the corresponding problem. Unfortunately, even for low-level problems we are confronted by energies that are computationally hard (often NP-hard) to minimize. As a consequence, a rather large portion of computer vision research is dedicated to proposing better energies and better algorithms for energies. This dissertation presents work along the same lines, specifically new energies and algorithms based on graph cuts. We present three distinct contributions. First we consider biomedical segmentation where the object of interest comprises multiple distinct regions of uncertain shape (e.g. blood vessels, airways, bone tissue). We show that this common yet difficult scenario can be modeled as an energy over multiple interacting surfaces, and can be globally optimized by a single graph cut. Second, we introduce multi-label energies with label costs and provide algorithms to minimize them. We show how label costs are useful for clustering and robust estimation problems in vision. Third, we characterize a class of energies with hierarchical costs and propose a novel hierarchical fusion algorithm with improved approximation guarantees. Hierarchical costs are natural for modeling an array of difficult problems, e.g. segmentation with hierarchical context, simultaneous estimation of motions and homographies, or detecting hierarchies of patterns.
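    Label costs, the second contribution above, charge a fixed penalty for every label that appears anywhere in the solution, which favors explanations with few distinct labels (few clusters, few motion models). A minimal brute-force sketch on an invented instance follows; the toy numbers are our own, and the dissertation's actual algorithms are graph-cut based rather than exhaustive:

```python
from itertools import product

def label_cost_energy(x, unary, edges, smooth, label_cost):
    """E(x) = sum_i D_i(x_i) + smooth * #{disagreeing edges}
            + sum of label_cost[l] over labels actually used in x."""
    e = sum(unary[i][x[i]] for i in range(len(x)))
    e += sum(smooth for (i, j) in edges if x[i] != x[j])
    return e + sum(label_cost[l] for l in set(x))

def brute_force_map(unary, edges, smooth, label_cost):
    """Exhaustive minimization; only feasible for tiny instances."""
    n, k = len(unary), len(unary[0])
    return min(product(range(k), repeat=n),
               key=lambda x: label_cost_energy(x, unary, edges,
                                               smooth, label_cost))

# Hypothetical 4-site chain with 3 candidate labels:
unary = [[0, 1, 5], [0, 1, 5], [5, 1, 0], [5, 1, 0]]
edges = [(0, 1), (1, 2), (2, 3)]
```

    With zero label costs the optimizer uses labels 0 and 2 and pays one boundary; charging 4 per used label makes the single compromise label 1 cheaper overall.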

    Robust inversion and detection techniques for improved imaging performance

    Thesis (Ph.D.), Boston University.
    In this thesis we aim to improve the performance of information extraction from imaging systems through three thrusts. First, we develop improved image formation methods for physics-based, complex-valued sensing problems. We propose a regularized inversion method that incorporates prior information about the underlying field into the inversion framework for ultrasound imaging. We use experimental ultrasound data to compute inversion results with the proposed formulation and compare it with conventional inversion techniques to show the robustness of the proposed technique to loss of data. Second, we propose methods that combine inversion and detection in a unified framework to improve imaging performance. This framework is applicable for cases where the underlying field is label-based, such that each pixel of the underlying field can only assume values from a discrete, limited set. We consider this unified framework in the context of combinatorial optimization and propose graph-cut based methods that would result in label-based images, thereby eliminating the need for a separate detection step. Finally, we propose a robust method of object detection from microscopic nanoparticle images. In particular, we focus on a portable, low-cost interferometric imaging platform and propose robust detection algorithms using tools from computer vision. We model the electromagnetic image formation process and use this model to create an enhanced detection technique. The effectiveness of the proposed technique is demonstrated using manually labeled ground-truth data. In addition, we extend these tools to develop a detection-based autofocusing algorithm tailored for the high numerical aperture interferometric microscope.

    Methods for Inference in Graphical Models

    Graphical models provide a flexible, powerful and compact way to model relationships between random variables, and have been applied with great success in many domains. Combining prior beliefs with observed evidence to form a prediction is called inference. Problems of great interest include finding a configuration with highest probability (MAP inference) or solving for the distribution over a subset of variables (marginal inference). Further, these methods are often critical subroutines for learning the relationships. However, inference is computationally intractable in general. Hence, much effort has focused on two themes: finding subdomains where exact inference is solvable efficiently, or identifying approximate methods that work well. We explore both these themes, restricting attention to undirected graphical models with discrete variables. First we address exact MAP inference by advancing the recent method of reducing the problem to finding a maximum weight stable set (MWSS) on a derived graph, which, if perfect, admits polynomial-time inference. We derive new results for this approach, including a general decomposition theorem for models of any order and number of labels, extensions of results for binary pairwise models with submodular cost functions to higher order, and a characterization of which binary pairwise models can be efficiently solved with this method. This clarifies the power of the approach on this class of models, improves our toolbox and provides insight into the range of tractable models. Next we consider methods of approximate inference, with particular emphasis on the Bethe approximation, which is in widespread use and has proved remarkably effective, yet is still far from being completely understood. We derive new formulations and properties of the derivatives of the Bethe free energy, then use these to establish an algorithm to compute the log of the optimal Bethe partition function to arbitrary epsilon-accuracy.
Further, if the model is attractive, we demonstrate a fully polynomial-time approximation scheme (FPTAS), an important theoretical result, and show its practical applications. Next we explore ways to tease apart the two aspects of the Bethe approximation, i.e. the polytope relaxation and the entropy approximation. We derive analytic results, show how optimization may be explored over various polytopes in practice, even for large models, and remark on the observed performance compared to the true distribution and the tree-reweighted (TRW) approximation. This reveals important novel observations and helps guide inference in practice. Finally, we present results related to clamping a selection of variables in a model. We derive novel lower bounds on an array of approximate partition functions based only on the model's topology. Further, we show that in an attractive binary pairwise model, clamping any variable and summing over the approximate sub-partition functions can only increase (hence improve) the Bethe approximation, then use this to provide a new, short proof that the Bethe partition function lower bounds the true value for this class of models. The bulk of this work focuses on the class of binary, pairwise models, but several results apply more generally.
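    The clamping identity for the exact partition function is easy to verify by brute force: summing Z over both values of a clamped variable recovers Z exactly. The thesis's nontrivial result concerns the analogous behavior of the Bethe approximation, which this exact-enumeration sketch (on an invented attractive model) does not reproduce:

```python
from itertools import product
from math import exp

def partition_function(unary, edges, coupling, clamp=None):
    """Brute-force Z = sum over all binary x of exp(-E(x)), where
    E(x) = sum_i unary[i][x_i] + sum_{(i,j) in edges} coupling[x_i][x_j].
    `clamp` optionally fixes some variables, e.g. {1: 0}."""
    n = len(unary)
    Z = 0.0
    for x in product((0, 1), repeat=n):
        if clamp and any(x[i] != v for i, v in clamp.items()):
            continue  # skip states inconsistent with the clamp
        e = sum(unary[i][x[i]] for i in range(n))
        e += sum(coupling[x[i]][x[j]] for (i, j) in edges)
        Z += exp(-e)
    return Z

# Hypothetical attractive 3-cycle: agreement is cheaper than disagreement.
unary = [[0.0, 0.3], [0.2, 0.0], [0.0, 0.1]]
edges = [(0, 1), (1, 2), (0, 2)]
coupling = [[0.0, 1.0], [1.0, 0.0]]
Z = partition_function(unary, edges, coupling)
Z0 = partition_function(unary, edges, coupling, clamp={1: 0})
Z1 = partition_function(unary, edges, coupling, clamp={1: 1})
# Z == Z0 + Z1 up to floating-point rounding
```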

    Inference on Highly-Connected Discrete Graphical Models with Applications to Visual Object Recognition

    Recognizing and localizing objects in images is one of the most important subproblems in modern image-processing systems. While the detection of rigid objects from arbitrary viewpoints was still considered difficult a few years ago, current research aims to recognize and detect deformable, articulated objects. Due to high intra-class variance, occlusions, and background of similar appearance, this is very hard. Classification is further complicated by the fact that descriptions of holistic models often do not form clusters in the associated feature space. In recent years, object descriptions have therefore shifted from holistic toward part-based models, in which an object is described by a set of individual parts together with information about their dependencies. In this context we present a versatile and extensible model for part-based object recognition. The theory of probabilistic graphical models makes it possible to learn all model parameters from manually annotated training data in a mathematically well-founded way. A particular focus lies on computing the optimal pose of an object in an image. In probabilistic terms this is the object description with maximum a posteriori probability (MAP), and finding it is known as the MAP problem. Both learning the model parameters and finding the optimal object pose require solving combinatorial optimization problems that are in general NP-hard. Restricting attention to efficiently solvable models means that many important dependencies between the individual parts can no longer be expressed.
    The trend in modeling is therefore toward general models, which entail far more complex optimization problems. In this thesis we propose two new methods for solving the MAP problem for general discrete models. Our first approach transforms the MAP problem into a shortest-path problem, which is solved by an A* search with an admissible heuristic. The admissible heuristic is based on an acyclically structured bound on the original problem. Since this method is no longer applicable to models with very many parts, we consider alternatives. To this end, we transform the combinatorial problem into a linear program using exponential families. Owing to the large number of affine constraints, however, this is practically unsolvable in that form. We therefore propose a novel decomposition of the MAP problem into subproblems with a k-fan structure. Despite their cyclic structure, all of these subproblems can be solved efficiently with our A* method. Using the Lagrangian method and this decomposition we obtain tighter relaxations than the standard relaxation over the local polytope. In experiments on synthetic and real data, these methods were compared with standard methods from computer vision and with commercial software for solving linear and integer optimization problems. Except for models with very many parts, the A* approach gave the best results in terms of optimality and runtime. The method based on k-fan decompositions also showed promising results with respect to optimality, but generally converged very slowly.

    Discrete graphical models -- an optimization perspective

    This monograph is about discrete energy minimization for discrete graphical models. It considers graphical models, or, more precisely, maximum a posteriori inference for graphical models, purely as a combinatorial optimization problem. Modeling, applications, probabilistic interpretations and many other aspects are either ignored here or find their place in examples and remarks only. It covers the integer linear programming formulation of the problem as well as its linear programming, Lagrange and Lagrange decomposition-based relaxations. In particular, it provides a detailed analysis of the polynomially solvable acyclic and submodular problems, along with the corresponding exact optimization methods. Major approximate methods, such as message passing and graph cut techniques, are also described and analyzed comprehensively. The monograph can be useful for undergraduate and graduate students studying optimization or graphical models, as well as for experts in optimization who want to have a look into graphical models. To make the monograph suitable for both categories of readers we explicitly separate the mathematical optimization background chapters from those specific to graphical models.
    Comment: 270 pages
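    For concreteness, the integer linear programming formulation referred to above can be written, in one common notation (ours, which may differ from the monograph's), over indicator variables \(\mu\):

```latex
\begin{aligned}
\min_{\mu}\quad & \sum_{i \in V} \sum_{s} \theta_i(s)\,\mu_i(s)
  + \sum_{(i,j) \in E} \sum_{s,t} \theta_{ij}(s,t)\,\mu_{ij}(s,t) \\
\text{s.t.}\quad & \sum_{s} \mu_i(s) = 1 && \forall i \in V, \\
 & \sum_{t} \mu_{ij}(s,t) = \mu_i(s) && \forall (i,j) \in E,\ \forall s, \\
 & \sum_{s} \mu_{ij}(s,t) = \mu_j(t) && \forall (i,j) \in E,\ \forall t, \\
 & \mu_i(s),\ \mu_{ij}(s,t) \in \{0,1\}.
\end{aligned}
```

    Relaxing the integrality constraints to \(\mu \ge 0\) yields the local polytope relaxation whose analysis the monograph covers.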

    JFPC 2019 - Actes des 15es Journées Francophones de Programmation par Contraintes

    The JFPC (Journées Francophones de Programmation par Contraintes) are the main conference of the French-speaking community working on constraint satisfaction problems (CSP), propositional satisfiability (SAT), and/or constraint logic programming (CLP). The constraint programming community also maintains ties with operations research (OR), interval analysis, and various fields of artificial intelligence. The efficiency of solution methods and the extension of the models allow constraint programming to tackle numerous and varied applications such as logistics, task scheduling, timetabling, robot design, genome study in bioinformatics, optimization of agricultural practices, and more. The JFPC are intended as a friendly venue for meetings, discussions and exchanges within the French-speaking community, in particular between doctoral students, established researchers and industry. The importance of the JFPC is reflected in the considerable share (about one third) of the French-speaking community in worldwide research in this field. Sponsored by the AFPC (Association Française pour la Programmation par Contraintes), JFPC 2019 takes place from June 12 to 14, 2019 at IMT Mines Albi and is organized by Xavier Lorca (chair of the scientific committee) and Élise Vareilles (chair of the organizing committee).