Don't Be Strict in Local Search!
Local Search is one of the fundamental approaches to combinatorial
optimization and it is used throughout AI. Several local search algorithms are
based on searching the k-exchange neighborhood. This is the set of solutions
that can be obtained from the current solution by exchanging at most k
elements. As a rule of thumb, the larger k is, the better the chances of
finding an improved solution. However, for inputs of size n, a naïve
brute-force search of the k-exchange neighborhood requires n^O(k) time,
which is not practical even for very small values of k.
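As an illustration, a brute-force k-exchange search for Vertex Cover might look like the following sketch (the names and representation are our own, not the paper's); the nested enumeration over subsets to swap out and in is exactly what produces the n^O(k) running time.

```python
from itertools import combinations

def is_vertex_cover(cover, edges):
    """Check that every edge has at least one endpoint in the cover."""
    return all(u in cover or v in cover for u, v in edges)

def improve_by_k_exchange(cover, vertices, edges, k):
    """Brute-force search of the k-exchange neighborhood: try every way of
    exchanging at most k elements (removed plus added) of the current cover.
    Enumerating these candidates takes n^O(k) time, which is what makes the
    naive approach impractical even for small k."""
    cover = set(cover)
    for r_out in range(k + 1):
        for out in combinations(sorted(cover), r_out):
            for r_in in range(k + 1 - r_out):
                for inc in combinations(sorted(set(vertices) - cover), r_in):
                    candidate = (cover - set(out)) | set(inc)
                    if len(candidate) < len(cover) and is_vertex_cover(candidate, edges):
                        return candidate
    return None  # the cover is locally optimal for the k-exchange neighborhood
```

On the path 1-2-3 with current cover {1, 3}, a 3-exchange (remove 1 and 3, add 2) finds the optimum {2}, while {2} itself admits no improving 2-exchange.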
Fellows et al. (IJCAI 2009) studied whether this brute-force search is
avoidable and gave positive and negative answers for several combinatorial
problems. They used the notion of local search in a strict sense. That is, an
improved solution needs to be found in the k-exchange neighborhood even if a
global optimum can be found efficiently.
In this paper we consider a natural relaxation of local search, called
permissive local search (Marx and Schlotter, IWPEC 2009) and investigate
whether it extends the domain of tractable inputs. We exemplify this approach
on a fundamental combinatorial problem, Vertex Cover. More precisely, we show
that for a class of inputs, finding an optimum is hard, strict local search is
hard, but permissive local search is tractable.
We carry out this investigation in the framework of parameterized complexity.
Algorithms for partitioning problems
Advisor: Eduardo Candido Xavier. Master's dissertation (mestrado), Universidade Estadual de Campinas, Instituto de Computação.
Abstract: In this work we investigate partitioning problems over objects for which a similarity relation is defined. Instances of these problems can be represented by graphs in which objects are vertices and the similarity between two objects is given by a weight on the edge connecting them. The goal is to partition the objects so that similar objects belong to the same subset.
We study clustering algorithms for graphs, where clusters must be determined such that edges connecting vertices of different clusters have low weight while edges between vertices of the same cluster have high weight. Partitioning and clustering problems have applications in many areas, such as data mining, information retrieval, and computational biology. Many versions of these problems are NP-hard. Our interest is in efficient (polynomial-time) algorithms that generate good solutions, such as heuristics, metaheuristics, and approximation algorithms. We implemented the most promising algorithms and compared their results on computationally generated instances. Finally, we propose a GRASP-based algorithm for the partitioning and clustering problem and show that, for the generated test instances, our algorithm achieves better results.
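GRASP repeats two phases: a greedy randomized construction (each element is placed using a restricted candidate list of near-best choices) followed by local search, keeping the best solution found. The sketch below is a minimal illustration on a toy objective, maximizing total intra-cluster similarity with weights that may be negative for dissimilar pairs; the thesis's actual objective and algorithm differ in detail.

```python
import random

def intra_weight(assign, weights):
    """Sum of similarity weights over edges whose endpoints share a cluster."""
    return sum(w for (u, v), w in weights.items() if assign[u] == assign[v])

def grasp_clustering(vertices, weights, k, iters=50, rcl_size=2, seed=0):
    """GRASP sketch: greedy randomized construction plus a vertex-move local
    search, repeated; the best partition found is kept. Weights may be
    negative (dissimilar pairs), so merging everything is not optimal."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(iters):
        # Construction: assign each vertex to one of the `rcl_size` clusters
        # with the highest similarity gain (the restricted candidate list).
        assign = {}
        for v in vertices:
            gains = sorted(
                ((sum(weights.get((min(u, v), max(u, v)), 0.0)
                      for u, cu in assign.items() if cu == c), c)
                 for c in range(k)),
                reverse=True)
            assign[v] = rng.choice(gains[:rcl_size])[1]
        # Local search: move single vertices while the objective improves.
        improved = True
        while improved:
            improved = False
            for v in vertices:
                cur, old = intra_weight(assign, weights), assign[v]
                for c in range(k):
                    if c == old:
                        continue
                    assign[v] = c
                    if intra_weight(assign, weights) > cur:
                        improved = True
                        break
                    assign[v] = old
        val = intra_weight(assign, weights)
        if val > best_val:
            best, best_val = dict(assign), val
    return best, best_val
```

On four objects where pairs (1,2) and (3,4) are similar and cross pairs are dissimilar, the sketch recovers the partition {1,2} | {3,4}.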
Higher-order inference in conditional random fields using submodular functions
Higher-order and dense conditional random fields (CRFs) are expressive graphical
models which have been very successful in low-level computer vision applications
such as semantic segmentation and stereo matching. These models are able to
capture long-range interactions and higher-order image statistics much better
than pairwise CRFs. This expressive power comes at a price, though: inference
problems in these models are computationally very demanding. This is a
particular challenge in computer vision, where fast inference is important and
the problem involves millions of pixels.
In this thesis, we look at how submodular functions can help us design
efficient inference methods for higher-order and dense CRFs. Submodular
functions are special discrete functions that have important properties from
an optimisation perspective, and are closely related to convex functions. We
use submodularity in a two-fold manner: (a) to design an efficient MAP inference
algorithm for a robust higher-order model that generalises the widely-used
truncated convex models, and (b) to glean insights into a recently proposed
variational inference algorithm, which gives us a principled approach for applying
it efficiently to higher-order and dense CRFs.
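A quick way to build intuition for submodularity is to check the defining inequality f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) exhaustively on a small ground set. The graph-cut function below is the canonical submodular example underlying many MAP inference reductions; the toy graph is our own, not one from the thesis.

```python
from itertools import combinations

def powerset(ground):
    """All subsets of the ground set, as frozensets."""
    s = list(ground)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_submodular(f, ground):
    """Check f(A) + f(B) >= f(A | B) + f(A & B) for every pair of subsets."""
    subsets = powerset(ground)
    return all(f(A) + f(B) >= f(A | B) + f(A & B)
               for A in subsets for B in subsets)

# Cut functions count edges crossing between S and its complement; with
# nonnegative weights they are submodular (toy graph below).
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
```

By contrast, f(S) = |S|^2 violates the inequality (take A = {0}, B = {1}: 1 + 1 < 4 + 0), so it is not submodular.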
A Constant Factor Approximation Algorithm for a Class of Classification Problems
In a traditional classification problem, we wish to assign labels from a set L to each of n objects so that the labeling is consistent with some observed data that includes pairwise relationships among the objects. Kleinberg and Tardos recently formulated a general classification problem of this type, the "metric labeling problem", and gave an O(log |L| log log |L|) approximation algorithm for it. The algorithm is based on solving a linear programming relaxation of a natural integer program and then randomized rounding. In this paper we consider an important case of the metric labeling problem, in which the metric is the truncated linear metric. This is a natural non-uniform and robust metric, and it arises in a number of applications. We give a combinatorial 4-approximation algorithm for this metric. Our algorithm is a natural local search method, where the local steps are based on minimum cut computations in an appropriately constructed graph. Our method extends previous work by Boy..
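For concreteness, the truncated linear metric on integer labels is d(i, j) = min(|i − j|, M): linear for small disagreements but capped at M, so large label jumps are not over-penalised, which is what makes it robust. A toy evaluation of the pairwise (smoothness) term of a labeling, with made-up labels and neighborhood:

```python
def truncated_linear(i, j, M):
    """Truncated linear metric on integer labels: linear up to a cap M."""
    return min(abs(i - j), M)

def pairwise_cost(labels, edges, M):
    """Smoothness term of a metric labeling: sum of truncated linear
    distances between the labels of neighboring objects."""
    return sum(truncated_linear(labels[u], labels[v], M) for u, v in edges)

# Toy example: 4 objects on a path, integer labels, truncation M = 2.
labels = {0: 0, 1: 1, 2: 5, 3: 5}
edges = [(0, 1), (1, 2), (2, 3)]
print(pairwise_cost(labels, edges, M=2))  # 1 + min(4, 2) + 0 = 3
```

The cap means the jump from label 1 to label 5 costs only 2 rather than 4, allowing sharp discontinuities (e.g. object boundaries in vision) without a prohibitive penalty.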