Understanding model counting for β-acyclic CNF-formulas
We extend the knowledge about so-called structural restrictions of #SAT by
giving a polynomial time algorithm for β-acyclic #SAT. In contrast to previous
algorithms in the area, our algorithm does not proceed by dynamic programming
but works along an elimination order, solving a weighted version of constraint
satisfaction. Moreover, we give evidence that this deviation from more standard
algorithms is not a coincidence, but that there is likely no dynamic
programming algorithm of the usual style for β-acyclic #SAT.
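For reference, the general #SAT problem the paper restricts can be stated as a brute-force count over all assignments, exponential in the number of variables (this sketch is illustrative and is not the paper's polynomial-time elimination-order algorithm; the DIMACS-style clause encoding is an assumption):

```python
from itertools import product

def count_models(num_vars, clauses):
    """Count satisfying assignments of a CNF formula (#SAT) by brute force.

    Clauses are lists of nonzero ints: literal v means variable v is true,
    -v means it is false (DIMACS-style). Runs in O(2^num_vars * |formula|),
    in contrast to polynomial time on beta-acyclic instances.
    """
    count = 0
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3): 4 of the 8 assignments satisfy it
print(count_models(3, [[1, 2], [-1, 3]]))  # → 4
```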
SETH-Based Lower Bounds for Subset Sum and Bicriteria Path
Subset-Sum and k-SAT are two of the most extensively studied problems in computer science, and conjectures about their hardness are among the cornerstones of fine-grained complexity. One of the most intriguing open problems in this area is to base the hardness of one of these problems on the other. Our main result is a tight reduction from k-SAT to Subset-Sum on dense instances, proving that Bellman's 1962 pseudo-polynomial O(Tn)-time algorithm for Subset-Sum on n numbers and target T cannot be improved to time T^{1-ε}·2^{o(n)} for any ε > 0, unless the Strong Exponential Time Hypothesis (SETH) fails. This is one of the strongest known connections between any two of the core problems of fine-grained complexity. As a corollary, we prove a "Direct-OR" theorem for Subset-Sum under SETH, offering a new tool for proving conditional lower bounds: It is now possible to assume that deciding whether one out of N given instances of Subset-Sum is a YES instance requires time (NT)^{1-o(1)}. As an application of this corollary, we prove a tight SETH-based lower bound for the classical Bicriteria s,t-Path problem, which is extensively studied in Operations Research. We separate its complexity from that of Subset-Sum: On graphs with m edges and edge lengths bounded by L, we show that the O(Lm) pseudo-polynomial time algorithm by Joksch from 1966 cannot be improved to Õ(L + m), in contrast to a recent improvement for Subset Sum (Bringmann, SODA 2017).
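The pseudo-polynomial algorithm whose optimality the abstract establishes is standard dynamic programming over reachable sums; a minimal sketch:

```python
def subset_sum(nums, target):
    """Bellman-style DP for Subset-Sum: O(n * T) time on n numbers, target T.

    reachable[s] is True iff some subset of the numbers seen so far sums to s.
    The SETH-based lower bound above says this cannot be improved to
    T^(1-eps) * 2^(o(n)) time for any eps > 0.
    """
    reachable = [False] * (target + 1)
    reachable[0] = True
    for x in nums:
        for s in range(target, x - 1, -1):  # descending: each number used at most once
            if reachable[s - x]:
                reachable[s] = True
    return reachable[target]

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # → True (4 + 5 = 9)
```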
On the van der Waerden numbers w(2;3,t)
We present results and conjectures on the van der Waerden numbers w(2;3,t)
and on the new palindromic van der Waerden numbers pdw(2;3,t). We have computed
the new number w(2;3,19) = 349, and we provide lower bounds for 20 <= t <= 39,
where for t <= 30 we conjecture these lower bounds to be exact. The lower
bounds for 24 <= t <= 30 refute the conjecture that w(2;3,t) <= t^2, and we
present an improved conjecture. We also investigate regularities in the good
partitions (certificates) to better understand the lower bounds.
Motivated by such regularities, we introduce *palindromic van der Waerden
numbers* pdw(k; t_0,...,t_{k-1}), defined as ordinary van der Waerden numbers
w(k; t_0,...,t_{k-1}), however only allowing palindromic solutions (good
partitions), defined as reading the same from both ends. In contrast to the
situation for ordinary van der Waerden numbers, these "numbers" actually need
to be pairs of numbers. We compute pdw(2;3,t) for 3 <= t <= 27, and we provide
lower bounds, which we conjecture to be exact, for t <= 35.
All computations are based on SAT solving, and we discuss the various
relations between SAT solving and Ramsey theory. In particular, we introduce a
novel (open-source) SAT solver, the tawSolver, which performs best on the SAT
instances studied here, and which is actually the original DLL-solver, but with
an efficient implementation and a modern heuristic typical for look-ahead
solvers (applying the theory developed in the SAT handbook article of the
second author).
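A "good partition" certifying w(2;3,t) > n is a 2-colouring of {1,...,n} with no 3-term arithmetic progression in the first colour and no t-term progression in the second; a small checker, sketched from the abstract's definitions (the palindromic condition simply requires the colouring to read the same from both ends):

```python
def has_ap(positions, k):
    """True if the set of positions contains a k-term arithmetic progression."""
    pos = set(positions)
    n = max(pos, default=0)
    for a in pos:
        for d in range(1, n + 1):
            if all(a + i * d in pos for i in range(k)):
                return True
    return False

def is_good_partition(colors, t):
    """colors[i] in {0, 1} colours the integer i+1. A w(2;3,t) certificate has
    no 3-AP in colour 0 and no t-AP in colour 1."""
    block0 = [i + 1 for i, c in enumerate(colors) if c == 0]
    block1 = [i + 1 for i, c in enumerate(colors) if c == 1]
    return not has_ap(block0, 3) and not has_ap(block1, t)

def is_palindromic(colors):
    """Palindromic solutions read the same from both ends."""
    return colors == colors[::-1]

# {1,2,5,6} / {3,4,7,8} avoids 3-APs in both blocks, witnessing w(2;3,3) > 8
print(is_good_partition([0, 0, 1, 1, 0, 0, 1, 1], 3))  # → True
```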
Multi-objective optimization in graphical models
Many real-life optimization problems are combinatorial, i.e., they concern choosing the best solution from a finite but exponentially
large set of alternatives. Moreover, the quality of a solution to many of these problems can often be evaluated from several points of view
(a.k.a. criteria). In that case, each criterion may be described by a different objective function. Some important and well-known
multicriteria scenarios are:
· In investment optimization one wants to minimize risk and maximize benefits.
· In travel scheduling one wants to minimize time and cost.
· In circuit design one wants to minimize circuit area and energy consumption and maximize speed.
· In knapsack problems one wants to minimize load weight and/or volume and maximize its economic value.
The previous examples illustrate that, in many cases, these multiple criteria are incommensurate (i.e., it is difficult or impossible to
combine them into a single criterion) and conflicting (i.e., solutions that are good with respect to one criterion are likely to be bad with
respect to another). Taking the different criteria into account simultaneously is not trivial, and several notions of optimality have been
proposed. Independently of the chosen notion of optimality, computing optimal solutions represents an important current research
challenge.
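One standard notion of optimality for incommensurate, conflicting criteria is Pareto optimality: a solution is dominated if another solution is at least as good in every criterion and strictly better in one. A minimal sketch (all objectives minimized; the example objective pairs are illustrative):

```python
def dominates(a, b):
    """a dominates b if a is <= b in every objective and < in at least one
    (all objectives are minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_frontier(points):
    """Return the non-dominated points: the Pareto-optimal set."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Illustrative (risk, cost) pairs: (1, 9) and (5, 4) trade off against each
# other, while (6, 6) is dominated by (5, 4) and drops out.
print(pareto_frontier([(1, 9), (5, 4), (6, 6)]))  # → [(1, 9), (5, 4)]
```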
Graphical models are a knowledge representation tool widely used in Artificial Intelligence, and they are especially well suited to
combinatorial problems. Roughly, graphical models are graphs in which nodes represent variables and the (lack of) arcs
represents conditional independence assumptions. In addition to the graph structure, it is necessary to specify its micro-structure,
which describes how particular combinations of instantiations of interdependent variables interact. The graphical model framework
provides a unifying way to model a broad spectrum of systems and a collection of general algorithms to solve them efficiently.
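As a toy illustration of graph structure plus micro-structure, consider a hypothetical model with three 0/1 variables where x and z interact only through y; the cost tables below are the micro-structure, and the optimum is found here by brute force (the framework's general algorithms, such as bucket elimination, instead exploit the graph structure):

```python
from itertools import product

# Illustrative cost tables on the arcs (x, y) and (y, z); there is no (x, z)
# arc, encoding that x and z are independent given y.
cost_xy = {(0, 0): 2, (0, 1): 5, (1, 0): 1, (1, 1): 4}
cost_yz = {(0, 0): 3, (0, 1): 0, (1, 0): 6, (1, 1): 1}

def total_cost(x, y, z):
    """Total cost of a full instantiation: sum of the local cost tables."""
    return cost_xy[(x, y)] + cost_yz[(y, z)]

# Brute-force minimization over all 2^3 instantiations.
best = min(product([0, 1], repeat=3), key=lambda a: total_cost(*a))
print(best, total_cost(*best))  # → (1, 0, 1) 1
```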
In this Thesis we integrate multi-objective optimization problems into the graphical model paradigm and study how algorithmic
techniques developed in the graphical model context can be extended to multi-objective optimization problems. As we show,
multi-objective optimization problems can be formalized as a particular case of graphical models using the semiring-based framework.
To the best of our knowledge, this is the first time that graphical models in general, and semiring-based problems in particular, have
been used to model an optimization problem in which the objective function is partially ordered. Moreover, we show that most of the
solving techniques for mono-objective optimization problems can be naturally extended to the multi-objective context. The result of
our work is the mathematical formalization of multi-objective optimization problems and the development of a set of multi-objective solving
algorithms that have proved efficient on a number of benchmarks.
Certifying solution geometry in random CSPs: counts, clusters and balance
An active topic in the study of random constraint satisfaction problems
(CSPs) is the geometry of the space of satisfying or almost satisfying
assignments as a function of the density, for which a precise landscape of
predictions has been made via statistical physics-based heuristics. In
parallel, there has been a recent flurry of work on refuting random constraint
satisfaction problems, via nailing refutation thresholds for spectral and
semidefinite programming-based algorithms, and also on counting solutions to
CSPs. Inspired by this, the starting point for our work is the following
question: what does the solution space for a random CSP look like to an
efficient algorithm?
In pursuit of this inquiry, we focus on the following problems about random
Boolean CSPs at the densities where they are unsatisfiable but no refutation
algorithm is known.
1. Counts. For every Boolean CSP we give algorithms that with high
probability certify a subexponential upper bound on the number of solutions. We
also give algorithms to certify a bound on the number of large cuts in a
Gaussian-weighted graph, and the number of large independent sets in a random
d-regular graph.
2. Clusters. For Boolean CSPs we give algorithms that with high
probability certify an upper bound on the number of clusters of solutions.
3. Balance. We also give algorithms that with high probability certify that
there are no "unbalanced" solutions, i.e., solutions where the fraction of
+1s deviates significantly from 1/2.
Finally, we also provide hardness evidence suggesting that our algorithms for
counting are optimal.
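The quantities being certified, solution counts and the balance of solutions, can be observed directly on small instances by brute-force enumeration (this sketch only illustrates the objects of study; it is not a certification algorithm, and the instance generator is a hypothetical random k-SAT model):

```python
import random
from itertools import product

def random_ksat(n, m, k=3, seed=0):
    """Random k-SAT: m clauses over n Boolean variables (density alpha = m/n).
    Each clause picks k distinct variables and negates each with prob. 1/2."""
    rng = random.Random(seed)
    return [[v * rng.choice([1, -1]) for v in rng.sample(range(1, n + 1), k)]
            for _ in range(m)]

def solutions(n, clauses):
    """Enumerate satisfying assignments as +/-1 vectors (brute force)."""
    for bits in product([1, -1], repeat=n):
        if all(any(bits[abs(l) - 1] == (1 if l > 0 else -1) for l in clause)
               for clause in clauses):
            yield bits

# Solution count and average balance (fraction of +1s) at density m/n = 2
n, m = 12, 24
sols = list(solutions(n, random_ksat(n, m)))
print(len(sols))
if sols:
    print(sum(b.count(1) / n for b in sols) / len(sols))
```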
Randomized approximation algorithms: facility location, phylogenetic networks, Nash equilibria
Despite great effort, researchers have been unable to find efficient algorithms for a number of natural computational problems. Typically, the hardness of such a problem can be emphasized by proving that it is at least as hard as a number of other problems; in the language of computational complexity, this means proving that the problem is complete for a certain class of problems. For optimization problems, we may relax the requirement that the outcome be optimal and accept an approximate (i.e., close to optimal) solution. For many problems that are hard to solve optimally, it is actually possible to efficiently find close-to-optimal solutions. In this thesis, we study algorithms for computing such approximate solutions.
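A classic illustration of the approximation paradigm described above (chosen for brevity; it is not one of the thesis's algorithms) is the 2-approximation for minimum vertex cover via a maximal matching:

```python
def vertex_cover_2approx(edges):
    """2-approximation for minimum vertex cover: greedily build a maximal
    matching and take both endpoints of every matched edge.

    Any cover must pick at least one endpoint of each matched edge, so this
    cover has at most twice the optimal size.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge not yet covered: match it
            cover.update((u, v))
    return cover

edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
cover = vertex_cover_2approx(edges)
print(cover)  # covers every edge; the optimum here has size 2, e.g. {1, 4}
```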