2,403 research outputs found
Inapproximability of Combinatorial Optimization Problems
We survey results on the hardness of approximating combinatorial optimization problems.
The Complexity of Rationalizing Network Formation
We study the complexity of rationalizing network formation. In this problem we fix an underlying model describing how selfish parties (the vertices) produce a graph by making individual decisions to form or not form incident edges. The model is equipped with a notion of stability (or equilibrium), and we observe a set of "snapshots" of graphs that are assumed to be stable. From this we would like to infer some unobserved data about the system: edge prices, or how much each vertex values short paths to each other vertex. We study two rationalization problems arising from the network formation model of Jackson and Wolinsky [14]. When the goal is to infer edge prices, we observe that the rationalization problem is easy. The problem remains easy even when rationalizing prices do not exist and we instead wish to find prices that maximize the stability of the system. In contrast, when the edge prices are given and the goal is instead to infer valuations of each vertex by each other vertex, we prove that the rationalization problem becomes NP-hard. Our proof exposes a close connection between rationalization problems and the Inequality-SAT (I-SAT) problem. Finally and most significantly, we prove that an approximation version of this NP-complete rationalization problem is NP-hard to approximate to within better than a 1/2 ratio. This shows that the trivial algorithm of setting everyone's valuations to infinity (which rationalizes all the edges present in the input graphs) or to zero (which rationalizes all the non-edges present in the input graphs) is the best possible, assuming P ≠ NP. To do this we prove a tight (1/2 + δ)-approximation hardness for a variant of I-SAT in which all coefficients are non-negative. This in turn follows from a tight hardness result for MAX-LIN_{R_+} (linear equations over the reals, with non-negative coefficients), which we prove by a (non-trivial) modification of the recent result of Guruswami and Raghavendra [10], which achieved tight hardness for this problem without the non-negativity constraint. Our technical contributions regarding the hardness of I-SAT and MAX-LIN_{R_+} may be of independent interest, given the generality of these problems.
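A quick way to see the 1/2 ratio mentioned above (a sketch of the counting argument, not a statement taken from the paper): every constraint to be rationalized corresponds either to an edge or to a non-edge of the observed graphs, and each of the two trivial valuation assignments satisfies one of the two classes entirely, so the better of the two satisfies at least half of all constraints:

\max\big(\#\text{edge constraints},\ \#\text{non-edge constraints}\big) \;\ge\; \tfrac{1}{2}\big(\#\text{edge constraints} + \#\text{non-edge constraints}\big).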
Complexity of Discrete Energy Minimization Problems
Discrete energy minimization is widely used in computer vision and machine
learning for problems such as MAP inference in graphical models. The problem,
in general, is notoriously intractable, and finding the global optimal solution
is known to be NP-hard. However, is it possible to approximate this problem
with a reasonable ratio bound on the solution quality in polynomial time? We
show in this paper that the answer is no. Specifically, we show that general
energy minimization, even in the 2-label pairwise case, and planar energy
minimization with three or more labels are exp-APX-complete. This finding rules
out the existence of any approximation algorithm with a sub-exponential
approximation ratio in the input size for these two problems, including
constant factor approximations. Moreover, we collect and review the
computational complexity of several subclass problems and arrange them on a
complexity scale consisting of three major complexity classes -- PO, APX, and
exp-APX, corresponding to problems that are solvable, approximable, and
inapproximable in polynomial time. Problems in the first two complexity classes
can serve as alternative tractable formulations to the inapproximable ones.
This paper can help vision researchers to select an appropriate model for an
application or guide them in designing new algorithms. Comment: ECCV'16 accepted.
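For context (a standard formulation, not taken verbatim from the paper), the pairwise discrete energies discussed above are typically written over a graph G = (V, E) with label set L as

E(x) \;=\; \sum_{i \in V} \theta_i(x_i) \;+\; \sum_{(i,j) \in E} \theta_{ij}(x_i, x_j), \qquad x_i \in L,

so the 2-label pairwise case corresponds to |L| = 2, and MAP inference in a pairwise graphical model is exactly the minimization of such an energy (up to sign).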
Inapproximability of maximal strip recovery
In comparative genomics, the first step of sequence analysis is usually to decompose two or more genomes into syntenic blocks that are segments of homologous chromosomes. For the reliable recovery of syntenic blocks, noise and ambiguities in the genomic maps need to be removed first. Maximal Strip Recovery (MSR) is an optimization problem proposed by Zheng, Zhu, and Sankoff for reliably recovering syntenic blocks from genomic maps in the midst of noise and ambiguities. Given d genomic maps as sequences of gene markers, the objective of MSR-d is to find d subsequences, one subsequence of each genomic map, such that the total length of syntenic blocks in these subsequences is maximized. For any constant d, a polynomial-time 2d-approximation for MSR-d was previously known. In this paper, we show that for any d ≥ 2, MSR-d is APX-hard, even for the most basic version of the problem in which all gene markers are distinct and appear in positive orientation in each genomic map. Moreover, we provide the first explicit lower bounds on approximating MSR-d for all d ≥ 2; in particular, we show that MSR-d is NP-hard to approximate within an explicit constant factor. From the other direction, we show that the previous 2d-approximation for MSR-d can be optimized into a polynomial-time algorithm even if d is not a constant but is part of the input. We then extend our inapproximability results to several related problems including CMSR-d, δ-gap-MSR-d, and δ-gap-CMSR-d. Comment: A preliminary version of this paper appeared in two parts in the Proceedings of the 20th International Symposium on Algorithms and Computation (ISAAC 2009) and the Proceedings of the 4th International Frontiers of Algorithmics Workshop (FAW 2010).
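As a concrete illustration of the MSR objective described above (a toy sketch under simplifying assumptions: d = 2, all markers distinct, positive orientation only, and strips taken as same-order runs, ignoring reversed strips), one can score a candidate pair of retained subsequences and brute-force tiny instances:

from itertools import combinations

def strip_length(s1, s2):
    """Total number of markers of s1 covered by maximal same-order runs
    of length >= 2 that also appear consecutively in s2."""
    pos2 = {m: i for i, m in enumerate(s2)}
    total, i = 0, 0
    while i < len(s1):
        j = i
        while j + 1 < len(s1) and pos2[s1[j + 1]] == pos2[s1[j]] + 1:
            j += 1
        run = j - i + 1
        if run >= 2:
            total += run
        i = j + 1
    return total

def msr2_bruteforce(map1, map2):
    """Exhaustively try every set of markers to keep (exponential time);
    only meant to make the objective concrete on toy inputs."""
    markers = sorted(set(map1) & set(map2))
    best = 0
    for r in range(2, len(markers) + 1):
        for keep in combinations(markers, r):
            ks = set(keep)
            s1 = [m for m in map1 if m in ks]
            s2 = [m for m in map2 if m in ks]
            best = max(best, strip_length(s1, s2))
    return best

# Toy example: dropping marker 4 leaves the common strip 1,2,3,5,6 of length 5.
print(msr2_bruteforce([1, 2, 3, 4, 5, 6], [1, 2, 3, 5, 4, 6]))  # prints 5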
On largest volume simplices and sub-determinants
We show that the problem of finding the simplex of largest volume in the convex hull of a set of points in R^d can be approximated in polynomial time within a factor that improves upon the previously best known approximation guarantee by Khachiyan. On the other hand, we show that there exists a constant c such that this problem cannot be approximated within a factor of c^d, unless P = NP; this improves over the inapproximability that was previously known. Our hardness result holds even in a restricted setting in which there exists a \bar{c}^{d}-approximation algorithm that relies on recent sampling techniques, where \bar{c} is again a constant. We show that similar results hold for the problem of finding the largest absolute value of a subdeterminant of a matrix.
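The link between the two problems in the last sentence (a standard fact, not specific to this paper) is that the volume of a simplex is a scaled determinant: for points v_0, ..., v_d in R^d,

\mathrm{vol}\big(\mathrm{conv}\{v_0, \dots, v_d\}\big) \;=\; \frac{1}{d!}\,\bigl|\det(v_1 - v_0,\ \dots,\ v_d - v_0)\bigr|,

which is the usual bridge between largest-volume simplices and largest absolute sub-determinants of a matrix built from the point differences.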
From Gap-ETH to FPT-Inapproximability: Clique, Dominating Set, and More
We consider questions that arise from the intersection between the areas of
polynomial-time approximation algorithms, subexponential-time algorithms, and
fixed-parameter tractable algorithms. The questions, which have been asked
several times (e.g., [Marx08, FGMS12, DF13]), are whether there is a
non-trivial FPT-approximation algorithm for the Maximum Clique (Clique) and
Minimum Dominating Set (DomSet) problems parameterized by the size of the
optimal solution. In particular, letting OPT be the optimum and N be the size of the input, is there an algorithm that runs in t(OPT) poly(N) time and outputs a solution of size f(OPT), for any functions t and f that are independent of N (for Clique, we want f(OPT) = ω(1))?
In this paper, we show that both Clique and DomSet admit no non-trivial FPT-approximation algorithm, i.e., there is no o(OPT)-FPT-approximation algorithm for Clique and no f(OPT)-FPT-approximation algorithm for DomSet, for any function f (e.g., this holds even if f is the Ackermann function). In fact, our results imply something even stronger: the best way to solve Clique and DomSet, even approximately, is to essentially enumerate all possibilities. Our results hold under the Gap Exponential Time Hypothesis (Gap-ETH) [Dinur16, MR16], which states that no 2^{o(n)}-time algorithm can distinguish between a satisfiable 3SAT formula and one which is not even (1 - ε)-satisfiable for some constant ε > 0.
Besides Clique and DomSet, we also rule out non-trivial FPT-approximation for
Maximum Balanced Biclique, Maximum Subgraphs with Hereditary Properties, and
Maximum Induced Matching in bipartite graphs. Additionally, we rule out a k^{o(1)}-FPT-approximation algorithm for Densest k-Subgraph, although this ratio does not yet match the trivial O(k)-approximation algorithm. Comment: 43 pages. To appear in FOCS'17.
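For readers unfamiliar with the hypothesis, a common way to state Gap-ETH formally (a paraphrase of the statement cited above, with the exact constants left abstract) is:

\exists\, \delta, \varepsilon > 0:\ \text{no } O(2^{\delta n})\text{-time algorithm distinguishes } \mathrm{val}(\varphi) = 1 \text{ from } \mathrm{val}(\varphi) \le 1 - \varepsilon

for 3SAT formulas φ on n variables, where val(φ) denotes the maximum fraction of clauses satisfiable by any assignment.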
Approximating solution structure of the Weighted Sentence Alignment problem
We study the complexity of approximating solution structure of the bijective
weighted sentence alignment problem of DeNero and Klein (2008). In particular,
we consider the complexity of finding an alignment that has a significant
overlap with an optimal alignment. We discuss ways of representing the solution for the general weighted sentence alignment problem as well as the phrases-to-words alignment problem, and show that computing a string which agrees with the optimal sentence partition on more than half (plus an arbitrarily small polynomial fraction) of the positions for the phrases-to-words alignment is NP-hard. For the general weighted sentence alignment we obtain such a bound from agreement on a little over 2/3 of the bits. Additionally, we generalize the Hamming distance approximation of a solution structure to approximation with respect to the edit distance metric, obtaining similar lower bounds.
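To make the agreement measure above concrete (a standard definition; the encoding of alignments as strings is the paper's, but the formula below is just the usual normalized Hamming agreement): for two strings x, y of length n over the same alphabet,

\mathrm{agree}(x, y) \;=\; \frac{1}{n}\,\bigl|\{\, i : x_i = y_i \,\}\bigr| \;=\; 1 - \frac{d_H(x, y)}{n},

and the hardness statements above say it is NP-hard to output any string whose agreement with an optimal solution string exceeds the stated thresholds.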
On the Hardness and Inapproximability of Recognizing Wheeler Graphs
In recent years several compressed indexes based on variants of the Burrows-Wheeler transformation have been introduced. Some of these are used to index structures far more complex than a single string, as was originally done with the FM-index [Ferragina and Manzini, J. ACM 2005]. As such, there has been an increasing effort to better understand under which conditions such an indexing scheme is possible. This has led to the introduction of Wheeler graphs [Gagie et al., Theor. Comput. Sci., 2017]. Gagie et al. showed that de Bruijn graphs, generalized compressed suffix arrays, and several other BWT related structures can be represented as Wheeler graphs, and that Wheeler graphs can be indexed in a way which is space efficient. Hence, being able to recognize whether a given graph is a Wheeler graph, or being able to approximate a given graph by a Wheeler graph, could have numerous applications in indexing. Here we resolve the open question of whether there exists an efficient algorithm for recognizing if a given graph is a Wheeler graph. We present:
- The problem of recognizing whether a given graph G=(V,E) is a Wheeler graph is NP-complete for any edge label alphabet of size sigma >= 2, even when G is a DAG. This holds even on a restricted subset of graphs called d-NFAs for d >= 5. This is in contrast to recent results demonstrating the problem can be solved in polynomial time for d-NFAs where d <= 2. We also show the recognition problem can be solved in linear time for sigma = 1;
- There exists a 2^{e log sigma + O(n + e)}-time exact algorithm, where n = |V| and e = |E|. This algorithm relies on graph isomorphism being computable in strictly sub-exponential time;
- We define an optimization variant of the problem called Wheeler Graph Violation, abbreviated WGV, where the aim is to remove the minimum number of edges in order to obtain a Wheeler graph. We show WGV is APX-hard, even when G is a DAG, implying there exists a constant C >= 1 for which there is no C-approximation algorithm (unless P = NP). Also, conditioned on the Unique Games Conjecture, for all C >= 1, it is NP-hard to find a C-approximation;
- We define the Wheeler Subgraph problem, abbreviated WS, where the aim is to find the largest subgraph which is a Wheeler Graph (the dual of the WGV). In contrast to WGV, we prove that the WS problem is in APX for sigma=O(1);
The above findings suggest that most problems under this theme are computationally difficult. However, we identify a class of graphs for which the recognition problem is polynomial time solvable, raising the open question of which parameters determine this problem's difficulty.
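To make the recognition problem concrete, here is a small sketch (my own illustration, using the ordering axioms as stated by Gagie et al.; the exact inequality conventions should be checked against that paper) that verifies whether a proposed vertex ordering makes an edge-labeled graph a Wheeler graph. The hardness above concerns finding such an ordering; checking a given one is straightforward:

def is_wheeler_order(n, edges, order):
    """Check the Wheeler-graph axioms for a proposed ordering.

    n      -- number of vertices, numbered 0..n-1
    edges  -- list of directed, labeled edges (u, v, label)
    order  -- dict mapping each vertex to its rank in the proposed ordering
    """
    indeg = [0] * n
    for _, v, _ in edges:
        indeg[v] += 1
    # Axiom 1: vertices with in-degree 0 precede all vertices with in-degree > 0.
    zero = [order[v] for v in range(n) if indeg[v] == 0]
    pos = [order[v] for v in range(n) if indeg[v] > 0]
    if zero and pos and max(zero) > min(pos):
        return False
    # Axioms 2 and 3: pairwise monotonicity of edges by label and by source rank.
    for (u, v, a) in edges:
        for (u2, v2, a2) in edges:
            if a < a2 and not (order[v] < order[v2]):
                return False
            if a == a2 and order[u] < order[u2] and not (order[v] <= order[v2]):
                return False
    return True

# Toy usage: a labeled path 0 -a-> 1 -b-> 2 with the identity ordering.
print(is_wheeler_order(3, [(0, 1, 'a'), (1, 2, 'b')], {0: 0, 1: 1, 2: 2}))  # True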