
    Computing a Minimum-Dilation Spanning Tree is NP-hard

    In a geometric network G = (S, E), the graph distance between two vertices u, v in S is the length of the shortest path in G connecting u to v. The dilation of G is the maximum factor by which the graph distance of a pair of vertices exceeds their Euclidean distance. We show that given a set S of n points with integer coordinates in the plane and a rational dilation $\delta > 1$, it is NP-hard to determine whether a spanning tree of S with dilation at most $\delta$ exists.
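
    For a fixed network the dilation itself is easy to compute; the hardness lies in choosing among the exponentially many spanning trees of S. The following Python sketch simply evaluates the definition, computing graph distances with Floyd-Warshall and taking the worst ratio over all vertex pairs (the function name and setup are illustrative, not from the paper).

```python
import math
from itertools import combinations

def dilation(points, edges):
    """Dilation of a geometric network: the maximum ratio of graph
    distance to Euclidean distance over all pairs of vertices.
    points: list of (x, y) tuples; edges: list of index pairs."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    INF = float("inf")
    # Floyd-Warshall on edge weights given by Euclidean edge lengths.
    g = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j in edges:
        g[i][j] = g[j][i] = dist(i, j)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if g[i][k] + g[k][j] < g[i][j]:
                    g[i][j] = g[i][k] + g[k][j]
    return max(g[i][j] / dist(i, j) for i, j in combinations(range(n), 2))

# A 3-point path: the detour through (1, 1) gives dilation sqrt(2).
print(dilation([(0, 0), (1, 1), (2, 0)], [(0, 1), (1, 2)]))  # ~1.4142
```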

    Maximum-likelihood decoding of Reed-Solomon Codes is NP-hard

    Maximum-likelihood decoding is one of the central algorithmic problems in coding theory. It has been known for over 25 years that maximum-likelihood decoding of general linear codes is NP-hard. Nevertheless, it was so far unknown whether maximum-likelihood decoding remains hard for any specific family of codes with nontrivial algebraic structure. In this paper, we prove that maximum-likelihood decoding is NP-hard for the family of Reed-Solomon codes. We moreover show that maximum-likelihood decoding of Reed-Solomon codes remains hard even with unlimited preprocessing, thereby strengthening a result of Bruck and Naor.
    Comment: 16 pages, no figures
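
    Maximum-likelihood decoding (under the standard channel models) amounts to finding a codeword nearest in Hamming distance to the received word. As an illustration of the problem itself, not of the paper's Reed-Solomon construction, the sketch below brute-forces a small binary Hamming code; the enumeration is exponential in the code dimension k, which is the cost the NP-hardness result says cannot be avoided for general codes.

```python
from itertools import product

def ml_decode(G, received):
    """Brute-force maximum-likelihood decoding of a binary linear code
    with generator matrix G (list of rows over GF(2)): enumerate all
    2^k codewords and return one closest in Hamming distance to the
    received word."""
    k = len(G)
    def encode(msg):
        # Codeword coordinate j is sum_i msg[i] * G[i][j] mod 2.
        return tuple(sum(m * g for m, g in zip(msg, col)) % 2
                     for col in zip(*G))
    return min((encode(msg) for msg in product((0, 1), repeat=k)),
               key=lambda c: sum(a != b for a, b in zip(c, received)))

# Systematic generator matrix of the [7,4] binary Hamming code.
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
# One flipped bit: the decoder recovers the codeword (1,0,1,1,0,1,0).
print(ml_decode(G, (1, 0, 1, 1, 1, 1, 0)))
```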

    Approximating the least hypervolume contributor: NP-hard in general, but fast in practice

    The hypervolume indicator is an increasingly popular set measure to compare the quality of two Pareto sets. The basic ingredient of most hypervolume indicator based optimization algorithms is the calculation of the hypervolume contribution of single solutions regarding a Pareto set. We show that exact calculation of the hypervolume contribution is #P-hard while its approximation is NP-hard. The same holds for the calculation of the minimal contribution. We also prove that it is NP-hard to decide whether a solution has the least hypervolume contribution. Even deciding whether the contribution of a solution is at most $(1+\epsilon)$ times the minimal contribution is NP-hard. This implies that it is neither possible to efficiently find the least contributing solution (unless $P = NP$) nor to approximate it (unless $NP = BPP$). Nevertheless, in the second part of the paper we present a fast approximation algorithm for this problem. We prove that for arbitrarily given $\epsilon, \delta > 0$ it calculates a solution with contribution at most $(1+\epsilon)$ times the minimal contribution with probability at least $(1-\delta)$. Though it cannot run in polynomial time for all instances, it performs extremely fast on various benchmark datasets. The algorithm solves very large problem instances which are intractable for exact algorithms (e.g., 10000 solutions in 100 dimensions) within a few seconds.
    Comment: 22 pages, to appear in Theoretical Computer Science
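
    In two objectives the hypervolume contribution of each point is a single rectangle, so the contributions, and hence the least contributor, can be computed exactly after one sort; the hardness results above bite as the number of objectives grows. A minimal sketch, assuming minimization, a nondominated input set, and a reference point dominated by the whole front (names are illustrative, not from the paper):

```python
def hv_contributions_2d(points, ref):
    """Exact hypervolume contributions of a 2-D Pareto set (minimization)
    relative to a reference point. Sorting by the first objective makes
    the second objective descending, and each point's contribution is
    the rectangle only it dominates."""
    pts = sorted(points)          # ascending x, hence descending y
    rx, ry = ref
    contrib = {}
    for i, (x, y) in enumerate(pts):
        x_next = pts[i + 1][0] if i + 1 < len(pts) else rx
        y_prev = pts[i - 1][1] if i > 0 else ry
        contrib[(x, y)] = (x_next - x) * (y_prev - y)
    return contrib

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
c = hv_contributions_2d(front, ref=(5.0, 5.0))
print(c)                          # {(1,4): 1.0, (2,2): 4.0, (4,1): 1.0}
print(min(c, key=c.get))          # a least hypervolume contributor
```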

    Computational Complexity in Additive Hedonic Games

    We investigate the computational complexity of several decision problems in hedonic coalition formation games and demonstrate that attaining stability in such games remains NP-hard even when they are additive. Precisely, we prove that when either core stability or strict core stability is under consideration, the existence problem of a stable coalition structure is NP-hard in the strong sense. Furthermore, the corresponding decision problems with respect to the existence of a Nash stable coalition structure and of an individually stable coalition structure turn out to be NP-complete in the strong sense.
    Keywords: Additive Preferences, Coalition Formation, Computational Complexity, Hedonic Games, NP-hard, NP-complete
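
    Checking that a given coalition structure is stable is easy; the hardness concerns deciding whether any stable structure exists. As a hypothetical encoding (not from the paper), the sketch below verifies Nash stability in an additively separable hedonic game, where v[i][j] is agent i's value for agent j and an agent's utility is the sum of its values for its coalition partners.

```python
def is_nash_stable(v, partition):
    """A coalition structure is Nash stable if no agent strictly gains
    by unilaterally moving to another coalition of the structure or to
    a singleton coalition of its own."""
    def utility(i, coalition):
        return sum(v[i][j] for j in coalition if j != i)
    for i in range(len(v)):
        home = next(c for c in partition if i in c)
        current = utility(i, home)
        # Candidate deviations: every other coalition, plus going alone.
        for target in list(partition) + [set()]:
            if target is not home and utility(i, target | {i}) > current:
                return False
    return True

# Agents 0 and 1 value each other and dislike 2; agent 2 values both.
v = [[0, 5, -2],
     [5, 0, -2],
     [3, 3, 0]]
print(is_nash_stable(v, [{0, 1}, {2}]))  # False: agent 2 wants to join
print(is_nash_stable(v, [{0, 1, 2}]))    # True
```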

    The algorithm by Ferson et al. is surprisingly fast: An NP-hard optimization problem solvable in almost linear time with high probability

    We start with the algorithm of Ferson et al. (Reliable Computing 11(3), pp. 207–233, 2005), designed for solving a certain NP-hard problem motivated by robust statistics. First, we propose an efficient implementation of the algorithm and improve its complexity bound to $O(n \log n + n \cdot 2^\omega)$, where $\omega$ is the clique number in a certain intersection graph. Then we treat input data as random variables (as is usual in statistics) and introduce a natural probabilistic data generating model. On average, we get $2^\omega = O(n^{1/\log\log n})$ and $\omega = O(\log n / \log\log n)$. This results in average computing time $O(n^{1+\epsilon})$ for $\epsilon > 0$ arbitrarily small, which may be considered "surprisingly good" average time complexity for solving an NP-hard problem. Moreover, we prove the following tail bound on the distribution of computation time: "hard" instances, forcing the algorithm to compute in time $2^{\Omega(n)}$, occur rarely, with probability tending to zero faster than exponentially as $n \rightarrow \infty$.
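
    For intersection graphs of intervals, the clique number $\omega$ in the bound above equals the maximum number of intervals covering a single point, which a sweep over the sorted endpoints computes in $O(n \log n)$. A sketch of that subroutine alone, assuming closed intervals (it is not the algorithm of Ferson et al. itself):

```python
def interval_clique_number(intervals):
    """Clique number of the intersection graph of closed intervals:
    equals the maximum overlap depth, found by sweeping the sorted
    endpoints and tracking how many intervals are currently open."""
    events = []
    for lo, hi in intervals:
        events.append((lo, 0))  # opening event; 0 sorts before 1, so at
        events.append((hi, 1))  # equal coordinates opens precede closes
    events.sort()
    depth = best = 0
    for _, kind in events:
        if kind == 0:
            depth += 1
            best = max(best, depth)
        else:
            depth -= 1
    return best

# Intervals [0,2], [1,3], [2,5] all contain the point 2, so omega = 3.
print(interval_clique_number([(0, 2), (1, 3), (2, 5), (6, 7)]))
```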