
    Half-integrality, LP-branching and FPT Algorithms

    A recent trend in parameterized algorithms is the application of polytope tools (specifically, LP-branching) to FPT algorithms (e.g., Cygan et al., 2011; Narayanaswamy et al., 2012). However, although interesting results have been achieved, the methods require the underlying polytope to have very restrictive properties (half-integrality and persistence), which are known only for a few problems (essentially Vertex Cover (Nemhauser and Trotter, 1975) and Node Multiway Cut (Garg et al., 1994)). Taking a slightly different approach, we view half-integrality as a \emph{discrete} relaxation of a problem, e.g., a relaxation of the search space from $\{0,1\}^V$ to $\{0,1/2,1\}^V$ such that the new problem admits a polynomial-time exact solution. Using tools from CSP (in particular Thapper and Živný, 2012) to study the existence of such relaxations, we provide a much broader class of half-integral polytopes with the required properties, unifying and extending previously known cases. In addition to the insight into problems with half-integral relaxations, our results yield a range of new and improved FPT algorithms, including an $O^*(|\Sigma|^{2k})$-time algorithm for node-deletion Unique Label Cover with label set $\Sigma$ and an $O^*(4^k)$-time algorithm for Group Feedback Vertex Set, including the setting where the group is only given by oracle access. All of these significantly improve on previous results. The latter result also implies the first single-exponential time FPT algorithm for Subset Feedback Vertex Set, answering an open question of Cygan et al. (2012). Additionally, we propose a network flow-based approach to solving some cases of the relaxation problem. This gives the first linear-time FPT algorithm for edge-deletion Unique Label Cover.

    Comment: Added results on linear-time FPT algorithms (not present in the SODA paper)
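
    The relaxation idea is concrete on the classical special case the abstract cites: the Vertex Cover LP of Nemhauser and Trotter always has a half-integral optimum, computable in polynomial time via bipartite matching. The sketch below is that textbook construction (not the paper's algorithm), assuming networkx is available:

        import networkx as nx

        def half_integral_vertex_cover(edges):
            """Optimal half-integral LP solution x : V -> {0, 1/2, 1} for Vertex Cover."""
            # Bipartite double cover: vertex v becomes (v, 'L') and (v, 'R');
            # edge {u, v} becomes {(u,'L'), (v,'R')} and {(v,'L'), (u,'R')}.
            B = nx.Graph()
            left = set()
            for u, v in edges:
                B.add_edge((u, 'L'), (v, 'R'))
                B.add_edge((v, 'L'), (u, 'R'))
                left.update({(u, 'L'), (v, 'L')})
            matching = nx.bipartite.maximum_matching(B, top_nodes=left)
            cover = nx.bipartite.to_vertex_cover(B, matching, top_nodes=left)
            # By Koenig's theorem |cover| is twice the LP optimum; averaging the two
            # copies of each vertex yields a feasible solution over {0, 1/2, 1}.
            verts = {v for e in edges for v in e}
            return {v: (((v, 'L') in cover) + ((v, 'R') in cover)) / 2 for v in verts}

        # A triangle gets x_v = 1/2 everywhere, total 3/2 (below the integral optimum 2).
        print(half_integral_vertex_cover([(1, 2), (2, 3), (1, 3)]))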

    Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints

    We investigate two new optimization problems: minimizing a submodular function subject to a submodular lower bound constraint (submodular cover) and maximizing a submodular function subject to a submodular upper bound constraint (submodular knapsack). We are motivated by a number of real-world applications in machine learning, including sensor placement and data subset selection, which require maximizing a certain submodular function (like coverage or diversity) while simultaneously minimizing another (like cooperative cost). These problems are often posed as minimizing the difference between submodular functions [14, 35], which is inapproximable in the worst case. We show, however, that by phrasing these problems as constrained optimization, which is more natural for many applications, we achieve a number of bounded approximation guarantees. We also show that the two problems are closely related, and that an approximation algorithm for one can be used to obtain an approximation guarantee for the other. We provide hardness results for both problems, showing that our approximation factors are tight up to log-factors. Finally, we empirically demonstrate the performance and good scalability properties of our algorithms.

    Comment: 23 pages. A short version of this appeared in Advances of NIPS-2013
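
    For intuition about the constrained formulation, here is a hedged sketch of the classical cost-scaled greedy heuristic for the Submodular Cover problem (minimize g(S) subject to f(S) >= f(V), with f monotone); the paper's algorithms and guarantees are more refined, and the names f, g, V below are illustrative, not the paper's notation:

        def greedy_submodular_cover(V, f, g, eps=1e-9):
            """Greedily grow S until f(S) reaches f(V), paying marginal g-cost."""
            S, target = set(), f(V)
            while f(S) < target - eps:
                def ratio(e):
                    gain = f(S | {e}) - f(S)
                    cost = g(S | {e}) - g(S)
                    return gain / max(cost, eps)  # coverage gained per unit cost
                e = max((e for e in V - S if f(S | {e}) > f(S) + eps), key=ratio)
                S.add(e)
            return S

        # Toy instance: f = coverage, g = |S| (modular), i.e. plain greedy set cover.
        sets = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'c', 'd'}, 4: {'a', 'b', 'c', 'd'}}
        f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
        g = len
        print(greedy_submodular_cover(set(sets), f, g))  # expect {4}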

    Optimal Approximation Algorithms for Multi-agent Combinatorial Problems with Discounted Price Functions

    Submodular functions are an important class of functions in combinatorial optimization which satisfy the natural property of decreasing marginal costs. The study of these functions has led to strong structural properties with applications in many areas. Recently, there has been significant interest in extending the theory of algorithms for classical combinatorial problems (such as the spanning tree problem in network design) to submodular cost functions. Unfortunately, for many of the classical problems the known lower bounds under the general class of submodular functions are very high. In this paper, we introduce and study an important subclass of submodular functions, which we call discounted price functions. These functions are succinctly representable and generalize linear cost functions. We study the following fundamental combinatorial optimization problems: Edge Cover, Spanning Tree, Perfect Matching, and Shortest Path, and obtain tight upper and lower bounds for these problems. The main technical contribution of this paper is the design of novel adaptive greedy algorithms for the above problems; these algorithms greedily build the solution whilst rectifying mistakes made in previous steps.
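
    The abstract does not spell out the definition of discounted price functions, so the following is only a hedged illustration of the greedy flavor involved: a marginal-cost Kruskal variant for Spanning Tree where each agent charges a concave, volume-discounted total price for its edges. It omits the paper's mistake-rectification step, and all names (agents, price) are our assumptions:

        class DSU:
            """Union-find for Kruskal-style cycle detection."""
            def __init__(self, n):
                self.parent = list(range(n))
            def find(self, x):
                while self.parent[x] != x:
                    self.parent[x] = self.parent[self.parent[x]]
                    x = self.parent[x]
                return x
            def union(self, a, b):
                self.parent[self.find(a)] = self.find(b)

        def greedy_discounted_mst(n, edges, price):
            """edges: (u, v, agent) triples; price[agent](k) = total price for k edges."""
            bought, dsu, tree = {}, DSU(n), []
            while len(tree) < n - 1:
                def marginal(e):  # extra cost of one more edge from e's agent
                    k = bought.get(e[2], 0)
                    return price[e[2]](k + 1) - price[e[2]](k)
                usable = [e for e in edges if dsu.find(e[0]) != dsu.find(e[1])]
                e = min(usable, key=marginal)
                dsu.union(e[0], e[1])
                tree.append(e)
                bought[e[2]] = bought.get(e[2], 0) + 1
            return tree

        # Agent 'a' discounts every edge after the first; agent 'b' is linear.
        price = {'a': (lambda k: 0 if k == 0 else 3 + (k - 1)), 'b': (lambda k: 2 * k)}
        edges = [(0, 1, 'a'), (1, 2, 'a'), (2, 3, 'b'), (0, 2, 'b')]
        print(greedy_discounted_mst(4, edges, price))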

    Approximating Source Location and Star Survivable Network Problems

    In Source Location (SL) problems the goal is to select a minimum cost source set $S \subseteq V$ such that the connectivity (or flow) $\psi(S,v)$ from $S$ to any node $v$ is at least the demand $d_v$ of $v$. In many SL problems $\psi(S,v)=d_v$ if $v \in S$, namely, the demand of nodes selected to $S$ is completely satisfied. In a node-connectivity variant suggested recently by Fukunaga, every node $v$ gets a "bonus" $p_v \leq d_v$ if it is selected to $S$. Fukunaga showed that for undirected graphs one can achieve ratio $O(k \ln k)$ for his variant, where $k=\max_{v \in V} d_v$ is the maximum demand. We improve this by achieving ratio $\min\{p^* \ln k, k\} \cdot O(\ln (k/q^*))$ for a more general version with node capacities, where $p^*=\max_{v \in V} p_v$ is the maximum bonus and $q^*=\min_{v \in V} q_v$ is the minimum capacity. In particular, for the most natural case $p^*=1$ considered by Fukunaga, we improve the ratio from $O(k \ln k)$ to $O(\ln^2 k)$. We also get ratio $O(k)$ for the edge-connectivity version, for which no ratio that depends on $k$ only was known before. To derive these results, we consider a particular case of the Survivable Network (SN) problem when all edges of positive cost form a star. We give ratio $O(\min\{\ln n, \ln^2 k\})$ for this variant, improving over the best ratio $O(k^3 \ln n)$ known for the general case, due to Chuzhoy and Khanna.
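
    To make the setting concrete, here is a hedged sketch of the feasibility check underlying edge-connectivity Source Location (not the paper's algorithm): $\psi(S,v)$ is a max-flow from a super-source attached to $S$. The helper names psi, feasible, and the 'SUPER' node are ours; networkx is assumed:

        import networkx as nx

        def psi(G, S, v):
            """Edge-connectivity flow from source set S to v (unit edge capacities)."""
            if v in S:
                # Selected nodes are fully satisfied (the abstract's psi(S,v) = d_v).
                return float('inf')
            H = nx.DiGraph()
            for a, b in G.edges():
                H.add_edge(a, b, capacity=1)
                H.add_edge(b, a, capacity=1)
            for s in S:
                H.add_edge('SUPER', s, capacity=float('inf'))
            return nx.maximum_flow_value(H, 'SUPER', v)

        def feasible(G, S, demand):
            """Does source set S meet every node's demand d_v?"""
            return all(psi(G, S, v) >= d for v, d in demand.items())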

    Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions

    We investigate three related and important problems connected to machine learning: approximating a submodular function everywhere, learning a submodular function (in a PAC-like setting [53]), and constrained minimization of submodular functions. We show that the complexity of all three problems depends on the 'curvature' of the submodular function, and provide lower and upper bounds that refine and improve previous results [3, 16, 18, 52]. Our proof techniques are fairly generic. We either use a black-box transformation of the function (for approximation and learning), or a transformation of algorithms to use an appropriate surrogate function (for minimization). Curiously, curvature has been known to influence approximations for submodular maximization [7, 55], but its effect on minimization, approximation, and learning has hitherto been open. We complete this picture, and also support our theoretical claims with empirical results.

    Comment: 21 pages. A shorter version appeared in Advances of NIPS-2013
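
    Total curvature has a standard closed form, $\kappa_f = 1 - \min_{j \in V} \frac{f(V) - f(V \setminus \{j\})}{f(\{j\})}$, measuring how far $f$ is from modular ($\kappa_f = 0$ for linear functions, $\kappa_f = 1$ when some element's marginal value fully saturates). The snippet below computes it for an explicit set function; it is a generic illustration, not code from the paper:

        def curvature(V, f):
            """Total curvature of a monotone submodular set function f on ground set V."""
            fullV = frozenset(V)
            ratios = [
                (f(fullV) - f(fullV - {j})) / f(frozenset({j}))
                for j in V
                if f(frozenset({j})) > 0
            ]
            return 1 - min(ratios)

        # Coverage example: element 2's set is fully covered by the others, so the
        # minimum marginal ratio is 0 and the curvature is 1.
        sets = {1: {'a', 'b'}, 2: {'b'}, 3: {'b', 'c'}}
        f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
        print(curvature(set(sets), f))  # 1.0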

    Ranking with Submodular Valuations

    We study the problem of ranking with submodular valuations. An instance of this problem consists of a ground set $[m]$ and a collection of $n$ monotone submodular set functions $f^1, \ldots, f^n$, where each $f^i: 2^{[m]} \to R_+$. An additional ingredient of the input is a weight vector $w \in R_+^n$. The objective is to find a linear ordering of the ground set elements that minimizes the weighted cover time of the functions. The cover time of a function is the minimal number of elements in the prefix of the linear ordering that form a set whose corresponding function value is greater than a unit threshold value. Our main contribution is an $O(\ln(1/\epsilon))$-approximation algorithm for the problem, where $\epsilon$ is the smallest non-zero marginal value that any function may gain from some element. Our algorithm orders the elements using an adaptive residual updates scheme, which may be of independent interest. We also prove that the problem is $\Omega(\ln(1/\epsilon))$-hard to approximate, unless P = NP. This implies that the outcome of our algorithm is optimal up to constant factors.

    Comment: 16 pages, 3 figures
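
    The following is our hedged reconstruction of an adaptive residual-updates greedy in the spirit the abstract describes, not verbatim from the paper: each not-yet-covered function votes for the next element with its weight $w_i$ scaled by the fraction of its remaining residual the element closes.

        def rank_elements(m, fs, w, thresh=1.0, eps=1e-12):
            """Order ground set range(m) by repeatedly taking the best-scoring element."""
            order, chosen = [], set()
            while len(order) < m:
                active = [i for i in range(len(fs)) if fs[i](chosen) < thresh - eps]
                def score(e):
                    S2 = chosen | {e}
                    # Uncovered function i votes with weight w_i, scaled by how much
                    # of its remaining residual (thresh - f_i(S)) the element closes.
                    return sum(w[i] * (fs[i](S2) - fs[i](chosen)) /
                               (thresh - fs[i](chosen)) for i in active)
                e = max(set(range(m)) - chosen, key=score)
                order.append(e)
                chosen.add(e)
            return order

        # Two coverage-style valuations over ground set {0, 1, 2}; element 2 is
        # ranked first because the heavier function f2 is covered by it alone.
        f1 = lambda S: min(1.0, 0.5 * len(S & {0, 1}))
        f2 = lambda S: min(1.0, float(len(S & {2})))
        print(rank_elements(3, [f1, f2], w=[1.0, 3.0]))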