13 research outputs found

    Approximating submodular k-partition via principal partition sequence

    In submodular k-partition, the input is a non-negative submodular function f defined over a finite ground set V (given by an evaluation oracle) along with a positive integer k, and the goal is to find a partition of the ground set V into k non-empty parts V_1, V_2, ..., V_k that minimizes \sum_{i=1}^{k} f(V_i). Narayanan, Roy, and Patkar (Journal of Algorithms, 1996) designed an algorithm for submodular k-partition based on the principal partition sequence and showed that the approximation factor of their algorithm is 2 for the special case of graph cut functions (subsequently rediscovered by Ravi and Sinha (Journal of Operational Research, 2008)). In this work, we study the approximation factor of their algorithm for three subfamilies of submodular functions -- monotone, symmetric, and posimodular -- and show the following results: 1. The approximation factor of their algorithm for monotone submodular k-partition is 4/3. This result improves on the 2-factor achievable via other algorithms. Moreover, our upper bound of 4/3 matches the recently shown lower bound under a polynomial number of function evaluation queries (Santiago, IWOCA 2021). Our upper bound of 4/3 is also the first improvement beyond 2 for a certain graph partitioning problem that is a special case of monotone submodular k-partition. 2. The approximation factor of their algorithm for symmetric submodular k-partition is 2. This result generalizes their approximation factor analysis beyond graph cut functions. 3. The approximation factor of their algorithm for posimodular submodular k-partition is 2. We also construct an example showing that the approximation factor of their algorithm for arbitrary submodular functions is \Omega(n/k).
    Comment: Accepted to APPROX'2
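The objective above can be made concrete with a small brute-force sketch (illustration only, not the paper's principal-partition-sequence algorithm; the graph, weights, and function names are hypothetical). For the graph cut special case, f(S) is the total weight of edges leaving S, and the goal is the minimum of \sum_i f(V_i) over all k-partitions:

```python
from itertools import product

# Toy instance: the ground set is the vertex set of a small weighted
# graph, and f is the (symmetric) graph cut function.
V = [0, 1, 2, 3]
edges = {(0, 1): 2.0, (1, 2): 1.0, (2, 3): 2.0, (0, 3): 1.0}

def cut(S):
    """f(S) = total weight of edges with exactly one endpoint in S."""
    S = set(S)
    return sum(w for (u, v), w in edges.items() if (u in S) != (v in S))

def min_k_partition(k):
    """Brute-force minimum of sum_i f(V_i) over all partitions of V
    into k non-empty parts (exponential; for tiny examples only)."""
    best = float("inf")
    for labels in product(range(k), repeat=len(V)):
        if len(set(labels)) != k:  # every part must be non-empty
            continue
        parts = [[v for v, l in zip(V, labels) if l == j] for j in range(k)]
        best = min(best, sum(cut(p) for p in parts))
    return best

print(min_k_partition(2))  # 4.0: the min cut {0,1}|{2,3} has value 2, counted from both sides
```

An approximation algorithm such as the one analyzed in the paper would aim to get within the stated factor of this brute-force optimum without enumerating all partitions.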


    ℓ_p-Norm Multiway Cut


    Matching, matroid, and traveling salesman problem

    Summary of research results: The traveling salesman problem (TSP) is perhaps the most famous NP-hard problem, and the many methods proposed for it have driven developments across the field of discrete optimization. In particular, TSP has attracted intensive recent attention: several theoretical breakthrough papers have been published in the past decade. Our research was intended to support theoretical improvements in solving TSP. Specifically, we deepened and extended matching theory and matroid theory, which form the basis of efficient solutions to discrete optimization problems. All 20 of our papers have been accepted to reputable, peer-reviewed international journals or conferences, including top journals and conferences in the field of optimization.

    The complexity of Boolean surjective general-valued CSPs

    Valued constraint satisfaction problems (VCSPs) are discrete optimisation problems with a (\mathbb{Q}\cup\{\infty\})-valued objective function given as a sum of fixed-arity functions. In Boolean surjective VCSPs, variables take on labels from D = \{0,1\} and an optimal assignment is required to use both labels from D. Examples include the classical global Min-Cut problem in graphs and the Minimum Distance problem studied in coding theory. We establish a dichotomy theorem and thus give a complete complexity classification of Boolean surjective VCSPs with respect to exact solvability. Our work generalises the dichotomy for \{0,\infty\}-valued constraint languages (corresponding to surjective decision CSPs) obtained by Creignou and Hébrard. For the maximisation problem of \mathbb{Q}_{\geq 0}-valued surjective VCSPs, we also establish a dichotomy theorem with respect to approximability. Unlike in the case of Boolean surjective (decision) CSPs, there appears a novel tractable class of languages that is trivial in the non-surjective setting. This newly discovered tractable class has an interesting mathematical structure related to downsets and upsets. Our main contribution is identifying this class and proving that it lies on the borderline of tractability. A crucial part of our proof is a polynomial-time algorithm for enumerating all near-optimal solutions to a generalised Min-Cut problem, which might be of independent interest.
    Comment: v5: small corrections and improved presentation
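As a toy illustration of the problem format (not the paper's dichotomy or its algorithm; instance and names are hypothetical), global Min-Cut can be encoded as a Boolean surjective VCSP: each edge contributes its weight when its endpoints get different labels, and surjectivity forces a non-trivial cut. A brute-force solver:

```python
from itertools import product

def solve_surjective_vcsp(n, constraints):
    """Brute-force optimum of a Boolean surjective VCSP.
    constraints: list of (scope, cost_fn) pairs, where scope is a tuple
    of variable indices and cost_fn maps a {0,1}-tuple to a cost."""
    best = float("inf")
    for x in product((0, 1), repeat=n):
        if len(set(x)) < 2:  # surjectivity: both labels must be used
            continue
        best = min(best, sum(f(tuple(x[i] for i in scope))
                             for scope, f in constraints))
    return best

# Global Min-Cut on a unit-weight triangle: each edge costs 1 when cut.
edges = [(0, 1), (1, 2), (0, 2)]
cons = [((u, v), lambda t: 1.0 if t[0] != t[1] else 0.0) for u, v in edges]
print(solve_surjective_vcsp(3, cons))  # 2.0: isolating any vertex cuts two edges
```

Without the surjectivity check, the all-zeros assignment would trivially achieve cost 0, which is exactly why the surjective variant is the interesting one here.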

    Posimodular Function Optimization

    Given a posimodular function f: 2^V \to \mathbb{R} on a finite set V, we consider the problem of finding a non-empty subset X of V that minimizes f(X). Posimodular functions often arise in combinatorial optimization, e.g., as undirected cut functions. In this paper, we show that any algorithm for the problem requires \Omega(2^{n/7.54}) oracle calls to f, where n = |V|. This contrasts with the fact that submodular function minimization, another generalization of cut function minimization, is polynomially solvable. When the range of a given posimodular function is restricted to D = \{0,1,...,d\} for some non-negative integer d, we show that \Omega(2^{d/15.08}) oracle calls are necessary, while we propose an O(n^d T_f + n^{2d+1})-time algorithm for the problem. Here, T_f denotes the time needed to evaluate the function value f(X) for a given X \subseteq V. We also consider the problem of maximizing a given posimodular function. We show that \Omega(2^{n-1}) oracle calls are necessary for solving the problem, and that the problem has time complexity \Theta(n^{d-1} T_f) when D = \{0,1,...,d\} is the range of f for some constant d.
    Comment: 18 pages
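The defining inequality of posimodularity, f(X) + f(Y) >= f(X \ Y) + f(Y \ X), can be checked exhaustively on small ground sets. The sketch below (illustrative instance, not from the paper) verifies it for an undirected cut function, the motivating example mentioned in the abstract:

```python
from itertools import chain, combinations

V = [0, 1, 2, 3]
edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 1.0}  # a weighted path

def cut(S):
    """Undirected cut function: weight of edges with one endpoint in S."""
    S = set(S)
    return sum(w for (u, v), w in edges.items() if (u in S) != (v in S))

def subsets(ground):
    return chain.from_iterable(combinations(ground, r)
                               for r in range(len(ground) + 1))

def is_posimodular(f, ground):
    """Check f(X) + f(Y) >= f(X - Y) + f(Y - X) for all X, Y (2^n * 2^n
    oracle-call pairs, so only feasible for tiny ground sets)."""
    return all(
        f(X) + f(Y) >= f(set(X) - set(Y)) + f(set(Y) - set(X)) - 1e-9
        for X in subsets(ground) for Y in subsets(ground)
    )

print(is_posimodular(cut, V))  # True
```

This holds for every undirected cut function: cut functions are symmetric and submodular, and applying submodularity to X and the complement of Y yields exactly the posimodular inequality.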

    Learning with Submodular Functions: A Convex Optimization Perspective

    Submodular functions are relevant to machine learning for at least two reasons: (1) some problems may be expressed directly as the optimization of submodular functions, and (2) the Lovász extension of submodular functions provides a useful set of regularization functions for supervised and unsupervised learning. In this monograph, we present the theory of submodular functions from a convex analysis perspective, presenting tight links between certain polyhedra, combinatorial optimization, and convex optimization problems. In particular, we show how submodular function minimization is equivalent to solving a wide variety of convex optimization problems. This allows the derivation of new efficient algorithms for approximate and exact submodular function minimization with theoretical guarantees and good practical performance. By listing many examples of submodular functions, we review various applications to machine learning, such as clustering, experimental design, sensor placement, graphical model structure learning, or subset selection, as well as a family of structured sparsity-inducing norms that can be derived and used from submodular functions.
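The Lovász extension mentioned above has a simple closed form that a short sketch can illustrate (the toy function below is hypothetical, not an example from the monograph): sort the coordinates of the input in decreasing order and accumulate marginal gains of f along the resulting chain of prefix sets. On indicator vectors of sets it recovers f exactly, which is what makes it a faithful convex surrogate:

```python
def lovasz_extension(f, w):
    """Evaluate the Lovász extension of a set function f at the point w
    via the greedy formula: sort coordinates of w in decreasing order
    and accumulate marginal gains f(S_i) - f(S_{i-1}) along the chain
    of prefix sets S_i = {indices of the i largest coordinates}."""
    order = sorted(range(len(w)), key=lambda i: -w[i])
    value, prev, S = 0.0, 0.0, set()
    for i in order:
        S.add(i)
        fS = f(S)
        value += w[i] * (fS - prev)
        prev = fS
    return value

# Toy submodular function: f(S) = min(|S|, 2), a concave function of
# cardinality (illustrative choice, not from the monograph).
f = lambda S: min(len(S), 2)

print(lovasz_extension(f, [0.5, 0.2, 0.9]))      # approx. 1.4
print(lovasz_extension(f, [1.0, 1.0, 0.0]))      # 2.0 = f({0, 1}) on an indicator vector
```

For submodular f this extension is convex, so minimizing f over sets reduces to minimizing a convex function over the unit cube, which is the bridge between combinatorial and convex optimization that the monograph develops.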