Approximating submodular -partition via principal partition sequence
In submodular $k$-partition, the input is a non-negative submodular function $f \colon 2^V \to \mathbb{R}_{\geq 0}$ defined over a finite ground set $V$ (given by an evaluation oracle) along
with a positive integer $k$, and the goal is to find a partition of the ground
set into $k$ non-empty parts $V_1, V_2, \ldots, V_k$ in order to minimize
$\sum_{i=1}^{k} f(V_i)$. Narayanan, Roy, and Patkar (Journal of Algorithms, 1996)
designed an algorithm for submodular $k$-partition based on the principal
partition sequence and showed that the approximation factor of their algorithm
is $2$ for the special case of graph cut functions (subsequently rediscovered
by Ravi and Sinha (Journal of Operational Research, 2008)). In this work, we
study the approximation factor of their algorithm for three subfamilies of
submodular functions -- monotone, symmetric, and posimodular -- and show the
following results:
1. The approximation factor of their algorithm for monotone submodular
$k$-partition is $4/3$. This result improves on the $2$-factor achievable via
other algorithms. Moreover, our upper bound of $4/3$ matches the recently shown
lower bound under a polynomial number of function evaluation queries (Santiago,
IWOCA 2021). Our upper bound of $4/3$ is also the first improvement beyond $2$
for a certain graph partitioning problem that is a special case of monotone
submodular $k$-partition.
2. The approximation factor of their algorithm for symmetric submodular
$k$-partition is $2$. This result generalizes their approximation factor
analysis beyond graph cut functions.
3. The approximation factor of their algorithm for posimodular submodular
$k$-partition is $2$.
We also construct an example to show that the approximation factor of their
algorithm for arbitrary submodular functions is $\Omega(n/k)$.
Comment: Accepted to APPROX'2
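The objective defined in the abstract above can be made concrete with a small sketch (not the authors' algorithm). The graph, the value of $k$, and the candidate partition below are made-up examples; the cut function stands in for the oracle $f$.

```python
# Submodular k-partition objective, instantiated with a graph cut function.
# Ground set V and the edge list are illustrative only.
V = {0, 1, 2, 3}
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]

def cut(S):
    """Graph cut function: number of edges with exactly one endpoint in S.
    This is a non-negative, symmetric, posimodular submodular function."""
    S = set(S)
    return sum(1 for u, v in edges if (u in S) != (v in S))

def kpartition_cost(parts):
    """Submodular k-partition objective: sum of f(V_i) over the k parts."""
    return sum(cut(P) for P in parts)

# A candidate 2-partition of V into non-empty parts.
parts = [{0, 1}, {2, 3}]
assert set().union(*parts) == V and all(parts)
print(kpartition_cost(parts))
```

Each part contributes the weight of the edges leaving it, so every crossing edge is counted once per side it touches; this is the objective the principal partition sequence algorithm approximates.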
Matching, matroid, and traveling salesman problem
Summary of research results: The traveling salesman problem (TSP) is perhaps the most famous NP-hard problem, and the many methods proposed for it have greatly contributed to the development of the entire field of discrete optimization. In particular, several papers representing theoretical breakthroughs on the TSP have been published in the past decade. With applications to the TSP in mind, this research deepened and extended matching theory and matroid theory, which form the foundations of efficient algorithms for discrete optimization problems. All 20 papers produced by this research have been accepted to reputable peer-reviewed international journals or peer-reviewed international conferences, including top journals and conferences in the field of optimization.
The complexity of Boolean surjective general-valued CSPs
Valued constraint satisfaction problems (VCSPs) are discrete optimisation
problems with a $\mathbb{Q} \cup \{\infty\}$-valued objective function given as
a sum of fixed-arity functions. In Boolean surjective VCSPs, variables take on
labels from $\{0,1\}$ and an optimal assignment is required to use both
labels from $\{0,1\}$. Examples include the classical global Min-Cut problem in
graphs and the Minimum Distance problem studied in coding theory.
We establish a dichotomy theorem and thus give a complete complexity
classification of Boolean surjective VCSPs with respect to exact solvability.
Our work generalises the dichotomy for $\{0,\infty\}$-valued constraint
languages (corresponding to surjective decision CSPs) obtained by Creignou and
Hébrard. For the maximisation problem of $\mathbb{Q}_{\geq 0}$-valued
surjective VCSPs, we also establish a dichotomy theorem with respect to
approximability.
Unlike in the case of Boolean surjective (decision) CSPs, a novel tractable
class of languages appears that is trivial in the non-surjective
setting. This newly discovered tractable class has an interesting mathematical
structure related to downsets and upsets. Our main contribution is identifying
this class and proving that it lies on the borderline of tractability. A
crucial part of our proof is a polynomial-time algorithm for enumerating all
near-optimal solutions to a generalised Min-Cut problem, which might be of
independent interest.
Comment: v5: small corrections and improved presentation
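The global Min-Cut example mentioned above can be written as a Boolean surjective VCSP directly: each vertex is a $\{0,1\}$-variable, each edge is a binary constraint paying its weight when cut, and surjectivity forbids the trivial all-same assignment. The graph and weights in this brute-force sketch are made up, and the exponential enumeration is for illustration only.

```python
# Brute-force Boolean surjective VCSP: global Min-Cut as a sum of
# binary constraint functions over labels {0, 1}.
from itertools import product

n = 4                                    # variables x_0 .. x_3
weighted_edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 2.0), (0, 3, 1.0)]

def objective(assignment):
    """Sum of binary constraints: an edge pays its weight exactly
    when its endpoints receive different labels (i.e., it is cut)."""
    return sum(w for u, v, w in weighted_edges if assignment[u] != assignment[v])

# Surjectivity: an assignment must use both labels from {0, 1}.
best = min(
    (a for a in product((0, 1), repeat=n) if len(set(a)) == 2),
    key=objective,
)
print(best, objective(best))
```

Without the surjectivity filter, the all-zero assignment would achieve cost 0 and the problem would be trivial; the filter is exactly what makes this the (non-trivial) global Min-Cut.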
Posimodular Function Optimization
Given a posimodular function $f \colon 2^V \to \mathbb{R}$ on a finite set $V$, we
consider the problem of finding a nonempty subset $X$ of $V$ that minimizes
$f(X)$. Posimodular functions often arise in combinatorial optimization;
undirected cut functions are a prominent example. In this paper, we show that any algorithm for the
problem requires a number of oracle calls to $f$ that is exponential in $n$, where
$n = |V|$. This contrasts with the fact that submodular function minimization,
which is another generalization of cut functions, is polynomially solvable.
When the range of a given posimodular function is restricted to be
$\{0, 1, \ldots, d\}$ for some nonnegative integer $d$, we show that
exponentially many (in $d$) oracle calls are still necessary, while we propose an
algorithm for the problem whose running time is polynomial in $n$ and $T_f$
for every fixed $d$. Here, $T_f$ denotes the
time needed to evaluate the function value $f(X)$ for a given $X \subseteq V$.
We also consider the problem of maximizing a given posimodular function. We
show that exponentially many oracle calls are necessary for solving the problem,
and we determine the time complexity of the problem when the range of $f$ is
$\{0, 1, \ldots, d\}$ for some constant $d$.
Comment: 18 pages
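A small sketch, assuming only the standard definition (not the paper's constructions): $f$ is posimodular if $f(X) + f(Y) \geq f(X \setminus Y) + f(Y \setminus X)$ for all $X, Y$. The example $f$ below, an undirected cut function plus a cardinality term, is made up; both summands are posimodular, so their sum is too.

```python
# Verify posimodularity on a toy ground set, then minimize f over
# nonempty subsets by brute force -- one oracle call per subset, the
# kind of exhaustive querying the quoted lower bounds say is unavoidable.
from itertools import combinations

V = (0, 1, 2, 3)
edges = [(0, 1), (1, 2), (2, 3)]        # a path graph, made up for the demo

def f(S):
    """Cut function plus |S|: posimodular, and not symmetric."""
    S = set(S)
    cut = sum(1 for u, v in edges if (u in S) != (v in S))
    return cut + len(S)

def subsets(ground):
    return [set(c) for r in range(len(ground) + 1) for c in combinations(ground, r)]

all_S = subsets(V)
# Sanity check: f(X) + f(Y) >= f(X \ Y) + f(Y \ X) for every pair.
assert all(f(X) + f(Y) >= f(X - Y) + f(Y - X) for X in all_S for Y in all_S)

# Minimize f over nonempty subsets.
nonempty = [S for S in all_S if S]
best = min(nonempty, key=f)
print(sorted(best), f(best))
```

Adding the modular term $|S|$ keeps posimodularity (any modular function with nonnegative weights satisfies the inequality) while making the minimizer a proper subset rather than the trivially cut-free full set $V$.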
Learning with Submodular Functions: A Convex Optimization Perspective
Submodular functions are relevant to machine learning for at least two reasons: (1) some problems may be expressed directly as the optimization of submodular functions, and (2) the Lovász extension of submodular functions provides a useful set of regularization functions for supervised and unsupervised learning.
In this monograph, we present the theory of submodular functions from a convex analysis perspective, presenting tight links between certain polyhedra, combinatorial optimization, and convex optimization problems. In particular, we show how submodular function minimization is equivalent to solving a wide variety of convex optimization problems. This allows the derivation of new efficient algorithms for approximate and exact submodular function minimization with theoretical guarantees and good practical performance.
By listing many examples of submodular functions, we review various applications to machine learning, such as clustering, experimental design, sensor placement, graphical model structure learning, and subset selection, as well as a family of structured sparsity-inducing norms that can be derived from and used with submodular functions.
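The Lovász extension mentioned above has a simple closed form via the standard sorting (greedy) formula, which this sketch implements for a submodular $f$ with $f(\emptyset) = 0$. The triangle cut function used as $f$ is a made-up example, not one from the monograph.

```python
# Lovász extension by the sorting formula:
#   sort coordinates of w in decreasing order j_1, ..., j_n, then
#   L(w) = sum_k w[j_k] * ( f({j_1..j_k}) - f({j_1..j_{k-1}}) ).
edges = [(0, 1), (1, 2), (0, 2)]        # triangle graph, illustrative only
n = 3

def f(S):
    """Graph cut function -- submodular, with f(empty set) = 0."""
    S = set(S)
    return sum(1 for u, v in edges if (u in S) != (v in S))

def lovasz_extension(w):
    order = sorted(range(n), key=lambda i: -w[i])
    total, prefix, prev = 0.0, set(), 0
    for i in order:
        prefix.add(i)
        val = f(prefix)
        total += w[i] * (val - prev)    # marginal gain of adding element i
        prev = val
    return total

# On {0,1}-indicator vectors the extension agrees with f itself,
# which is why it is an extension.
print(lovasz_extension([1.0, 0.0, 0.0]), f({0}))
```

Because $L$ is piecewise linear and, for submodular $f$, convex, minimizing $f$ over subsets reduces to minimizing $L$ over the cube: this is the link between submodular minimization and convex optimization that the monograph develops.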