
### Deterministic Primal-Dual Algorithms for Online k-way Matching with Delays

In this paper, we study the Min-cost Perfect $k$-way Matching with Delays
($k$-MPMD), recently introduced by Melnyk et al. In the problem, $m$ requests
arrive one-by-one over time in a metric space. At any time, we can
irrevocably form a group of $k$ requests that have arrived so far, which
incurs the distance cost among the $k$ requests in addition to the sum of
their waiting costs. The goal is to partition all the requests into groups of $k$
requests, minimizing the total cost. The problem is a generalization of the
min-cost perfect matching with delays (corresponding to $2$-MPMD). It is known
that no online algorithm for $k$-MPMD can achieve a bounded competitive ratio
in general, where the competitive ratio is the worst-case ratio between its
performance and the offline optimal value. On the other hand, $k$-MPMD is known
to admit a randomized online algorithm with competitive ratio $O(k^{5}\log n)$
for a certain class of $k$-point metrics called the $H$-metric, where $n$ is
the size of the metric space. In this paper, we propose a deterministic online
algorithm with a competitive ratio of $O(mk^2)$ for the $k$-MPMD in $H$-metric
space. Furthermore, we show that the competitive ratio can be improved to $O(m
+ k^2)$ if the metric is given as a diameter on a line.
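The cost model above can be made concrete with a minimal sketch. All names are illustrative, and the distance cost is instantiated here as the sum of pairwise distances within the group, which is one natural choice (the paper works with the more general $H$-metric):

```python
import itertools

def group_cost(requests, dist, group, t_match):
    """Cost of irrevocably matching a group of k requests at time t_match:
    the distance cost among the k requests (here: sum of pairwise distances)
    plus the sum of the requests' waiting times.
    `requests` maps request id -> (point, arrival_time); `dist` is the metric.
    Illustrative sketch; names and the pairwise instantiation are assumptions."""
    # distance cost: sum over all unordered pairs in the group
    distance_cost = sum(dist(requests[a][0], requests[b][0])
                        for a, b in itertools.combinations(group, 2))
    # waiting cost: how long each request waited before being matched
    waiting_cost = sum(t_match - requests[r][1] for r in group)
    return distance_cost + waiting_cost
```

For example, three requests on a line at points 0, 1, 3 with arrival times 0, 1, 2, matched together at time 5, pay pairwise distances 1 + 3 + 2 plus waiting times 5 + 4 + 3.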

### Market Pricing for Matroid Rank Valuations

In this paper, we study the problem of maximizing social welfare in
combinatorial markets through pricing schemes. We consider the existence of
prices capable of achieving optimal social welfare without a central
tie-breaking coordinator. In the case of two buyers with matroid rank valuations, we
give polynomial-time algorithms that always find such prices when one of the
matroids is a simple partition matroid or both matroids are strongly base
orderable. This result partially answers a question raised by D\"utting and
V\'egh in 2017. We further formalize a weighted variant of the conjecture of
D\"utting and V\'egh, and show that the weighted variant can be reduced to the
unweighted one based on the weight-splitting theorem for weighted matroid
intersection by Frank. We also show that a similar reduction technique works
for M${}^\natural$-concave functions, or equivalently, gross substitutes
functions.
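A matroid rank valuation assigns each bundle the rank of that bundle in a matroid. As a minimal sketch of the simple partition matroid case mentioned above (function and variable names are illustrative, not from the paper): the ground set is split into disjoint blocks, and from each block at most a fixed number of elements count toward the rank.

```python
def partition_matroid_rank(blocks, capacities, bundle):
    """Rank of `bundle` in a partition matroid: from the i-th block,
    at most capacities[i] elements are counted.
    Illustrative sketch; assumes `blocks` are disjoint sets covering
    the ground set and `bundle` is a subset of their union."""
    return sum(min(cap, len(bundle & block))
               for block, cap in zip(blocks, capacities))
```

A buyer with this valuation has diminishing marginal value for elements of the same block, which is exactly the submodular structure that makes welfare-maximizing prices plausible.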

### Set Covering with Ordered Replacement -- Additive and Multiplicative Gaps

We consider set covering problems where the underlying set system satisfies a
particular replacement property w.r.t. a given partial order on the elements:
whenever a set is in the set system, then any set obtained from it by
replacing an element with a smaller element is also in the set system. Many
variants of BIN PACKING that have appeared in the literature are such set
covering problems with ordered replacement. We provide a rigorous account of
the additive and multiplicative integrality gap and approximability of set
covering with replacement. In particular we provide a polylogarithmic upper
bound on the additive integrality gap that also yields a polynomial time
additive approximation algorithm if the linear programming relaxation can be
efficiently solved. We furthermore present an extensive list of covering
problems that fall into our framework and consequently have polylogarithmic
additive gaps as well.
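The ordered-replacement property can be checked by brute force on small instances. A minimal sketch, assuming the ground set, the set system, and the partial order are given explicitly (all names are illustrative):

```python
def is_replacement_closed(universe, sets, smaller):
    """Check the ordered-replacement property: for every set S in the
    system, every element e of S, and every smaller element e' outside S
    (per the partial order `smaller(e2, e1)` meaning e2 < e1), the set
    obtained by swapping e for e' is also in the system.
    Brute-force illustrative sketch, exponential in general."""
    family = {frozenset(s) for s in sets}
    for s in family:
        for e in s:
            for e2 in universe - s:
                if smaller(e2, e):
                    if frozenset((s - {e}) | {e2}) not in family:
                        return False
    return True
```

For BIN PACKING viewed as a covering problem, the feasible sets are the item subsets fitting in one bin, and replacing an item by a smaller one clearly preserves feasibility, so the property holds.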

### Complexity of the Multi-Service Center Problem

The multi-service center problem is a variant of facility location problems. In the problem, we consider locating p facilities on a graph, each of which provides a distinct service required by all vertices. Each vertex incurs a cost given by the sum of its weighted distances to the p facilities. The aim of the problem is to minimize the maximum cost among all vertices. This problem is known to be NP-hard for general graphs, while it is solvable in polynomial time when p is a fixed constant. In this paper, we give a sharp analysis of the complexity of the problem from the viewpoint of graph classes and vertex weights. We first propose a polynomial-time algorithm for trees when p is part of the input. In contrast, we prove that the problem becomes strongly NP-hard even for cycles. We also show that when vertices are allowed to have negative weights, the problem becomes NP-hard for paths with only three vertices and strongly NP-hard for stars.
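The objective described above can be sketched directly. A minimal illustration, assuming all-pairs shortest-path distances are precomputed (names are illustrative; this is the objective function, not the paper's algorithm):

```python
def multi_service_cost(dist, weights, facilities):
    """Multi-service center objective: each vertex pays its weight times
    the sum of its distances to all p facilities; return the maximum
    over vertices. `dist[v][f]` is the shortest-path distance from v to f.
    Illustrative sketch only."""
    return max(weights[v] * sum(dist[v][f] for f in facilities)
               for v in dist)
```

On a three-vertex path with unit weights and facilities on both endpoints, every vertex pays total distance 2, so the objective is 2.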

### Parameterized Complexity of Sparse Linear Complementarity Problems

In this paper, we study the parameterized complexity of the linear complementarity problem (LCP), which is one of the most fundamental mathematical optimization problems. The parameters we focus on are the sparsities of the input and the output of the LCP: the maximum numbers of nonzero entries per row and per column of the coefficient matrix, and the number of nonzero entries in a solution. Our main result is a fixed-parameter algorithm for the LCP parameterized by all three of these parameters. We also show that if we drop any one of the three parameters, then the LCP becomes fixed-parameter intractable.
In addition, we discuss the nonexistence of a polynomial kernel for the LCP.
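For reference, an LCP instance $(M, q)$ asks for $z \ge 0$ with $w = Mz + q \ge 0$ and $z^\top w = 0$. A minimal verification sketch for small dense instances (pure Python, illustrative names):

```python
def is_lcp_solution(M, q, z, tol=1e-9):
    """Check whether z solves the LCP (M, q): z >= 0, w = M z + q >= 0,
    and the complementarity condition z . w = 0.
    Illustrative sketch; the paper's parameters are the max nonzeros per
    row/column of M and the number of nonzeros in z."""
    n = len(q)
    w = [sum(M[i][j] * z[j] for j in range(n)) + q[i] for i in range(n)]
    nonneg = all(zi >= -tol for zi in z) and all(wi >= -tol for wi in w)
    complementary = abs(sum(zi * wi for zi, wi in zip(z, w))) <= tol
    return nonneg and complementary
```

The sparsity of the output in the paper's sense is then simply the number of nonzero entries of such a certificate z.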

### Shortest Reconfiguration of Perfect Matchings via Alternating Cycles

Motivated by adjacency in perfect matching polytopes, we study the shortest reconfiguration problem of perfect matchings via alternating cycles. Namely, we want to find a shortest sequence of perfect matchings which transforms one given perfect matching into another given perfect matching such that the symmetric difference of each pair of consecutive perfect matchings is a single cycle. The problem is equivalent to the combinatorial shortest path problem in perfect matching polytopes. We prove that the problem is NP-hard even when the given graph is planar or bipartite, but it can be solved in polynomial time when the graph is outerplanar.
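The adjacency condition in the reconfiguration step can be checked directly: the symmetric difference of two distinct perfect matchings is a disjoint union of alternating cycles, and consecutive matchings must differ in exactly one such cycle. A minimal sketch, representing matchings as sets of frozenset edges (names are illustrative):

```python
def symmetric_difference_is_single_cycle(m1, m2):
    """Return True iff the symmetric difference of two perfect matchings
    forms exactly one cycle, i.e. the two matchings are one reconfiguration
    step apart. Illustrative sketch; matchings are sets of frozenset edges."""
    diff = m1 ^ m2
    if not diff:
        return False  # identical matchings: no step taken
    # adjacency of the symmetric-difference graph; every vertex that is
    # matched differently gets one edge from each matching, so degree 2
    adj = {}
    for e in diff:
        u, v = tuple(e)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    if any(len(nbrs) != 2 for nbrs in adj.values()):
        return False
    # walk the cycle from an arbitrary vertex without turning back;
    # a single cycle visits every vertex of the difference graph
    start = next(iter(adj))
    prev, cur, seen = None, start, set()
    while cur not in seen:
        seen.add(cur)
        nxt = next(v for v in adj[cur] if v != prev)
        prev, cur = cur, nxt
    return seen == set(adj)
```

If the symmetric difference splits into two or more cycles, the walk covers only one of them and the check fails, matching the "single cycle" requirement in the definition above.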

### Streaming Algorithms for Maximizing Monotone Submodular Functions under a Knapsack Constraint

In this paper, we consider the problem of maximizing a monotone submodular function subject to a knapsack constraint in the streaming setting. In particular, the elements arrive sequentially, and at any point in time the algorithm has access only to a small fraction of the data stored in primary memory. For this problem, we propose a $(0.363-\epsilon)$-approximation algorithm requiring only a single pass through the data; moreover, we propose a $(0.4-\epsilon)$-approximation algorithm requiring a constant number of passes through the data. The required memory space of both algorithms depends only on the knapsack capacity and $\epsilon$.
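To illustrate the streaming model (not the paper's algorithm), here is a generic single-pass thresholding sketch: an arriving element is kept when its marginal gain per unit cost clears a fixed threshold and it fits in the remaining budget. All names and the threshold rule are assumptions; the paper's $(0.363-\epsilon)$-approximation is more involved.

```python
def single_pass_threshold(stream, f, cost, budget, threshold):
    """Generic one-pass density-threshold heuristic for monotone submodular
    maximization under a knapsack: keep element e if it fits and its
    marginal gain f(S+e) - f(S) is at least threshold * cost(e).
    Illustrative sketch of the streaming model only, NOT the paper's
    (0.363 - eps)-approximation algorithm."""
    S, used = [], 0.0
    for e in stream:
        c = cost(e)
        gain = f(S + [e]) - f(S)
        if used + c <= budget and gain >= threshold * c:
            S.append(e)
            used += c
    return S
```

Note that memory holds only the current solution S, never the whole stream, which mirrors the abstract's claim that space depends only on the knapsack capacity and $\epsilon$ (the full algorithm runs several such thresholds in parallel).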