Maximum flow is approximable by deterministic constant-time algorithm in sparse networks
We show a deterministic constant-time parallel algorithm for finding an
almost maximum flow in multisource-multitarget networks with bounded degrees
and bounded edge capacities. As a consequence, we show that the value of the
maximum flow over the number of nodes is a testable parameter on these
networks.
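As a point of reference for the claim above, the following sketch shows the standard (global, not constant-time) baseline: a multisource-multitarget network reduces to the single source-sink case via a super-source and super-sink, solved here with textbook Edmonds-Karp. This is emphatically not the paper's local algorithm, which decides from constant-radius neighbourhoods only; names here are illustrative.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest path in the
    residual graph. `cap` is a dict-of-dicts of residual capacities
    and is modified in place."""
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # bottleneck capacity along the augmenting path found
        bottleneck, v = float("inf"), t
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        # push flow and update residual arcs
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] = cap[v].get(u, 0) + bottleneck
            v = u
        flow += bottleneck

def multi_source_max_flow(edges, sources, sinks):
    """Reduce a multisource-multitarget network to a single s-t pair by
    attaching a super-source and super-sink with ample capacity."""
    cap = defaultdict(dict)
    for u, v, c in edges:
        cap[u][v] = cap[u].get(v, 0) + c
    ample = sum(c for _, _, c in edges) + 1
    for s in sources:
        cap["__S__"][s] = ample
    for t in sinks:
        cap[t]["__T__"] = ample
    return max_flow(cap, "__S__", "__T__")
```

The paper's result says that on bounded-degree, bounded-capacity networks the value this routine computes, divided by the number of nodes, can already be approximated from constant-size local views.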
Random local algorithms
Consider the problem in which we want to construct some structure on a bounded-degree
graph, e.g. an almost maximum matching, and we want to decide about each
edge based only on its constant-radius neighbourhood. We show that the
information about the local statistics of the graph does not help here. Namely,
if there exists a random local algorithm which can use any local statistics
about the graph, and produces an almost optimal structure, then the same can be
achieved by a random local algorithm using no statistics.
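A classical example of a random local algorithm in this sense is random-order greedy matching: give each edge a random priority, and an edge enters the matching iff no conflicting edge of lower priority does. The sketch below simulates this globally; on bounded-degree graphs, deciding a single edge requires chasing only lower-priority adjacent edges, which with high probability stays within a constant-radius ball. This is an illustrative sketch, not an algorithm taken from the paper.

```python
import random

def random_greedy_matching(edges, seed=None):
    """Greedy maximal matching over a uniformly random edge order.
    The output is always a maximal matching (hence at least half the
    size of a maximum matching)."""
    rng = random.Random(seed)
    order = list(edges)
    rng.shuffle(order)  # random priorities = random processing order
    matched = set()
    matching = []
    for u, v in order:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching
```

The point of the abstract is that allowing such an algorithm to additionally consult local statistics of the input graph does not improve what it can achieve.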
Computational aspects of combinatorial pricing problems
Combinatorial pricing encompasses a wide range of natural optimization problems that arise in the computation of revenue-maximizing pricing schemes for a given set of goods, as well as in the design of profit-maximizing auctions in strategic settings. We consider the computational side of several different multi-product and network pricing problems and, as most of the problems in this area are NP-hard, we focus on the design of approximation algorithms and corresponding inapproximability results.
In the unit-demand multi-product pricing problem it is assumed that each consumer has different budgets for the products she is interested in and purchases a single product out of her set of alternatives. Depending on how consumers choose their products once prices are fixed, we distinguish the min-buying, max-buying and rank-buying models, in which consumers select the affordable product with smallest price, highest price or highest rank according to some predefined preference list, respectively. We prove that the max-buying model allows for constant approximation guarantees, and this is true even in the case of limited product supply. For the min-buying model we prove inapproximability beyond the known logarithmic guarantees under standard complexity-theoretic assumptions. Surprisingly, this result even extends to the case of pricing with a price-ladder constraint, i.e., a predefined relative order on the product prices. Furthermore, similar results can be shown for the uniform-budget version of the problem, which corresponds to a special case of the unit-demand envy-free pricing problem, under an assumption about the average-case hardness of refuting random 3SAT instances. Introducing the notion of stochastic selection rules, we show that among a large class of selection rules based on the order of product prices the max-buying model is in fact the only one allowing for sub-logarithmic approximation guarantees.
In the single-minded pricing problem each consumer is interested in a single set of products, which she purchases if the sum of prices does not exceed her budget. It turns out that our results on envy-free unit-demand pricing can be extended to this scenario and yield inapproximability results for ratios expressed in terms of the number of distinct products, thereby complementing existing hardness results. On the algorithmic side, we present an algorithm with an approximation guarantee that depends only on the maximum size of the sets and the number of requests per product. Our algorithm's ratio matches previously known results in the worst case but has significantly better provable performance guarantees on sparse problem instances. Viewing single-minded pricing as a network pricing problem in which we assign prices to edges and consumers want to purchase paths in the network, it is proven that the problem remains APX-hard even on extremely sparse instances. For the special case of pricing on a line with paths that are nested, we design an FPTAS and prove NP-hardness.
In a Stackelberg network pricing game a so-called leader sets the prices on a subset of the edges of a network; the remaining edges have associated fixed costs. Once prices are fixed, one or more followers purchase min-cost subnetworks according to their requirements and pay the leader for all pricable edges contained in their networks. We extend the analysis of the known single-price algorithm, which assigns the same price to all pricable edges, from cases in which the feasible subnetworks of a follower form the basis of a matroid to the general case, thus obtaining logarithmic approximation guarantees for general Stackelberg games. We then consider a special 2-player game in which the follower buys a min-cost vertex cover in a bipartite graph and the leader sets prices on a subset of the vertices.
We prove that this problem is polynomial-time solvable in some cases and allows for constant approximation guarantees in general. Finally, we point out that our results on unit-demand and single-minded pricing yield several strong inapproximability results for Stackelberg pricing games with multiple followers.
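For intuition, the single-price idea mentioned above can be sketched for single-minded consumers: try each candidate uniform price (one threshold per consumer) and keep the most profitable one. The names and the simple revenue model below are illustrative, not the thesis's notation.

```python
def single_price_revenue(consumers):
    """consumers: list of (bundle, budget) pairs, where bundle is a set
    of products. A uniform price p is charged per product; a consumer
    buys her bundle iff p * len(bundle) <= budget. Only the thresholds
    budget/len(bundle) need to be tried as candidate prices.
    Returns (best uniform price, revenue it achieves)."""
    best_p, best_rev = 0.0, 0.0
    candidates = sorted({b / len(S) for S, b in consumers if S})
    for p in candidates:
        rev = sum(p * len(S) for S, b in consumers if p * len(S) <= b)
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p, best_rev
```

Restricting all products to one common price is what makes such algorithms analyzable; the thesis extends the analysis of this scheme to general Stackelberg games.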
Tight Localizations of Feedback Sets
The classical NP-hard feedback arc set problem (FASP) and feedback vertex set
problem (FVSP) ask for a minimum set of arcs or vertices, respectively, whose
removal makes a given multi-digraph acyclic. Though both problems are known to
be APX-hard, constant-ratio approximation algorithms or corresponding proofs of
inapproximability are unknown. We propose a new heuristic for the directed FASP.
While a constant ratio is known to be a lower bound due to the APX-hardness, at
least by empirical validation our heuristic achieves a small constant
approximation ratio. The most
relevant applications, such as circuit testing, require solving the FASP on
large sparse graphs, which our approach handles efficiently and within tight
error bounds.
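For illustration, here is a simple ordering-based FASP heuristic in the spirit of Eades, Lin and Smyth; it is explicitly not the heuristic proposed in this work. It builds a linear order of the vertices and returns the back arcs, whose removal always leaves the digraph acyclic.

```python
def feedback_arc_set(nodes, arcs):
    """Ordering heuristic for the directed FASP on a multi-digraph
    given as a node list and a list of (u, v) arcs. Self-loops must
    always be removed, so they go straight into the result."""
    loops = [(u, v) for u, v in arcs if u == v]
    arcs = [(u, v) for u, v in arcs if u != v]
    remaining = set(nodes)
    front, back = [], []
    while remaining:
        changed = True
        while changed:  # peel off sinks (to the back) and sources (to the front)
            changed = False
            for v in list(remaining):
                has_out = any(a == v and b in remaining for a, b in arcs)
                has_in = any(b == v and a in remaining for a, b in arcs)
                if not has_out:
                    back.append(v); remaining.remove(v); changed = True
                elif not has_in:
                    front.append(v); remaining.remove(v); changed = True
        if remaining:
            # tie-break: move the vertex with the largest out-degree
            # minus in-degree to the front
            v = max(remaining,
                    key=lambda x: sum(a == x for a, _ in arcs)
                                 - sum(b == x for _, b in arcs))
            front.append(v); remaining.remove(v)
    pos = {v: i for i, v in enumerate(front + back[::-1])}
    # arcs pointing backwards in the order form the feedback set
    return loops + [(u, v) for u, v in arcs if pos[u] > pos[v]]
```

Every arc surviving removal points forward in the constructed order, so acyclicity of the remainder is guaranteed by construction; only the size of the returned set is heuristic.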
08201 Abstracts Collection -- Design and Analysis of Randomized and Approximation Algorithms
From 11.05.08 to 16.05.08, the Dagstuhl Seminar 08201
``Design and Analysis of Randomized and Approximation Algorithms''
was held in the International Conference and Research Center (IBFI),
Schloss Dagstuhl.
During the seminar, several participants presented their current
research work, and ongoing work and open problems were discussed.
Abstracts of the presentations which were given during the seminar as well as
abstracts of seminar results and ideas are put together in this paper.
The first section describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
Independent Set, Induced Matching, and Pricing: Connections and Tight (Subexponential Time) Approximation Hardnesses
We present a series of almost settled inapproximability results for three
fundamental problems. The first in our series is the subexponential-time
inapproximability of the maximum independent set problem, a question studied in
the area of parameterized complexity. The second is the hardness of
approximating the maximum induced matching problem on bounded-degree bipartite
graphs. The last in our series is the tight hardness of approximating the
k-hypergraph pricing problem, a fundamental problem arising from the area of
algorithmic game theory. In particular, assuming the Exponential Time
Hypothesis, our two main results are:
- For any r larger than some constant, any r-approximation algorithm for the
maximum independent set problem must run in at least
2^{n^{1-\epsilon}/r^{1+\epsilon}} time. This nearly matches the upper bound of
2^{n/r} (Cygan et al., 2008). It also improves some hardness results in the
domain of parameterized complexity (e.g., Escoffier et al., 2012 and Chitnis et
al., 2013).
- For any k larger than some constant, there is no polynomial time min
(k^{1-\epsilon}, n^{1/2-\epsilon})-approximation algorithm for the k-hypergraph
pricing problem, where n is the number of vertices in an input graph. This
almost matches the upper bound of min (O(k), \tilde O(\sqrt{n})) (by Balcan and
Blum, 2007 and an algorithm in this paper).
We note the interesting fact that, in contrast to n^{1/2-\epsilon} hardness
for polynomial-time algorithms, the k-hypergraph pricing problem admits
n^{\delta} approximation for any \delta >0 in quasi-polynomial time. This puts
this problem in a rare approximability class in which approximability
thresholds can be improved significantly by allowing algorithms to run in
quasi-polynomial time.
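The 2^{n/r}-time r-approximation upper bound for maximum independent set follows from a simple partition argument: split the vertices into r blocks, solve each block exactly by brute force, and return the largest block solution. Some block must contain at least a 1/r fraction of an optimum solution, and that subset is still independent. A minimal sketch (identifiers are illustrative):

```python
from itertools import combinations

def exact_mis(vertices, adj):
    """Brute-force maximum independent set on the subgraph induced by
    `vertices`; adj maps each vertex to its neighbour set."""
    for size in range(len(vertices), 0, -1):
        for cand in combinations(vertices, size):
            if all(v not in adj[u] for u, v in combinations(cand, 2)):
                return list(cand)
    return []

def approx_mis(vertices, adj, r):
    """r-approximation in time roughly r * 2^{n/r}: partition the
    vertices into r blocks, solve each block exactly, keep the best."""
    blocks = [vertices[i::r] for i in range(r)]
    return max((exact_mis(b, adj) for b in blocks if b), key=len)
```

The lower bound in the first bullet says that, assuming ETH, this naive trade-off between ratio and running time is essentially the best possible.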
Correlation Decay in Random Decision Networks
We consider a decision network on an undirected graph in which each node
corresponds to a decision variable, and each node and edge of the graph is
associated with a reward function whose value depends only on the variables of
the corresponding nodes. The goal is to construct a decision vector which
maximizes the total reward. This decision problem encompasses a variety of
models, including maximum-likelihood inference in graphical models (Markov
Random Fields), combinatorial optimization on graphs, economic team theory and
statistical physics. The network is endowed with a probabilistic structure in
which costs are sampled from a distribution. Our aim is to identify sufficient
conditions to guarantee average-case polynomiality of the underlying
optimization problem. We construct a new decentralized algorithm called Cavity
Expansion and establish its theoretical performance for a variety of models.
Specifically, for certain classes of models we prove that our algorithm is able
to find near optimal solutions with high probability in a decentralized way.
The success of the algorithm is based on the network exhibiting a correlation
decay (long-range independence) property. Our results have the following
surprising implications in the area of average case complexity of algorithms.
Finding the largest independent (stable) set of a graph is a well known NP-hard
optimization problem for which no polynomial time approximation scheme is
possible even for graphs with largest connectivity equal to three, unless P=NP.
We show that the closely related maximum weighted independent set problem for
the same class of graphs admits a PTAS when the weights are i.i.d. with the
exponential distribution. Namely, randomization of the reward function turns an
NP-hard problem into a tractable one.
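As a concrete, much weaker baseline for the maximum weighted independent set setting, even a plain greedy rule gives a (Delta+1)-approximation on graphs of maximum degree Delta: each chosen vertex eliminates at most Delta+1 vertices, none heavier than itself. The PTAS in the abstract relies on correlation decay instead; the sketch below only illustrates the random-weights setting, with illustrative names.

```python
import random

def greedy_mwis(adj, weights):
    """Greedy max-weight independent set: repeatedly take the heaviest
    remaining vertex and discard its neighbours. adj maps each vertex
    to its neighbour set; weights maps vertices to nonnegative reals."""
    remaining = set(adj)
    chosen = []
    while remaining:
        v = max(remaining, key=weights.get)
        chosen.append(v)
        remaining.discard(v)
        remaining -= adj[v]  # an independent set never contains neighbours
    return chosen

# Random-weights setting from the abstract: i.i.d. exponential weights.
def random_instance(adj, seed=0):
    rng = random.Random(seed)
    return {v: rng.expovariate(1.0) for v in adj}
```

On a bounded-degree graph one would call `greedy_mwis(adj, random_instance(adj))`; the abstract's result is that for such randomized weights a near-optimal answer is achievable in polynomial time, not merely a constant-factor one.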