Approximation Algorithms for the Max-Buying Problem with Limited Supply
We consider the Max-Buying Problem with Limited Supply, in which there are n
items, with c_i copies of each item i, and m bidders such that every
bidder b has valuation v_{bi} for item i. The goal is to find a pricing p
and an allocation of items to bidders that maximizes the profit, where
every item i is allocated to at most c_i bidders, every bidder receives at most
one item, and if a bidder b receives item i then p(i) ≤ v_{bi}. Briest
and Krysta presented a 2-approximation for this problem and Aggarwal et al.
presented a 4-approximation for the Price Ladder variant, where the pricing must
be non-increasing (that is, p(1) ≥ p(2) ≥ ⋯ ≥ p(n)). We present an
e/(e−1)-approximation for the Max-Buying Problem with Limited Supply and, for
every ε > 0, a (2+ε)-approximation for the Price Ladder
variant.
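To make the objective concrete, the following Python sketch brute-forces a tiny instance. The instance, the restriction of candidate prices to valuation values, and the greedy serving order (each bidder buys the highest-priced affordable item with a copy left, reflecting the max-buying assumption) are illustrative assumptions of ours, not the paper's algorithm.

```python
from itertools import product

def max_buying_profit(valuations, copies):
    """Brute-force the Max-Buying objective on a tiny instance.

    valuations[b][i] = value of bidder b for item i; copies[i] = supply of
    item i. Candidate prices are restricted to valuation values (an
    illustrative simplification). Each bidder buys the highest-priced item
    it can afford that still has a copy left.
    """
    n_items = len(copies)
    candidates = sorted({v for row in valuations for v in row})
    best = 0
    for prices in product(candidates, repeat=n_items):
        left = list(copies)
        profit = 0
        # serve bidders one by one; each takes its priciest affordable item
        for row in valuations:
            affordable = [i for i in range(n_items)
                          if prices[i] <= row[i] and left[i] > 0]
            if affordable:
                i = max(affordable, key=lambda j: prices[j])
                left[i] -= 1
                profit += prices[i]
        best = max(best, profit)
    return best

# two items (one copy each), two bidders: pricing (3, 2) extracts full value
print(max_buying_profit([[3, 1], [2, 2]], [1, 1]))  # → 5
```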
Hardness of Graph Pricing through Generalized Max-Dicut
The Graph Pricing problem is among the fundamental problems whose
approximability is not well-understood. While there is a simple combinatorial
1/4-approximation algorithm, the best hardness result remains at 1/2 assuming
the Unique Games Conjecture (UGC). We show that it is NP-hard to approximate
within a factor better than 1/4 under the UGC, so that the simple combinatorial
algorithm might be the best possible. We also prove that for any , there exists such that the integrality gap of
-rounds of the Sherali-Adams hierarchy of linear programming for
Graph Pricing is at most 1/2 + .
This work is based on the effort to view the Graph Pricing problem as a
Constraint Satisfaction Problem (CSP) simpler than the standard and complicated
formulation. We propose the problem called Generalized Max-Dicut(T), which
has a domain size of T + 1 for every T ≥ 1. Generalized Max-Dicut(1) is the
well-known Max-Dicut. There is an approximation-preserving reduction from
Generalized Max-Dicut on directed acyclic graphs (DAGs) to Graph Pricing, and
both our results are achieved through this reduction. Besides its connection to
Graph Pricing, the hardness of Generalized Max-Dicut is interesting in its own
right since in most arity two CSPs studied in the literature, SDP-based
algorithms perform better than LP-based or combinatorial algorithms --- for
this arity two CSP, a simple combinatorial algorithm does the best.
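For intuition about the Graph Pricing objective itself, here is a hypothetical brute-force sketch on a tiny graph: each edge has a budget and buys iff the sum of its endpoint prices fits within it. Restricting prices to a small grid (0, the budgets, and half-budgets) is an illustrative assumption of ours, not a proven candidate set, and this is unrelated to the paper's reduction.

```python
from itertools import product

def graph_pricing_brute_force(n, edges):
    """Exhaustive search for Graph Pricing on a tiny graph.

    edges: list of (u, v, budget). Edge (u, v) buys iff p[u] + p[v] <= budget,
    paying p[u] + p[v]; the goal is to choose vertex prices maximizing revenue.
    Prices are drawn from a small illustrative grid.
    """
    grid = sorted({0.0} | {b for *_, b in edges} | {b / 2 for *_, b in edges})
    best = 0.0
    for p in product(grid, repeat=n):
        revenue = sum(p[u] + p[v] for u, v, b in edges if p[u] + p[v] <= b)
        best = max(best, revenue)
    return best

# path a-b-c with budgets 2 and 2: pricing (1, 1, 1) sells both edges
print(graph_pricing_brute_force(3, [(0, 1, 2.0), (1, 2, 2.0)]))  # → 4.0
```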
The Landscape of Bounds for Binary Search Trees
Binary search trees (BSTs) with rotations can adapt to various kinds of structure in search sequences, achieving amortized access times substantially better than the Theta(log n) worst-case guarantee. Classical examples of structural properties include static optimality, sequential access, working set, key-independent optimality, and dynamic finger, all of which are now known to be achieved by the two famous online BST algorithms (Splay and Greedy). (...) In this paper, we introduce novel properties that explain the efficiency of sequences not captured by any of the previously known properties, and which provide new barriers to the dynamic optimality conjecture. We also establish connections between various properties, old and new. For instance, we show the following.
(i) A tight bound of O(n log d) on the cost of Greedy for d-decomposable sequences. The result builds on the recent lazy finger result of Iacono and Langerman (SODA 2016). On the other hand, we show that lazy finger alone cannot explain the efficiency of pattern-avoiding sequences even in some of the simplest cases.
(ii) A hierarchy of bounds using multiple lazy fingers, addressing a recent question of Iacono and Langerman.
(iii) The optimality of the Move-to-root heuristic in the key-independent setting introduced by Iacono (Algorithmica 2005).
(iv) A new tool that allows combining any finite number of sound structural properties. As an application, we show an upper bound on the cost of a class of sequences that all known properties fail to capture.
(v) The equivalence between two families of BST properties. The observation on which this connection is based was known before; we make it explicit, and apply it to classical BST properties. (...)
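The Move-to-root heuristic mentioned above rotates each accessed key to the root via single rotations. A minimal Python sketch (class and function names are our own; it assumes the accessed key is present):

```python
class Node:
    """Plain BST node for the sketch."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def move_to_root(root, key):
    """Access `key` and bring it to the root by single rotations.

    Recursively moves the key to the root of the relevant subtree, then
    performs one rotation to lift it a level; unrolled over the recursion,
    this is the Move-to-root heuristic.
    """
    if root is None or key == root.key:
        return root
    if key < root.key:
        root.left = move_to_root(root.left, key)
        x = root.left                    # rotate right: x becomes subtree root
        root.left, x.right = x.right, root
        return x
    root.right = move_to_root(root.right, key)
    x = root.right                       # rotate left: x becomes subtree root
    root.right, x.left = x.left, root
    return x

# accessing 3 in the tree 2(1, 3) makes 3 the root with 2(1) as left child
t = move_to_root(Node(2, Node(1), Node(3)), 3)
print(t.key, t.left.key, t.left.left.key)  # → 3 2 1
```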
New Tools and Connections for Exponential-Time Approximation
In this paper, we develop new tools and connections for exponential-time approximation. In this setting, we are given a problem instance and an integer r > 1, and the goal is to design an approximation algorithm with the fastest possible running time. We give randomized algorithms that establish an approximation ratio of
1. r for maximum independent set in O∗(exp(Õ(n/(r log² r) + r log² r))) time,
2. r for chromatic number in O∗(exp(Õ(n/(r log r) + r log² r))) time,
3. (2 − 1/r) for minimum vertex cover in O∗(exp(n/r^Ω(r))) time, and
4. (k − 1/r) for minimum k-hypergraph vertex cover in O∗(exp(n/(kr)^Ω(kr))) time.
(Throughout, Õ and O∗ omit polyloglog(r) factors and factors polynomial in the input size, respectively.) The best known time bounds for all these problems were O∗(2^(n/r)) (Bourgeois et al. in Discret Appl Math 159(17):1954–1970, 2011; Cygan et al. in Exponential-time approximation of hard problems, 2008). For maximum independent set and chromatic number, these bounds were complemented by exp(n^(1−o(1))/r^(1+o(1))) lower bounds under the Exponential Time Hypothesis (ETH) (Chalermsook et al. in Foundations of computer science, FOCS, pp. 370–379, 2013; Laekhanukit in Inapproximability of combinatorial problems in subexponential-time, Ph.D. thesis, 2014). Our results show that the natural-looking O∗(2^(n/r)) bounds are not tight for all these problems. The key to these results is a sparsification procedure that reduces a problem to a bounded-degree variant, allowing the use of approximation algorithms for bounded-degree graphs. To obtain the first two results, we introduce a new randomized branching rule. Finally, we show a connection between PCP parameters and exponential-time approximation algorithms. This connection together with our independent set algorithm refutes the possibility of overly reducing the size of Chan's PCP (Chan in J. ACM 63(3):27:1–27:32, 2016). It also implies that a (significant) improvement over our result would refute the Gap-ETH conjecture (Dinur in Electron Colloq Comput Complex (ECCC) 23:128, 2016; Manurangsi and Raghavendra in A birthday repetition theorem and complexity of approximating dense CSPs, 2016).
Assortment optimisation under a general discrete choice model: A tight analysis of revenue-ordered assortments
The assortment problem in revenue management is the problem of deciding which
subset of products to offer to consumers in order to maximise revenue. A simple
and natural strategy is to select the best assortment out of all those that are
constructed by fixing a threshold revenue r and then choosing all products
with revenue at least r. This is known as the revenue-ordered assortments
strategy. In this paper we study the approximation guarantees provided by
revenue-ordered assortments when customers are rational in the following sense:
the probability of selecting a specific product from the set being offered
cannot increase if the set is enlarged. This rationality assumption, known as
regularity, is satisfied by almost all discrete choice models considered in the
revenue management and choice theory literature, and in particular by random
utility models. The bounds we obtain are tight and improve on recent results in
that direction, such as for the Mixed Multinomial Logit model by
Rusmevichientong et al. (2014). An appealing feature of our analysis is its
simplicity, as it relies only on the regularity condition.
We also draw a connection between assortment optimisation and two pricing
problems called unit demand envy-free pricing and Stackelberg minimum spanning
tree: These problems can be restated as assortment problems under discrete
choice models satisfying the regularity condition, and moreover revenue-ordered
assortments then correspond to the well-studied uniform pricing heuristic. When
specialised to that setting, the general bounds we establish for
revenue-ordered assortments match and unify the best known results on uniform
pricing.
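As an illustration of the strategy, the sketch below evaluates revenue-ordered assortments under a Multinomial Logit (MNL) model, one example of a regular choice model. The instance and function names are hypothetical; for plain MNL, it is a known result (Talluri and van Ryzin) that revenue-ordered assortments are in fact optimal.

```python
def revenue_ordered_assortments(revenues, weights):
    """Best revenue-ordered assortment under an MNL choice model.

    Product i has revenue revenues[i] and MNL weight weights[i]; the
    no-purchase option has weight 1. For each revenue threshold, offer every
    product with revenue at least the threshold, and keep the best assortment.
    """
    def expected_revenue(S):
        denom = 1.0 + sum(weights[i] for i in S)
        return sum(revenues[i] * weights[i] for i in S) / denom

    order = sorted(range(len(revenues)), key=lambda i: -revenues[i])
    best_set, best_rev = [], 0.0
    for k in range(1, len(order) + 1):
        S = order[:k]          # the k highest-revenue products
        rev = expected_revenue(S)
        if rev > best_rev:
            best_set, best_rev = S, rev
    return best_set, best_rev

# three products: offering only the two priciest maximizes expected revenue
print(revenue_ordered_assortments([10.0, 6.0, 4.0], [1.0, 1.0, 2.0]))
```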
Clustering With Center Constraints
In the classical maximum independent set problem, we are given a graph G of "conflicts" and are asked to find a maximum conflict-free subset. If we think of the remaining nodes as being "assigned" (at unit cost each) to one of these independent vertices and ask for an assignment of minimum cost, this yields the vertex cover problem.
In this paper, we consider a more general scenario where the assignment costs might be given by a distance metric d (which can be unrelated to G) on the underlying set of vertices.
This problem, in addition to being a natural generalization of vertex cover and an interesting variant of the k-median problem, also has connections to constrained clustering and database repair.
Understanding the relation between the conflict structure (the graph) and the distance structure (the metric) turns out to be the key to isolating the problem's complexity. We show that when the two structures are unrelated, the problem inherits a trivial upper bound from vertex cover, and we provide an almost matching lower bound on hardness of approximation. We then prove a number of lower and upper bounds that depend on the relationship between the two structures, including polynomial-time algorithms for special graphs.
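A brute-force sketch of the problem as described (our own naming; exponential time, for illustration only): pick a non-empty, conflict-free set of centers, assign every other vertex to its nearest center under the metric d, and minimize the total assignment cost.

```python
from itertools import combinations

def min_cost_conflict_free_centers(n, conflicts, d):
    """Brute force over conflict-free center sets on a tiny instance.

    conflicts: set of edges (u, v) of the conflict graph G; the chosen
    centers must form an independent set in G. d[v][c] is the metric
    assignment cost of vertex v to center c.
    """
    best_cost, best_centers = float("inf"), None
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if any((u, v) in conflicts or (v, u) in conflicts
                   for u, v in combinations(S, 2)):
                continue  # centers conflict: not independent in G
            cost = sum(min(d[v][c] for c in S)
                       for v in range(n) if v not in S)
            if cost < best_cost:
                best_cost, best_centers = cost, S
    return best_centers, best_cost

# vertices 0 and 1 conflict; opening centers {0, 2} leaves only vertex 1
# to assign, at cost 1
d = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
print(min_cost_conflict_free_centers(3, {(0, 1)}, d))  # → ((0, 2), 1)
```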