Heterogeneous Facility Location without Money
The study of the facility location problem in the presence of self-interested agents has recently emerged as the benchmark problem in research on mechanism design without money. In the setting studied in the literature so far, agents are single-parameter, in that their type is a single number encoding their position on a real line. Here we initiate the study of a model that better captures several real-life scenarios. Specifically, we propose and analyze heterogeneous facility location without money, a novel model in which: (i) there are multiple heterogeneous (i.e., serving different purposes) facilities, (ii) agents' locations are known to the mechanism, and (iii) agents bid for the set of facilities they are interested in (as opposed to bidding for their position on the network).
We study the heterogeneous facility location problem under two objective functions: social cost (i.e., the sum of all agents' costs) and maximum cost. For each objective, we study the approximation ratio achievable by deterministic and randomized truthful algorithms under the simplifying assumption that the underlying network topology is a line. For the social cost objective, we devise an (n-1)-approximate deterministic truthful mechanism and prove a constant lower bound on the approximation ratio; furthermore, we devise an optimal randomized algorithm that is truthful in expectation. For the maximum cost objective, we propose a 3-approximate deterministic strategyproof algorithm and prove a 3/2 lower bound for deterministic strategyproof mechanisms; furthermore, we propose a 3/2-approximate randomized strategyproof algorithm and prove a 4/3 lower bound for randomized strategyproof algorithms.
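To make the two objectives above concrete, the following sketch computes an agent's cost as the total distance from its (known) location to the facilities it bid for; this cost model is our reading of the abstract, and the facility names and placements are illustrative assumptions:

```python
def agent_cost(position, desired, placement):
    """Cost of one agent: total distance from its known location
    to each facility it declared interest in."""
    return sum(abs(position - placement[f]) for f in desired)

def social_cost(positions, desires, placement):
    """Sum of all agents' costs."""
    return sum(agent_cost(x, d, placement) for x, d in zip(positions, desires))

def max_cost(positions, desires, placement):
    """Largest cost incurred by any single agent."""
    return max(agent_cost(x, d, placement) for x, d in zip(positions, desires))

# Three agents on a line bidding for two heterogeneous facilities.
positions = [0.0, 1.0, 3.0]
desires = [{"school"}, {"school", "library"}, {"library"}]
placement = {"school": 1.0, "library": 3.0}
```

With this toy instance the agents' costs are 1, 2, and 0, so the social cost is 3.0 and the maximum cost is 2.0; a mechanism for this problem chooses `placement` from the agents' bids.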
Truthful Facility Assignment with Resource Augmentation: An Exact Analysis of Serial Dictatorship
We study the truthful facility assignment problem, where a set of agents with
private most-preferred points on a metric space are assigned to facilities that
lie on the metric space, under capacity constraints on the facilities. The goal
is to produce an assignment that minimizes the social cost, i.e., the total
distance between the agents' most-preferred points and their corresponding
facilities in the assignment, under the constraint of truthfulness, which
ensures that agents have no incentive to misreport their most-preferred points.
We propose a resource augmentation framework, where a truthful mechanism is
evaluated by its worst-case performance on an instance with enhanced facility
capacities against the optimal mechanism on the same instance with the original
capacities. We study a very well-known mechanism, Serial Dictatorship, and
provide an exact analysis of its performance. Although Serial Dictatorship is a
purely combinatorial mechanism, our analysis uses linear programming; a linear
program expresses its greedy nature as well as the structure of the input, and
finds the input instance that forces the mechanism to exhibit its worst-case
performance. Bounding the objective of the linear program using duality
arguments allows us to compute tight bounds on the approximation ratio. Among
other results, we prove a tight approximation ratio for Serial Dictatorship
when the capacities are multiplied by any integer augmentation factor. Our
results suggest that even a limited augmentation of the resources can have
wondrous effects on the performance of the mechanism and in particular, the
approximation ratio goes to 1 as the augmentation factor becomes large. We
complement our results with bounds on the approximation ratio of Random Serial
Dictatorship, the randomized version of Serial Dictatorship, when there is no
resource augmentation.
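As a concrete illustration, a minimal sketch of Serial Dictatorship on a line; the agent order, tie-breaking rule, and distance metric are illustrative assumptions, not the paper's exact conventions:

```python
def serial_dictatorship(agent_points, facility_points, capacities):
    """Process agents in a fixed order; each takes the nearest facility
    that still has spare capacity (ties broken by facility index)."""
    remaining = list(capacities)
    assignment = []
    for x in agent_points:
        best = min(
            (f for f in range(len(facility_points)) if remaining[f] > 0),
            key=lambda f: (abs(x - facility_points[f]), f),
        )
        remaining[best] -= 1
        assignment.append(best)
    return assignment
```

For example, `serial_dictatorship([0, 1, 2], [0, 2], [1, 2])` returns `[0, 1, 1]`: agent 1 would also prefer facility 0, but its single slot is already taken. Resource augmentation in the sense above amounts to multiplying every entry of `capacities` by the augmentation factor before running the mechanism.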
Partial Verification as a Substitute for Money
Recent work shows that we can use partial verification instead of money to
implement truthful mechanisms. In this paper we develop tools to answer the
following question. Given an allocation rule that can be made truthful with
payments, what is the minimal verification needed to make it truthful without
them? Our techniques leverage the geometric relationship between the type space
and the set of possible allocations.
Comment: Extended Version of 'Partial Verification as a Substitute for Money',
AAAI 201
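The question above can be phrased operationally: a rule is truthful under a given verification if no misreport the verification still allows is profitable. A small sketch over a finite type space follows; the rule, valuations, and misreport sets are toy assumptions for illustration:

```python
def truthful_under_verification(types, outcome, value, allowed):
    """Return True iff no type t gains by submitting any report s
    that the verification still permits (allowed(t)), with no payments."""
    for t in types:
        honest = value(t, outcome[t])
        for s in allowed(t):
            if value(t, outcome[s]) > honest:
                return False
    return True

# Toy rule on a line: outcome[r] is the facility location chosen for report r.
types = [0, 1, 2]
outcome = {0: 2, 1: 1, 2: 2}
value = lambda t, loc: -abs(t - loc)  # agents prefer nearby facilities
```

With no verification (`allowed = lambda t: types`) type 0 profits by reporting 1, so the rule is not truthful; under full verification (`allowed = lambda t: [t]`) it trivially is. The question in the abstract is how little verification suffices between these extremes.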
Facility location with double-peaked preferences
We study the problem of locating a single facility on a real line based on
the reports of self-interested agents, when agents have double-peaked
preferences, with the peaks being on opposite sides of their locations. We
observe that double-peaked preferences capture real-life scenarios and thus
complement the well-studied notion of single-peaked preferences. We mainly
focus on the case where peaks are equidistant from the agents' locations and
discuss how our results extend to more general settings. We show that most of
the results for single-peaked preferences do not directly apply to this
setting; this makes the problem essentially more challenging. As our main
contribution, we present a simple truthful-in-expectation mechanism that
achieves an approximation ratio of 1+b/c for both the social cost and the
maximum cost, where b is the distance of an agent from its peak and c is the
minimum cost of an agent. For the latter objective, we provide a 3/2 lower bound on the
approximation ratio of any truthful-in-expectation mechanism. We also study
deterministic mechanisms under some natural conditions, proving lower bounds
and approximation guarantees. We prove that among a large class of reasonable
mechanisms, there is no deterministic mechanism that outperforms our
truthful-in-expectation mechanism.
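One plausible reading of the equidistant cost model, offered here only as an illustrative assumption and not necessarily the paper's exact definition: an agent at x has peaks at x - b and x + b, pays the minimum cost c at either peak, and pays linearly more as the facility moves away from the nearest peak:

```python
def double_peaked_cost(x, y, b, c):
    """Cost of an agent at x for a facility at y: minimal (= c) at the
    peaks x - b and x + b, growing linearly with the distance to the
    nearest peak. Illustrative reading of the abstract, not the paper's
    exact definition."""
    return c + min(abs(y - (x - b)), abs(y - (x + b)))
```

Under this reading, placing the facility at a peak costs c while placing it at the agent's own location costs c + b, a ratio of (c + b)/c = 1 + b/c, which is how the approximation factor in the abstract can be interpreted.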
What to Verify for Optimal Truthful Mechanisms without Money
We aim to identify a minimal set of conditions under which algorithms with good approximation guarantees are truthful without money. In line with recent literature, we wish to express such a set via verification assumptions, i.e., the kinds of agents' misbehavior that the designer can make impossible.
We initiate this research endeavour for the paradigmatic problem in approximate mechanism design without money, facility location. It is known how truthfulness imposes (even severe) losses and how certain notions of verification are unhelpful in this setting; one is thus left powerless to solve this problem satisfactorily in the presence of selfish agents. We here address this issue and characterize the minimal set of verification assumptions needed for the truthfulness of optimal algorithms, for both the social cost and max cost objective functions. En route, we give a host of novel conceptual and technical contributions, ranging from topological notions of verification to a lower-bounding technique for truthful mechanisms that connects methods for testing truthfulness (i.e., cycle monotonicity) with approximation guarantees.
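Cycle monotonicity, the truthfulness test mentioned above, can be checked mechanically on a finite type space: by Rochet's characterization, an allocation rule is implementable with payments iff the graph whose edge weights are w(s, t) = value(t, f(t)) - value(t, f(s)) has no negative cycle. The sketch below detects such a cycle with Bellman-Ford; the single-peaked valuation in the example is a toy assumption:

```python
def cyclically_monotone(types, f, value, eps=1e-9):
    """Check cycle monotonicity of allocation rule f on a finite type
    space: build edge weights w(s, t) = value(t, f(t)) - value(t, f(s))
    and run Bellman-Ford from a virtual source at distance 0 to every
    node; f is cyclically monotone iff no negative cycle exists."""
    n = len(types)
    w = [[value(types[t], f(types[t])) - value(types[t], f(types[s]))
          for t in range(n)] for s in range(n)]
    dist = [0.0] * n
    for _ in range(n):  # n relaxation rounds suffice
        for s in range(n):
            for t in range(n):
                dist[t] = min(dist[t], dist[s] + w[s][t])
    # any remaining improvable edge witnesses a negative cycle
    return all(dist[s] + w[s][t] >= dist[t] - eps
               for s in range(n) for t in range(n))
```

For instance, with single-peaked valuations `value = lambda t, loc: -abs(t - loc)`, the rule that places the facility at the report (`f = lambda t: t`) is cyclically monotone, while a rule that moves the facility away from the report (e.g. `f = lambda t: 1 - t` on types {0, 1}) is not.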