Fast Iterative Combinatorial Auctions via Bayesian Learning
Iterative combinatorial auctions (CAs) are often used in multi-billion dollar
domains like spectrum auctions, and speed of convergence is one of the crucial
factors behind the choice of a specific design for practical applications. To
achieve fast convergence, current CAs require careful tuning of the price
update rule to balance convergence speed and allocative efficiency. Brero and
Lahaie (2018) recently introduced a Bayesian iterative auction design for
settings with single-minded bidders. The Bayesian approach allowed them to
incorporate prior knowledge into the price update algorithm, reducing the
number of rounds to convergence with minimal parameter tuning. In this paper,
we generalize their work to settings with no restrictions on bidder valuations.
We introduce a new Bayesian CA design for this general setting which uses Monte
Carlo Expectation Maximization to update prices at each round of the auction.
We evaluate our approach via simulations on CATS instances. Our results show
that our Bayesian CA outperforms even a highly optimized benchmark in terms of
clearing percentage and convergence speed. Comment: 9 pages, 2 figures, AAAI-1
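As background for the price-update discussion, here is a minimal ascending
(clock) auction for single-minded bidders -- a toy, non-Bayesian baseline with
a fixed, untuned increment, not the paper's MCEM design; the bidders, values,
and increment below are invented for illustration:

```python
# Toy ascending clock auction for single-minded bidders: prices rise by a
# fixed increment on over-demanded items until no item is over-demanded.
# Illustrative baseline only -- NOT the paper's Bayesian/MCEM price update.

def clock_auction(bidders, items, increment=1.0, max_rounds=1000):
    """bidders: list of (bundle, value); each bidder demands exactly its
    bundle while the sum of item prices stays within its value."""
    prices = {i: 0.0 for i in items}
    active = []
    for _ in range(max_rounds):
        demand = {i: 0 for i in items}
        active = []
        for bundle, value in bidders:
            if sum(prices[i] for i in bundle) <= value:  # still affordable
                active.append((bundle, value))
                for i in bundle:
                    demand[i] += 1
        over = [i for i in items if demand[i] > 1]  # unit supply per item
        if not over:
            return prices, active  # market cleared
        for i in over:
            prices[i] += increment  # naive, untuned price update
    return prices, active

bidders = [({"A", "B"}, 10.0), ({"B"}, 6.0), ({"C"}, 3.0)]
prices, winners = clock_auction(bidders, items={"A", "B", "C"})
```

Careful tuning of `increment` is exactly the burden the Bayesian approach aims
to remove: too small and convergence is slow, too large and efficiency drops.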
On the Economic Efficiency of the Combinatorial Clock Auction
Since the 1990s spectrum auctions have been implemented world-wide. This has
provided for a practical examination of an assortment of auction mechanisms
and, amongst these, two simultaneous ascending price auctions have proved to be
extremely successful. These are the simultaneous multiround ascending auction
(SMRA) and the combinatorial clock auction (CCA). It has long been known that,
for certain classes of valuation functions, the SMRA provides good theoretical
guarantees on social welfare. However, no such guarantees were known for the
CCA.
In this paper, we show that CCA does provide strong guarantees on social
welfare provided the price increment and stopping rule are well-chosen. This is
very surprising in that the choice of price increment has been used primarily
to adjust auction duration and the stopping rule has attracted little
attention. The main result is a polylogarithmic approximation guarantee for
social welfare when the maximum number of items demanded by a bidder is fixed.
Specifically, we show that either the revenue of the CCA or the welfare of the
CCA is at least a fraction of the optimal welfare that is polylogarithmically
small in the number of bidders and the number of items. As a corollary, the
welfare ratio -- the worst-case ratio between the social welfare of the
optimal allocation and the social welfare of the CCA allocation -- is at most
polylogarithmic. We emphasize that this latter result requires no assumption
on bidders' valuation functions. Finally, we prove that such a polylogarithmic
dependence is necessary: we show a corresponding lower bound on the welfare
ratio of the CCA.
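The welfare ratio compares the optimal social welfare to what the auction
achieves. A tiny brute-force computation of optimal welfare makes the
numerator concrete (the valuations below are invented for illustration, not
taken from the paper):

```python
# Brute-force optimal social welfare: try every assignment of items to
# bidders (or to nobody) and keep the best total value. Exponential, but
# fine for defining the welfare ratio on toy instances.
from itertools import product

def optimal_welfare(items, valuations):
    """valuations: list of functions frozenset(bundle) -> value."""
    n = len(valuations)
    best = 0.0
    # Index n means "item goes unallocated".
    for assignment in product(range(n + 1), repeat=len(items)):
        bundles = [frozenset(it for it, b in zip(items, assignment) if b == k)
                   for k in range(n)]
        best = max(best, sum(v(bdl) for v, bdl in zip(valuations, bundles)))
    return best

items = ["x", "y"]
v1 = lambda S: 5.0 if {"x", "y"} <= S else 0.0  # values only the pair
v2 = lambda S: 3.0 if "x" in S else 0.0
opt = optimal_welfare(items, [v1, v2])
ratio = opt / 3.0  # vs. a hypothetical auction allocation worth 3
```

Here splitting the items yields welfare 3, while giving both to the first
bidder yields 5, so the ratio for that hypothetical outcome is 5/3.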
Enabling Privacy-preserving Auctions in Big Data
We study how to enable auctions in the big data context to solve the many
data-based decision problems expected in the near future. We consider the
characteristics of big data, including, but not limited to, velocity,
volume, variety, and veracity, and we believe any auction mechanism design in
the future should take the following factors into consideration: 1) generality
(variety); 2) efficiency and scalability (velocity and volume); 3) truthfulness
and verifiability (veracity). In this paper, we propose a privacy-preserving
construction for auction mechanism design in the big data context, which
prevents adversaries from learning any information beyond what is implied by
the valid output of the auction. More specifically, we consider one of the
most general forms of auction (to deal with the variety), greatly improve
efficiency and scalability by approximating the NP-hard problems and avoiding
designs based on garbled circuits (to deal with velocity and volume), and
prevent stakeholders from lying to each other for their own benefit (to deal
with the veracity). We achieve these goals by introducing a
novel privacy-preserving winner determination algorithm and a novel payment
mechanism. Additionally, we further employ a blind signature scheme as a
building block to let bidders verify the authenticity of their payment reported
by the auctioneer. A comparison with peer work shows that we improve the
asymptotic overhead of prior schemes from exponential to linear growth and
from linear to logarithmic growth, which greatly improves scalability.
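The blind-signature building block can be illustrated with the textbook RSA
blind signature (toy key sizes and an invented message; this is a generic
sketch of the primitive, not necessarily the scheme used in the paper):

```python
# Textbook RSA blind signature with toy parameters. The bidder blinds the
# payment message, the auctioneer signs without seeing it, the bidder
# unblinds, and anyone can verify. Real deployments need large keys and
# proper padding; these numbers are purely illustrative.

p, q = 61, 53
n = p * q                      # RSA modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (Python 3.8+ mod inverse)

m = 42                         # toy payment message
r = 7                          # blinding factor with gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n        # bidder blinds the message
blind_sig = pow(blinded, d, n)          # signer signs blindly
sig = (blind_sig * pow(r, -1, n)) % n   # bidder removes the blinding
assert pow(sig, e, n) == m % n          # public verification succeeds
```

The signer learns neither `m` nor `sig`, yet the resulting signature verifies
against the public key exactly as an ordinary RSA signature would.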
On the Complexity of Computing an Equilibrium in Combinatorial Auctions
We study combinatorial auctions where each item is sold separately but
simultaneously via a second price auction. We ask whether it is possible to
efficiently compute in this game a pure Nash equilibrium with social welfare
close to the optimal one.
We show that when the valuations of the bidders are submodular, in many
interesting settings (e.g., constant number of bidders, budget additive
bidders) computing an equilibrium with good welfare is essentially as easy as
computing, completely ignoring incentives issues, an allocation with good
welfare. On the other hand, for subadditive valuations, we show that computing
an equilibrium requires exponential communication. Finally, for XOS (a.k.a.
fractionally subadditive) valuations, we show that if there exists an efficient
algorithm that finds an equilibrium, it must use techniques that are very
different from our current ones.
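The equilibrium notion above can be made concrete by brute-force checking
whether a bid profile is a pure Nash equilibrium of simultaneous second-price
item auctions; the additive valuations, bid grid, and first-index tie-breaking
below are illustrative assumptions:

```python
# Simultaneous second-price item auctions: highest bid wins each item and
# pays the second-highest bid. A profile is a pure Nash equilibrium if no
# bidder gains by unilaterally changing its bid vector.
from itertools import product

def outcome(bids, items, values):
    """Per-bidder utility; ties go to the lowest-index bidder."""
    util = [0.0] * len(bids)
    for j, _ in enumerate(items):
        col = [b[j] for b in bids]
        w = max(range(len(bids)), key=lambda i: col[i])
        second = max(col[:w] + col[w + 1:]) if len(bids) > 1 else 0.0
        util[w] += values[w][j] - second
    return util

def is_pure_nash(bids, items, values, grid):
    base = outcome(bids, items, values)
    for i in range(len(bids)):
        for dev in product(grid, repeat=len(items)):  # all grid deviations
            trial = list(bids)
            trial[i] = list(dev)
            if outcome(trial, items, values)[i] > base[i] + 1e-9:
                return False
    return True

items = ["x", "y"]
values = [[5.0, 0.0], [0.0, 3.0]]  # additive toy valuations
bids = [[5.0, 0.0], [0.0, 3.0]]    # truthful bidding
ok = is_pure_nash(bids, items, values, grid=[0.0, 1.0, 3.0, 5.0])
```

Enumerating deviations is exponential in the number of items, which hints at
why *computing* good equilibria efficiently is the hard part.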
Modelling Combinatorial Auctions in Linear Logic
We show that linear logic can serve as an expressive framework
in which to model a rich variety of combinatorial auction
mechanisms. Due to its resource-sensitive nature, linear
logic can easily represent bids in combinatorial auctions in
which goods may be sold in multiple units, and we show
how it naturally generalises several bidding languages familiar
from the literature. Moreover, the winner determination
problem, i.e., the problem of computing an allocation of
goods to bidders producing a certain amount of revenue for
the auctioneer, can be modelled as the problem of finding a
proof for a particular linear logic sequent.
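The winner determination problem can be stated independently of the
linear-logic encoding: pick an item-disjoint set of bids maximizing revenue.
A brute-force sketch over bid subsets (toy bids invented for illustration):

```python
# Winner determination for single-unit items: select a set of bids whose
# bundles are pairwise disjoint, maximizing total revenue. Brute force is
# exponential but fine for tiny inputs; the problem is NP-hard in general.
from itertools import combinations

def winner_determination(bids):
    """bids: list of (frozenset bundle, price). Returns (revenue, chosen)."""
    best, best_set = 0.0, []
    for k in range(len(bids) + 1):
        for subset in combinations(bids, k):
            items_used = [i for bdl, _ in subset for i in bdl]
            if len(items_used) == len(set(items_used)):  # disjoint bundles
                revenue = sum(p for _, p in subset)
                if revenue > best:
                    best, best_set = revenue, list(subset)
    return best, best_set

bids = [(frozenset("ab"), 4.0), (frozenset("b"), 3.0), (frozenset("c"), 2.0)]
revenue, chosen = winner_determination(bids)
```

For multi-unit goods (the case linear logic handles naturally), the
disjointness check would become a per-item capacity constraint.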
Inapproximability of Truthful Mechanisms via Generalizations of the VC Dimension
Algorithmic mechanism design (AMD) studies the delicate interplay between
computational efficiency, truthfulness, and optimality. We focus on AMD's
paradigmatic problem: combinatorial auctions. We present a new generalization
of the VC dimension to multivalued collections of functions, which encompasses
the classical VC dimension, Natarajan dimension, and Steele dimension. We
present a corresponding generalization of the Sauer-Shelah Lemma and harness
this VC machinery to establish inapproximability results for deterministic
truthful mechanisms. Our results essentially unify all inapproximability
results for deterministic truthful mechanisms for combinatorial auctions to
date and establish new separation gaps between truthful and non-truthful
algorithms.
Combinatorial Auctions Do Need Modest Interaction
We study the necessity of interaction for obtaining efficient allocations in
subadditive combinatorial auctions. This problem was originally introduced by
Dobzinski, Nisan, and Oren (STOC'14) as the following simple market scenario:
items are to be allocated among bidders in a distributed setting where
bidders valuations are private and hence communication is needed to obtain an
efficient allocation. The communication happens in rounds: in each round, each
bidder, simultaneously with others, broadcasts a message to all parties
involved and the central planner computes an allocation solely based on the
communicated messages. Dobzinski et al. showed that no non-interactive
(one-round) protocol with polynomial communication (in the number of items
and bidders) can achieve an approximation ratio better than a polynomial
factor, while multi-round protocols with polynomial communication achieve
approximation ratios that improve rapidly with the number of rounds; in
particular, modestly many rounds of interaction suffice to obtain an (almost)
efficient allocation.
A natural question at this point is to identify the "right" level of
interaction (i.e., number of rounds) necessary to obtain an efficient
allocation. In this paper, we resolve this question by providing an almost
tight round-approximation tradeoff for this problem: we show that any
protocol using polynomial communication and too few rounds can only
approximate the social welfare up to a polynomially large factor. This in
particular implies that a number of rounds growing with the size of the
market is necessary for obtaining any efficient allocation in these markets.
Our work builds on the recent multi-party round-elimination technique of
Alon, Nisan, Raz, and Weinstein (FOCS'15) and settles an open question posed
by Dobzinski et al. and Alon et al.
Coordination of Purchasing and Bidding Activities Across Markets
In both consumer purchasing and industrial procurement, combinatorial interdependencies among the items to be purchased are commonplace. E-commerce compounds the problem by providing more opportunities for switching suppliers at low cost, but it also potentially eases the problem by enabling automated market decision-making systems, commonly referred to as trading agents, to make purchasing decisions in an integrated manner across markets. Most of the existing research related to trading agents assumes that there exists a combinatorial market mechanism in which buyers (or sellers) can bid on (or sell) service or merchandise bundles. Today's prevailing e-commerce practice, however, does not support this assumption in general and thus limits the practical applicability of these approaches. We are investigating a new approach to deal with the combinatorial interdependency challenges for online markets. This approach relies on existing commercial online market institutions such as posted-price markets and various online auctions that sell single items. It uses trading agents to coordinate a buyer's purchasing and bidding activities across multiple online markets simultaneously to achieve the best overall procurement effectiveness.
This paper presents two sets of models related to this approach. The first set of models formalizes optimal purchasing decisions across posted-price markets with fixed transaction costs. Flat shipping costs, a common e-tailing practice, are captured in these models. We observe that making optimal purchasing decisions in this context is NP-hard in the strong sense and suggest several efficient computational methods based on discrete location theory. The second set of models is concerned with the coordination of bidding activities across multiple online auctions. We study the underlying coordination problem for a collection of first- or second-price sealed-bid auctions and derive the optimal coordination and bidding policies.
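A minimal sketch of the first model family: choosing suppliers across
posted-price markets when each supplier used adds a flat shipping fee. The
supplier names and prices are invented, and brute force over supplier subsets
stands in for the discrete-location methods the paper develops:

```python
# Procurement across posted-price markets with flat shipping: the shipping
# fee creates a fixed cost per supplier used, so item-by-item cheapest
# prices are not optimal in general. Brute force over supplier subsets
# (exponential; the problem is strongly NP-hard in general).
from itertools import combinations

def best_procurement(items, suppliers):
    """suppliers: name -> (flat_shipping, {item: price})."""
    best_cost, best_plan = float("inf"), None
    names = list(suppliers)
    for k in range(1, len(names) + 1):
        for chosen in combinations(names, k):
            cost = sum(suppliers[s][0] for s in chosen)  # shipping fees
            plan = {}
            for it in items:
                offers = [(suppliers[s][1][it], s) for s in chosen
                          if it in suppliers[s][1]]
                if not offers:                 # subset cannot cover item
                    cost = float("inf")
                    break
                price, s = min(offers)         # cheapest offer in subset
                cost += price
                plan[it] = s
            if cost < best_cost:
                best_cost, best_plan = cost, plan
    return best_cost, best_plan

suppliers = {
    "A": (5.0, {"cpu": 90.0, "ram": 40.0}),
    "B": (2.0, {"ram": 35.0}),
}
cost, plan = best_procurement(["cpu", "ram"], suppliers)
```

Here buying everything from "A" costs 135, while paying both shipping fees to
split the order costs 132 -- the fixed-cost coupling that makes this a
facility-location-style problem.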