Learning and Robustness With Applications To Mechanism Design
The design of economic mechanisms, especially auctions, is an increasingly important part of the modern economy. A particularly important property for a mechanism is strategyproofness -- the mechanism must be robust to strategic manipulations so that the participants in the mechanism have no incentive to lie. Yet in the important case when the mechanism designer's goal is to maximize their own revenue, the design of optimal strategyproof mechanisms has proved immensely difficult, with very little progress after decades of research.
Recently, to escape this impasse, a number of works have parameterized auction mechanisms as deep neural networks, and used gradient descent to successfully learn approximately optimal and approximately strategyproof mechanisms. We present several improvements on these techniques.
When an auction mechanism is represented as a neural network mapping bids to outcomes, strategyproofness can be thought of as a type of adversarial robustness. Making this connection explicit, we design a modified architecture for learning auctions which is amenable to integer-programming-based certification techniques from the adversarial robustness literature. Existing baselines are empirically strategyproof, but offer no way to be certain how strong that guarantee really is. By contrast, we are able to provide perfectly tight bounds on the degree to which strategyproofness is violated at any given point.
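As a toy illustration of the quantity being certified: a mechanism's strategyproofness violation at a given point is the bidder's regret, the utility gained by their best possible misreport. The sketch below (all helper names hypothetical; a brute-force grid search, not the integer-programming certifier described above) computes exact regret on a bid grid for two classic single-item rules:

```python
def second_price(bids):
    """Second-price (Vickrey) rule: highest bid wins, pays second-highest."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    return winner, sorted(bids, reverse=True)[1]

def first_price(bids):
    """First-price rule: highest bid wins, pays their own bid."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    return winner, bids[winner]

def utility(value, bid, others, mechanism):
    """Bidder 0's utility when reporting `bid` against fixed rival bids."""
    winner, price = mechanism([bid] + others)
    return value - price if winner == 0 else 0.0

def regret(value, others, mechanism, grid):
    """Gain from the best misreport on `grid` versus bidding truthfully.
    Zero regret everywhere is exactly strategyproofness."""
    truthful = utility(value, value, others, mechanism)
    best = max(utility(value, b, others, mechanism) for b in grid)
    return best - truthful

grid = [i / 20 for i in range(21)]
print(regret(1.0, [0.5], second_price, grid))  # 0.0: truthful bidding is optimal
print(regret(1.0, [0.5], first_price, grid))   # 0.5: shading the bid pays
```

The second-price rule shows zero regret at every point, while the first-price rule shows a large violation, which is the kind of gap a certifier must bound.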
Existing neural networks for auctions learn to maximize revenue subject to strategyproofness. Yet in many auctions, fairness is also an important concern -- in particular, fairness with respect to the items in the auction, which may represent, for instance, ad impressions for different protected demographic groups. With our new architecture, ProportionNet, we impose fairness constraints in addition to the strategyproofness constraints, and find approximately fair, approximately optimal mechanisms which outperform baselines. With PreferenceNet, we extend this approach to notions of fairness that are learned from possibly vague human preferences.
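Constraints like these (strategyproofness, fairness) are typically folded into training via a Lagrangian: the revenue objective is augmented with multiplier-weighted violation terms, and the multipliers are themselves updated by gradient ascent. A minimal sketch of that primal-dual pattern on a scalar toy problem, not the actual training loop of these networks (names hypothetical; the constraint gradient is hardcoded to 1 for brevity):

```python
def constrained_max(f_grad, g, x0=0.0, lam0=0.0, lr=0.05, steps=4000):
    """Primal-dual gradient scheme for: maximize f(x) subject to g(x) <= 0.
    This is the Lagrangian pattern behind RegretNet-style training, with
    revenue in the role of f and regret/fairness violations in the role
    of g. Assumes g is affine with slope 1 (toy problem only)."""
    x, lam = x0, lam0
    for _ in range(steps):
        x += lr * (f_grad(x) - lam)      # ascend the Lagrangian in x
        lam = max(0.0, lam + lr * g(x))  # ascend in the multiplier
    return x, lam

# Toy: maximize -(x - 2)^2 subject to x <= 1; constrained optimum is x = 1.
x, lam = constrained_max(f_grad=lambda x: -2 * (x - 2), g=lambda x: x - 1)
# x converges to about 1, lam to about 2 (the optimal multiplier)
```

The multiplier grows while the constraint is violated, pushing the primal iterate back into the feasible region, which is why approximate (rather than exact) constraint satisfaction is the typical outcome.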
Existing network architectures can represent additive and unit-demand auctions, but are unable to impose more complex exactly-k constraints on the allocations made to the bidders. By using the Sinkhorn algorithm to add differentiable matching constraints, we produce a network which can represent valid allocations in such settings.
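The core of the Sinkhorn trick is easy to state: exponentiate a score matrix and alternately normalize its rows and columns. The iterates converge to a doubly stochastic matrix, a differentiable relaxation of a one-to-one matching. A minimal NumPy sketch of that normalization, not the paper's architecture:

```python
import numpy as np

def sinkhorn(logits, n_iters=200):
    """Sinkhorn-Knopp iteration: alternately normalize the rows and columns
    of a positive matrix. The result is (approximately) doubly stochastic,
    relaxing a permutation, i.e. an exactly-one allocation."""
    M = np.exp(logits)
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)  # each bidder gets total mass 1
        M /= M.sum(axis=0, keepdims=True)  # each item is allocated mass 1
    return M

A = sinkhorn(np.random.default_rng(0).normal(size=(4, 4)))
# rows and columns of A each sum to (approximately) 1
```

Because every step is differentiable, gradients flow through the normalization, which is what lets the matching constraint live inside an end-to-end trained network.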
Finally, we present a new auction architecture which is a differentiable version of affine maximizer auctions, modified to offer lotteries in order to potentially increase revenue. This architecture is always perfectly strategyproof (avoiding the Lagrangian-based constrained optimization of RegretNet) -- to achieve this goal, however, we must accept that we cannot in general represent the optimal auction.
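For concreteness, a deterministic affine maximizer auction with fixed weights and boosts (lotteries omitted) can be sketched in a few lines; with unit weights and zero boosts it reduces to VCG. All names here are illustrative, not the paper's implementation:

```python
def affine_maximizer(valuations, weights, boosts):
    """Affine maximizer auction (AMA), deterministic sketch: choose the
    candidate allocation maximizing boosted weighted welfare, and charge
    each bidder the (rescaled) externality they impose. valuations[i][a]
    is bidder i's value for candidate allocation a. Strategyproof for any
    fixed weights and boosts, which is what the learned variant exploits."""
    allocs = range(len(boosts))

    def score(a, exclude=None):
        return boosts[a] + sum(weights[i] * v[a]
                               for i, v in enumerate(valuations) if i != exclude)

    a_star = max(allocs, key=score)
    payments = []
    for i in range(len(valuations)):
        a_minus = max(allocs, key=lambda a: score(a, exclude=i))
        payments.append((score(a_minus, exclude=i) - score(a_star, exclude=i))
                        / weights[i])
    return a_star, payments
```

With two bidders competing for one item (two candidate allocations), unit weights and zero boosts recover the second-price outcome, which is a quick sanity check on the payment rule.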
Deep Learning for Two-Sided Matching
We initiate the use of a multi-layer neural network to model two-sided
matching and to explore the design space between strategy-proofness and
stability. It is well known that both properties cannot be achieved
simultaneously, but the efficient frontier in this design space is not
understood. We show empirically that it is possible to achieve a good
compromise between stability and strategy-proofness -- substantially better
than that achievable through a convex combination of deferred acceptance
(stable and strategy-proof for only one side of the market) and randomized
serial dictatorship (strategy-proof but not stable).
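For reference, the deferred acceptance baseline mentioned above is the standard Gale-Shapley algorithm; a compact sketch (hypothetical helper names) in its proposer-proposing form:

```python
def deferred_acceptance(prop_prefs, recv_prefs):
    """Proposer-proposing deferred acceptance (Gale-Shapley).
    prop_prefs[p] is p's ranked list of receivers; recv_prefs[r] likewise.
    Produces a stable matching that is strategy-proof for the proposing
    side only, as the abstract notes."""
    # rank[r][p]: how receiver r ranks proposer p (lower is better)
    rank = [{p: k for k, p in enumerate(prefs)} for prefs in recv_prefs]
    match = {}                       # receiver -> tentatively held proposer
    nxt = [0] * len(prop_prefs)      # next receiver each proposer will try
    free = list(range(len(prop_prefs)))
    while free:
        p = free.pop()
        r = prop_prefs[p][nxt[p]]
        nxt[p] += 1
        if r not in match:
            match[r] = p             # r holds its first proposal
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])    # r trades up; old proposer is freed
            match[r] = p
        else:
            free.append(p)           # r rejects p
    return {p: r for r, p in match.items()}
```

Randomized serial dictatorship, the other endpoint of the convex combination, is simpler still: draw a random order over one side and let each agent in turn take their favorite remaining partner.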
Market Design for Dynamic Pricing and Pooling in Capacitated Networks
We study a market mechanism that sets edge prices to incentivize strategic
agents to organize trips that efficiently share limited network capacity. This
market allows agents to form groups to share trips, make decisions on departure
times and route choices, and make payments to cover edge prices and other
costs. We develop a new approach to analyze the existence and computation of
market equilibrium, building on theories of combinatorial auctions and dynamic
network flows. Our approach tackles the challenges in market equilibrium
characterization arising from: (a) integer and network constraints on the
dynamic flow of trips in sharing limited edge capacity; (b) heterogeneous and
private preferences of strategic agents. We provide sufficient conditions on
the network topology and agents' preferences that ensure the existence and
polynomial-time computation of market equilibrium. We identify a particular
market equilibrium that achieves maximum utilities for all agents, and is
equivalent to the outcome of the classical Vickrey-Clarke-Groves (VCG) mechanism.
Finally, we extend our results to general networks with multiple populations
and apply them to compute dynamic tolls for efficient carpooling in San
Francisco Bay Area.
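The benchmark invoked above, the Vickrey-Clarke-Groves mechanism, charges each agent the externality their presence imposes on everyone else. A brute-force sketch for the toy case of assigning distinct items to bidders (illustrative only, not the trip-sharing market itself; exhaustive search, so toy sizes only):

```python
from itertools import permutations

def vcg_assignment(valuations):
    """VCG for assigning distinct items to bidders: maximize total reported
    value, then charge each bidder the drop in the others' welfare caused
    by that bidder's presence. valuations[i][j] = bidder i's value for item j."""
    n_bidders, n_items = len(valuations), len(valuations[0])

    def welfare(exclude=None):
        bidders = [i for i in range(n_bidders) if i != exclude]
        best_val, best_assign = float("-inf"), None
        for items in permutations(range(n_items), len(bidders)):
            val = sum(valuations[b][it] for b, it in zip(bidders, items))
            if val > best_val:
                best_val, best_assign = val, dict(zip(bidders, items))
        return best_val, best_assign

    total, assign = welfare()
    payments = {}
    for i in range(n_bidders):
        without_i, _ = welfare(exclude=i)
        others_at_optimum = total - valuations[i][assign[i]]
        payments[i] = without_i - others_at_optimum
    return assign, payments
```

With two bidders who both only want item 0, the winner pays the loser's value, i.e. the familiar second-price outcome.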
Data Market Design through Deep Learning
The problem is one from economic theory: find a set of signaling schemes
(statistical experiments) maximizing expected revenue for the information
seller, where each experiment reveals some of the information known to the
seller and has a corresponding price [Bergemann et al., 2018]. Each buyer has
their own decision to make in a world environment, and
their subjective expected value for the information associated with a
particular experiment comes from the improvement in this decision and depends
on their prior and value for different outcomes. In a setting with multiple
buyers, a buyer's expected value for an experiment may also depend on the
information sold to others [Bonatti et al., 2022]. We introduce the application
of deep learning for the design of revenue-optimal data markets, looking to
expand the frontiers of what can be understood and achieved. Relative to
earlier work on deep learning for auction design [Dütting et al., 2023], we
must learn signaling schemes rather than allocation rules and handle
obedience constraints arising from modeling the downstream actions of
buyers, in addition to incentive constraints on bids. Our
experiments demonstrate that this new deep learning framework can almost
precisely replicate all known solutions from theory, expand to more complex
settings, and be used to establish the optimality of new designs for data
markets, and make conjectures regarding the structure of optimal designs.
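The buyer's subjective value for an experiment, as described above, is the payoff improvement from acting on the posterior rather than on the prior, and it can be computed directly in small examples. A NumPy sketch of this standard value-of-information calculation (names hypothetical, single-buyer case only):

```python
import numpy as np

def value_of_experiment(prior, experiment, utilities):
    """Value of a statistical experiment to a decision-making buyer.
    prior[s] is the belief over states, experiment[s, m] = P(signal m | state s),
    and utilities[a, s] is the payoff of action a in state s. Value is the
    expected payoff from best-responding to each posterior, minus the best
    payoff achievable on the prior alone."""
    prior, E, U = map(np.asarray, (prior, experiment, utilities))
    base = (U @ prior).max()           # best action with no signal
    informed = 0.0
    for m in range(E.shape[1]):
        joint = prior * E[:, m]        # P(state s and signal m)
        informed += (U @ joint).max()  # = P(m) * best posterior payoff
    return informed - base
```

With a uniform prior over two states and a matching-action payoff, a fully revealing experiment is worth 0.5 and an uninformative one is worth 0, matching intuition.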
Incentive-driven QoS in peer-to-peer overlays
A well known problem in peer-to-peer overlays is that no single entity has control over the software,
hardware and configuration of peers. Thus, each peer can selfishly adapt its behaviour to maximise its
benefit from the overlay. This thesis is concerned with the modelling and design of incentive mechanisms
for QoS-overlays: resource allocation protocols that provide strategic peers with participation incentives,
while at the same time optimising the performance of the peer-to-peer distribution overlay.
The contributions of this thesis are as follows. First, we present PledgeRoute, a novel contribution
accounting system that can be used, along with a set of reciprocity policies, as an incentive mechanism
to encourage peers to contribute resources even when users are not actively consuming overlay services.
This mechanism uses a decentralised credit network, is resilient to sybil attacks, and allows peers to
achieve time and space deferred contribution reciprocity. Then, we present a novel, QoS-aware resource
allocation model based on Vickrey auctions that uses PledgeRoute as a substrate. It acts as an incentive
mechanism by providing efficient overlay construction, while at the same time allocating increasing
service quality to those peers that contribute more to the network. The model
is then applied to lag-sensitive chunk swarming, and some of its properties
are explored for different peer delay distributions.
When considering QoS overlays deployed over the best-effort Internet, the
quality received by a client cannot be attributed entirely to either its
serving peer or the intervening network between them. By drawing parallels
between this situation and well-known hidden-action situations in
microeconomics,
we propose a novel scheme to ensure adherence to advertised QoS levels. We then apply
it to delay-sensitive chunk distribution overlays and present the optimal contract payments required,
along with a method for QoS contract enforcement through reciprocative strategies. We also present a
probabilistic model for application-layer delay as a function of the prevailing network conditions.
Finally, we address the incentives of managed overlays, and the prediction of their behaviour. We
propose two novel models of multihoming managed overlay incentives in which overlays can freely
allocate their traffic flows between different ISPs. One is obtained by optimising an overlay utility
function with desired properties, while the other is designed for data-driven least-squares fitting of the
cross elasticity of demand. This last model is then used to solve for ISP
profit maximisation.
LIPIcs, Volume 251, ITCS 2023, Complete Volume