Expressiveness and Robustness of First-Price Position Auctions
Since economic mechanisms are often applied to very different instances of
the same problem, it is desirable to identify mechanisms that work well in a
wide range of circumstances. We pursue this goal for a position auction setting
and specifically seek mechanisms that guarantee good outcomes under both
complete and incomplete information. A variant of the generalized first-price
mechanism with multi-dimensional bids turns out to be the only standard
mechanism able to achieve this goal, even when types are one-dimensional. The
fact that expressiveness beyond the type space is both necessary and sufficient
for this kind of robustness provides an interesting counterpoint to previous
work on position auctions that has highlighted the benefits of simplicity. From
a technical perspective our results are interesting because they establish
equilibrium existence for a multi-dimensional bid space, where standard
techniques break down. The structure of the equilibrium bids moreover provides
an intuitive explanation for why first-price payments may be able to support
equilibria in a wider range of circumstances than second-price payments.
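As background for the discussion above, the following is a minimal sketch of a standard one-dimensional generalized first-price (GFP) position auction, in which bidders are ranked by bid and each winner pays their own bid per click. This is not the paper's multi-dimensional mechanism; the function name and all numbers are invented for illustration.

```python
# Illustrative sketch of a standard one-dimensional GFP position auction
# (not the paper's multi-dimensional variant). Bidders are ranked by bid;
# the bidder in slot k receives click-through rate ctrs[k] and pays their
# own bid per click.

def gfp_position_auction(bids, ctrs):
    """Rank bidders by bid and charge first-price payments per click."""
    ranking = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    outcome = []
    for slot, bidder in enumerate(ranking[:len(ctrs)]):
        payment = bids[bidder] * ctrs[slot]  # pay own bid times slot CTR
        outcome.append((slot, bidder, payment))
    return outcome

# Three bidders, two slots with click-through rates 1.0 and 0.5:
print(gfp_position_auction([4.0, 7.0, 5.0], [1.0, 0.5]))
# → [(0, 1, 7.0), (1, 2, 2.5)]
```

Under second-price (GSP) payments, the winner of slot 0 would instead pay the next-highest bid; the abstract's point is that first-price payments can support equilibria in settings where such second-price payments cannot.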
Efficiency Guarantees in Auctions with Budgets
In settings where players have limited access to liquidity, represented in
the form of budget constraints, efficiency maximization has proven to be a
challenging goal. In particular, the social welfare cannot be approximated to a
factor better than the number of players. Therefore, the literature has mainly
resorted to Pareto-efficiency as a way to achieve efficiency in such settings.
While successful in some important scenarios, in many settings it is known
either that exactly one incentive-compatible auction that always outputs a
Pareto-efficient solution exists, or that no truthful mechanism can always
guarantee a Pareto-efficient outcome. Traditionally, impossibility results can
be avoided by considering approximations. However, Pareto-efficiency is a
binary property (it is either satisfied or not), which does not admit
approximations.
In this paper we propose a new notion of efficiency, called \emph{liquid
welfare}. This is the maximum amount of revenue an omniscient seller would be
able to extract from a certain instance. We explain the intuition behind this
objective function and show that it can be 2-approximated by two different
auctions. Moreover, we show that no truthful algorithm can guarantee an
approximation factor better than 4/3 with respect to the liquid welfare, and
provide a truthful auction that attains this bound in a special case.
Importantly, the liquid welfare benchmark also overcomes impossibilities for
some settings. While it is impossible to design Pareto-efficient auctions for
multi-unit auctions where players have decreasing marginal values, we give a
deterministic O(log n)-approximation for the liquid welfare in this setting.
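The liquid-welfare objective admits a short computational illustration. The sketch below assumes divisible units and constant per-unit values (a simplification; the paper also treats decreasing marginal values): the omniscient seller's extractable revenue caps each player's contribution at their budget, so LW = max over allocations of Σᵢ min(vᵢ·xᵢ, Bᵢ) subject to Σᵢ xᵢ ≤ m. All names and numbers are invented.

```python
# Minimal sketch of the liquid-welfare objective for a multi-unit setting
# with divisible units and constant per-unit values (an illustrative
# assumption, not the paper's general model). With this concave separable
# objective, serving higher-value players first is optimal.

def liquid_welfare(values, budgets, supply):
    """Greedy: serve players in decreasing value order, and stop giving a
    player units once their budget cap binds (extra units add nothing)."""
    total, remaining = 0.0, float(supply)
    for v, b in sorted(zip(values, budgets), reverse=True):
        if remaining <= 0 or v <= 0:
            break
        x = min(remaining, b / v)   # units until the budget cap binds
        total += v * x
        remaining -= x
    return total

# Two players: values 10 and 4 per unit, budgets 15 and 100, supply 5 units.
# Player 1 is capped at 1.5 units (budget 15); player 2 takes the rest.
print(liquid_welfare([10, 4], [15, 100], 5))  # → 29.0
```

Note how the budget cap is exactly what makes liquid welfare approximable: unlike social welfare, no single player's uncapped value can dominate the benchmark.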
Unknown I.I.D. Prophets: Better Bounds, Streaming Algorithms, and a New Impossibility
A prophet inequality states, for some α ∈ [0, 1], that the expected value achievable by a gambler who
sequentially observes random variables X1, . . . , Xn and selects one of them is at least an α fraction
of the maximum value in the sequence. We obtain three distinct improvements for a setting that
was first studied by Correa et al. (EC, 2019) and is particularly relevant to modern applications in
algorithmic pricing. In this setting, the random variables are i.i.d. from an unknown distribution and
the gambler has access to an additional βn samples for some β ≥ 0. We first give improved lower
bounds on α for a wide range of values of β; specifically, α ≥ (1 + β)/e when β ≤ 1/(e − 1), which is
tight, and α ≥ 0.648 when β = 1, which improves on a bound of around 0.635 due to Correa et al.
(SODA, 2020). Adding to their practical appeal, specifically in the context of algorithmic pricing,
we then show that the new bounds can be obtained even in a streaming model of computation
and thus in situations where the use of relevant data is complicated by the sheer amount of data
available. We finally establish that the upper bound of 1/e for the case without samples is robust
to additional information about the distribution, and applies also to sequences of i.i.d. random
variables whose distribution is itself drawn, according to a known distribution, from a finite set of
known candidate distributions. This implies a tight prophet inequality for exchangeable sequences
of random variables, answering a question of Hill and Kertz (Contemporary Mathematics, 1992),
but leaves open the possibility of better guarantees when the number of
candidate distributions is small, a setting we believe is of strong interest to
applications.
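A Monte Carlo sketch of the kind of rule that drives the β = 1 regime discussed above: draw n samples, set the threshold to their maximum, and accept the first online value exceeding it (taking the last value otherwise). This is an illustration of the sample-based approach, not the paper's tight algorithm; the distribution and parameters are invented.

```python
import random

# Empirical competitive ratio of the "max of n samples as threshold" rule
# for i.i.d. values from an unknown distribution (here: uniform on [0, 1]).
# Not the paper's optimal algorithm -- a sketch of the sample-based idea.

def sample_threshold_gambler(n, draw, trials, rng):
    got, best = 0.0, 0.0
    for _ in range(trials):
        samples = [draw(rng) for _ in range(n)]   # the beta*n = n samples
        online = [draw(rng) for _ in range(n)]    # the sequence itself
        threshold = max(samples)
        pick = next((x for x in online if x > threshold), online[-1])
        got += pick
        best += max(online)
    return got / best  # gambler's value vs. prophet's value

rng = random.Random(0)
ratio = sample_threshold_gambler(50, lambda r: r.random(), 5000, rng)
print(round(ratio, 3))  # empirically well above the no-sample bound of 1/e
```

For the uniform distribution this simple rule already beats 1/e by a wide margin, consistent with the abstract's claim that n samples give α ≥ (1 + β)/e and beyond.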
Group Strategyproof Pareto-Stable Marriage with Indifferences via the Generalized Assignment Game
We study the variant of the stable marriage problem in which the preferences
of the agents are allowed to include indifferences. We present a mechanism for
producing Pareto-stable matchings in stable marriage markets with indifferences
that is group strategyproof for one side of the market. Our key technique
involves modeling the stable marriage market as a generalized assignment game.
We also show that our mechanism can be implemented efficiently. These results
can be extended to the college admissions problem with indifferences.
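For readers unfamiliar with stable matching, the following is the classic man-proposing deferred-acceptance (Gale-Shapley) algorithm for strict preferences. It is emphatically not the paper's mechanism, which handles indifferences via a generalized assignment game, but it shows the baseline stable-matching computation the paper strengthens. All agent names are invented.

```python
# Classic deferred acceptance for STRICT preferences (background only; the
# paper's mechanism for preferences with indifferences is different).

def deferred_acceptance(men_prefs, women_prefs):
    """men_prefs/women_prefs: dicts mapping each agent to a preference list,
    most preferred first. Returns a stable matching as {woman: man}."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    free = list(men_prefs)             # men not yet tentatively matched
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}                       # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:   # w prefers the newcomer
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)               # rejected; m proposes again later
    return engaged

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(deferred_acceptance(men, women))  # → {'w1': 'm2', 'w2': 'm1'}
```

Man-proposing deferred acceptance is group strategyproof for the proposing side when preferences are strict; the paper's contribution is to retain one-sided group strategyproofness while also guaranteeing Pareto-stability under indifferences.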
Regulation of early steps of GPVI signal transduction by phosphatases: a systems biology approach
We present a data-driven mathematical model of a key initiating step in platelet activation, a central process in the prevention of bleeding following injury. In vascular disease, this process is activated inappropriately and causes thrombosis, heart attacks and stroke. The collagen receptor GPVI is the primary trigger for platelet activation at sites of injury. Understanding the complex molecular mechanisms initiated by this receptor is important for the development of more effective antithrombotic medicines. In this work we developed a series of nonlinear ordinary differential equation models that are direct representations of biological hypotheses surrounding the initial steps in GPVI-stimulated signal transduction. At each stage, model simulations were compared to our own quantitative, high-temporal-resolution experimental data, which guided further experimental design, data collection and model refinement. Much is known about the linear forward reactions within platelet signalling pathways, but the roles of putative reverse reactions are poorly understood. An initial model, which included a simple constitutively active phosphatase, was unable to explain the experimental data. Model revisions incorporating a complex pathway of interactions (and specifically the phosphatase TULA-2) provided a good description of the experimental data, both for observations of phosphorylation in samples from one donor and for those from a wider population. Our model was used to investigate the levels of proteins involved in regulating the pathway and the effect of the low GPVI levels that have been associated with disease. Results indicate a clear separation between healthy and GPVI-deficient states with respect to the signalling cascade dynamics associated with Syk tyrosine phosphorylation and activation.
Our approach reveals the central importance of this negative feedback pathway, which results in the temporal regulation of a specific class of protein tyrosine phosphatases, controlling the rate, and therefore the extent, of GPVI-stimulated platelet activation.
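To illustrate the negative-feedback structure described above, the toy model below pairs a phosphorylation signal with an induced phosphatase that dephosphorylates it, producing the transient peak-and-decay dynamics the abstract describes. This is an illustrative two-variable sketch, not the paper's fitted TULA-2 model; all rate constants are arbitrary.

```python
# Toy negative-feedback ODE pair (NOT the paper's model): a phospho-protein
# p is driven up by a kinase and induces a phosphatase that removes the
# phosphorylation, so p peaks and then declines. Forward-Euler integration.

def simulate(k_phos=1.0, k_dephos=2.0, k_induce=0.5, k_decay=0.1,
             dt=0.01, t_end=20.0):
    p, phosphatase = 0.0, 0.0        # phospho-protein, feedback phosphatase
    trace = []
    for _ in range(int(t_end / dt)):
        dp = k_phos * (1.0 - p) - k_dephos * phosphatase * p
        dph = k_induce * p - k_decay * phosphatase
        p += dp * dt
        phosphatase += dph * dt
        trace.append(p)
    return trace

trace = simulate()
peak = max(trace)
print(round(peak, 3), round(trace[-1], 3))  # transient peak, then decline
```

Even this minimal feedback loop reproduces the qualitative signature in the data: the rate of phosphatase induction, not just its abundance, sets how sharply the phosphorylation signal is curtailed.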
Differential Expression of miRNAs in Response to Topping in Flue-Cured Tobacco (Nicotiana tabacum) Roots
Topping is an important cultivation measure for flue-cured tobacco, and many genes have been found to be differentially expressed in response to topping, but it is still unclear how these genes are regulated. MiRNAs play a critical role in post-transcriptional gene regulation, so we sequenced two sRNA libraries from tobacco roots before and after topping, with a view to exploring transcriptional differences in miRNAs. Solexa high-throughput sequencing of tobacco small RNAs revealed a total of 12,104,207 and 11,292,018 reads, representing 3,633,398 and 3,084,102 distinct sequences before and after topping, respectively. The expression of 136 conserved miRNAs (belonging to 32 families) and 126 new miRNAs (belonging to 77 families) was determined. There were three major conserved miRNA families (nta-miR156, nta-miR172 and nta-miR171) and two major new miRNA families (nta-miRn2 and nta-miRn26). All of the identified miRNAs can be folded into characteristic miRNA stem-loop hairpin secondary structures, and qRT-PCR was adopted to validate and measure the expression of miRNAs. Putative targets were identified for 133 of the 136 conserved miRNAs and for the 126 new miRNAs. Of the miRNAs whose targets were identified, those that changed markedly (>2-fold) belong to 53 families, and their targets have diverse biological functions, including development, response to stress, response to hormones, N metabolism, C metabolism, signal transduction, nucleic acid metabolism and other metabolism. Some interesting targets for miRNAs were identified. Differential expression profiles of miRNAs were thus observed in flue-cured tobacco roots before and after topping, and these miRNAs can be expected to regulate transcripts distinctly involved in the response to topping.
Further identification of these differentially expressed miRNAs and their targets would allow a better understanding of the regulatory mechanisms of the flue-cured tobacco response to topping.
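The ">2-fold" criterion used above is a standard differential-expression filter. The sketch below shows one common way to apply it to normalized read counts; the counts, miRNA names and pseudocount choice here are invented for illustration and are not taken from the study.

```python
import math

# Hedged sketch of a >2-fold differential-expression filter on normalized
# read counts before vs. after topping. A pseudocount avoids division by
# zero for miRNAs absent from one library. Data below are invented.

def differentially_expressed(before, after, fold=2.0, pseudo=1.0):
    """Return miRNAs whose expression changes by more than `fold` in either
    direction, mapped to their signed log2 fold change."""
    hits = {}
    for mirna in before:
        ratio = (after[mirna] + pseudo) / (before[mirna] + pseudo)
        if ratio > fold or ratio < 1.0 / fold:
            hits[mirna] = round(math.log2(ratio), 2)
    return hits

before = {"nta-miR156": 120.0, "nta-miR172": 35.0, "nta-miRn2": 4.0}
after = {"nta-miR156": 900.0, "nta-miR172": 40.0, "nta-miRn2": 0.0}
print(differentially_expressed(before, after))
# nta-miR156 is up-regulated, nta-miRn2 down-regulated; nta-miR172 passes
```

In practice, read counts would first be normalized for library size (the two libraries here differ in total reads), and a statistical test would accompany the fold-change cutoff.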
Prophet Inequalities for I.I.D. Random Variables from an Unknown Distribution
A central object in optimal stopping theory is the single-choice prophet
inequality for independent, identically distributed random variables: Given a
sequence of random variables X1, . . . , Xn drawn independently from a
distribution F, the goal is to choose a stopping time τ so as to maximize α
such that for all distributions F we have E[X_τ] ≥ α · E[max_t X_t]. What makes
this problem challenging is that the decision whether τ = t may only depend on
the values of the random variables X1, . . . , Xt and on the distribution F.
For quite some time the best known bound for the problem was 1 − 1/e ≈ 0.632
[Hill and Kertz, 1982]. Only recently this bound was improved by Abolhassani et
al. [2017], and a tight bound of ≈ 0.745 was obtained by Correa et al. [2017].
The case where F is unknown, such that the decision whether τ = t may depend
only on the values of the first t random variables but not on F, is equally
well motivated (e.g., [Azar et al., 2014]) but has received much less
attention. A straightforward guarantee of 1/e ≈ 0.368 for this case can be
derived from the solution to the secretary problem. Our main result is that
this bound is tight. Motivated by this impossibility result we investigate the
case where the stopping time may additionally depend on a limited number of
samples from F. An extension of our main result shows that α ≤ 1/e even with
o(n) samples, so that the interesting case is the one with Ω(n) samples. Here
we show that n samples allow for a significant improvement over the secretary
problem, while O(n²) samples are equivalent to knowledge of the distribution:
specifically, with n samples α ≥ 1 − 1/e ≈ 0.632 and α ≤ ln 2 ≈ 0.693, and
with O(n²) samples α ≥ 0.745 − ε for any ε > 0.
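The 1/e guarantee for the sample-free case comes from the secretary rule: observe the first n/e values without stopping, then accept the first value beating the best seen so far. The Monte Carlo sketch below estimates its competitive ratio for an invented distribution (uniform on [0, 1]); it is an illustration of the baseline rule, not of the paper's matching impossibility proof.

```python
import math
import random

# Secretary-style stopping rule behind the 1/e guarantee when the
# distribution F is unknown and no samples are available.

def secretary_rule(values):
    """Skip the first n/e values, then take the first running maximum
    (or the last value if no later value beats the observed prefix)."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff]) if cutoff else float("-inf")
    for x in values[cutoff:]:
        if x > best_seen:
            return x
    return values[-1]

rng = random.Random(1)
n, trials = 50, 5000
got = best = 0.0
for _ in range(trials):
    xs = [rng.random() for _ in range(n)]   # i.i.d.; unknown to the rule
    got += secretary_rule(xs)
    best += max(xs)
print(round(got / best, 3))  # the worst-case guarantee is 1/e ~ 0.368
```

For a benign distribution like the uniform the empirical ratio is far above 1/e; the paper's main result is that over all distributions no rule can beat 1/e without samples.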