
    Fair and Efficient Online Allocations with Normalized Valuations

    A set of divisible resources becomes available over a sequence of rounds and needs to be allocated immediately and irrevocably. Our goal is to distribute these resources to maximize fairness and efficiency. Achieving any non-trivial guarantees in an adversarial setting is impossible. However, we show that normalizing the agent values, a very common assumption in fair division, allows us to escape this impossibility. Our main result is an online algorithm for the case of two agents that ensures the outcome is envy-free while guaranteeing 91.6% of the optimal social welfare. We also show that this is near-optimal: no envy-free algorithm can guarantee more than 93.3% of the optimal social welfare.
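
    As a quick illustration of the setting (not the paper's algorithm), the sketch below checks envy-freeness and the achieved fraction of the optimal utilitarian welfare for a fractional allocation between two agents with normalized additive valuations; the valuations and the allocation are made up for the example.

```python
# Illustrative sketch: two agents, divisible resources, additive valuations
# normalized so that each agent's values sum to 1.

def utility(values, shares):
    """Additive utility: sum over resources of value * fraction received."""
    return sum(v * s for v, s in zip(values, shares))

def is_envy_free(v1, v2, shares1):
    """Envy-free: each agent values her own bundle at least as much as the
    other agent's bundle (fractions of each resource sum to 1)."""
    shares2 = [1 - s for s in shares1]
    return (utility(v1, shares1) >= utility(v1, shares2) and
            utility(v2, shares2) >= utility(v2, shares1))

def welfare_ratio(v1, v2, shares1):
    """Fraction of the optimal utilitarian welfare achieved. With divisible
    resources, the optimum gives each resource to whoever values it more."""
    shares2 = [1 - s for s in shares1]
    achieved = utility(v1, shares1) + utility(v2, shares2)
    optimal = sum(max(a, b) for a, b in zip(v1, v2))
    return achieved / optimal

# Hypothetical normalized valuations over 3 resources.
v1 = [0.5, 0.3, 0.2]
v2 = [0.2, 0.3, 0.5]
shares1 = [1.0, 0.5, 0.0]              # agent 1's fraction of each resource
print(is_envy_free(v1, v2, shares1))   # True
print(welfare_ratio(v1, v2, shares1))  # 1.0 here; the paper's online guarantee is >= 0.916
```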

    Strategyproof Scheduling with Predictions

    In their seminal paper that initiated the field of algorithmic mechanism design, Nisan and Ronen [Noam Nisan and Amir Ronen, 1999] studied the problem of designing strategyproof mechanisms for scheduling jobs on unrelated machines aiming to minimize the makespan. They provided a strategyproof mechanism that achieves an n-approximation and they made the bold conjecture that this is the best approximation achievable by any deterministic strategyproof scheduling mechanism. After more than two decades and several efforts, n remains the best known approximation, and very recent work by Christodoulou et al. [George Christodoulou et al., 2021] has been able to prove an Ω(√n) approximation lower bound for all deterministic strategyproof mechanisms. This strong negative result, however, heavily depends on the fact that the performance of these mechanisms is evaluated using worst-case analysis. To overcome such overly pessimistic, and often uninformative, worst-case bounds, a surge of recent work has focused on the "learning-augmented framework", whose goal is to leverage machine-learned predictions to obtain improved approximations when these predictions are accurate (consistency), while also achieving near-optimal worst-case approximations even when the predictions are arbitrarily wrong (robustness). In this work, we study the classic strategic scheduling problem of Nisan and Ronen [Noam Nisan and Amir Ronen, 1999] using the learning-augmented framework and give a deterministic polynomial-time strategyproof mechanism that is 6-consistent and 2n-robust. We thus achieve the "best of both worlds": an O(1) consistency and an O(n) robustness that asymptotically matches the best-known approximation. We then extend this result to provide more general worst-case approximation guarantees as a function of the prediction error. Finally, we complement our positive results by showing that any 1-consistent deterministic strategyproof mechanism has unbounded robustness.
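
    To make the objective concrete, here is a small sketch (not the paper's mechanism) that computes the makespan of a job-to-machine assignment on unrelated machines and, for tiny instances, the optimal makespan by brute force; the processing-time matrix is invented for illustration.

```python
# Illustrative sketch: makespan on unrelated machines, where t[i][j] is
# machine i's processing time for job j.
from itertools import product

def makespan(t, assignment):
    """assignment[j] = machine that runs job j; the makespan is the maximum load."""
    loads = [0.0] * len(t)
    for j, i in enumerate(assignment):
        loads[i] += t[i][j]
    return max(loads)

def optimal_makespan(t):
    """Exhaustively try all assignments (feasible only for very small inputs)."""
    n, m = len(t), len(t[0])
    return min(makespan(t, a) for a in product(range(n), repeat=m))

# Hypothetical instance: 2 machines, 3 jobs.
t = [[4, 1, 3],   # machine 0's times for jobs 0..2
     [2, 5, 2]]   # machine 1's times for jobs 0..2

# Assign each job to the machine that processes it fastest.
greedy = [min(range(len(t)), key=lambda i: t[i][j]) for j in range(3)]
print(makespan(t, greedy), optimal_makespan(t))
```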

    Getting More by Knowing Less: Bayesian Incentive Compatible Mechanisms for Fair Division

    We study fair resource allocation with strategic agents. It is well-known that, across multiple fundamental problems in this domain, truthfulness and fairness are incompatible. For example, when allocating indivisible goods, there is no truthful and deterministic mechanism that guarantees envy-freeness up to one item (EF1), even for two agents with additive valuations. Or, in cake-cutting, no truthful and deterministic mechanism always outputs a proportional allocation, even for two agents with piecewise-constant valuations. Our work stems from the observation that, in the context of fair division, truthfulness is used as a synonym for Dominant Strategy Incentive Compatibility (DSIC), requiring that an agent prefers reporting the truth, no matter what other agents report. In this paper, we instead focus on Bayesian Incentive Compatible (BIC) mechanisms, requiring that agents are better off reporting the truth in expectation over other agents' reports. We prove that, when agents know a bit less about each other, a lot more is possible: using BIC mechanisms we can overcome the aforementioned barriers that DSIC mechanisms face in both the fundamental problems of allocation of indivisible goods and cake-cutting. We prove that this is the case even for an arbitrary number of agents, as long as the agents' priors about each other's types satisfy a neutrality condition. En route to our results on BIC mechanisms, we also strengthen the state of the art in terms of negative results for DSIC mechanisms.
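
    For reference, the sketch below checks the EF1 property mentioned above for an allocation of indivisible goods under additive valuations; it is a generic illustration rather than one of the paper's mechanisms, and the valuations are hypothetical.

```python
# Illustrative sketch: envy-freeness up to one item (EF1) with additive valuations.

def value(vals, bundle):
    """Additive value of a bundle of goods for one agent."""
    return sum(vals[g] for g in bundle)

def is_ef1(valuations, allocation):
    """allocation[i] is the set of goods given to agent i. Agent i is satisfied
    toward agent k if removing some single good from k's bundle removes the envy."""
    for i, vi in enumerate(valuations):
        for k, bundle_k in enumerate(allocation):
            if i == k:
                continue
            own = value(vi, allocation[i])
            if own >= value(vi, bundle_k):
                continue  # no envy toward agent k
            if not any(own >= value(vi, bundle_k) - vi[g] for g in bundle_k):
                return False
    return True

# Agent 0 envies agent 1's bundle, but dropping good 0 from it removes the envy.
valuations = [[6, 1, 1], [2, 2, 2]]
allocation = [{1, 2}, {0}]
print(is_ef1(valuations, allocation))  # True
```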

    Bilirubin Restrains the Anticancer Effect of Vemurafenib on BRAF-Mutant Melanoma Cells Through ERK-MNK1 Signaling

    Melanoma, the most threatening skin cancer, is considered to be driven by the carcinogenic RAF-MEK1/2-ERK1/2 signaling pathway. In skin melanomas, this pathway is most often dysregulated by mutations in BRAF or RAS. Although inhibitors targeting mutant BRAF, such as vemurafenib, have improved the clinical outcome of melanoma patients with BRAF mutations, the efficacy of vemurafenib is limited in many patients. Here, we show that blood bilirubin in patients with BRAF-mutant melanoma treated with vemurafenib is negatively correlated with clinical outcomes. In vitro and animal experiments show that bilirubin can abrogate vemurafenib-induced growth suppression of BRAF-mutant melanoma cells. Moreover, bilirubin can remarkably rescue cells from vemurafenib-induced apoptosis. Mechanistically, activation of the ERK-MNK1 axis is required for the reversal effects induced by bilirubin after vemurafenib treatment. Our findings not only demonstrate that bilirubin is an unfavorable factor for patients with BRAF-mutant melanoma who received vemurafenib treatment, but also uncover the underlying mechanism by which bilirubin restrains the anticancer effect of vemurafenib on BRAF-mutant melanoma cells.

    EFx Budget-Feasible Allocations with High Nash Welfare

    We study the problem of allocating indivisible items to budget-constrained agents, aiming to provide fairness and efficiency guarantees. Specifically, our goal is to ensure that the resulting allocation is envy-free up to any item (EFx) while minimizing the amount of inefficiency that this needs to introduce. We first show that there exist two-agent problem instances for which no EFx allocation is Pareto efficient. We, therefore, turn to approximation and use the Nash social welfare maximizing allocation as a benchmark. For two-agent instances, we provide a procedure that always returns an EFx allocation while achieving the best possible approximation of the optimal Nash social welfare that EFx allocations can achieve. For the more complicated case of three-agent instances, we provide a procedure that guarantees EFx, while achieving a constant approximation of the optimal Nash social welfare for any number of items.
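
    The sketch below illustrates the two benchmarks used above, EFx and Nash social welfare, for additive valuations; it ignores the budget constraints studied in the paper and uses made-up valuations, so it is purely illustrative.

```python
# Illustrative sketch: EFx check and Nash social welfare for additive valuations.
from math import prod

def value(vals, bundle):
    """Additive value of a bundle of goods for one agent."""
    return sum(vals[g] for g in bundle)

def nash_welfare(valuations, allocation):
    """Product of the agents' utilities for their own bundles."""
    return prod(value(vi, bundle) for vi, bundle in zip(valuations, allocation))

def is_efx(valuations, allocation):
    """EFx: agent i must not envy agent k's bundle even after removing *any*
    single good from that bundle."""
    for i, vi in enumerate(valuations):
        for k, bundle_k in enumerate(allocation):
            if i == k or not bundle_k:
                continue
            own = value(vi, allocation[i])
            if any(own < value(vi, bundle_k) - vi[g] for g in bundle_k):
                return False
    return True

# Hypothetical instance: 2 agents, 4 goods.
valuations = [[4, 3, 2, 1], [1, 2, 3, 4]]
allocation = [{0, 1}, {2, 3}]
print(is_efx(valuations, allocation), nash_welfare(valuations, allocation))  # True 49
```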

    Optimal Metric Distortion with Predictions

    In the metric distortion problem there is a set of candidates and a set of voters, all residing in the same metric space. The objective is to choose a candidate with minimum social cost, defined as the total distance of the chosen candidate from all voters. The challenge is that the algorithm receives only ordinal input from each voter, in the form of a ranked list of candidates in non-decreasing order of their distances from her, whereas the objective function is cardinal. The distortion of an algorithm is its worst-case approximation factor with respect to the optimal social cost. A series of papers culminated in a 3-distortion algorithm, which is tight with respect to all deterministic algorithms. Aiming to overcome the limitations of worst-case analysis, we revisit the metric distortion problem through the learning-augmented framework, where the algorithm is provided with some prediction regarding the optimal candidate. The quality of this prediction is unknown, and the goal is to evaluate the performance of the algorithm under an accurate prediction (known as consistency), while simultaneously providing worst-case guarantees even for arbitrarily inaccurate predictions (known as robustness). For our main result, we characterize the robustness-consistency Pareto frontier for the metric distortion problem. We first identify an inevitable trade-off between robustness and consistency. We then devise a family of learning-augmented algorithms that achieves any desired robustness-consistency pair on this Pareto frontier. Furthermore, we provide a more refined analysis of the distortion bounds as a function of the prediction error (with consistency and robustness being two extremes). Finally, we also prove distortion bounds that integrate the notion of α-decisiveness, which quantifies the extent to which a voter prefers her favorite candidate relative to the rest.
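
    As a concrete illustration of the objective (not the paper's learning-augmented algorithm), the sketch below computes a candidate's social cost and the resulting distortion when the full cardinal distances are known; a real metric distortion algorithm only sees each voter's ordinal ranking, and the distances here are hypothetical.

```python
# Illustrative sketch: social cost and distortion of a chosen candidate,
# given full knowledge of the voter-candidate distances.

def social_cost(dist, candidate):
    """Total distance of `candidate` from all voters; dist[v][c] is voter v's
    distance to candidate c."""
    return sum(row[candidate] for row in dist)

def distortion(dist, chosen):
    """Cost of the chosen candidate divided by the optimal social cost."""
    costs = [social_cost(dist, c) for c in range(len(dist[0]))]
    return social_cost(dist, chosen) / min(costs)

# 3 voters and 2 candidates on a line: voters at 0, 0, 10; candidates at 1 and 6.
dist = [[1, 6],
        [1, 6],
        [9, 4]]
print(distortion(dist, chosen=1))  # candidate 1 costs 16, the optimum is 11 -> ~1.45
```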