
    Random assignment with multi-unit demands

    We consider the multi-unit random assignment problem in which agents express preferences over objects and objects are allocated to agents randomly based on these preferences. The most well-established preference relation for comparing random allocations of objects is stochastic dominance (SD), which also leads to corresponding notions of envy-freeness, efficiency, and weak strategyproofness. We show that there exists no rule that is anonymous, neutral, efficient, and weak strategyproof. For single-unit random assignment, we show that there exists no rule that is anonymous, neutral, efficient, and weak group-strategyproof. We then study a generalization of the PS (probabilistic serial) rule called multi-unit-eating PS and prove that multi-unit-eating PS satisfies envy-freeness, weak strategyproofness, and unanimity.
    Comment: 17 pages
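    The abstract builds on the single-unit probabilistic serial (PS) rule, in which all agents simultaneously "eat" their most-preferred available object at unit speed. The sketch below is a minimal implementation of that classical single-unit rule only, not of the multi-unit-eating generalization studied in the paper; the function and variable names are illustrative.

    ```python
    from fractions import Fraction

    def probabilistic_serial(prefs):
        """Single-unit probabilistic serial (eating) rule.

        prefs[i] is agent i's strict preference list over objects, most
        preferred first.  Returns the fractional allocation alloc[i][o],
        i.e. the probability share of object o assigned to agent i."""
        agents = range(len(prefs))
        supply = {o: Fraction(1) for p in prefs for o in p}   # remaining capacity
        alloc = {i: {o: Fraction(0) for o in supply} for i in agents}
        clock = Fraction(0)
        while clock < 1 and any(s > 0 for s in supply.values()):
            # Each agent eats its most-preferred object that is not yet exhausted.
            target = {}
            for i in agents:
                for o in prefs[i]:
                    if supply[o] > 0:
                        target[i] = o
                        break
            if not target:
                break
            eaters = {o: sum(1 for i in target if target[i] == o) for o in set(target.values())}
            # Advance the clock until some eaten object runs out (or total eating time reaches 1).
            dt = min([supply[o] / eaters[o] for o in eaters] + [1 - clock])
            for i, o in target.items():
                alloc[i][o] += dt
                supply[o] -= dt
            clock += dt
        return alloc

    # Example: two agents with opposed preferences -> each gets its favourite with probability 1.
    print(probabilistic_serial([["a", "b"], ["b", "a"]]))
    ```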

    Fair assignment of indivisible objects under ordinal preferences

    We consider the discrete assignment problem in which agents express ordinal preferences over objects and these objects are allocated to the agents in a fair manner. We use the stochastic dominance relation between fractional or randomized allocations to systematically define varying notions of proportionality and envy-freeness for discrete assignments. The computational complexity of checking whether a fair assignment exists is studied for these fairness notions. We also characterize the conditions under which a fair assignment is guaranteed to exist. For a number of fairness concepts, polynomial-time algorithms are presented to check whether a fair assignment exists. Our algorithmic results also extend to the case of unequal entitlements of agents. Our NP-hardness result, which holds for several variants of envy-freeness, answers an open question posed by Bouveret, Endriss, and Lang (ECAI 2010). We also propose fairness concepts that always suggest a non-empty set of assignments with meaningful fairness properties. Among these concepts, optimal proportionality and optimal weak proportionality appear to be desirable fairness concepts.
    Comment: extended version of a paper presented at AAMAS 201
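    The fairness notions in the abstract are defined via the stochastic dominance (SD) relation: one allocation weakly SD-dominates another for an agent if, for every prefix of the agent's preference list, it gives at least as much total share to that prefix. The following sketch shows only this basic comparison and an SD envy-freeness check built from it; the paper's full hierarchy of weak/possible/necessary variants is not reproduced, and the function names are illustrative.

    ```python
    def sd_at_least(pref, p, q):
        """True iff allocation p weakly SD-dominates allocation q with respect to
        the strict ordinal preference list `pref` (most preferred first).
        p and q map objects to shares: 0/1 for discrete assignments,
        fractions in [0, 1] for randomized ones."""
        cum_p = cum_q = 0
        for o in pref:
            cum_p += p.get(o, 0)
            cum_q += q.get(o, 0)
            if cum_p < cum_q:
                return False
        return True

    def sd_envy_free(prefs, alloc):
        """SD envy-freeness: every agent's own bundle weakly SD-dominates every
        other agent's bundle under the agent's own preferences."""
        return all(sd_at_least(prefs[i], alloc[i], alloc[j])
                   for i in prefs for j in prefs if i != j)

    # Example: two agents with opposed preferences, each holding its favourite object.
    prefs = {"1": ["a", "b"], "2": ["b", "a"]}
    alloc = {"1": {"a": 1}, "2": {"b": 1}}
    print(sd_envy_free(prefs, alloc))  # True
    ```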

    An information theory for preferences

    Recent literature from the last Maximum Entropy workshop introduced an analogy between cumulative probability distributions and normalized utility functions. Based on this analogy, a utility density function can be defined as the derivative of a normalized utility function. A utility density function is non-negative and integrates to unity. These two properties form the basis of a correspondence between utility and probability. A natural application of this analogy is a maximum entropy principle to assign maximum entropy utility values. Maximum entropy utility interprets many of the common utility functions based on the preference information needed for their assignment, and helps assign utility values based on partial preference information. This paper reviews maximum entropy utility and introduces further results that stem from the duality between probability and utility.
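    The probability-utility analogy in the abstract can be illustrated numerically: a normalized utility function plays the role of a CDF and its derivative, the utility density, behaves like a pdf. The sketch below is a minimal illustration under assumed inputs (the exponential utility curve and the outcome grid are made up for the example); it is not code from the paper. It also shows the standard maximum-entropy fact that, with no information beyond the outcome bounds, the maximum entropy density is uniform, which corresponds to a linear (risk-neutral) utility.

    ```python
    import numpy as np

    x = np.linspace(0.0, 10.0, 1001)        # hypothetical outcome range
    U = 1.0 - np.exp(-0.3 * x)              # hypothetical increasing utility curve
    U = (U - U[0]) / (U[-1] - U[0])         # normalize: U(x_min) = 0, U(x_max) = 1

    u_density = np.gradient(U, x)           # "utility density" = derivative of U
    print(bool(np.all(u_density >= 0)))     # True: non-negative, like a pdf
    print(float(np.trapz(u_density, x)))    # ~1.0: integrates to unity, like a pdf

    # With only the bounds known, the maximum-entropy utility density is uniform,
    # i.e. the normalized utility is linear (risk neutral).
    maxent_density = np.full_like(x, 1.0 / (x[-1] - x[0]))
    maxent_utility = (x - x[0]) / (x[-1] - x[0])
    ```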

    Cake Cutting Algorithms for Piecewise Constant and Piecewise Uniform Valuations

    Cake cutting is one of the most fundamental settings in fair division and mechanism design without money. In this paper, we consider different levels of three fundamental goals in cake cutting: fairness, Pareto optimality, and strategyproofness. In particular, we present robust versions of envy-freeness and proportionality that are not only stronger than their standard counterparts but also have weaker information requirements. We then focus on cake cutting with piecewise constant valuations and present three desirable algorithms: CCEA (Controlled Cake Eating Algorithm), MEA (Market Equilibrium Algorithm), and CSD (Constrained Serial Dictatorship). CCEA is polynomial-time, robust envy-free, and non-wasteful. It relies on parametric network flows and recent generalizations of the probabilistic serial algorithm. For the subdomain of piecewise uniform valuations, we show that it is also group-strategyproof. Then, we show that there exists an algorithm (MEA) that is polynomial-time, envy-free, proportional, and Pareto optimal. MEA is based on computing a market-based equilibrium via a convex program and relies on the results of Reijnierse and Potters [24] and Devanur et al. [15]. Moreover, we show that MEA and CCEA are equivalent to mechanism 1 of Chen et al. [12] for piecewise uniform valuations. We then present an algorithm CSD and a way to implement it via randomization that satisfies strategyproofness in expectation, robust proportionality, and unanimity for piecewise constant valuations. For the case of two agents, it is robust envy-free, robust proportional, strategyproof, and polynomial-time. Many of our results extend to more general settings in cake cutting that allow for variable claims and initial endowments. We also show a few impossibility results to complement our algorithms.
    Comment: 39 pages
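    CCEA, MEA, and CSD themselves rely on parametric network flows, a convex program, and randomization, respectively, and are not reproduced here. The sketch below only illustrates the input they operate on, namely piecewise constant valuations over the unit cake, together with the basic proportionality test that the robust notions strengthen. The data and names are hypothetical.

    ```python
    def piece_value(pieces, a, b):
        """Value of the interval [a, b) under a piecewise constant valuation.
        `pieces` is a list of (start, end, density) triples with constant
        density on each piece, covering the unit cake [0, 1)."""
        total = 0.0
        for start, end, density in pieces:
            lo, hi = max(a, start), min(b, end)
            if lo < hi:
                total += density * (hi - lo)
        return total

    # Hypothetical valuations of two agents over the unit cake.
    alice = [(0.0, 0.5, 2.0), (0.5, 1.0, 0.0)]   # only values the left half
    bob = [(0.0, 1.0, 1.0)]                      # uniform over the whole cake

    # Proportionality of the cut at 0.5: with 2 agents, each piece must be worth
    # at least half of its owner's value for the whole cake.
    print(piece_value(alice, 0.0, 0.5) >= piece_value(alice, 0.0, 1.0) / 2)  # True
    print(piece_value(bob, 0.5, 1.0) >= piece_value(bob, 0.0, 1.0) / 2)      # True
    ```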

    Assortment optimisation under a general discrete choice model: A tight analysis of revenue-ordered assortments

    The assortment problem in revenue management is the problem of deciding which subset of products to offer to consumers in order to maximise revenue. A simple and natural strategy is to select the best assortment out of all those that are constructed by fixing a threshold revenue π and then choosing all products with revenue at least π. This is known as the revenue-ordered assortments strategy. In this paper we study the approximation guarantees provided by revenue-ordered assortments when customers are rational in the following sense: the probability of selecting a specific product from the set being offered cannot increase if the set is enlarged. This rationality assumption, known as regularity, is satisfied by almost all discrete choice models considered in the revenue management and choice theory literature, and in particular by random utility models. The bounds we obtain are tight and improve on recent results in that direction, such as for the Mixed Multinomial Logit model by Rusmevichientong et al. (2014). An appealing feature of our analysis is its simplicity, as it relies only on the regularity condition. We also draw a connection between assortment optimisation and two pricing problems called unit demand envy-free pricing and Stackelberg minimum spanning tree: these problems can be restated as assortment problems under discrete choice models satisfying the regularity condition, and moreover revenue-ordered assortments then correspond to the well-studied uniform pricing heuristic. When specialised to that setting, the general bounds we establish for revenue-ordered assortments match and unify the best known results on uniform pricing.
    Comment: Minor changes following referees' comments
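    The revenue-ordered strategy the abstract analyses is straightforward to state in code: try every threshold revenue π, offer every product whose revenue is at least π, and keep the best resulting assortment. The sketch below assumes a caller-supplied choice model given as a `choice_probs` callable (any model satisfying regularity would do; the multinomial logit helper shown is just one hypothetical instance); it illustrates the heuristic, not the paper's analysis.

    ```python
    def revenue_ordered_assortment(revenues, choice_probs):
        """Best revenue-ordered assortment.

        `revenues` maps each product to its revenue.  `choice_probs(assortment)`
        must return purchase probabilities for the offered set under some
        discrete choice model (any remaining probability is the no-purchase
        option).  Each threshold pi induces the assortment of all products with
        revenue >= pi; the best one is returned with its expected revenue."""
        best_set, best_rev = frozenset(), 0.0
        for pi in sorted(set(revenues.values()), reverse=True):
            assortment = frozenset(p for p, r in revenues.items() if r >= pi)
            probs = choice_probs(assortment)
            expected = sum(revenues[p] * probs.get(p, 0.0) for p in assortment)
            if expected > best_rev:
                best_set, best_rev = assortment, expected
        return best_set, best_rev

    def mnl_choice_probs(weights, no_purchase_weight=1.0):
        """Hypothetical multinomial logit model (one example of a regular choice model)."""
        def choice_probs(assortment):
            denom = no_purchase_weight + sum(weights[p] for p in assortment)
            return {p: weights[p] / denom for p in assortment}
        return choice_probs

    revenues = {"cheap": 1.0, "premium": 4.0}
    print(revenue_ordered_assortment(revenues, mnl_choice_probs({"cheap": 2.0, "premium": 1.0})))
    # (frozenset({'premium'}), 2.0)
    ```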

    The Pareto Frontier for Random Mechanisms

    We study the trade-offs between strategyproofness and other desiderata, such as efficiency or fairness, that often arise in the design of random ordinal mechanisms. We use approximate strategyproofness to define manipulability, a measure to quantify the incentive properties of non-strategyproof mechanisms, and we introduce the deficit, a measure to quantify the performance of mechanisms with respect to another desideratum. When this desideratum is incompatible with strategyproofness, mechanisms that trade off manipulability and deficit optimally form the Pareto frontier. Our main contribution is a structural characterization of this Pareto frontier, and we present algorithms that exploit this structure to compute it. To illustrate its shape, we apply our results to two different desiderata, namely Plurality and Veto scoring, in settings with 3 alternatives and up to 18 agents.
    Comment: Working paper
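    The paper's contribution is a structural characterization of the frontier itself; the sketch below is not that, only a generic illustration of what "Pareto frontier" means here: among candidate mechanisms with measured (manipulability, deficit) pairs, keep those that are not dominated in both measures. The candidate values are hypothetical.

    ```python
    def pareto_frontier(points):
        """Non-dominated (manipulability, deficit, label) triples; lower is
        better in both coordinates."""
        frontier = []
        for m, d, label in sorted(points):           # by manipulability, then deficit
            if not frontier or d < frontier[-1][1]:  # keep only strict deficit improvements
                frontier.append((m, d, label))
        return frontier

    # Hypothetical measurements for a few candidate mechanisms.
    candidates = [(0.00, 0.30, "strategyproof"), (0.05, 0.10, "hybrid"),
                  (0.05, 0.20, "dominated"), (0.20, 0.00, "deficit-optimal")]
    print(pareto_frontier(candidates))
    # [(0.0, 0.3, 'strategyproof'), (0.05, 0.1, 'hybrid'), (0.2, 0.0, 'deficit-optimal')]
    ```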

    Size versus truthfulness in the house allocation problem

    We study the House Allocation problem (also known as the Assignment problem), i.e., the problem of allocating a set of objects among a set of agents, where each agent has ordinal preferences (possibly involving ties) over a subset of the objects. We focus on truthful mechanisms without monetary transfers for finding large Pareto optimal matchings. It is straightforward to show that no deterministic truthful mechanism can approximate a maximum cardinality Pareto optimal matching with ratio better than 2. We thus consider randomized mechanisms. We give a natural and explicit extension of the classical Random Serial Dictatorship Mechanism (RSDM) specifically for the House Allocation problem where preference lists can include ties. We thus obtain a universally truthful randomized mechanism for finding a Pareto optimal matching and show that it achieves an approximation ratio of e/(e-1). The same bound holds even when agents have priorities (weights) and our goal is to find a maximum weight (as opposed to maximum cardinality) Pareto optimal matching. On the other hand, we give a lower bound of 18/13 on the approximation ratio of any universally truthful Pareto optimal mechanism in settings with strict preferences. In the case that the mechanism must additionally be non-bossy, an improved lower bound of e/(e-1) holds. This lower bound is tight given that RSDM for strict preference lists is non-bossy. We moreover interpret our problem in terms of the classical secretary problem and prove that our mechanism provides the best randomized strategy for the administrator who interviews the applicants.
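    For strict preference lists, the classical Random Serial Dictatorship Mechanism that the paper extends to ties is easy to state: draw a uniformly random order over the agents and let each agent in turn take its most-preferred remaining object. The sketch below implements only that baseline (the extension to ties and to priorities is the paper's contribution and is not shown); names are illustrative.

    ```python
    import random

    def random_serial_dictatorship(prefs, objects, rng=random):
        """Random Serial Dictatorship for strict preference lists.

        prefs[agent] is the agent's strict preference list over a subset of
        `objects`, most preferred first.  Agents pick in a uniformly random
        order; each takes its most-preferred object still available.
        Returns a dict from matched agents to objects."""
        available = set(objects)
        order = list(prefs)
        rng.shuffle(order)
        matching = {}
        for agent in order:
            for o in prefs[agent]:
                if o in available:
                    matching[agent] = o
                    available.remove(o)
                    break
        return matching

    # Example: three agents, three houses.
    prefs = {"a1": ["h1", "h2"], "a2": ["h1", "h3"], "a3": ["h2"]}
    print(random_serial_dictatorship(prefs, ["h1", "h2", "h3"]))
    ```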