
    Allocation in Practice

How do we allocate scarce resources? How do we fairly allocate costs? These are two pressing challenges facing society today. I discuss two recent projects at NICTA concerning resource and cost allocation. In the first, we have been working with FoodBank Local, a social startup working in collaboration with food bank charities around the world to optimise the logistics of collecting and distributing donated food. Before we can distribute this food, we must decide how to allocate it to different charities and food kitchens. This gives rise to a fair division problem with several new dimensions, rarely considered in the literature. In the second, we have been looking at cost allocation within the distribution network of a large multinational company. This, too, has several new dimensions rarely considered in the literature.
    Comment: To appear in Proc. of the 37th edition of the German Conference on Artificial Intelligence (KI 2014), Springer LNCS
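    The abstract does not say how costs are actually shared in the distribution-network project. As a rough illustration of the kind of question it raises, here is a minimal sketch of one standard cost-allocation rule, the Shapley value, applied to a made-up three-region delivery-cost function; the regions, the cost numbers, and the choice of rule are all assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch: Shapley-value cost shares for a toy delivery network.
# The cost function and region names are made up; the NICTA projects'
# actual allocation methods are not specified in the abstract.
from itertools import permutations

def shapley_shares(players, cost):
    """Average each player's marginal cost over all join orders."""
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = cost(frozenset(coalition))
            coalition.add(p)
            shares[p] += cost(frozenset(coalition)) - before
    return {p: s / len(orders) for p, s in shares.items()}

# Toy cost function: each region costs 10 to serve alone, but combined
# truck routes give a discount of 2 per additional region served.
def delivery_cost(regions):
    if not regions:
        return 0.0
    return 10.0 * len(regions) - 2.0 * (len(regions) - 1)

print(shapley_shares(["north", "south", "west"], delivery_cost))
```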

    An Algorithmic Framework for Strategic Fair Division

We study the paradigmatic fair division problem of allocating a divisible good among agents with heterogeneous preferences, commonly known as cake cutting. Classical cake cutting protocols are susceptible to manipulation. Do their strategic outcomes still guarantee fairness? To address this question we adopt a novel algorithmic approach, by designing a concrete computational framework for fair division---the class of Generalized Cut and Choose (GCC) protocols---and reasoning about the game-theoretic properties of algorithms that operate in this model. The class of GCC protocols includes the most important discrete cake cutting protocols, and turns out to be compatible with the study of fair division among strategic agents. In particular, GCC protocols are guaranteed to have approximate subgame perfect Nash equilibria, or even exact equilibria if the protocol's tie-breaking rule is flexible. We further observe that the (approximate) equilibria of proportional GCC protocols---which guarantee each of the n agents a 1/n-fraction of the cake---must be (approximately) proportional. Finally, we design a protocol in this framework with the property that its Nash equilibrium allocations coincide with the set of (contiguous) envy-free allocations.
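    As a concrete point of reference for the cut-and-choose family that the GCC framework generalizes, the sketch below implements classic two-agent cut-and-choose and checks the proportionality guarantee the abstract mentions (each of the n agents receives a 1/n-fraction by its own valuation). The piecewise-constant valuations and the binary-search cut point are illustrative choices, not taken from the paper.

```python
# Minimal sketch of two-agent cut-and-choose, the simplest member of the
# cut-and-choose family; valuations below are illustrative only.

def piecewise_value(pieces, a, b):
    """Value of interval [a, b] under a list of (start, end, density) pieces."""
    total = 0.0
    for start, end, density in pieces:
        overlap = max(0.0, min(b, end) - max(a, start))
        total += density * overlap
    return total

# Each agent's valuation integrates to 1 over the unit cake [0, 1].
alice = [(0.0, 0.5, 2.0)]   # Alice only values the left half
bob = [(0.0, 1.0, 1.0)]     # Bob values the cake uniformly

def half_cut(pieces, iters=60):
    """Binary search for the point splitting the cake into two pieces of equal value."""
    target = piecewise_value(pieces, 0.0, 1.0) / 2
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if piecewise_value(pieces, 0.0, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

cut = half_cut(alice)                       # Alice cuts (here cut == 0.25)
left, right = (0.0, cut), (cut, 1.0)
# Bob chooses the piece he prefers; Alice keeps the other one.
if piecewise_value(bob, *left) >= piecewise_value(bob, *right):
    bob_piece, alice_piece = left, right
else:
    bob_piece, alice_piece = right, left

# Each agent values its own piece at (at least) half of its total value:
# the allocation is proportional, and for two agents also envy-free.
print(piecewise_value(alice, *alice_piece), piecewise_value(bob, *bob_piece))
```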

    Understanding collaboration in volunteer computing systems

Volunteer computing is a paradigm in which devices participating in a distributed environment share part of their resources to help others perform their activities. The effectiveness of this computing paradigm depends on the collaboration attitude adopted by the participating devices. Unfortunately for software designers, it is not clear how to contribute local resources to the shared environment without compromising resources that could later be required by the contributors themselves. Therefore, many designers adopt a conservative position when defining the collaboration strategy to be embedded in volunteer computing applications. This position produces an underutilization of the devices' local resources and reduces the effectiveness of these solutions. This article presents a study that helps designers understand the impact of adopting a particular collaboration attitude when contributing local resources to the distributed shared environment. The study considers five collaboration strategies, which are analyzed in computing environments with both abundance and scarcity of resources. The obtained results indicate that collaboration strategies based on effort-based incentives work better than those using contribution-based incentives. These results also show that the use of effort-based incentives does not jeopardize the availability of local resources for local needs.
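    The study's actual strategies and simulation model are not given here. The toy sketch below merely illustrates the distinction the abstract draws between effort-based and contribution-based incentives; the device names, capacities, and scoring rules are hypothetical assumptions, not the authors' model.

```python
# Hypothetical illustration of the two incentive families compared in the
# study: contribution-based incentives reward the absolute amount donated,
# effort-based incentives reward the fraction of spare capacity donated.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    spare_capacity: float   # resources the device could share
    donated: float          # resources it actually shared

def contribution_score(d: Device) -> float:
    return d.donated                         # absolute contribution

def effort_score(d: Device) -> float:
    return d.donated / d.spare_capacity      # relative effort

devices = [
    Device("phone", spare_capacity=2.0, donated=1.8),    # weak but generous
    Device("server", spare_capacity=50.0, donated=5.0),  # strong but stingy
]

for score in (contribution_score, effort_score):
    ranked = sorted(devices, key=score, reverse=True)
    print(score.__name__, [d.name for d in ranked])
# Under contribution-based scoring the server ranks first; under
# effort-based scoring the phone does, even though it donated less.
```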

    Computing Stable Coalitions: Approximation Algorithms for Reward Sharing

Consider a setting where selfish agents are to be assigned to coalitions or projects from a fixed set P. Each project k is characterized by a valuation function; v_k(S) is the value generated by a set S of agents working on project k. We study the following classic problem in this setting: "how should the agents divide the value that they collectively create?". One traditional approach in cooperative game theory is to study core stability with the implicit assumption that there are infinite copies of one project, and agents can partition themselves into any number of coalitions. In contrast, we consider a model with a finite number of non-identical projects; this makes computing both high-welfare solutions and core payments highly non-trivial. The main contribution of this paper is a black-box mechanism that reduces the problem of computing a near-optimal core stable solution to the purely algorithmic problem of welfare maximization; we apply this to compute an approximately core stable solution that extracts one-fourth of the optimal social welfare for the class of subadditive valuations. We also show much stronger results for several popular sub-classes: anonymous, fractionally subadditive, and submodular valuations, as well as provide new approximation algorithms for welfare maximization with anonymous functions. Finally, we establish a connection between our setting and the well-studied simultaneous auctions with item bidding; we adapt our results to compute approximate pure Nash equilibria for these auctions.
    Comment: Under Review
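    For readers unfamiliar with core stability in this project-based setting, the following brute-force checker (a hypothetical sketch, usable only on tiny instances) spells out the blocking condition a core-stable outcome must avoid: no set of agents S and project k may have v_k(S) exceed the total payment to S. The agents, valuation functions, and payments are illustrative, and budget balance (payments summing to the welfare of the chosen assignment) is assumed separately.

```python
# Hypothetical brute-force core-stability check for the finite-projects
# setting described in the abstract; valuations below are made up.
from itertools import chain, combinations

def subsets(agents):
    return chain.from_iterable(combinations(agents, r) for r in range(1, len(agents) + 1))

def is_core_stable(agents, project_values, payments):
    """project_values maps project name -> function frozenset(S) -> value."""
    for k, v in project_values.items():
        for S in subsets(agents):
            if v(frozenset(S)) > sum(payments[i] for i in S) + 1e-9:
                return False, (k, S)   # coalition S would rather deviate to project k
    return True, None

agents = ["a", "b", "c"]
project_values = {
    "p1": lambda S: 4.0 if len(S) >= 2 else 1.0,           # needs at least two agents
    "p2": lambda S: 2.0 * len(S) if "c" in S else 0.0,     # only productive with agent c
}
# Payments sum to 6, the welfare of the assignment {a, b} -> p1 and {c} -> p2,
# so the outcome is budget balanced as well as (here) unblocked.
payments = {"a": 2.0, "b": 2.0, "c": 2.0}
print(is_core_stable(agents, project_values, payments))
```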

    A Dynamic Allocation Mechanism for Network Slicing as-a-Service

In my thesis, I explore the design of a market mechanism that allocates resources for network slicing as-a-Service in a socially efficient way. Network slicing is a novel usage concept for the upcoming 5G network standard, allowing isolated and customized virtual networks to operate on top of a larger, physical 5G network. By providing network slices as-a-Service, where the users of a network slice do not own any of the underlying resources, a larger range of use cases can be catered to. My market mechanism is a novel amalgamation of existing mechanism design solutions from economics and the nascent computer science literature on the technical aspects of network slicing and the underlying network virtualization concepts. The existing literature in computer science focuses on the operational aspects of network slicing, while the economics literature is incompatible with the unique problems network slicing poses as a market. In this thesis, I bring these two strands of literature together to create a functional allocation mechanism for the network slice market. The resulting mechanism is split into three phases. The first phase allows bidder input into the network slices they bid for, overcoming a trade-off between market efficiency and tractability and making truthful valuation reporting Bayes-Nash optimal. The second phase allocates resources to bidders through a modified VCG mechanism that bundles the market's multiple, non-identical resources into packages based on bidders' Quality of Service demands; the allocation is optimized for social efficiency. The third phase re-allocates the vacant resources of entitled network slices through a Generalized Second-Price auction, while allowing these resources to be returned to the entitled slices without service interruption. As a whole, the mechanism is designed to allocate resources, as far as possible, to the users who create the greatest value from them, and it successfully does so.
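    The thesis's modified VCG mechanism is not reproduced here. As background for the second phase, the sketch below shows textbook VCG package allocation computed by brute force, with each winner paying the externality it imposes on the other bidders; the resource names, bidders, and single-minded package bids are illustrative assumptions, not the thesis's actual model.

```python
# Hypothetical brute-force sketch of textbook VCG package allocation.
# Each bidder wants a single package of resources; losers get value 0.
from itertools import permutations

bids = {
    "slice_A": (frozenset({"bandwidth", "compute"}), 7.0),
    "slice_B": (frozenset({"bandwidth"}), 4.0),
    "slice_C": (frozenset({"edge_cache"}), 3.0),
}

def best_welfare(bidders):
    """Max total value over allocations of disjoint packages (brute force)."""
    best, best_alloc = 0.0, []
    for order in permutations(bidders):
        used, value, alloc = set(), 0.0, []
        for b in order:
            pkg, v = bids[b]
            if used.isdisjoint(pkg):
                used |= pkg
                value += v
                alloc.append(b)
        if value > best:
            best, best_alloc = value, alloc
    return best, best_alloc

welfare, winners = best_welfare(list(bids))
for w in winners:
    others = [b for b in bids if b != w]
    welfare_without_w, _ = best_welfare(others)
    welfare_of_others_with_w = welfare - bids[w][1]
    payment = welfare_without_w - welfare_of_others_with_w   # externality imposed by w
    print(w, payment)
# Here slice_A and slice_C win; slice_A pays 4 (it displaces slice_B),
# slice_C pays 0 (it displaces no one).
```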