14,759 research outputs found

    An Algorithmic Framework for Strategic Fair Division

    Full text link
    We study the paradigmatic fair division problem of allocating a divisible good among agents with heterogeneous preferences, commonly known as cake cutting. Classical cake cutting protocols are susceptible to manipulation. Do their strategic outcomes still guarantee fairness? To address this question we adopt a novel algorithmic approach, by designing a concrete computational framework for fair division---the class of Generalized Cut and Choose (GCC) protocols---and reasoning about the game-theoretic properties of algorithms that operate in this model. The class of GCC protocols includes the most important discrete cake cutting protocols, and turns out to be compatible with the study of fair division among strategic agents. In particular, GCC protocols are guaranteed to have approximate subgame perfect Nash equilibria, or even exact equilibria if the protocol's tie-breaking rule is flexible. We further observe that the (approximate) equilibria of proportional GCC protocols---which guarantee each of the n agents a 1/n-fraction of the cake---must be (approximately) proportional. Finally, we design a protocol in this framework with the property that its Nash equilibrium allocations coincide with the set of (contiguous) envy-free allocations.
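    A minimal, hypothetical sketch of the classic two-agent Cut and Choose protocol, the simplest instance of the cut-and-choose style that the GCC framework generalizes. The valuations, grid size, and function names below are illustrative assumptions, not the paper's construction; under truthful play each agent receives a piece worth roughly half of the cake by her own valuation.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 1000  # discretize the cake [0, 1] into equal-width cells

def random_valuation():
    """A hypothetical agent valuation: a normalized density over the cells."""
    density = rng.random(GRID)
    return density / density.sum()

def value(valuation, lo, hi):
    """Value an agent assigns to the piece made of cells lo..hi-1."""
    return valuation[lo:hi].sum()

def cut_and_choose(v1, v2):
    # Agent 1 cuts right after the first cell where her cumulative value reaches 1/2.
    cut = int(np.searchsorted(np.cumsum(v1), 0.5)) + 1
    # Agent 2 chooses the piece she values more; agent 1 keeps the other one.
    if value(v2, 0, cut) >= value(v2, cut, GRID):
        return (cut, GRID), (0, cut)   # (agent 1's piece, agent 2's piece)
    return (0, cut), (cut, GRID)

v1, v2 = random_valuation(), random_valuation()
piece1, piece2 = cut_and_choose(v1, v2)
# Under truthful play each agent values her own piece at about 1/2 or more
# (proportionality, up to the discretization error of a single cell).
print(value(v1, *piece1), value(v2, *piece2))
```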

    An Alloy Verification Model for Consensus-Based Auction Protocols

    Full text link
    Max Consensus-based Auction (MCA) protocols are an elegant approach to establish conflict-free distributed allocations in a wide range of network utility maximization problems. A set of agents independently bid on a set of items, and exchange their bids with their first-hop neighbors for a distributed (max-consensus) winner determination. The use of MCA protocols was proposed, e.g., to solve the task allocation problem for a fleet of unmanned aerial vehicles, in smart grids, or in distributed virtual network management applications. Misconfigured or malicious agents participating in an MCA, or an incorrect instantiation of policies, can lead to oscillations of the protocol, causing, e.g., Service Level Agreement (SLA) violations. In this paper, we propose a formal, machine-readable Max-Consensus Auction model, encoded in the Alloy lightweight modeling language. The model consists of a network of agents applying the MCA mechanisms, instantiated with potentially different policies, and a set of predicates to analyze its convergence properties. We were able to verify that MCA is not resilient against rebidding attacks, and that the protocol fails to achieve a conflict-free resource allocation for some specific combinations of policies. Our model can be used to verify, with a "push-button" analysis, the convergence of the MCA mechanism to a conflict-free allocation for a wide range of policy instantiations.
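    A minimal sketch of the max-consensus winner-determination step the abstract refers to, written in Python rather than Alloy. The agents, topology, and bids are made-up illustrations: each agent repeatedly merges its own (bid, bidder) belief with those of its first-hop neighbors and keeps the maximum, so after a number of rounds bounded by the network diameter all agents agree on the winner.

```python
NEIGHBOURS = {            # a small line topology: 0 - 1 - 2 - 3
    0: [1], 1: [0, 2], 2: [1, 3], 3: [2],
}
BIDS = {0: 4.0, 1: 9.5, 2: 7.2, 3: 1.1}   # hypothetical bids on a single item

# Each agent's belief starts with its own (bid, bidder id) pair.
belief = {i: (BIDS[i], i) for i in NEIGHBOURS}

for _ in range(len(NEIGHBOURS)):           # more rounds than the line graph's diameter
    snapshot = dict(belief)
    for agent, neighbours in NEIGHBOURS.items():
        # Keep the maximum over the agent's own belief and its neighbours' last broadcasts.
        belief[agent] = max([snapshot[agent]] + [snapshot[n] for n in neighbours])

print(belief)   # every agent converges to (9.5, 1): agent 1 wins the item
```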

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Full text link
    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and their methodology. This would help evaluate their applicability for solving similar problems. This taxonomy also provides a "gap analysis" of this area through which researchers can potentially identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping also provide an easy way for new practitioners to understand this complex area of research.
    Comment: 46 pages, 16 figures, Technical Report

    Parameterized Synthesis with Safety Properties

    Full text link
    Parameterized synthesis offers a solution to the problem of constructing correct and verified controllers for parameterized systems. Such systems occur naturally in practice (e.g., in the form of distributed protocols where the number of processes is often unknown at design time and the protocol must work regardless of the number of processes). In this paper, we present a novel learning-based approach to the synthesis of reactive controllers for parameterized systems from safety specifications. We use the framework of regular model checking to model the synthesis problem as an infinite-duration two-player game and show how one can utilize Angluin's well-known L* algorithm to learn correct-by-design controllers. This approach results in a synthesis procedure that is conceptually simpler than existing synthesis methods, with a completeness guarantee whenever a winning strategy can be expressed by a regular set. We have implemented our algorithm in a tool called L*-PSynth and have demonstrated its performance on a range of benchmarks, including robotic motion planning and distributed protocols. Despite the simplicity of L*-PSynth, it competes well against (and in many cases even outperforms) the state-of-the-art tools for synthesizing parameterized systems.
    Comment: 18 pages
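    The sketch below shows, in Python, the bare L*-style loop that this kind of approach builds on: membership and equivalence queries drive an observation table until a hypothesis automaton is accepted. Both oracles here are toy stand-ins for a simple regular language; in the paper's setting they would be answered by the regular-model-checking game, and L*-PSynth itself is not reproduced.

```python
from itertools import product

ALPHABET = "ab"

def member(word):
    """Toy membership oracle: the target language is 'even number of a's'."""
    return word.count("a") % 2 == 0

def equivalent(hypothesis, max_len=6):
    """Toy equivalence oracle: compare on all short words; return a counterexample or None."""
    for n in range(max_len + 1):
        for letters in product(ALPHABET, repeat=n):
            w = "".join(letters)
            if hypothesis(w) != member(w):
                return w
    return None

def lstar():
    S, E = [""], [""]          # access prefixes and distinguishing suffixes
    T = {}                     # observation table: word -> membership

    def fill():
        for s in S + [s + a for s in S for a in ALPHABET]:
            for e in E:
                if s + e not in T:
                    T[s + e] = member(s + e)

    def row(s):
        return tuple(T[s + e] for e in E)

    while True:
        fill()
        # Closedness: every one-letter extension of S must match some row of S.
        rows_S = {row(s) for s in S}
        unclosed = next((s + a for s in S for a in ALPHABET
                         if row(s + a) not in rows_S), None)
        if unclosed is not None:
            S.append(unclosed)
            continue
        # Build the hypothesis DFA: states are the (distinct) rows of S.
        reps = {row(s): s for s in S}

        def hypothesis(word):
            state = ""
            for a in word:
                state = reps[row(state + a)]
            return T[state]

        cex = equivalent(hypothesis)
        if cex is None:
            return hypothesis
        # Add every suffix of the counterexample as a new experiment
        # (a standard L* variant that keeps the table consistent).
        for i in range(len(cex) + 1):
            if cex[i:] not in E:
                E.append(cex[i:])

learned = lstar()
print(learned("abab"), learned("abb"))   # True False
```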

    Model Based Development of Quality-Aware Software Services

    Get PDF
    Modelling languages and development frameworks give support for functional and structural description of software architectures. But quality-aware applications require languages that allow expressing QoS as a first-class concept during architecture design and service composition, and that allow extending existing tools and infrastructures with support for modelling, evaluating, managing and monitoring QoS aspects. In addition to its functional behaviour and internal structure, the developer of each service must consider the fulfilment of its quality requirements. If the service is flexible, the output quality depends both on the input quality and on the available resources (e.g., amounts of CPU execution time and memory). From the software engineering point of view, modelling of quality-aware requirements and architectures requires modelling support for the description of quality concepts, support for the analysis of quality properties (e.g. model checking and consistency of quality constraints, assembly of quality), and tool support for the transition from quality requirements to quality-aware architectures, and from quality-aware architectures to service run-time infrastructures. Quality management in run-time service infrastructures must give support for handling quality concepts dynamically. QoS-aware modelling frameworks and QoS-aware runtime management infrastructures must evolve together to achieve their integration.
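    A minimal sketch, in Python, of treating QoS as a first-class artifact during composition: a required and an offered quality profile are compared explicitly rather than being left implicit in the functional interface. The attribute names, bounds, and compatibility rule are illustrative assumptions, not the paper's metamodel.

```python
from dataclasses import dataclass

@dataclass
class QoS:
    max_latency_ms: float      # upper bound the service tolerates / guarantees
    min_throughput_rps: float  # lower bound on requests served per second
    cpu_ms_budget: float       # execution-time budget (a resource constraint)

def satisfies(offered: QoS, required: QoS) -> bool:
    """Offered QoS fulfils the requirement if every bound is at least as strong."""
    return (offered.max_latency_ms <= required.max_latency_ms
            and offered.min_throughput_rps >= required.min_throughput_rps
            and offered.cpu_ms_budget <= required.cpu_ms_budget)

# Composition check: a flexible downstream service can only promise the required
# output quality if the upstream quality and the available resources allow it.
upstream_offer = QoS(max_latency_ms=20, min_throughput_rps=500, cpu_ms_budget=5)
composite_need = QoS(max_latency_ms=50, min_throughput_rps=200, cpu_ms_budget=10)
print(satisfies(upstream_offer, composite_need))   # True: the composition is feasible
```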