
    Payoff-Based Dynamics for Multiplayer Weakly Acyclic Games

    We consider repeated multiplayer games in which players repeatedly and simultaneously choose strategies from a finite set of available strategies according to some strategy adjustment process. We focus on the specific class of weakly acyclic games, which is particularly relevant for multiagent cooperative control problems. A strategy adjustment process determines how players select their strategies at any stage as a function of the information gathered over previous stages. Of particular interest are “payoff-based” processes in which, at any stage, players know only their own actions and (noise-corrupted) payoffs from previous stages. In particular, players do not know the actions taken by other players and do not know the structural form of payoff functions. We introduce three different payoff-based processes for increasingly general scenarios and prove that, after a sufficiently large number of stages, player actions constitute a Nash equilibrium at any stage with arbitrarily high probability. We also show how to modify player utility functions through tolls and incentives in so-called congestion games, a special class of weakly acyclic games, to guarantee that a centralized objective can be realized as a Nash equilibrium. We illustrate the methods with a simulation of distributed routing over a network.
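
    The abstract does not spell out the three processes, so as a loose illustration of what "payoff-based" means, below is a minimal Haskell sketch in the spirit of safe experimentation dynamics: a player remembers only a baseline action and the payoff it last observed for it, explores a random action with small probability, and adopts the exploration only if its (noisy) payoff beats the baseline. The names (PlayerState, stepPlayer, eps) are illustrative, and this is not the authors' exact process.

```haskell
-- Sketch of a payoff-based update rule for one player (requires the
-- `random` package). The player never sees other players' actions or
-- the payoff function itself, only its own sampled payoffs.
import System.Random (randomRIO)

data PlayerState = PlayerState
  { baselineAction :: Int     -- current best-known action
  , baselinePayoff :: Double  -- (noisy) payoff last observed for it
  }

-- One stage: with probability eps explore a random action; keep it as
-- the new baseline only if the observed payoff improves on the baseline.
stepPlayer :: Double -> [Int] -> (Int -> IO Double) -> PlayerState -> IO PlayerState
stepPlayer eps actions observePayoff st = do
  r <- randomRIO (0, 1 :: Double)
  if r < eps
    then do
      i <- randomRIO (0, length actions - 1)
      let a = actions !! i
      u <- observePayoff a
      pure (if u > baselinePayoff st then PlayerState a u else st)
    else do
      u <- observePayoff (baselineAction st)  -- re-sample the baseline payoff
      pure st { baselinePayoff = u }
```

    Running updates of this flavor simultaneously for all players over many stages is the kind of process for which the paper proves convergence to Nash equilibrium with high probability in weakly acyclic games.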

    Liveness-Based Garbage Collection for Lazy Languages

    We consider the problem of reducing the memory required to run lazy first-order functional programs. Our approach is to analyze programs for liveness of heap-allocated data. The result of the analysis is used to preserve only live data (a subset of reachable data) during garbage collection, increasing the garbage reclaimed and reducing the peak memory requirement of programs. While this technique has already been shown to yield benefits for eager first-order languages, the lack of a statically determinable execution order and the presence of closures pose new challenges for lazy languages. These require changes both in the liveness analysis itself and in the design of the garbage collector. To show the effectiveness of our method, we implemented a copying collector that uses the results of the liveness analysis to preserve live objects, both evaluated ones (i.e., those in WHNF) and closures. Our experiments confirm that programs running with a liveness-based garbage collector show a significant decrease in peak memory requirements. In addition, a sizable reduction in the number of collections ensures that, in spite of using a more complex garbage collector, the execution times of programs running with liveness- and reachability-based collectors remain comparable.
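
    To make the liveness/reachability distinction concrete, here is a toy Haskell model (not the paper's analysis): a heap of cons cells, plus a drastically simplified liveness value describing which parts of a structure may still be demanded. A liveness-based collector that follows only the demanded parts preserves a subset of what a reachability-based one would copy.

```haskell
-- Toy heap: addresses map to cons cells or atoms. The Live type is a
-- simplified stand-in for the liveness information a real analysis computes.
import qualified Data.Map as M

type Addr = Int
data Cell = Cons Addr Addr | Atom Int
type Heap = M.Map Addr Cell

data Live = Dead | All | Split Live Live  -- Split = separate head/tail liveness

-- Addresses a liveness-based collector preserves from root `a`.
liveAddrs :: Heap -> Live -> Addr -> [Addr]
liveAddrs _ Dead _ = []
liveAddrs h All a = a : case M.lookup a h of
  Just (Cons x y) -> liveAddrs h All x ++ liveAddrs h All y
  _               -> []
liveAddrs h (Split lh lt) a = a : case M.lookup a h of
  Just (Cons x y) -> liveAddrs h lh x ++ liveAddrs h lt y
  _               -> []

-- Reachability is the special case in which everything is assumed live.
reachable :: Heap -> Addr -> [Addr]
reachable h = liveAddrs h All

main :: IO ()
main = do
  let h = M.fromList [(0, Cons 1 2), (1, Atom 42), (2, Atom 7)]
  print (reachable h 0)                   -- [0,1,2]: everything reachable
  print (liveAddrs h (Split Dead All) 0)  -- [0,2]: the head is dead, not copied
```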

    The Cost of Stability in Coalitional Games

    A key question in cooperative game theory is that of coalitional stability, usually captured by the notion of the core: the set of outcomes such that no subgroup of players has an incentive to deviate. However, some coalitional games have empty cores, and any outcome in such a game is unstable. In this paper, we investigate the possibility of stabilizing a coalitional game by using external payments. We consider a scenario where an external party, which is interested in having the players work together, offers a supplemental payment to the grand coalition (or, more generally, a particular coalition structure). This payment is conditional on players not deviating from their coalition(s). The sum of this payment plus the actual gains of the coalition(s) may then be divided among the agents so as to promote stability. We define the cost of stability (CoS) as the minimal external payment that stabilizes the game. We provide general bounds on the cost of stability in several classes of games, and explore its algorithmic properties. To develop a better intuition for the concepts we introduce, we provide a detailed algorithmic study of the cost of stability in weighted voting games, a simple but expressive class of games that can model decision-making in political bodies and cooperation in multiagent settings. Finally, we extend our model and results to games with coalition structures.
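
    As a concrete (and deliberately naive) illustration of the definition, the Haskell sketch below estimates the CoS of a tiny weighted voting game by brute force: it searches a coarse grid of payoff vectors for the cheapest total payout satisfying every coalition constraint x(S) >= v(S), and reports the excess over v(N). The grid search and the names (costOfStability, value) are ours; the paper's algorithmic results use proper optimization, not grid search.

```haskell
-- Brute-force cost-of-stability estimate for a weighted voting game.
import Data.List (subsequences)

-- Coalition value: 1 if its total weight meets the quota, else 0.
value :: [Double] -> Double -> [Int] -> Double
value ws quota s = if sum [ws !! i | i <- s] >= quota then 1 else 0

-- Cheapest stable total payout minus v(N), searched on a coarse grid.
costOfStability :: [Double] -> Double -> Double
costOfStability ws quota =
  minimum [ sum x - grand
          | x <- grid
          , all (\s -> sum [x !! i | i <- s] >= value ws quota s) coalitions ]
  where
    n          = length ws
    players    = [0 .. n - 1]
    coalitions = subsequences players
    grand      = value ws quota players
    grid       = sequence (replicate n [0, 0.25 .. 1])  -- all payoff vectors

-- Three symmetric players, quota 2: any pair wins, so every pair must
-- jointly receive at least 1. The cheapest stable division is 0.5 each,
-- total 1.5, while v(N) = 1, so the cost of stability is 0.5.
main :: IO ()
main = print (costOfStability [1, 1, 1] 2)
```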

    Stream Fusion, to Completeness

    Stream processing is mainstream (again): widely used stream libraries are now available for virtually all modern OO and functional languages, from Java to C# to Scala to OCaml to Haskell. Yet expressivity and performance are still lacking. For instance, the popular, well-optimized Java 8 streams do not support the zip operator and are still an order of magnitude slower than hand-written loops. We present the first approach that represents the full generality of stream processing and eliminates overheads, via the use of staging. It is based on an unusually rich semantic model of stream interaction. We support any combination of zipping, nesting (or flat-mapping), sub-ranging, filtering, and mapping, over finite or infinite streams. Our model captures the idioms a programmer uses when optimizing stream pipelines, such as rate differences and the choice of "for" vs. "while" loops. Our approach delivers hand-written-like code, but automatically. It explicitly avoids reliance on black-box optimizers and sufficiently-smart compilers, offering the highest performance, guaranteed and portable. Our approach relies on high-level concepts that are readily mapped into an implementation. Accordingly, we have two distinct implementations: an OCaml stream library, staged via MetaOCaml, and a Scala library for the JVM, staged via LMS. In both cases, we derive libraries that are richer and simultaneously many tens of times faster than past work. We greatly exceed in performance the standard stream libraries available in Java, Scala and OCaml, including the well-optimized Java 8 streams.
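
    The paper's key move is staging; as a rough unstaged analogue, the Haskell sketch below shows the standard step-based stream representation in which map, filter, and fold compose without building intermediate lists, so the whole pipeline runs as a single loop. This illustrates the fusion idea only; it does not reproduce the staged MetaOCaml/LMS libraries, which generate the fused loop explicitly rather than relying on the optimizer.

```haskell
{-# LANGUAGE ExistentialQuantification #-}
-- A step-based stream: a state plus a step function that produces at
-- most one element per call. Combinators rewrite the step function,
-- so no intermediate list is ever materialized.
data Step s a = Done | Skip s | Yield a s
data Stream a = forall s. Stream (s -> Step s a) s

fromList :: [a] -> Stream a
fromList = Stream step
  where step []     = Done
        step (x:xs) = Yield x xs

mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream step s0) = Stream step' s0
  where step' s = case step s of
          Done       -> Done
          Skip s'    -> Skip s'
          Yield a s' -> Yield (f a) s'

filterS :: (a -> Bool) -> Stream a -> Stream a
filterS p (Stream step s0) = Stream step' s0
  where step' s = case step s of
          Done                   -> Done
          Skip s'                -> Skip s'
          Yield a s' | p a       -> Yield a s'
                     | otherwise -> Skip s'

-- The fold is the single loop that drives the whole fused pipeline.
foldS :: (b -> a -> b) -> b -> Stream a -> b
foldS f z (Stream step s0) = go z s0
  where go acc s = case step s of
          Done       -> acc
          Skip s'    -> go acc s'
          Yield a s' -> go (f acc a) s'

main :: IO ()
main = print (foldS (+) 0 (mapS (*2) (filterS even (fromList [1..100]))))
```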

    A principled approach to programming with nested types in Haskell

    Initial algebra semantics is one of the cornerstones of the theory of modern functional programming languages. For each inductive data type, it provides a Church encoding for that type, a build combinator which constructs data of that type, a fold combinator which encapsulates structured recursion over data of that type, and a fold/build rule which optimises modular programs by eliminating from them data constructed using the build combinator and immediately consumed using the fold combinator for that type. It has long been thought that initial algebra semantics is not expressive enough to provide a similar foundation for programming with nested types in Haskell. Specifically, the standard folds derived from initial algebra semantics have been considered too weak to capture commonly occurring patterns of recursion over data of nested types in Haskell, and no build combinators or fold/build rules have until now been defined for nested types. This paper shows that standard folds are, in fact, sufficiently expressive for programming with nested types in Haskell. It also defines build combinators and fold/build fusion rules for nested types. It thus shows how initial algebra semantics provides a principled, expressive, and elegant foundation for programming with nested types in Haskell.
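
    For readers unfamiliar with nested types, a small example: in the classic Nest type the element type changes at every level, so any fold over it must be polymorphic in the element type; this rank-2 shape is the kind of "standard fold" the abstract refers to. The sketch below is illustrative only (the paper's build combinators and fold/build rules are not shown).

```haskell
{-# LANGUAGE RankNTypes #-}
-- A nested type: the tail of a Nest a is a Nest (a, a), so elements
-- double up at each level.
data Nest a = Nil | Cons a (Nest (a, a))

-- The standard fold: its arguments must work at every element type b,
-- hence the rank-2 quantifiers (and polymorphic recursion in the body).
foldNest :: (forall b. r b)
         -> (forall b. b -> r (b, b) -> r b)
         -> Nest a -> r a
foldNest nil _    Nil         = nil
foldNest nil cons (Cons x xs) = cons x (foldNest nil cons xs)

-- A constant-carrying result type for computing sizes.
newtype K a = K { unK :: Int }

-- Count the underlying values: an element at depth k stands for 2^k values.
size :: Nest a -> Int
size = unK . foldNest (K 0) (\_ (K n) -> K (1 + 2 * n))

main :: IO ()
main = print (size (Cons 1 (Cons (2, 3) Nil)))  -- 1 + 2*1 = 3
```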

    The Predictive Link between Matrix and Metastasis

    Cancer spread (metastasis) is responsible for 90% of cancer-related fatalities. The ability to inform patient treatment so as to prevent metastasis, or to kill all cancer cells in a patient's body before the disease becomes metastatic, would be extremely powerful. However, aggressive treatment for all non-metastatic patients is detrimental, both because of quality-of-life concerns and because of the risk of kidney- or liver-related toxicity. Knowing when and where a patient faces metastatic risk could revolutionize patient treatment and care. In this review, we attempt to summarize the key work of engineers and quantitative biologists in developing strategies and model systems to predict metastasis, with a particular focus on cell interactions with the extracellular matrix (ECM) as a tool to predict metastatic risk and tropism.

    Maximal Sharing in the Lambda Calculus with letrec

    Increasing sharing in programs is desirable to compactify the code, and to avoid duplication of reduction work at run-time, thereby speeding up execution. We show how a maximal degree of sharing can be obtained for programs expressed as terms in the lambda calculus with letrec. We introduce a notion of "maximal compactness" for lambda-letrec-terms among all terms with the same infinite unfolding. Instead of being defined purely syntactically, this notion is based on a graph semantics. Lambda-letrec-terms are interpreted as first-order term graphs so that unfolding equivalence between terms is preserved and reflected through bisimilarity of the term graph interpretations. Compactness of the term graphs can then be compared via functional bisimulation. We describe practical and efficient methods for the following two problems: transforming a lambda-letrec-term into a maximally compact form; and deciding whether two lambda-letrec-terms are unfolding-equivalent. The transformation of a lambda-letrec-term L into maximally compact form L_0 proceeds in three steps: (i) translate L into its term graph G = [[L]]; (ii) compute the maximally shared form of G as its bisimulation collapse G_0; (iii) read back a lambda-letrec-term L_0 from the term graph G_0 with the property [[L_0]] = G_0. This guarantees that L_0 and L have the same unfolding, and that L_0 exhibits maximal sharing. The procedure for deciding whether two given lambda-letrec-terms L_1 and L_2 are unfolding-equivalent computes their term graph interpretations [[L_1]] and [[L_2]], and checks whether these term graphs are bisimilar. For illustration, we also provide a readily usable implementation.
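
    Step (ii), the bisimulation collapse, is easy to illustrate on plain first-order term graphs. The Haskell sketch below merges bisimilar nodes by naive partition refinement on (label, child-class) signatures; the graph encoding and names are ours, and the paper's term graphs additionally encode lambda binders and letrec structure, which this toy omits.

```haskell
-- Naive bisimulation collapse on first-order term graphs: iteratively
-- refine node classes by (label, classes-of-children) signature until
-- stable; bisimilar nodes end up in the same class, so the collapsed
-- graph keeps one node per class.
import qualified Data.Map as M
import Data.List (nub)

type Node  = Int
type Graph = M.Map Node (String, [Node])  -- node -> (label, children)

collapseClasses :: Graph -> M.Map Node Int
collapseClasses g = go (M.map (const 0) g)  -- start with a single class
  where
    go cls =
      let sigs = M.map (\(l, ks) -> (l, map (cls M.!) ks)) g
          uniq = M.fromList (zip (nub (M.elems sigs)) [0 ..])
          cls' = M.map (uniq M.!) sigs
      in if cls' == cls then cls else go cls'

-- f(a, a) written with two distinct copies of `a`: the two leaves land
-- in the same class, so the collapsed graph shares a single `a` node.
main :: IO ()
main = print (collapseClasses
  (M.fromList [(0, ("f", [1, 2])), (1, ("a", [])), (2, ("a", []))]))
```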