
    Proof-theoretic Analysis of Rationality for Strategic Games with Arbitrary Strategy Sets

    In the context of strategic games, we provide an axiomatic proof of the statement: common knowledge of rationality implies that the players will choose only strategies that survive the iterated elimination of strictly dominated strategies. Rationality here means playing only strategies one believes to be best responses. This involves looking at two formal languages. One is first-order, and is used to formalise optimality conditions, like avoiding strictly dominated strategies or playing a best response. The other is a modal fixpoint language with expressions for optimality, rationality and belief. Fixpoints are used to form expressions for common belief and for iterated elimination of non-optimal strategies. Comment: 16 pages, Proc. 11th International Workshop on Computational Logic in Multi-Agent Systems (CLIMA XI). To appear.
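
    The procedure being axiomatised can be made concrete with a short sketch. Below is a minimal Python implementation of iterated elimination of strictly dominated strategies for a finite two-player game; the payoff matrices and function names are illustrative and not from the paper, which handles arbitrary (possibly infinite) strategy sets.

```python
# A minimal sketch: iterated elimination of strictly dominated strategies
# in a finite two-player game (illustrative data, not from the paper).

def dominated(get, mine, theirs, s):
    """True if strategy s is strictly worse than some other surviving
    strategy against every surviving opponent strategy."""
    return any(all(get(s2, t) > get(s, t) for t in theirs)
               for s2 in mine if s2 != s)

def iesds(row_payoff, col_payoff):
    rows = set(range(len(row_payoff)))
    cols = set(range(len(row_payoff[0])))
    changed = True
    while changed:
        changed = False
        for r in list(rows):
            if dominated(lambda a, b: row_payoff[a][b], rows, cols, r):
                rows.discard(r); changed = True
        for c in list(cols):
            if dominated(lambda a, b: col_payoff[b][a], cols, rows, c):
                cols.discard(c); changed = True
    return rows, cols

# Row 1 ("Down") is strictly dominated and eliminated first; with it gone,
# column 1 ("Right") becomes strictly dominated and is eliminated too.
row_payoff = [[3, 2], [1, 0]]   # row player's payoffs
col_payoff = [[2, 1], [3, 0]]   # column player's payoffs
print(iesds(row_payoff, col_payoff))   # ({0}, {0})
```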

    Reasoning about termination of pure Prolog programs


    Efficient Equilibria in Polymatrix Coordination Games

    We consider polymatrix coordination games with individual preferences, where every player corresponds to a node in a graph and plays a separate bimatrix game with non-negative symmetric payoffs against each neighbour. In this paper, we study α-approximate k-equilibria of these games, i.e., outcomes where no group of at most k players can deviate such that each member increases his payoff by at least a factor α. We prove that for α ≥ 2 these games have the finite coalitional improvement property (and thus α-approximate k-equilibria exist), while for α < 2 this property does not hold. Further, we derive an almost tight bound of 2α(n-1)/(k-1) on the price of anarchy, where n is the number of players; in particular, it scales from unbounded for pure Nash equilibria (k = 1) to 2α for strong equilibria (k = n). We also settle the complexity of several problems related to the verification and existence of these equilibria. Finally, we investigate natural means to reduce the inefficiency of Nash equilibria. Most promisingly, we show that by fixing the strategies of k players the price of anarchy can be reduced to n/k (and this bound is tight).
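
    To make the model concrete, here is a minimal Python sketch of a polymatrix coordination game on a path of three players, together with a check for α-approximate (k = 1)-equilibria. The edge matrices and helper names are made up for illustration.

```python
# A minimal sketch with made-up data: each edge of a graph carries a
# symmetric non-negative matrix M (both endpoints receive M[a][b]); a
# player's payoff is the sum over incident edges. An outcome is an
# alpha-approximate (k = 1)-equilibrium if no single player can multiply
# her payoff by a factor of at least alpha by deviating alone.

edges = {
    (0, 1): [[2, 0], [0, 1]],   # game between players 0 and 1
    (1, 2): [[1, 0], [0, 2]],   # game between players 1 and 2
}

def payoff(i, profile):
    return sum(M[profile[u]][profile[v]]
               for (u, v), M in edges.items() if i in (u, v))

def is_alpha_approx_equilibrium(profile, alpha, strategies=(0, 1)):
    for i in range(len(profile)):
        base = payoff(i, profile)
        for s in strategies:
            deviated = list(profile)
            deviated[i] = s
            if payoff(i, deviated) >= alpha * base and payoff(i, deviated) > base:
                return False
    return True

print(is_alpha_approx_equilibrium([0, 0, 0], alpha=2.0))  # True: exact Nash
print(is_alpha_approx_equilibrium([0, 0, 1], alpha=2.0))  # False: player 2 gains
```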

    The effect of adjusting LDL-cholesterol for Lp(a)-cholesterol on the diagnosis of familial hypercholesterolaemia

    BACKGROUND: Familial hypercholesterolaemia (FH) diagnostic tools help prioritise patients for genetic testing and include LDL-C estimates commonly calculated using the Friedewald equation. However, cholesterol contributions from lipoprotein(a) (Lp(a)) can overestimate 'true' LDL-C, leading to potentially inappropriate clinical FH diagnoses. OBJECTIVE: To assess how adjusting LDL-C for Lp(a)-cholesterol affects FH diagnoses under the Simon Broome (SB) and Dutch Lipid Clinic Network (DLCN) criteria. METHODS: Adults referred to a tertiary lipid clinic in London, UK, were included if they had undergone FH genetic testing based on SB or DLCN criteria. LDL-C was adjusted for Lp(a)-cholesterol using estimated cholesterol contents of 17.3%, 30% and 45%, and the effects of these adjustments on reclassification to 'unlikely' FH and on diagnostic accuracy were determined. RESULTS: Depending on the estimated cholesterol content applied, LDL-C adjustment reclassified 8-23% and 6-17% of patients to 'unlikely' FH under the SB and DLCN criteria, respectively. The highest reclassification rates were observed following 45% adjustment in mutation-negative patients with higher Lp(a) levels. This led to an improvement in diagnostic accuracy (46% to 57% with SB, and 32% to 44% with DLCN, following 45% adjustment) through increased specificity. However, all adjustment factors led to erroneous reclassification of mutation-positive patients to 'unlikely' FH. CONCLUSION: LDL-C adjustment for Lp(a)-cholesterol improves the accuracy of clinical FH diagnostic tools. Adopting this approach would reduce unnecessary genetic testing but would also incorrectly reclassify mutation-positive patients. A health economic analysis is needed to balance the risks of over- and under-diagnosis before LDL-C adjustment for Lp(a) can be recommended.
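
    The adjustment itself is simple arithmetic. Below is a minimal sketch, assuming mg/dL units throughout and using illustrative values rather than patient data from the study: Friedewald LDL-C minus an assumed cholesterol fraction of the measured Lp(a) mass.

```python
# A minimal sketch of the adjustment described above: subtract an assumed
# cholesterol fraction of the measured Lp(a) mass from Friedewald LDL-C.
# Values and units (mg/dL) are illustrative, not data from the study.

def friedewald_ldl(total_chol, hdl, triglycerides):
    """Friedewald estimate, mg/dL (valid only for triglycerides < 400 mg/dL)."""
    return total_chol - hdl - triglycerides / 5.0

def lpa_adjusted_ldl(ldl, lpa_mass, fraction=0.30):
    """Remove the cholesterol carried on Lp(a); the paper tests
    estimated cholesterol contents of 17.3%, 30% and 45%."""
    return ldl - fraction * lpa_mass

ldl = friedewald_ldl(total_chol=280, hdl=50, triglycerides=150)  # 200 mg/dL
for f in (0.173, 0.30, 0.45):
    print(f, lpa_adjusted_ldl(ldl, lpa_mass=80, fraction=f))
    # 0.173 -> 186.16, 0.30 -> 176.0, 0.45 -> 164.0
```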

    Acceptability with general orderings

    We present a new approach to termination analysis of logic programs. The essence of the approach is that we make use of general orderings (instead of level mappings), as is done in transformational approaches to logic program termination analysis, but we apply these orderings directly to the logic program rather than to the term-rewrite system obtained through some transformation. We define some variants of acceptability based on general orderings, and show that they are equivalent to LD-termination. We develop a demand-driven, constraint-based approach to verify these acceptability variants. The advantage of the approach over standard acceptability is that in some cases where complex level mappings are needed, fairly simple orderings may be easily generated. The advantage over transformational approaches is that it avoids the transformation step altogether. Keywords: termination analysis, acceptability, orderings. Comment: To appear in "Computational Logic: From Logic Programming into the Future".
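
    As a toy illustration of the decrease check that underlies acceptability (not the paper's constraint-based method itself): for the recursive clause of append/3, the first argument strictly decreases under the simple list-length norm, which is exactly the kind of ordering the authors aim to generate automatically.

```python
# A toy illustration of the decrease check behind acceptability, for the
# recursive clause
#   append([H|T], L, [H|R]) :- append(T, L, R).
# Under the list-length norm, the first argument of the recursive call is
# strictly smaller than that of the head, so the recursion is well-founded
# for queries with a ground (finite) first argument.

def list_length_norm(term):
    return len(term)            # |t| = number of list cells in t

def strictly_decreases(head_arg, body_arg, norm=list_length_norm):
    return norm(body_arg) < norm(head_arg)

head_arg = ['h', 't1', 't2']    # [H|T] in the clause head
body_arg = ['t1', 't2']         # T in the recursive body call
print(strictly_decreases(head_arg, body_arg))   # True
```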

    Specializing Interpreters using Offline Partial Deduction

    We present the latest version of the Logen partial evaluation system for logic programs. In particular, we present new binding-types and show how they can be used to effectively specialise a wide variety of interpreters. We show how to achieve Jones-optimality in a systematic way for several interpreters. Finally, we present and specialise a non-trivial interpreter for a small functional programming language. Experimental results are also presented, highlighting that the Logen system can be a good basis for generating compilers for high-level languages.
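
    What "specialising an interpreter" buys can be sketched in a few lines. Logen works on logic programs; the Python analogue below is only illustrative: with the program static, partial evaluation performs the interpretive dispatch once, ahead of time, which is the overhead whose removal Jones-optimality measures.

```python
# A minimal sketch (illustrative, not Logen's method): specialising a naive
# interpreter with respect to a static program, i.e. the first Futamura
# projection in miniature.

def interp(expr, env):
    """Naive interpreter for a tiny expression language."""
    op = expr[0]
    if op == 'lit': return expr[1]
    if op == 'var': return env[expr[1]]
    if op == 'add': return interp(expr[1], env) + interp(expr[2], env)
    if op == 'mul': return interp(expr[1], env) * interp(expr[2], env)
    raise ValueError(op)

def specialise(expr):
    """'Partially evaluate' interp with expr static: return a closure in
    which all dispatch on the program text has been done once."""
    op = expr[0]
    if op == 'lit':
        v = expr[1]
        return lambda env: v
    if op == 'var':
        name = expr[1]
        return lambda env: env[name]
    if op == 'add':
        f, g = specialise(expr[1]), specialise(expr[2])
        return lambda env: f(env) + g(env)
    if op == 'mul':
        f, g = specialise(expr[1]), specialise(expr[2])
        return lambda env: f(env) * g(env)
    raise ValueError(op)

prog = ('add', ('var', 'x'), ('mul', ('lit', 3), ('var', 'y')))
compiled = specialise(prog)        # the "residual program"
assert compiled({'x': 1, 'y': 2}) == interp(prog, {'x': 1, 'y': 2}) == 7
```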