
    Two-player Domino games

    We introduce a two-player game played on an infinite grid, initially empty, where each player in turn chooses a vertex and colours it. The first player aims to create some pattern from a target set, while the second player aims to prevent it. We study the problem of deciding which player wins, and prove that it is undecidable. We also consider a variant where the turn order is not alternating but is given by a balanced word, and we characterise the decidable and undecidable cases.
    Comment: Submitted to Computability in Europe 202
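    The balanced-word variant relies on a standard notion from combinatorics on words: a word is balanced if, for every letter and every length n, any two factors (contiguous substrings) of length n contain that letter a number of times differing by at most one. The sketch below is not from the paper; it merely illustrates the definition by checking a finite prefix (the turn sequences in the paper are infinite):

    ```python
    def is_balanced(word: str) -> bool:
        """Check the balance property on a finite word: for each letter c and
        each factor length n, the count of c across all length-n factors may
        vary by at most 1."""
        letters = set(word)
        for n in range(1, len(word) + 1):
            factors = [word[i:i + n] for i in range(len(word) - n + 1)]
            for c in letters:
                counts = [f.count(c) for f in factors]
                if max(counts) - min(counts) > 1:
                    return False
        return True

    # A prefix of the Fibonacci word is balanced; "aabb" is not,
    # since its length-2 factors "aa" and "bb" differ by 2 in 'a'-count.
    print(is_balanced("abaababa"))  # True
    print(is_balanced("aabb"))      # False
    ```
    
    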

    Behavioural Economics: Classical and Modern

    In this paper, the origins and development of behavioural economics, beginning with the pioneering works of Herbert Simon (1953) and Ward Edwards (1954), are traced, described and (critically) discussed in some detail. Two kinds of behavioural economics – classical and modern – are attributed, respectively, to the two pioneers. The mathematical foundations of classical behavioural economics are identified, largely, with the theory of computation and computational complexity; the corresponding mathematical basis for modern behavioural economics is, on the other hand, claimed to be a notion of subjective probability (at least at its origins in the works of Ward Edwards). The economic theories of behaviour, challenging various aspects of 'orthodox' theory, were decisively influenced by these two mathematical underpinnings of the two theories.
    Keywords: Classical Behavioural Economics, Modern Behavioural Economics, Subjective Probability, Model of Computation, Computational Complexity, Subjective Expected Utility

    In the beginning was game semantics

    This article presents an overview of computability logic -- the game-semantically constructed logic of interactive computational tasks and resources. There is only one non-overview, technical section in it, devoted to a proof of the soundness of affine logic with respect to the semantics of computability logic. A comprehensive online source on the subject can be found at http://www.cis.upenn.edu/~giorgi/cl.html
    Comment: To appear in: "Games: Unifying Logic, Language and Philosophy". O. Majer, A.-V. Pietarinen and T. Tulenheimo, eds. Springer Verlag, Berlin

    Machine intelligence: a chimera

    Get PDF

    Aristotle, Leopardi, Severino: the Endless Game of Nothingness

    As Aristotle knew all too well, not-being is an equivocal concept. This indeterminate character of nothingness turns out to be the main enemy of the principle of non-contradiction, especially owing to its affinity to «chimeras» and poetic metaphors (Leopardi's «things that are not things»). There is an age-old philosophical debate about nothingness, at times to defend the reasons for the eternity of being, at others to disprove them. In particular, the work of Emanuele Severino throws some light on the dispute between two giants of thought, Aristotle and Leopardi, with whom the neo-Parmenidean philosopher debated from an impartial position. The article provides food for thought in support of the indefinite and disturbing character (positive, yet apocalyptic) of nothingness. What emerges is the ability of not-being to resist both the univocal idea of nihil absolutum and the closure of a game in which the destiny of beings and the very sense of time remain at stake.

    Global computing: Learning the lessons from initiatives abroad


    The Parallelism Tradeoff: Limitations of Log-Precision Transformers

    Despite their omnipresence in modern NLP, characterizing the computational power of transformer neural nets remains an interesting open question. We prove that transformers whose arithmetic precision is logarithmic in the number of input tokens (and whose feedforward nets are computable using space linear in their input) can be simulated by constant-depth logspace-uniform threshold circuits. This provides insight into the power of transformers using known results in complexity theory. For example, if $\mathsf{L} \neq \mathsf{P}$ (i.e., not all poly-time problems can be solved using logarithmic space), then transformers cannot even accurately solve linear equalities or check membership in an arbitrary context-free grammar with empty productions. Our result intuitively emerges from the transformer architecture's high parallelizability. We thus speculatively introduce the idea of a fundamental parallelism tradeoff: any model architecture as parallelizable as the transformer will obey limitations similar to it. Since parallelism is key to training models at massive scale, this suggests a potential inherent weakness of the scaling paradigm.
    Comment: Accepted at TACL. Formerly entitled "Log-Precision Transformers are Constant-Depth Threshold Circuits". Updated with minor corrections in Section 2 (Implications) on March 6, 2023. Updated with minor edits to the proof of Lemma 3 on April 26, 202