    On the Learnability of Programming Language Semantics

    Game semantics is a powerful method of semantic analysis for programming languages. It gives mathematically accurate ("fully abstract") models for a wide variety of programming languages. Game-semantic models are combinatorial characterisations of all possible interactions between a term and its syntactic context. Because such interactions can be concretely represented as sets of sequences, it is natural to ask whether they can be learned from examples. Concretely, we use long short-term memory neural networks (LSTMs), a technique that has proved effective in learning natural languages for automatic translation and text synthesis, to learn game-semantic models of sequential and concurrent versions of Idealised Algol (IA), which are algorithmically complex yet can be concisely described. We measure how accurate the learned models are as a function of the degree of the term and the number of free variables involved. Finally, we show how to use the learned model to perform latent semantic analysis between concurrent and sequential Idealised Algol. (In Proceedings ICE 2017, arXiv:1711.1070)
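
    As a concrete illustration of the learning setup described in the abstract, here is a minimal sketch (not the authors' implementation) of a next-move LSTM language model over tokenized plays, written in PyTorch. The vocabulary size, the tokenization of moves, and the random stand-in data are all hypothetical; real training data would come from enumerating plays of the game-semantic model of an IA term.

        # Minimal sketch, assuming plays are encoded as integer token
        # sequences; vocabulary, dimensions, and data are hypothetical.
        import torch
        import torch.nn as nn

        VOCAB_SIZE = 32    # hypothetical number of distinct move tokens
        EMBED_DIM = 64
        HIDDEN_DIM = 128

        class PlayLSTM(nn.Module):
            """Predicts the next move token given a prefix of a play."""
            def __init__(self):
                super().__init__()
                self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
                self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
                self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

            def forward(self, tokens):                # tokens: (batch, seq_len)
                h, _ = self.lstm(self.embed(tokens))  # (batch, seq_len, HIDDEN_DIM)
                return self.out(h)                    # logits over next tokens

        model = PlayLSTM()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        # Toy stand-in for a batch of 8 tokenized plays of length 20; each
        # position is trained to predict the following move token.
        plays = torch.randint(0, VOCAB_SIZE, (8, 20))
        inputs, targets = plays[:, :-1], plays[:, 1:]

        for step in range(100):
            logits = model(inputs)
            loss = loss_fn(logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    Trained this way, the model assigns probabilities to candidate next moves, so complete plays can be scored or sampled; this is the sense in which a sequence model can approximate the set of interactions denoted by a term.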

    A parameterized halting problem, the linear time hierarchy, and the MRDP theorem

    The complexity of the parameterized halting problem for nondeterministic Turing machines, p-Halt, is known to be related to the question of whether there are logics capturing various complexity classes [10]. Among other results, if p-Halt is in para-AC0, the parameterized version of the circuit complexity class AC0, then AC0, or equivalently (+, ×)-invariant FO, has a logic. Although it is widely believed that p-Halt ∉ para-AC0, we show that the problem is hard to settle by establishing a connection to the question in classical complexity of whether NE ⊈ LINH, where LINH denotes the linear time hierarchy. On the other hand, we suggest an approach toward proving NE ⊈ LINH using bounded arithmetic. More specifically, we demonstrate that if the much-celebrated MRDP (Matiyasevich-Robinson-Davis-Putnam) theorem can be proved in a certain fragment of arithmetic, then NE ⊈ LINH. Interestingly, central to this result is a para-AC0 lower bound for the parameterized model-checking problem for FO on arithmetical structures.
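
    Written schematically, the two implications claimed in the abstract are as follows; this is only a restatement of the claims above, with T standing in for the unspecified fragment of arithmetic:

        \[
          \text{p-Halt} \in \text{para-}\mathrm{AC}^0
            \;\Longrightarrow\; \mathrm{AC}^0 \text{ has a logic},
          \qquad
          T \vdash \mathrm{MRDP}
            \;\Longrightarrow\; \mathrm{NE} \not\subseteq \mathrm{LINH}.
        \]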