
    Fully Abstract Translations Between Functional Languages

    We examine the problem of finding fully abstract translations between programming languages, i.e., translations that preserve code equivalence and nonequivalence. We present three examples of fully abstract translations: one from call-by-value to lazy PCF, one from call-by-name to call-by-value PCF, and one from lazy to call-by-value PCF. The translations yield upper and lower bounds on decision procedures for proving equivalences of code. Finally, we define a notion of functional translation that captures the essence of the proofs of full abstraction, and we show that some languages cannot be translated into others.
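
    To give a feel for the flavour of such translations, here is a minimal sketch of the classic thunking embedding of call-by-name into call-by-value (our own illustration, not the paper's construction): every argument is delayed as an explicit thunk and forced at each use. Haskell is itself non-strict, so the sketch only shows the shape of the translation; in a strict host language the thunks would genuinely delay evaluation.

        -- Thunking sketch: delay every argument, force at each use.
        type Thunk a = () -> a

        delay :: a -> Thunk a
        delay x = \() -> x

        force :: Thunk a -> a
        force t = t ()

        -- A call-by-name K combinator: its second argument arrives as a
        -- thunk and is never forced, so a divergent argument is harmless.
        kCBN :: a -> Thunk b -> a
        kCBN x _ = x

        main :: IO ()
        main = print (kCBN (42 :: Int) (delay (let loop = loop in loop)))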

    Adequacy of compositional translations for observational semantics

    We investigate methods and tools for analysing translations between programming languages with respect to observational semantics. The behaviour of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
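
    In schematic notation (symbols ours, not taken verbatim from the paper), the correctness condition looks roughly as follows, where C ranges over contexts and the down arrows denote may- and must-convergence:

        % Contextual equivalence from may- (\downarrow) and
        % must- (\Downarrow) convergence:
        \[
          p \simeq_S q \;:\iff\; \forall C.\;
            \bigl(C[p]\downarrow \Leftrightarrow C[q]\downarrow\bigr) \wedge
            \bigl(C[p]\Downarrow \Leftrightarrow C[q]\Downarrow\bigr)
        \]
        % Adequacy of a translation \tau: target equivalence reflects back.
        \[
          \tau(p) \simeq_T \tau(q) \;\Longrightarrow\; p \simeq_S q
        \]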

    On correctness of buffer implementations in a concurrent lambda calculus with futures

    Motivated by the question of correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and handled futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence. This also shows correctness with respect to program semantics, since the encodings are adequate translations with respect to contextual semantics. While these translations encode blocking into queuing and waiting, we also provide an adequate encoding of buffers in a calculus without handles, which is more low-level and uses busy-waiting instead of blocking. Furthermore, we demonstrate that our correctness concept applies to the whole compilation process from high-level to low-level concurrent languages, by translating the calculus with buffers, handled futures and data constructors into a small core language without those constructs.
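
    As a loose analogue of the low-level, busy-waiting encoding mentioned above (our own Haskell sketch over IORef, not the paper's calculus), a one-place buffer can poll instead of block:

        import Data.IORef (IORef, newIORef, atomicModifyIORef')
        import Control.Concurrent (forkIO, yield, threadDelay)

        -- One-place buffer: Nothing = empty, Just x = full.
        type Buffer a = IORef (Maybe a)

        newBuffer :: IO (Buffer a)
        newBuffer = newIORef Nothing

        -- put spins (busy-waits) until the cell is empty.
        putBuf :: Buffer a -> a -> IO ()
        putBuf buf x = do
          ok <- atomicModifyIORef' buf $ \cell -> case cell of
                  Nothing -> (Just x, True)
                  full    -> (full, False)
          if ok then pure () else yield >> putBuf buf x

        -- take spins until the cell is full.
        takeBuf :: Buffer a -> IO a
        takeBuf buf = do
          r <- atomicModifyIORef' buf $ \cell -> case cell of
                 Just x -> (Nothing, Just x)
                 empty  -> (empty, Nothing)
          case r of
            Just x  -> pure x
            Nothing -> yield >> takeBuf buf

        main :: IO ()
        main = do
          buf <- newBuffer
          _ <- forkIO (threadDelay 1000 >> putBuf buf "hello")
          takeBuf buf >>= putStrLn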

    Modular, Fully-abstract Compilation by Approximate Back-translation

    A compiler is fully-abstract if the compilation from source language programs to target language programs reflects and preserves behavioural equivalence. Such compilers have important security benefits, as they limit the power of an attacker interacting with the program in the target language to that of an attacker interacting with the program in the source language. Proving compiler full-abstraction is, however, rather complicated. A common proof technique is based on the back-translation of target-level program contexts to behaviourally-equivalent source-level contexts. However, constructing such a back-translation is problematic when the source language is not strong enough to embed an encoding of the target language. For instance, when compiling from STLC to ULC, the lack of recursive types in the former prevents such a back-translation. We propose a general and elegant solution for this problem. The key insight is that it suffices to construct an approximate back-translation. The approximation is only accurate up to a certain number of steps and conservative beyond that, in the sense that the context generated by the back-translation may diverge when the original would not, but not vice versa. Based on this insight, we describe a general technique for proving compiler full-abstraction and demonstrate it on a compiler from STLC to ULC. The proof uses asymmetric cross-language logical relations and makes innovative use of step-indexing to express the relation between a context and its approximate back-translation. The proof extends easily to common compiler patterns such as modular compilation, and it is, to the best of our knowledge, the first compiler full-abstraction proof to have been fully mechanised in Coq. We believe this proof technique can scale to challenging settings and enable simpler, more scalable proofs of compiler full-abstraction.
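
    The erasure skeleton of an STLC-to-ULC compiler is easy to sketch (datatypes ours; note that erasure alone is generally not fully abstract, since a compiler must additionally protect compiled terms against ill-typed target contexts, e.g. with dynamic wrappers):

        -- Simply typed source terms (types annotate lambdas);
        -- de Bruijn indices for variables.
        data Ty  = TUnit | TArr Ty Ty
          deriving Show
        data Src = SVar Int | SLam Ty Src | SApp Src Src | SUnit
          deriving Show

        -- Untyped target terms.
        data Tgt = TVar Int | TLam Tgt | TApp Tgt Tgt
          deriving Show

        -- Compilation: forget the type annotations; encode unit as \x.x.
        compile :: Src -> Tgt
        compile (SVar i)   = TVar i
        compile (SLam _ b) = TLam (compile b)
        compile (SApp f a) = TApp (compile f) (compile a)
        compile SUnit      = TLam (TVar 0)

        main :: IO ()
        main = print (compile (SLam TUnit (SVar 0)))  -- identity at unit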

    Lexical typology through similarity semantics: Toward a semantic map of motion verbs

    This paper discusses a multidimensional probabilistic semantic map of lexical motion verb stems based on data collected from parallel texts (viz. translations of the Gospel according to Mark) for 100 languages from all continents. The crosslinguistic diversity of lexical semantics in motion verbs is illustrated in detail for the domain of 'go', 'come', and 'arrive' type contexts. It is argued that the theoretical bases underlying probabilistic semantic maps from exemplar data are the isomorphism hypothesis (given any two meanings and their corresponding forms in any particular language, more similar meanings are more likely to be expressed by the same form in any language), similarity semantics (similarity is more basic than identity), and exemplar semantics (exemplar meaning is more fundamental than abstract concepts).

    PLuTO: MT for online patent translation

    PLuTO – Patent Language Translation Online – is a partially EU-funded commercialization project that specializes in the automatic retrieval and translation of patent documents. At the core of the PLuTO framework is a machine translation (MT) engine through which web-based translation services are offered. The fully integrated PLuTO architecture includes a translation engine coupling MT with translation memories (TM), and a patent search and retrieval engine. In this paper, we first describe the motivating factors behind the provision of such a service. Following this, we give an overview of the PLuTO framework as a whole, with particular emphasis on the MT components, and provide a real world use case scenario in which PLuTO MT services are exploited.

    From ACT-ONE to Miranda, a Translation Experiment

    It is now almost universally acknowledged that the data language ACT-ONE associated with the formal description technique LOTOS is inappropriate for the purpose of OSI formal description. In response to this, the LOTOS restandardisation activity plans to replace ACT-ONE with a functional language, so compatibility between ACT-ONE and the replacement data language becomes an issue. We therefore present an experimental investigation of backward compatibility between ACT-ONE and the new LOTOS data language. Specifically, we investigate translating ACT-ONE data types into the functional language Miranda. Miranda has been chosen because it is a widely used functional programming language and it is close in form to the anticipated new data language. This work serves as a "verification of concept" for translating ACT-ONE to the E-LOTOS data language. It identifies the bounds on embedding ACT-ONE in a functional data language; in particular, it indicates what can and cannot be translated. In addition, the paper reveals pertinent issues which can inform the E-LOTOS work, for example: which constructs are needed in E-LOTOS in order to support the class of data type specifications typically made in the LOTOS setting? We conclude with a number of specific recommendations for the E-LOTOS data language.
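
    To give a feel for the kind of translation involved (a toy example of ours, not taken from the paper, with the ACT-ONE syntax only approximated), an algebraic specification of naturals with addition maps onto an algebraic data type whose equations become function clauses. Haskell stands in for Miranda here, as the two are close in form.

        -- Sketch of a source specification in ACT-ONE style:
        --
        --   type Nat is
        --     sorts nat
        --     opns  zero : -> nat
        --           succ : nat -> nat
        --           plus : nat, nat -> nat
        --     eqns  plus(zero, n)    = n
        --           plus(succ(m), n) = succ(plus(m, n))
        --   endtype
        --
        -- and its rendering as a data type plus equations-as-clauses:

        data Nat = Zero | Succ Nat
          deriving Show

        plus :: Nat -> Nat -> Nat
        plus Zero     n = n
        plus (Succ m) n = Succ (plus m n)

        main :: IO ()
        main = print (plus (Succ Zero) (Succ (Succ Zero)))  -- 1 + 2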

    Maximal Sharing in the Lambda Calculus with letrec

    Increasing sharing in programs is desirable to compactify the code, and to avoid duplication of reduction work at run-time, thereby speeding up execution. We show how a maximal degree of sharing can be obtained for programs expressed as terms in the lambda calculus with letrec. We introduce a notion of 'maximal compactness' for lambda-letrec-terms among all terms with the same infinite unfolding. Instead of being defined purely syntactically, this notion is based on a graph semantics: lambda-letrec-terms are interpreted as first-order term graphs so that unfolding equivalence between terms is preserved and reflected through bisimilarity of the term graph interpretations. Compactness of the term graphs can then be compared via functional bisimulation. We describe practical and efficient methods for the following two problems: transforming a lambda-letrec-term into a maximally compact form; and deciding whether two lambda-letrec-terms are unfolding-equivalent. The transformation of a lambda-letrec-term L into maximally compact form L_0 proceeds in three steps: (i) translate L into its term graph G = [[L]]; (ii) compute the maximally shared form of G as its bisimulation collapse G_0; (iii) read back a lambda-letrec-term L_0 from the term graph G_0 with the property [[L_0]] = G_0. This guarantees that L_0 and L have the same unfolding, and that L_0 exhibits maximal sharing. The procedure for deciding whether two given lambda-letrec-terms L_1 and L_2 are unfolding-equivalent computes their term graph interpretations [[L_1]] and [[L_2]], and checks whether these term graphs are bisimilar. For illustration, we also provide a readily usable implementation.
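
    To make unfolding equivalence concrete, here is a small sketch (term representation ours): two lambda-letrec terms that differ syntactically in how much of the fixed-point body is shared, yet have the same infinite unfolding. The paper's procedure would identify them by translating both to term graphs and checking bisimilarity, and would return the more compact one as the collapse read back into a term.

        data Term = Var String
                  | Lam String Term
                  | App Term Term
                  | Letrec [(String, Term)] Term
                  deriving (Eq, Show)

        -- letrec fix = \g. g (fix g) in fix        (compact form)
        -- letrec fix = \g. g (fix g) in \g. g (fix g)  (body inlined once)
        compact, unfolded :: Term
        compact  = Letrec [("fix", body)] (Var "fix")
        unfolded = Letrec [("fix", body)] body

        body :: Term
        body = Lam "g" (App (Var "g") (App (Var "fix") (Var "g")))

        main :: IO ()
        main = print (compact == unfolded)
        -- False syntactically, yet the two terms are unfolding-equivalent.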

    Analysing and Comparing Encodability Criteria

    Encodings, or proofs of their absence, are the main way to compare process calculi. To analyse the quality of encodings and to rule out trivial or meaningless ones, they are augmented with quality criteria. Many different criteria, and many variants of the same criteria, exist for reasoning in different settings, which leads to incomparable results. Moreover, it is not always clear whether the criteria used to obtain a result in a particular setting do indeed fit that setting. We show how to formally reason about and compare encodability criteria by mapping them onto requirements on a relation between source and target terms that is induced by the encoding function. In particular, we analyse the common criteria of full abstraction, operational correspondence, divergence reflection, success sensitiveness, and respect of barbs; for example, we analyse the exact nature of the simulation relation (coupled simulation versus bisimulation) that is induced by different variants of operational correspondence. In this way we reduce the problem of analysing or comparing encodability criteria to the better understood problem of comparing relations on processes.
    Comment: In Proceedings EXPRESS/SOS 2015, arXiv:1508.06347. The Isabelle/HOL source files, and a full proof document, are available in the Archive of Formal Proofs, at http://afp.sourceforge.net/entries/Encodability_Process_Calculi.shtm
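
    Schematically, two of the criteria analysed can be stated as follows (notation ours and simplified; [[.]] is the encoding, the tildes are behavioural equivalences on source and target, the long arrows weak reduction, and the asymp symbol the target relation up to which steps are matched):

        % Full abstraction: the encoding preserves and reflects equivalence.
        \[
          S_1 \sim_S S_2 \;\iff\;
          \llbracket S_1 \rrbracket \sim_T \llbracket S_2 \rrbracket
        \]
        % Operational correspondence, in a Gorla-style formulation:
        \[
          \text{(completeness)}\quad
          S \Longrightarrow S' \;\implies\;
          \llbracket S \rrbracket \Longrightarrow\,\asymp\,\llbracket S' \rrbracket
        \]
        \[
          \text{(soundness)}\quad
          \llbracket S \rrbracket \Longrightarrow T \;\implies\;
          \exists S'.\; S \Longrightarrow S' \;\wedge\;
          T \Longrightarrow\,\asymp\,\llbracket S' \rrbracket
        \]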