131 research outputs found

    Achieving interoperability through semantics-based technologies: the instant messaging case

    The success of pervasive computing depends on the ability to compose a multitude of networked applications dynamically in order to achieve user goals. However, applications from different providers are not able to interoperate due to incompatible interaction protocols or disparate data models. Instant messaging is a representative example of the current situation, where various competing applications keep emerging. To enforce interoperability at runtime and in a non-intrusive manner, mediators are used to perform the necessary translations and coordination between the heterogeneous applications. Nevertheless, the design of mediators requires considerable knowledge about each application as well as a substantial development effort. In this paper we present an approach based on ontology reasoning and model checking in order to generate correct-by-construction mediators automatically. We demonstrate the feasibility of our approach through a prototype tool and show that it synthesises mediators that achieve efficient interoperation of instant messaging applications.
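
    A minimal sketch of the kind of translation such a mediator performs; all message shapes and field names below are hypothetical, standing in for the protocol-specific data models that a synthesised mediator would reconcile:

    ```typescript
    // Hypothetical data models of two incompatible IM applications.
    interface AppAMessage { from: string; to: string; body: string; }
    interface AppBMessage { sender: string; recipient: string; text: string; ts: number; }

    // The mediator translates in both directions, so neither
    // application has to be modified (non-intrusive interoperation).
    const aToB = (m: AppAMessage): AppBMessage => ({
      sender: m.from,
      recipient: m.to,
      text: m.body,
      ts: Date.now(), // field missing from A's model, supplied by the mediator
    });

    const bToA = (m: AppBMessage): AppAMessage => ({
      from: m.sender,
      to: m.recipient,
      body: m.text, // B's timestamp has no counterpart in A and is dropped
    });
    ```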

    Invited Talk: Towards a Principled Multi-Language Infrastructure

    Sun's Java architecture introduced a safe virtual machine (VM) in which an ensemble of software components developed independently could smoothly interoperate. The goal of Microsoft's Common Language Runtime (CLR) is to generalize this approach and allow components in many source languages to interoperate safely. CLR supports flexible interoperation by compiling various source languages into a common intermediate language and by using a unified type system. However, the type system in CLR (and the Java VM) enforces only conventional type safety in an object-oriented system. Therefore, higher-level specifications (e.g., resource bounds, generalized access control, formal software protocols) cannot be enforced. Because conventional type systems are too inflexible for real applications, developers often bypass the type system, producing code that steps outside the managed part of the VM; such components cannot be verified. At Yale we have been developing typed common intermediate languages (named FLINT) that can safely support not only the standard object-oriented model, but also higher-order generic (polymorphic) programming and Java-style reflection (introspection). Unlike CLR, our type system is independent of any particular programming model, yet it is capable of expressing all valid propositions and proofs in higher-order predicate logic (so it can be used to capture and verify advanced program properties). The rich type system of FLINT makes it possible to typecheck both compiler intermediate code and low-level machine code; this allows typechecking to take place at any phase of compilation, even after optimizations and register allocation. It also leads to a smaller and more extensible VM because low-level native routines that would otherwise be in the VM can now be verified and moved into a certified library. This talk describes our vision of the FLINT system, outlines our approach to its design, and surveys the technologies that can be brought to bear on its implementation.
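
    A rough illustration of why a typed intermediate language lets typechecking run at any compilation phase: the toy IR below (hypothetical, and vastly simpler than FLINT) can be re-checked after an optimisation pass, so a buggy optimiser is caught rather than silently trusted.

    ```typescript
    // A tiny typed intermediate representation (nothing like full FLINT).
    type Ty = "int" | "bool";
    type IR =
      | { tag: "lit"; value: number }
      | { tag: "add"; left: IR; right: IR }
      | { tag: "lt"; left: IR; right: IR };

    // Typechecking is cheap enough to repeat at every compilation phase.
    function check(e: IR): Ty {
      switch (e.tag) {
        case "lit": return "int";
        case "add":
          if (check(e.left) !== "int" || check(e.right) !== "int")
            throw new Error("add expects ints");
          return "int";
        case "lt":
          if (check(e.left) !== "int" || check(e.right) !== "int")
            throw new Error("lt expects ints");
          return "bool";
      }
    }

    // A constant-folding pass; re-checking its output catches optimiser bugs.
    function fold(e: IR): IR {
      if (e.tag === "add" && e.left.tag === "lit" && e.right.tag === "lit")
        return { tag: "lit", value: e.left.value + e.right.value };
      return e;
    }

    const prog: IR = {
      tag: "lt",
      left: { tag: "add", left: { tag: "lit", value: 1 }, right: { tag: "lit", value: 2 } },
      right: { tag: "lit", value: 4 },
    };
    check(prog);       // checked before optimisation...
    check(fold(prog)); // ...and again after, as the talk advocates
    ```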

    Transitioning from structural to nominal code with efficient gradual typing

    Gradual typing is a principled means for mixing typed and untyped code. But typed and untyped code often exhibit different programming patterns. There is already substantial research investigating gradually giving types to code exhibiting typical untyped patterns, and some research investigating gradually removing types from code exhibiting typical typed patterns. This paper investigates how to extend these established gradual-typing concepts to give formal guarantees not only about how to change types as code evolves but also about how to change such programming patterns as well. In particular, we explore mixing untyped "structural" code with typed "nominal" code in an object-oriented language. But whereas previous work only allowed "nominal" objects to be treated as "structural" objects, we also allow "structural" objects to dynamically acquire certain nominal types, namely interfaces. We present a calculus that supports such "cross-paradigm" code migration and interoperation in a manner satisfying both the static and dynamic gradual guarantees, and demonstrate that the calculus can be implemented efficiently.
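
    A loose TypeScript analogy of a structural object acquiring a nominal type (TypeScript's own typing is structural, so the brand below merely simulates a nominal interface; this is an illustration, not the paper's calculus):

    ```typescript
    // Nominal interface simulated with a unique-symbol brand.
    declare const greeterBrand: unique symbol;
    interface Greeter { greet(): string; [greeterBrand]: true; }

    // An untyped/structural object: the right shape, but no nominal type yet.
    const duck = { greet: () => "hello" };

    // A runtime check lets a structural object *acquire* the nominal
    // interface, mirroring the dynamic casts of the calculus.
    function asGreeter(x: unknown): Greeter {
      if (typeof x === "object" && x !== null &&
          typeof (x as { greet?: unknown }).greet === "function")
        return x as Greeter;
      throw new TypeError("does not satisfy Greeter");
    }

    const g: Greeter = asGreeter(duck); // structural code crossing into nominal code
    console.log(g.greet());
    ```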

    Polymorphic Type Inference for the JNI

    We present a multi-lingual type inference system for checking type safety of programs that use the Java Native Interface (JNI). The JNI uses specially-formatted strings to represent class and field names as well as method signatures, and so our type system tracks the flow of string constants through the program. Our system embeds string variables in types, and as those variables are resolved to string constants during inference they are replaced with the structured types the constants represent. This restricted form of dependent types allows us to directly assign type signatures to each of the more than 200 functions in the JNI. Moreover, it allows us to infer types for user-defined functions that are parameterized by Java type strings, which we have found to be common practice. Our inference system allows such functions to be treated polymorphically by using instantiation constraints, solved with semi-unification, at function calls. Finally, we have implemented our system and applied it to a small set of benchmarks. Although semi-unification is undecidable, we found our system to be scalable and effective in practice. We discovered 155 errors and 36 cases of suspicious programming practices in our benchmarks.
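
    The specially-formatted strings in question are standard JNI type descriptors, e.g. "(Ljava/lang/String;I)V" for a method taking a String and an int and returning void. A small sketch of resolving such a constant into a structured type (the parser below is illustrative, not the paper's inference system):

    ```typescript
    // Parse a JNI method descriptor, e.g. "(Ljava/lang/String;I)V",
    // into a structured type, as the inference does for string constants.
    interface MethodType { params: string[]; ret: string; }

    function parseDescriptor(desc: string): MethodType {
      const prim: Record<string, string> = {
        Z: "boolean", B: "byte", C: "char", S: "short",
        I: "int", J: "long", F: "float", D: "double", V: "void",
      };
      let i = 1; // skip '('
      const one = (): string => {
        let dims = 0;
        while (desc[i] === "[") { dims++; i++; } // array dimensions
        let base: string;
        if (desc[i] === "L") {                   // object type: L<class>;
          const end = desc.indexOf(";", i);
          base = desc.slice(i + 1, end).replace(/\//g, ".");
          i = end + 1;
        } else {                                 // primitive type
          base = prim[desc[i]];
          i++;
        }
        return base + "[]".repeat(dims);
      };
      const params: string[] = [];
      while (desc[i] !== ")") params.push(one());
      i++; // skip ')'
      return { params, ret: one() };
    }

    // { params: ["java.lang.String", "int"], ret: "void" }
    console.log(parseDescriptor("(Ljava/lang/String;I)V"));
    ```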

    On the Multi-Language Construction

    Modern software is no longer developed in a single programming language. Instead, programmers tend to exploit cross-language interoperability mechanisms to combine code stemming from different languages, thus yielding fully-fledged multi-language programs. While this approach enables developers to benefit from the strengths of each individual language, it also complicates the semantics of such programs. Indeed, the resulting multi-language does not obey the semantics of any of the combined languages. In this paper, we broaden the boundary-functions approach à la Matthews and Findler to propose an algebraic framework that provides a constructive mathematical notion of a multi-language able to determine its semantics. The aim of this work is to overcome the lack of a formal method (resp., model) to design (resp., represent) a multi-language, regardless of the inherent nature of the underlying languages. We show that our construction ensures the uniqueness of the semantic function (i.e., the multi-language semantics induced by the combined languages) by proving the initiality of the term model (i.e., the abstract syntax of the multi-language) in its category.
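
    A minimal sketch of the boundary-function idea à la Matthews and Findler that this framework generalises; the two toy value domains and the conversions below are hypothetical:

    ```typescript
    // Two toy "languages": a typed one with plain numbers and an untyped
    // one whose values carry runtime tags (a crude stand-in for the ML
    // and Scheme of Matthews and Findler's setting).
    type TypedVal = number;
    type UntypedVal = { tag: "num"; n: number } | { tag: "str"; s: string };

    // Boundary functions mediate every cross-language value flow.
    const typedToUntyped = (v: TypedVal): UntypedVal => ({ tag: "num", n: v });

    const untypedToTyped = (v: UntypedVal): TypedVal => {
      if (v.tag !== "num") throw new TypeError("boundary violation: expected num");
      return v.n;
    };

    // A typed function used from the untyped side, wrapped at the boundary.
    const inc = (x: TypedVal): TypedVal => x + 1;
    const incUntyped = (v: UntypedVal): UntypedVal =>
      typedToUntyped(inc(untypedToTyped(v)));
    ```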

    Verified Compilers for a Multi-Language World

    Though there has been remarkable progress on formally verified compilers in recent years, most of these compilers suffer from a serious limitation: they are proved correct under the assumption that they will only be used to compile whole programs. This is an unrealistic assumption, since most software systems today are comprised of components written in different languages - both typed and untyped - compiled by different compilers to a common target, as well as low-level libraries that may be handwritten in the target language. We are pursuing a new methodology for building verified compilers for today's world of multi-language software. The project has two central themes, both of which stem from a view of compiler correctness as a language interoperability problem. First, to specify correctness of component compilation, we require that if a source component s compiles to target component t, then t linked with some arbitrary target code t' should behave the same as s interoperating with t'. The latter demands a formal semantics of interoperability between the source and target languages. Second, to enable safe interoperability between components compiled from languages as different as ML, Rust, Python, and C, we plan to design a gradually type-safe target language based on LLVM that supports safe interoperability between more precisely typed, less precisely typed, and type-unsafe components. Our approach opens up a new avenue for exploring sensible language interoperability while also tackling compiler correctness.
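
    Stated as a formula (a hedged restatement of the sentence above, with link and interop left abstract and ≈ denoting behavioral equivalence):

    ```latex
    % If source component s compiles to target component t, then for every
    % target component t', linking t with t' behaves the same as s
    % interoperating with t' under the formal interop semantics.
    \[
      \mathit{compile}(s) = t \;\Longrightarrow\;
      \forall t'.\;\; \mathit{link}(t, t') \,\approx\, \mathit{interop}(s, t')
    \]
    ```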

    An Emergent Micro-Services Approach to Digital Curation Infrastructure


    Proceedings of the 3rd Workshop on Domain-Specific Language Design and Implementation (DSLDI 2015)

    The goal of the DSLDI workshop is to bring together researchers and practitioners interested in sharing ideas on how DSLs should be designed, implemented, supported by tools, and applied in realistic application contexts. We are interested both in discovering how established domains such as graph processing or machine learning can best be supported by DSLs and in exploring new domains that could be targeted by DSLs. More generally, we are interested in building a community that can drive forward the development of modern DSLs. These informal post-proceedings contain the submitted talk abstracts of the 3rd DSLDI workshop (DSLDI'15) and a summary of the panel discussion on Language Composition.