
    Coordination of Dynamic Software Components with JavaBIP

    JavaBIP allows the coordination of software components by clearly separating the functional and coordination aspects of the system behavior. JavaBIP implements the principles of the BIP component framework, rooted in rigorous operational semantics. Recent work on both BIP and JavaBIP allows the coordination of static components defined prior to system deployment, i.e., the architecture of the coordinated system is fixed in terms of its component instances. Nevertheless, modern systems often make use of components that can register and deregister dynamically during system execution. In this paper, we present an extension of JavaBIP that can handle this type of dynamicity. We use first-order interaction logic to define synchronization constraints based on component types. Additionally, we use directed graphs with edge coloring to model dependencies among components that determine the validity of an online system. We present the software architecture of our implementation and provide and discuss performance evaluation results.
    Comment: Technical report that accompanies the paper accepted at the 14th International Conference on Formal Aspects of Component Software.
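    The abstract names two mechanisms: first-order interaction logic over component types, and directed graphs with colored edges that track dependencies among registered components. The Python sketch below is not JavaBIP's API; it only illustrates, under assumed semantics, how an edge-colored dependency graph over component types could decide whether the currently registered instances form a valid online system. The type names, color labels, and the `valid_online_system` helper are all hypothetical.

```python
from collections import defaultdict

# Hypothetical dependency declarations: component type -> list of
# (required type, edge color) pairs.  The colors ("sync", "spawn")
# are illustrative labels, not JavaBIP terminology.
DEPENDENCIES = {
    "Client": [("Server", "sync")],
    "Server": [("Database", "spawn")],
    "Database": [],
}

def valid_online_system(registered):
    """Return True if every registered instance can satisfy each of its
    typed dependencies with some other registered instance."""
    by_type = defaultdict(list)
    for instance, ctype in registered:
        by_type[ctype].append(instance)
    for instance, ctype in registered:
        for required_type, _color in DEPENDENCIES.get(ctype, []):
            providers = [p for p in by_type[required_type] if p != instance]
            if not providers:
                return False  # a dependency edge has no possible target
    return True

# A Client that registers before any Server is invalid; adding the
# Server and its Database makes the online system valid again.
print(valid_online_system([("c1", "Client")]))                    # False
print(valid_online_system([("c1", "Client"), ("s1", "Server"),
                           ("d1", "Database")]))                  # True
```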

    Complexity analysis of reactive graph grammars

    The aim of this paper is to present a way to calculate a complexity measurement of graph grammar specifications of reactive systems. The basic operation that describes the behavior of a graph grammar is a rule application. Therefore, this operation will be used to characterize the tasks to be performed within a system. The complexity measurement defined here will give us the minimum number of steps that must be present in a computation that performs a desired task.
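    Reading the measure as the minimum number of rule applications needed to reach a state that accomplishes the task, a shortest-path search over reachable states gives that number directly. The sketch below is a generic breadth-first search over an abstract state space, not the paper's graph-grammar formalism; the `rules` functions and the integer states are placeholder stand-ins for rule applications and graphs.

```python
from collections import deque

def min_rule_applications(start, is_goal, rules):
    """Breadth-first search counting the minimum number of rule
    applications needed to reach a state satisfying is_goal.
    Each rule maps a state to zero or more successor states."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        state, steps = frontier.popleft()
        if is_goal(state):
            return steps
        for rule in rules:
            for nxt in rule(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + 1))
    return None  # the desired task is unreachable

# Toy state space: states are integers, the two "rules" increment or double.
rules = [lambda s: [s + 1], lambda s: [s * 2]]
print(min_rule_applications(1, lambda s: s == 10, rules))  # 4 steps: 1->2->4->5->10
```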

    A Graph-Based Semantics Workbench for Concurrent Asynchronous Programs

    A number of novel programming languages and libraries have been proposed that offer simpler-to-use models of concurrency than threads. It is challenging, however, to devise execution models that successfully realise their abstractions without forfeiting performance or introducing unintended behaviours. This is exemplified by SCOOP, a concurrent object-oriented message-passing language, which has seen multiple semantics proposed and implemented over its evolution. We propose a "semantics workbench" with fully and semi-automatic tools for SCOOP that can be used to analyse and compare programs with respect to different execution models. We demonstrate its use in checking the consistency of semantics by applying it to a set of representative programs, and highlighting a deadlock-related discrepancy between the principal execution models of the language. Our workbench is based on a modular and parameterisable graph transformation semantics implemented in the GROOVE tool. We discuss how graph transformations are leveraged to atomically model intricate language abstractions, and how the visual yet algebraic nature of the model can be used to ascertain soundness.
    Comment: Accepted for publication in the proceedings of FASE 2016 (to appear).
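    GROOVE represents program states as graphs and execution steps as applications of graph-transformation rules. As a rough, tool-independent illustration (not GROOVE's rule format), the sketch below encodes a state as a set of labelled edges and applies a single rewrite rule atomically: if all left-hand-side edges are present, they are removed and the right-hand-side edges are added in one step. Real rules match node variables rather than the fixed names used here.

```python
# A state is a frozenset of (source, label, target) edges.
def apply_rule(state, lhs, rhs):
    """Apply one graph-transformation rule atomically: if every edge of
    the left-hand side is present, replace those edges with the
    right-hand side in a single step; otherwise the rule is not enabled."""
    if lhs <= state:
        return (state - lhs) | rhs
    return None

# Toy rule: a processor that has reserved a free lock acquires it atomically.
state    = frozenset({("proc", "reserved", "lock"), ("lock", "status", "free")})
rule_lhs = frozenset({("proc", "reserved", "lock"), ("lock", "status", "free")})
rule_rhs = frozenset({("proc", "holds", "lock"),    ("lock", "status", "taken")})

print(apply_rule(state, rule_lhs, rule_rhs))
# frozenset({('proc', 'holds', 'lock'), ('lock', 'status', 'taken')})
```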

    Learning Language from a Large (Unannotated) Corpus

    A novel approach to the fully automated, unsupervised extraction of dependency grammars and associated syntax-to-semantic-relationship mappings from large text corpora is described. The suggested approach builds on the authors' prior work with the Link Grammar, RelEx and OpenCog systems, as well as on a number of prior papers and approaches from the statistical language learning literature. If successful, this approach would enable the mining of all the information needed to power a natural language comprehension and generation system directly from a large, unannotated corpus.
    Comment: 29 pages, 5 figures, research proposal.
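    One standard ingredient in this line of statistical language learning is scoring word pairs by pointwise mutual information estimated from raw text and then using those scores to link words. The sketch below computes windowed co-occurrence PMI; it is a generic building block under that assumption, not the proposal's full Link Grammar/RelEx/OpenCog pipeline.

```python
import math
from collections import Counter

def word_pair_pmi(sentences, window=3):
    """Pointwise mutual information for word pairs that co-occur within
    `window` positions of each other, estimated from raw sentences."""
    word_counts, pair_counts = Counter(), Counter()
    total_words, total_pairs = 0, 0
    for sentence in sentences:
        words = sentence.lower().split()
        word_counts.update(words)
        total_words += len(words)
        for i, left in enumerate(words):
            for right in words[i + 1:i + 1 + window]:
                pair_counts[(left, right)] += 1
                total_pairs += 1
    pmi = {}
    for (w1, w2), count in pair_counts.items():
        p_pair = count / total_pairs
        p_w1, p_w2 = word_counts[w1] / total_words, word_counts[w2] / total_words
        pmi[(w1, w2)] = math.log(p_pair / (p_w1 * p_w2))
    return pmi

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
scores = word_pair_pmi(corpus)
print(sorted(scores.items(), key=lambda kv: -kv[1])[:3])  # strongest word pairs first
```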

    kLog: A Language for Logical and Relational Learning with Kernels

    We introduce kLog, a novel approach to statistical relational learning. Unlike standard approaches, kLog does not represent a probability distribution directly. It is rather a language to perform kernel-based learning on expressive logical and relational representations. kLog allows users to specify learning problems declaratively. It builds on simple but powerful concepts: learning from interpretations, entity/relationship data modeling, logic programming, and deductive databases. Access by the kernel to the rich representation is mediated by a technique we call graphicalization: the relational representation is first transformed into a graph, in particular a grounded entity/relationship diagram. Subsequently, a choice of graph kernel defines the feature space. kLog supports mixed numerical and symbolic data, as well as background knowledge in the form of Prolog or Datalog programs, as in inductive logic programming systems. The kLog framework can be applied to tackle the same range of tasks that has made statistical relational learning so popular, including classification, regression, multitask learning, and collective classification. We also report on empirical comparisons, showing that kLog can be either more accurate, or much faster at the same level of accuracy, than Tilde and Alchemy. kLog is GPLv3 licensed and is available at http://klog.dinfo.unifi.it along with tutorials.
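    The graphicalization step turns a relational interpretation into a grounded entity/relationship graph, after which a graph kernel defines the features. The sketch below mimics that two-stage shape in Python rather than kLog's Prolog/Datalog syntax: `graphicalize` builds a graph with one node per entity and per relationship tuple, and `label_histogram_kernel` is a deliberately trivial stand-in for the structured graph kernels kLog actually uses.

```python
from collections import Counter

def graphicalize(entities, relationships):
    """Build a grounded entity/relationship graph: one node per entity,
    one node per relationship tuple, and edges from each relationship
    node to the entities it connects.  Illustrative only."""
    nodes = {entity_id: entity_type for entity_id, entity_type in entities}
    edges = []
    for idx, (rel_type, args) in enumerate(relationships):
        rel_node = f"{rel_type}_{idx}"
        nodes[rel_node] = rel_type
        edges.extend((rel_node, arg) for arg in args)
    return nodes, edges

def label_histogram_kernel(g1, g2):
    """A deliberately simple graph kernel: the dot product of node-label
    histograms.  Real graph kernels (e.g. walk or subtree kernels)
    would also take the edge structure into account."""
    h1, h2 = Counter(g1[0].values()), Counter(g2[0].values())
    return sum(h1[label] * h2[label] for label in h1)

g_a = graphicalize([("a1", "atom"), ("a2", "atom")], [("bond", ("a1", "a2"))])
g_b = graphicalize([("b1", "atom")], [])
print(label_histogram_kernel(g_a, g_b))  # 2: two 'atom' labels in g_a match one in g_b
```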