29,131 research outputs found

    Bi-dimensional Composition with Domain Specific Languages


    NLSC: Unrestricted Natural Language-based Service Composition through Sentence Embeddings

    Current approaches for service composition (assemblies of atomic services) require developers to use: (a) domain-specific semantics to formalize services, which restrict the vocabulary of their descriptions, and (b) translation mechanisms for service retrieval that convert unstructured user requests into strongly-typed semantic representations. In our work, we argue that the effort of developing service descriptions, request translations, and matching mechanisms could be reduced by using unrestricted natural language, allowing both: (1) end-users to intuitively express their needs in natural language, and (2) service developers to describe services without relying on syntactic/semantic description languages. Although some natural language-based service composition approaches exist, they restrict service retrieval to syntactic/semantic matching. Building on recent developments in machine learning and natural language processing, we motivate the use of sentence embeddings, leveraging richer semantic representations of sentences for service description, matching and retrieval. Experimental results show that service composition development effort may be reduced by more than 44% while keeping high precision/recall when matching high-level user requests with low-level service method invocations.
    Comment: This paper will appear at SCC'19 (IEEE International Conference on Services Computing) on July 1
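
    As a rough illustration of the matching step described above, here is a minimal sketch assuming an off-the-shelf sentence-embedding library and made-up service descriptions; the model name, service list, threshold, and function names are illustrative assumptions, not taken from the paper:

    # Sketch: match a natural-language user request to service method descriptions
    # via sentence embeddings and cosine similarity (illustrative only).
    from sentence_transformers import SentenceTransformer, util  # assumed available

    # Hypothetical service methods, each described in plain English by its developer.
    services = {
        "weather.get_forecast": "Return the weather forecast for a given city and date.",
        "calendar.add_event":   "Add an appointment to the user's calendar.",
        "email.send_message":   "Send an email message to a recipient.",
    }

    model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model would do

    def match_request(request, threshold=0.4):
        """Return the best-matching service name, or None if nothing is close enough."""
        names = list(services)
        desc_emb = model.encode([services[n] for n in names])
        req_emb = model.encode([request])
        scores = util.cos_sim(req_emb, desc_emb)[0]   # similarity to each description
        best = int(scores.argmax())
        return names[best] if float(scores[best]) >= threshold else None

    print(match_request("What's the weather like in Ghent tomorrow?"))
    # -> "weather.get_forecast" under these toy descriptions

    The key design point, as the abstract argues, is that both the developer's description and the user's request stay free-form sentences; matching reduces to a nearest-neighbour search in embedding space rather than a translation into a formal description language.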

    Multilingual Models for Compositional Distributed Semantics

    We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings. Our models leverage parallel data and learn to strongly align the embeddings of semantically equivalent sentences, while maintaining sufficient distance between those of dissimilar sentences. The models do not rely on word alignments or any syntactic information and are successfully applied to a number of diverse languages. We extend our approach to learn semantic representations at the document level, too. We evaluate these models on two cross-lingual document classification tasks, outperforming the prior state of the art. Through qualitative analysis and the study of pivoting effects we demonstrate that our representations are semantically plausible and can capture semantic relationships across languages without parallel data.
    Comment: Proceedings of ACL 2014 (Long Papers)
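
    The alignment idea above is typically realized as a margin-based objective that pulls parallel sentence embeddings together and pushes randomly paired sentences apart; the following is a minimal sketch under my own simplifications (squared Euclidean distance, a single noise sentence per pair, invented names), not the paper's exact formulation:

    # Margin-based alignment loss for parallel sentence embeddings (illustrative).
    import torch

    def hinge_alignment_loss(src, tgt, noise, margin=1.0):
        """src, tgt: (batch, dim) embeddings of parallel sentences.
        noise: (batch, dim) embeddings of non-parallel target-language sentences."""
        pos = ((src - tgt) ** 2).sum(dim=1)     # distance to the true translation
        neg = ((src - noise) ** 2).sum(dim=1)   # distance to a random sentence
        return torch.clamp(margin + pos - neg, min=0).mean()

    # Toy usage with random stand-ins for composed sentence embeddings.
    src, tgt, noise = (torch.randn(8, 128) for _ in range(3))
    print(hinge_alignment_loss(src, tgt, noise))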

    Convolution, Separation and Concurrency

    A notion of convolution is presented in the context of formal power series together with lifting constructions characterising algebras of such series, which usually are quantales. A number of examples underpin the universality of these constructions, the most prominent ones being separation logics, where convolution is separating conjunction in an assertion quantale; interval logics, where convolution is the chop operation; and stream interval functions, where convolution is used for analysing the trajectories of dynamical or real-time systems. A Hoare logic is constructed in a generic fashion on the power series quantale, which applies to each of these examples. In many cases, commutative notions of convolution have natural interpretations as concurrency operations.
    Comment: 39 pages
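
    For orientation, the standard convolution of functions from a partial monoid into a quantale takes a supremum over all decompositions of the argument; a sketch in my own notation, not quoted from the paper:

    \[
      (f \ast g)(x) \;=\; \bigvee_{x \,=\, y \circ z} f(y) \cdot g(z),
      \qquad f, g : M \to Q,
    \]

    where (M, \circ) is a partial monoid and (Q, \leq, \cdot) a quantale. Instantiating M with heaps under disjoint union makes this separating conjunction; instantiating it with intervals under fusion makes it the chop operation, matching the examples listed in the abstract.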

    An emotional mess! Deciding on a framework for building a Dutch emotion-annotated corpus

    Given the myriad of existing emotion models, with the categorical versus dimensional opposition as the most important dividing line, building an emotion-annotated corpus requires some well thought-out strategies concerning framework choice. In our work on automatic emotion detection in Dutch texts, we investigate this problem by means of two case studies. We find that the labels joy, love, anger, sadness and fear are well suited to annotate texts from various domains and topics, but that the connotation of the labels strongly depends on the origin of the texts. Moreover, information seems to be lost when an emotional state is forcibly classified into a limited set of categories, indicating that a bi-representational format is desirable when creating an emotion corpus.
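
    One way to read "bi-representational" is that each annotated instance keeps both a categorical label and dimensional scores; the following is a hypothetical sketch of such a record (the field names and the valence/arousal/dominance scales are my assumptions, not the authors' schema):

    # Hypothetical bi-representational emotion annotation: one categorical label
    # plus continuous dimensional scores, so neither view of emotion is lost.
    from dataclasses import dataclass

    CATEGORIES = {"joy", "love", "anger", "sadness", "fear", "neutral"}

    @dataclass
    class EmotionAnnotation:
        text: str
        category: str        # one of CATEGORIES
        valence: float       # e.g. -1.0 (negative) .. 1.0 (positive)
        arousal: float       # e.g.  0.0 (calm)     .. 1.0 (excited)
        dominance: float     # e.g.  0.0 (controlled) .. 1.0 (in control)

        def __post_init__(self):
            assert self.category in CATEGORIES, f"unknown label: {self.category}"

    example = EmotionAnnotation("Wat een heerlijke dag!", "joy", 0.9, 0.6, 0.7)
    print(example)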

    Causal categories: relativistically interacting processes

    A symmetric monoidal category naturally arises as the mathematical structure that organizes physical systems, processes, and composition thereof, both sequentially and in parallel. This structure admits a purely graphical calculus. This paper is concerned with the encoding of a fixed causal structure within a symmetric monoidal category: causal dependencies will correspond to topological connectedness in the graphical language. We show that correlations, either classical or quantum, force terminality of the tensor unit. We also show that well-definedness of the concept of a global state forces the monoidal product to be only partially defined, which in turn results in a relativistic covariance theorem. Except for these assumptions, at no stage do we assume anything more than purely compositional symmetric-monoidal categorical structure. We cast these two structural results in terms of a mathematical entity, which we call a 'causal category'. We provide methods of constructing causal categories, and we study the consequences of these methods for the general framework of categorical quantum mechanics.
    Comment: 43 pages, lots of figures
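
    For readers unfamiliar with the phrase, "terminality of the tensor unit" can be spelled out as a standard condition (my notation, not quoted from the paper):

    \[
      \text{for every object } A \text{ there is a unique morphism } \top_A : A \to I,
      \qquad\text{hence}\qquad \top_B \circ f = \top_A \ \text{ for all } f : A \to B,
    \]

    i.e. every system admits exactly one "discarding" process into the tensor unit I, and discarding the output of any process is the same as discarding its input.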

    The Algebraic View of Computation

    We argue that computation is an abstract algebraic concept, and a computer is a result of a morphism (a structure-preserving map) from a finite universal semigroup.
    Comment: 13 pages, final version will be published elsewhere
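
    As a toy reading of this claim (my own illustration, not code or definitions from the paper), sequential computation can be modeled as composition in a semigroup of transformations of a finite state set, with a whole program again being a single semigroup element:

    # Computations as elements of a transformation semigroup (illustrative).
    def compose(f, g):
        """Sequential composition: first apply f, then g (transformations as tuples)."""
        return tuple(g[f[i]] for i in range(len(f)))

    # Two transformations of the state set {0, 1, 2}:
    inc_mod3 = (1, 2, 0)   # x -> x + 1 (mod 3)
    collapse = (0, 0, 2)   # merges state 1 into state 0 (an irreversible step)

    program = compose(inc_mod3, collapse)   # "increment, then collapse"
    print(program)          # (0, 2, 0): the composite is again one semigroup element
    print(program[1])       # running it on input state 1 yields 2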