
    NLSC: Unrestricted Natural Language-based Service Composition through Sentence Embeddings

    Current approaches to service composition (assemblies of atomic services) require developers to use (a) domain-specific semantics to formalize services, which restricts the vocabulary of their descriptions, and (b) translation mechanisms for service retrieval that convert unstructured user requests into strongly-typed semantic representations. In our work, we argue that the effort of developing service descriptions, request translations, and matching mechanisms can be reduced by using unrestricted natural language, allowing both (1) end-users to express their needs intuitively in natural language, and (2) service developers to develop services without relying on syntactic/semantic description languages. Although some natural language-based service composition approaches exist, they restrict service retrieval to syntactic/semantic matching. Building on recent developments in machine learning and natural language processing, we motivate the use of sentence embeddings, which provide richer semantic representations of sentences, for service description, matching, and retrieval. Experimental results show that service composition development effort may be reduced by more than 44% while keeping high precision/recall when matching high-level user requests with low-level service method invocations.
    Comment: This paper will appear at SCC'19 (IEEE International Conference on Services Computing) on July 1
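
    To make the matching step concrete, the sketch below ranks a few hypothetical service-method descriptions against a free-form user request by cosine similarity of their sentence embeddings. This is a minimal illustration of the technique, not the paper's implementation; the encoder model, service names, and descriptions are all assumptions.

        # Minimal sketch (not the paper's implementation): rank service methods
        # against a natural-language request via sentence-embedding similarity.
        # The model name and the service catalog below are illustrative assumptions.
        import numpy as np
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

        # Hypothetical natural-language descriptions of atomic service methods.
        services = {
            "weather.get_forecast": "Get the weather forecast for a city and date",
            "calendar.add_event": "Add an event to the user's calendar",
            "email.send_message": "Send an email message to a recipient",
        }

        request = "What will the weather be like in Boston tomorrow?"

        # Normalized embeddings make cosine similarity a plain dot product.
        service_vecs = model.encode(list(services.values()), normalize_embeddings=True)
        request_vec = model.encode(request, normalize_embeddings=True)

        scores = service_vecs @ request_vec
        best = list(services.keys())[int(np.argmax(scores))]
        print(best)  # -> weather.get_forecast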

    The Synonym management process in SAREL

    The specification phase is one of the most important and least supported parts of the software development process. The SAREL system has been conceived as a knowledge-based tool to improve the specification phase. The purpose of SAREL (Assistance System for Writing Software Specifications in Natural Language) is to assist engineers in the creation of software specifications written in natural language (NL). These documents are divided into several parts: the Introduction and the Overall Description are the parts used in the Knowledge Base construction, while the information contained in the Specific Requirements section corresponds to the information represented in the Requirements Base. To obtain a high-quality software requirements specification, two kinds of constraints are taken into account: the writing norms that define the required linguistic restrictions, and the software engineering constraints related to quality factors. One of the controls performed is a lexical analysis that verifies that words belong to the application-domain lexicon, which consists of the Required lexicon and the Extended lexicon. A synonym management process is therefore needed to obtain a quality software specification. The aim of this paper is to present the synonym management process performed during the Knowledge Base construction. This process makes use of the Spanish WordNet developed within the EuroWordNet project, and it generates both the Required lexicon and the Extended lexicon that will be used during the Requirements Base construction.
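
    As a rough illustration of the synonym lookup such a process relies on, the sketch below collects synonyms for a Spanish term. The paper uses the Spanish WordNet developed in the EuroWordNet project; the sketch substitutes NLTK's Open Multilingual Wordnet as an accessible stand-in, which is an assumption about tooling rather than SAREL's actual setup.

        # Rough sketch of synonym lookup for lexicon construction. NLTK's Open
        # Multilingual Wordnet stands in here for the EuroWordNet Spanish WordNet
        # that SAREL used (an assumption about tooling, not the paper's setup).
        import nltk

        nltk.download("wordnet", quiet=True)
        nltk.download("omw-1.4", quiet=True)
        from nltk.corpus import wordnet as wn

        def synonyms(word, lang="spa"):
            """Collect lemma names from every synset containing `word`."""
            lemmas = set()
            for synset in wn.synsets(word, lang=lang):
                lemmas.update(synset.lemma_names(lang))
            lemmas.discard(word)
            return sorted(lemmas)

        # A domain term and its synonyms could then be checked against the
        # Required/Extended lexicon before a specification sentence is accepted.
        print(synonyms("coche"))  # e.g. ['auto', 'automóvil', ...]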

    Speech Synthesis Based on Hidden Markov Models

    Proceedings of the Workshop Semantic Content Acquisition and Representation (SCAR) 2007

    These are the proceedings of the Workshop on Semantic Content Acquisition and Representation (SCAR), held in conjunction with NODALIDA 2007 on May 24, 2007, in Tartu, Estonia.

    A broad-coverage distributed connectionist model of visual word recognition

    In this study we describe a distributed connectionist model of morphological processing that covers a realistically sized sample of the English language. The purpose of this model is to explore how effects of discrete, hierarchically structured morphological paradigms can arise from statistical sub-regularities in the mapping between word forms and word meanings. We present a model that learns to produce a realistic semantic representation of a word at its output when presented with a distributed representation of the word's orthography. After training, in three experiments, we compare the outputs of the model with lexical decision latencies for large sets of English nouns and verbs. We show that the model has developed detailed representations of morphological structure, giving rise to effects analogous to those observed in visual lexical decision experiments. In addition, we show how the association between word form and word meaning also gives rise to recently reported differences between regular and irregular verbs, even in their completely regular present-tense forms. We interpret these results as underlining the key importance for lexical processing of the statistical regularities in the mappings between form and meaning.
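
    For readers unfamiliar with this class of model, the toy sketch below trains a small feedforward network to map distributed orthographic input vectors onto semantic output vectors, the same form-to-meaning mapping the study describes. The dimensions, random data, and training regime are illustrative assumptions; the actual model is far larger and is trained on real word forms and meanings.

        # Toy sketch, not the paper's model: a small feedforward network trained
        # to map distributed orthographic vectors to semantic vectors. All
        # dimensions and data below are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        n_words, n_ortho, n_hidden, n_sem = 100, 30, 20, 40

        # Random stand-ins for letter-based form codes and semantic features.
        X = rng.standard_normal((n_words, n_ortho))
        Y = rng.standard_normal((n_words, n_sem))

        W1 = 0.1 * rng.standard_normal((n_ortho, n_hidden))
        W2 = 0.1 * rng.standard_normal((n_hidden, n_sem))
        lr = 0.05

        for epoch in range(2000):
            H = np.tanh(X @ W1)          # hidden layer
            Y_hat = H @ W2               # predicted semantic vector
            err = Y_hat - Y
            # Backpropagate the mean-squared error to both weight matrices.
            grad_W2 = H.T @ err / n_words
            grad_H = (err @ W2.T) * (1.0 - H**2)
            grad_W1 = X.T @ grad_H / n_words
            W2 -= lr * grad_W2
            W1 -= lr * grad_W1

        print("final MSE:", float((err**2).mean()))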