
    An Institutional Framework for Heterogeneous Formal Development in UML

    We present a framework for formal software development with UML. In contrast to previous approaches that equip UML with a formal semantics, we follow an institution-based heterogeneous approach. This can express suitable formal semantics of the different UML diagram types directly, without the need to map everything to one specific formalism (be it first-order logic or graph grammars). We show how different aspects of the formal development process can be coherently formalised, ranging from requirements through design and Hoare-style conditions on code to the implementation itself. The framework can be used to verify consistency of different UML diagrams both horizontally (e.g., consistency among various requirements) and vertically (e.g., correctness of design or implementation w.r.t. the requirements).
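
    The institution-based framework itself is not reproduced here; as a generic illustration of what a Hoare-style condition attached to code looks like (a toy Python sketch with an invented contract decorator and withdraw example, not the paper's formalism), consider:

    # Toy sketch: a Hoare-style precondition/postcondition attached to code,
    # the kind of code-level artifact a vertical consistency check would
    # relate back to design-level requirements. Names here are invented.

    def contract(pre, post):
        """Attach a precondition and a postcondition to a function."""
        def wrap(f):
            def checked(*args):
                assert pre(*args), "precondition violated"
                result = f(*args)
                assert post(result, *args), "postcondition violated"
                return result
            return checked
        return wrap

    @contract(pre=lambda balance, amount: 0 <= amount <= balance,
              post=lambda new_balance, balance, amount: new_balance == balance - amount)
    def withdraw(balance: int, amount: int) -> int:
        return balance - amount

    print(withdraw(100, 30))   # 70; withdraw(100, 200) would fail the precondition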

    Towards rule-based visual programming of generic visual systems

    This paper illustrates how the diagram programming language DiaPlan can be used to program visual systems. DiaPlan is a visual rule-based language that is founded on the computational model of graph transformation. The language supports object-oriented programming since its graphs are hierarchically structured. Typing allows the shape of these graphs to be specified recursively in order to increase program security. Thanks to its genericity, DiaPlan makes it possible to implement systems that represent and manipulate data in arbitrary diagram notations. The environment for the language exploits the diagram editor generator DiaGen to provide genericity and to implement its user interface and type checker.
    Comment: 15 pages, 16 figures; contribution to the First International Workshop on Rule-Based Programming (RULE'2000), September 19, 2000, Montreal, Canada
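
    As a rough, language-agnostic illustration of the graph-transformation model DiaPlan is founded on (this is not DiaPlan syntax; the relabelling rule and the tiny graph below are invented):

    # Rule-based graph transformation in miniature: a rule has a left-hand
    # side pattern and a right-hand side replacement; applying it rewrites
    # every match in the host graph.

    from typing import Set, Tuple

    Edge = Tuple[str, str, str]          # (source, label, target)

    def apply_rule(edges: Set[Edge], lhs_label: str, rhs_label: str) -> Set[Edge]:
        """Relabel every edge matching the left-hand side label (a toy rule)."""
        return {(s, rhs_label if lbl == lhs_label else lbl, t) for s, lbl, t in edges}

    # A tiny host graph: a 'draft' link between two document nodes.
    graph = {("doc1", "draft", "doc2")}
    graph = apply_rule(graph, lhs_label="draft", rhs_label="final")
    print(graph)   # {('doc1', 'final', 'doc2')}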

    Feature-based and Model-based Semantics for English, French and German Verb Phrases

    This paper considers the relative merits of using features and formal event models to characterise the semantics of English, French and German verb phrases, and considers the application of such semantics in machine translation. The feature-based approach represents the semantics in terms of feature systems, which have been widely used in computational linguistics for representing complex syntactic structures. The paper shows how a simple intuitive semantics of verb phrases may be encoded as a feature system, and how this can be used to support modular construction of automatic translation systems through feature look-up tables. This is illustrated by automated translation of English into either French or German. The paper continues to formalise the feature-based approach via a model-based, Montague semantics, which extends previous work on the semantics of English verb phrases. In so doing, repercussions of and to this framework in conducting a contrastive semantic study are considered. The model-based approach also promises to provide support for a more sophisticated approach to translation through logical proof; the paper indicates further work required for the fulfilment of this promise.
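
    A minimal sketch of the feature look-up idea described above, with invented feature names and table entries rather than the paper's actual feature system:

    # A verb phrase's semantics encoded as a small feature structure, plus
    # per-language look-up tables mapping feature bundles to surface forms.
    # All feature names and entries below are illustrative assumptions.

    english_vp = {"lemma": "sleep", "tense": "past", "aspect": "progressive"}

    french_table = {("sleep", "past", "progressive"): "dormait"}
    german_table = {("sleep", "past", "progressive"): "schlief"}

    def translate(vp: dict, table: dict) -> str:
        key = (vp["lemma"], vp["tense"], vp["aspect"])
        return table[key]

    print(translate(english_vp, french_table))   # dormait
    print(translate(english_vp, german_table))   # schlief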

    The multi-lingual database system

    In the past, the design and implementation of a database system has followed a rather conventional approach. First, a specific data model for the database system is chosen. Second, a corresponding model-based data language is then specified. The result of this traditional approach to database-system development is a mono-lingual database system, where the user sees and uses the database system with a specific data model and its model-based data language. The conventional practice for database-system design and implementation mandates that a database system be restricted to a single data model and a specific model-based data language. This paper introduces a new and unconventional approach to the design and implementation of a database system: the multi-lingual database system (MLDS). The multi-lingual database system is a single database system that can execute many transactions written respectively in different data languages and support many databases structured correspondingly in various data models. For example, this multi-lingual database system can run DL/I transactions on IMS databases, CODASYL-DML transactions on network databases, SQL transactions on relational databases, and Daplex transactions on entity-relationship databases, where the system appears to the user like a heterogeneous collection of database systems. Thus, a multi-lingual database system allows old transactions and existing databases to be migrated to the new environment, the experienced user to continue to utilize certain favorite features of existing data languages and data models, the new user to explore the strong features of the various data languages and data models, the hardware upgrade to be focused on a single system instead of a heterogeneous collection of database systems, and the database application to cover wider types of transactions and different modes of interaction.
    Supported in part by the Foundation Research Program of the Naval Postgraduate School with funds provided by the Chief of Naval Research.
    http://archive.org/details/multilingualdata00demu
    N0001486WR4E001NA
    Approved for public release; distribution is unlimited.
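
    A minimal sketch of the multi-lingual routing idea (the interface names and dispatch table below are assumptions for illustration, not the MLDS internals):

    # Each incoming transaction declares its data language; a dispatcher
    # routes it to the matching language interface, which targets databases
    # structured in the corresponding data model.

    class SQLInterface:
        def execute(self, text: str) -> str:
            return f"relational result for: {text}"

    class DLIInterface:
        def execute(self, text: str) -> str:
            return f"hierarchical (IMS) result for: {text}"

    DISPATCH = {"SQL": SQLInterface(), "DL/I": DLIInterface()}

    def run_transaction(language: str, text: str) -> str:
        """Route a transaction to the interface for its declared data language."""
        return DISPATCH[language].execute(text)

    print(run_transaction("SQL", "SELECT name FROM parts"))
    print(run_transaction("DL/I", "GU PART (PNO = '42')"))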

    Natural language processing

    Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in this area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems - text summarization, information extraction, information retrieval, etc., including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the WWW and digital libraries; and (iv) evaluation of NLP systems.

    Knowledge Management and Cultural Heritage Repositories. Cross-Lingual Information Retrieval Strategies

    In recent years, important initiatives, such as the development of the European Library and Europeana, have aimed to increase the availability of cultural content from various types of providers and institutions. Access to these resources requires the development of environments which make it possible both to manage multilingual complexity and to preserve semantic interoperability. Natural Language Processing (NLP) applications are created with the goal of achieving Cross-Lingual Information Retrieval (CLIR). This paper presents ongoing research on language processing based on the Lexicon-Grammar (LG) approach, with the goal of improving knowledge management in Cultural Heritage repositories. The proposed framework aims to guarantee interoperability between multilingual systems in order to overcome crucial issues such as cross-language and cross-collection retrieval. Indeed, the LG methodology tries to overcome the shortcomings of statistical approaches, such as Google Translate or Microsoft's Bing, concerning Multi-Word Unit (MWU) processing in queries, where the lack of linguistic context represents a serious obstacle to disambiguation. In particular, translation concerning specific domains, as has been widely recognized, is unambiguous, since the meanings of terms are mono-referential and the relation that links a given term to its equivalent in a foreign language is biunivocal, i.e. a one-to-one coupling that makes the relation exclusive and reversible. Ontologies are used in CLIR and are considered by several scholars a promising research area for improving the effectiveness of Information Extraction (IE) techniques, particularly for technical-domain queries. Therefore, we present a methodological framework which makes it possible to map both the data and the metadata among the language-specific ontologies
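
    A toy illustration of why MWU-aware lookup matters for query translation (the dictionary entries below are invented and do not come from the Lexicon-Grammar resources):

    # Translating a short query word by word vs. recognizing a Multi-Word
    # Unit first; domain MWUs are treated as single, mono-referential entries.

    WORD_TABLE = {"still": "ancora", "life": "vita", "painting": "dipinto"}
    MWU_TABLE = {"still life": "natura morta"}        # art-domain term, en -> it

    def translate_query(query: str) -> str:
        words = query.lower().split()
        out, i = [], 0
        while i < len(words):
            pair = " ".join(words[i:i + 2])
            if pair in MWU_TABLE:                     # longest-match MWU lookup
                out.append(MWU_TABLE[pair]); i += 2
            else:
                out.append(WORD_TABLE.get(words[i], words[i])); i += 1
        return " ".join(out)

    print(translate_query("still life painting"))     # natura morta dipinto
    print(" ".join(WORD_TABLE[w] for w in "still life painting".split()))  # ancora vita dipinto (wrong)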

    Markup meets middleware

    We describe a distributed system architecture that supports the integration of different front-office trading systems with middle- and back-office systems, each of which has been procured from a different vendor. The architecture uses a judicious combination of object-oriented middleware and markup languages. In this combination, an object request broker implements reliable trade data transport. Markup languages, particularly XML, are used to address data integration problems. We show that the strengths of middleware and markup languages are complementary and discuss the benefits of deploying middleware and markup languages in a synergistic manner.
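
    A minimal sketch of the data-integration role XML plays in such an architecture (the element names and trade fields are assumptions, not the paper's schema):

    # A trade captured by a front-office system is serialized to a neutral
    # XML document, which the middleware transports and a back-office system
    # re-parses on receipt.

    import xml.etree.ElementTree as ET

    def trade_to_xml(trade_id: str, instrument: str, quantity: int, price: float) -> str:
        root = ET.Element("trade", id=trade_id)
        ET.SubElement(root, "instrument").text = instrument
        ET.SubElement(root, "quantity").text = str(quantity)
        ET.SubElement(root, "price").text = f"{price:.2f}"
        return ET.tostring(root, encoding="unicode")

    doc = trade_to_xml("T-1001", "XYZ Corp 5Y bond", 250, 98.75)
    print(doc)
    parsed = ET.fromstring(doc)                    # a receiving system re-parses it
    print(parsed.findtext("instrument"), parsed.findtext("price"))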

    PowerAqua: fishing the semantic web

    The Semantic Web (SW) offers an opportunity to develop novel, sophisticated forms of question answering (QA). Specifically, the availability of distributed semantic markup on a large scale opens the way to QA systems which can make use of such semantic information to provide precise, formally derived answers to questions. At the same time, the distributed, heterogeneous, large-scale nature of the semantic information introduces significant challenges. In this paper we describe PowerAqua, a QA system designed to exploit semantic markup on the web to provide answers to questions posed in natural language. PowerAqua does not assume that the user has any prior information about the semantic resources. The system takes as input a natural language query and translates it into a set of logical queries, which are then answered by consulting and aggregating information derived from multiple heterogeneous semantic sources.
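
    A toy sketch of the general pipeline idea, not PowerAqua's actual algorithms: the question is mapped to a triple-like logical query and answered by aggregating over several sources (the sources and the naive mapping below are invented):

    # Two tiny "semantic sources" holding (subject, predicate, object) triples.
    SOURCE_A = {("Thames", "flows_through", "London"), ("Seine", "flows_through", "Paris")}
    SOURCE_B = {("Lea", "flows_through", "London")}

    def to_logical_query(question: str):
        """Naive mapping: 'Which rivers flow through London?' -> ('?x', 'flows_through', 'London')."""
        city = question.rstrip("?").split()[-1]
        return ("?x", "flows_through", city)

    def answer(question: str, sources) -> set:
        _, pred, obj = to_logical_query(question)
        return {s for src in sources for (s, p, o) in src if p == pred and o == obj}

    print(answer("Which rivers flow through London?", [SOURCE_A, SOURCE_B]))  # {'Thames', 'Lea'}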