21,630 research outputs found

    A formal foundation for ontology alignment interaction models

    No full text
    Ontology alignment foundations are hard to find in the literature. The abstract nature of the topic and the diverse means of practice make it difficult to capture in a universal formal foundation. We argue that this lack of formality hinders further development and convergence of practices and, in particular, prevents us from achieving greater levels of automation. In this article we present a formal foundation for ontology alignment based on interaction models between heterogeneous agents on the Semantic Web. We use the mathematical notion of information flow in a distributed system to ground our three hypotheses for enabling semantic interoperability, and we use a motivating example throughout the article: how to progressively align two ontologies of research quality assessment through meaning coordination. We conclude the article by presenting such an ontology-alignment interaction model in an executable specification language.
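
    To make the idea of progressive alignment through meaning coordination concrete, the following is a minimal, hypothetical sketch: an alignment grows one correspondence at a time as the two agents agree on term meanings. The vocabularies and the agreement callback are illustrative assumptions, not the paper's actual interaction model.

        # Hypothetical sketch: progressive alignment of two ontologies through
        # a meaning-coordination dialogue. Entity names and the `agree` oracle
        # are illustrative; the paper's interaction model is richer.

        def coordinate(source_terms, target_terms, agree):
            """Grow an alignment one correspondence at a time.

            `agree(s, t)` stands in for the dialogue in which the two agents
            decide whether source term `s` and target term `t` carry the same
            meaning; here it is just a callback.
            """
            alignment = set()
            for s in source_terms:
                for t in target_terms:
                    if agree(s, t):                 # meaning-coordination step
                        alignment.add((s, t))
                        break                       # commit and move on
            return alignment

        # Toy run with two tiny research-assessment vocabularies.
        src = ["journal_article", "citation_count"]
        tgt = ["paper", "times_cited", "h_index"]
        synonyms = {("journal_article", "paper"), ("citation_count", "times_cited")}
        print(coordinate(src, tgt, lambda s, t: (s, t) in synonyms))
        # e.g. {('journal_article', 'paper'), ('citation_count', 'times_cited')}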

    Communication and Synchronization of Distributed Medical Models: Design, Development, and Performance Analysis

    Full text link
    Model-based development is a widely used method for describing complex systems that enables rapid prototyping. Advances in the science of distributed systems have led to the development of large-scale statechart models which are distributed among multiple locations. In medicine, for example, models of best-practice guidelines for rural ambulance transport are distributed across care settings, from a rural hospital, to an ambulance, to a central tertiary hospital. These medical models require continuous, real-time communication across physically distributed treatment locations in order to provide vital assistance to clinicians and physicians. This makes it necessary to offer methods for model-driven communication and synchronization in a distributed environment. In this paper, we describe ModelSink, a middleware that addresses the problem of communication and synchronization of heterogeneous distributed models. Motivated by the synchronization requirements of emergency ambulance transport, we use medical best-practice models as a case study to illustrate the notion of distributed models. Through ModelSink, we achieve an efficient communication architecture, an open-loop-safe protocol, and queuing and mapping mechanisms compliant with the semantics of statechart-based model-driven development. We evaluate the performance of ModelSink on distributed sets of medical models that we developed, to assess how it performs under various loads. Our work is intended to assist clinicians, EMTs, and medical staff in preventing unintended deviations from medical best practices, and in overcoming the connectivity and coordination challenges that exist in a distributed hospital network. Our experience suggests that there are additional domains beyond medicine where our middleware can provide needed utility. Comment: 12 pages, IEEE Journal of Translational Engineering in Health and Medicine, 201
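
    The sketch below illustrates the general idea of keeping distributed statechart replicas synchronized by queuing transition events; the class, the event names, and the toy treatment model are assumptions for illustration and do not reproduce ModelSink's actual protocol or API.

        # Hypothetical sketch of synchronizing distributed statechart models by
        # queuing transition events. Names are illustrative, not ModelSink's API.
        from collections import deque

        class StatechartReplica:
            def __init__(self, name, transitions, initial):
                self.name = name
                self.transitions = transitions      # (state, event) -> next state
                self.state = initial
                self.outbox = deque()               # events to publish to peers

            def fire(self, event):
                """Apply a local event and queue it for the other replicas."""
                nxt = self.transitions.get((self.state, event))
                if nxt is not None:
                    self.state = nxt
                    self.outbox.append(event)

            def sync_from(self, peer):
                """Drain a peer's queue so both replicas see the same events."""
                while peer.outbox:
                    event = peer.outbox.popleft()
                    nxt = self.transitions.get((self.state, event))
                    if nxt is not None:
                        self.state = nxt

        # Toy treatment model shared by an ambulance and a tertiary hospital.
        t = {("stable", "deteriorate"): "critical", ("critical", "treat"): "stable"}
        ambulance = StatechartReplica("ambulance", t, "stable")
        hospital = StatechartReplica("hospital", t, "stable")
        ambulance.fire("deteriorate")
        hospital.sync_from(ambulance)
        assert hospital.state == "critical"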

    Practical Semantic Parsing for Spoken Language Understanding

    Full text link
    Executable semantic parsing is the task of converting natural language utterances into logical forms that can be directly used as queries to get a response. We build a transfer learning framework for executable semantic parsing. We show that the framework is effective for Question Answering (Q&A) as well as for Spoken Language Understanding (SLU). We further investigate the case where a parser for a new domain can be learned by exploiting data on other domains, either via multi-task learning between the target domain and an auxiliary domain or via pre-training on the auxiliary domain and fine-tuning on the target domain. With either flavor of transfer learning, we are able to improve performance on most domains; we experiment with public data sets such as Overnight and NLmaps as well as with commercial SLU data. The experiments, carried out on data sets that are different in nature, show how executable semantic parsing can unify different areas of NLP such as Q&A and SLU.
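
    The pre-train/fine-tune flavor of transfer learning mentioned above follows a simple data flow: train first on the auxiliary domain, then continue training on the smaller target domain. The toy sketch below shows only that flow with a deliberately trivial bag-of-words "parser"; the domains, data, and model are assumptions, not the paper's architecture.

        # Minimal sketch of pre-training on an auxiliary domain and fine-tuning
        # on a target domain, using a toy stand-in for a real neural parser.
        from collections import Counter

        class ToyParser:
            """Returns the logical form whose training utterance shares the
            most words with the input; a stand-in for a real parser."""
            def __init__(self):
                self.examples = []                  # (word counts, logical form)

            def train(self, pairs):
                for utterance, logical_form in pairs:
                    self.examples.append((Counter(utterance.split()), logical_form))

            def parse(self, utterance):
                words = Counter(utterance.split())
                overlap = lambda ex: sum((words & ex[0]).values())
                return max(self.examples, key=overlap)[1] if self.examples else None

        aux_domain = [("list all restaurants", "SELECT * FROM restaurants"),
                      ("list all hotels", "SELECT * FROM hotels")]
        target_domain = [("list all flights", "SELECT * FROM flights")]

        parser = ToyParser()
        parser.train(aux_domain)        # "pre-train" on the auxiliary domain
        parser.train(target_domain)     # "fine-tune" on the smaller target domain
        print(parser.parse("list all flights to Boston"))   # SELECT * FROM flights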

    Applied Metamodelling: A Foundation for Language Driven Development (Third Edition)

    Full text link
    Modern-day system developers have some serious problems to contend with. The systems they develop are becoming increasingly complex as customers demand richer functionality delivered in ever shorter timescales. They have to manage a huge diversity of implementation technologies, design techniques and development processes: everything from scripting languages to web services to the latest 'silver bullet' design abstraction. To add to that, nothing stays still: today's 'must-have' technology rapidly becomes tomorrow's legacy problem that must be managed along with everything else. How can these problems be dealt with? In this book we propose that there is a common foundation to their resolution: languages. Languages are the primary way in which system developers communicate, design and implement systems. Languages provide abstractions that can encapsulate complexity, embrace the diversity of technologies and design abstractions, and unite modern and legacy systems.

    To Monitor Or Not: Observing Robot's Behavior based on a Game-Theoretic Model of Trust

    Full text link
    In scenarios where a robot generates and executes a plan, there may be instances where the generated plan is less costly for the robot to execute but incomprehensible to the human. When the human acts as a supervisor and is held accountable for the robot's plan, the human is at higher risk if the incomprehensible behavior is deemed infeasible or unsafe. In such cases the robot, which may be unaware of the human's exact expectations, may choose to (1) execute the most constrained plan (i.e., one preferred by all possible supervisors), incurring the added cost of highly sub-optimal behavior while the human is monitoring it, and (2) deviate to a more optimal plan when the human looks away. While robots do not have human-like ulterior motives (such as being lazy), such behavior may occur because the robot has to cater to the needs of different human supervisors. In such settings the robot, being a rational agent, should take any chance it gets to deviate to a lower-cost plan. On the other hand, continuously monitoring the robot's behavior is often difficult for humans because it costs them valuable resources (e.g., time and cognitive load). Thus, to optimize the cost of monitoring while ensuring that the robot follows safe behavior, we model this problem in the game-theoretic framework of trust. In settings where the human does not initially trust the robot, a pure-strategy Nash equilibrium provides a useful policy for the human. Comment: First two authors contributed equally and names are ordered based on a coin flip
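
    As a worked illustration of the kind of monitoring game alluded to above, the sketch below enumerates the pure-strategy Nash equilibria of a two-by-two game between a human (monitor or not) and a robot (safe plan or deviation). The payoff numbers are made up for illustration and are not taken from the paper.

        # Find pure-strategy Nash equilibria of a toy human-robot monitoring game.
        # payoffs[(human_action, robot_action)] = (human_payoff, robot_payoff).
        # The monitoring cost is folded to zero when the robot is safe, a
        # simplification made only so this toy game keeps a pure equilibrium.
        payoffs = {
            ("monitor",     "safe"):    ( 0,  2),
            ("monitor",     "deviate"): (-1, -3),   # deviation caught and penalized
            ("not_monitor", "safe"):    ( 0,  2),
            ("not_monitor", "deviate"): (-5,  4),   # unsafe plan harms the human
        }

        human_actions = ["monitor", "not_monitor"]
        robot_actions = ["safe", "deviate"]

        def pure_nash(payoffs):
            """Return profiles where neither player gains by deviating alone."""
            equilibria = []
            for h in human_actions:
                for r in robot_actions:
                    hu, ru = payoffs[(h, r)]
                    best_h = all(hu >= payoffs[(h2, r)][0] for h2 in human_actions)
                    best_r = all(ru >= payoffs[(h, r2)][1] for r2 in robot_actions)
                    if best_h and best_r:
                        equilibria.append((h, r))
            return equilibria

        print(pure_nash(payoffs))   # [('monitor', 'safe')] with these numbers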

    Reconciliation of object interaction models

    Get PDF
    This paper presents Reconciliation+, a tool-supported method which identifies overlaps between models of different object interactions expressed as UML sequence and/or collaboration diagrams, checks whether the overlapping elements of these models satisfy specific consistency rules, and guides developers in handling any inconsistencies found. The method also keeps track of the decisions made and the actions taken in the process of managing inconsistencies.
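
    The sketch below illustrates one kind of consistency rule such a method might check over two interaction models of the same scenario: messages common to both diagrams should occur in the same relative order. Both the rule and the flat encoding of the diagrams are assumptions for illustration, not Reconciliation+'s actual rule set.

        # Illustrative consistency check between two interaction models,
        # each encoded simply as the ordered list of message names it sends.
        def order_consistent(model_a, model_b):
            shared = set(model_a) & set(model_b)
            seq_a = [m for m in model_a if m in shared]
            seq_b = [m for m in model_b if m in shared]
            return seq_a == seq_b, shared

        sequence_diagram = ["login", "validate", "query", "render"]
        collaboration_diagram = ["login", "query", "validate", "render"]

        ok, overlap = order_consistent(sequence_diagram, collaboration_diagram)
        print(ok, sorted(overlap))   # False ['login', 'query', 'render', 'validate']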

    Recipes for Translating Big Data Machine Reading to Executable Cellular Signaling Models

    Full text link
    With the tremendous increase in the amount of biological literature, developing automated methods for extracting big data from papers, building models, and explaining big mechanisms becomes a necessity. We describe here our approach to translating machine reading outputs, obtained by reading biological signaling literature, into discrete models of cellular networks. We use outputs from three different reading engines and describe our approach to translating their different features, using examples from reading cancer literature. We also outline several issues that still arise when assembling cellular network models from state-of-the-art reading engines. Finally, we illustrate the details of our approach with a case study in pancreatic cancer.
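
    To make the translation step tangible, the following is a hedged sketch of turning extracted interaction triples into a discrete (Boolean) network model and stepping it synchronously. The triple format, the update rule, and the toy pathway are assumptions for illustration and do not reproduce the output of any specific reading engine or the paper's exact model type.

        # Turn (regulator, effect, target) triples from machine reading into
        # Boolean update rules, then run synchronous update steps.
        interactions = [
            ("KRAS",  "activates", "RAF"),
            ("RAF",   "activates", "ERK"),
            ("DUSP6", "inhibits",  "ERK"),
        ]

        def build_rules(triples):
            """Collect positive and negative regulators for each target element."""
            rules = {}
            for regulator, effect, target in triples:
                pos, neg = rules.setdefault(target, (set(), set()))
                (pos if effect == "activates" else neg).add(regulator)
            return rules

        def step(state, rules):
            """Active if any activator is on and no inhibitor is on."""
            new_state = dict(state)
            for target, (pos, neg) in rules.items():
                new_state[target] = (any(state.get(p, False) for p in pos)
                                     and not any(state.get(n, False) for n in neg))
            return new_state

        rules = build_rules(interactions)
        state = {"KRAS": True, "RAF": False, "ERK": False, "DUSP6": False}
        state = step(state, rules)      # RAF turns on
        state = step(state, rules)      # ERK turns on in the next step
        print(state)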

    2D implementation of quantum annealing algorithms for fourth-order binary optimization problems

    Full text link
    Quantum annealing may provide advantages over simulated annealing for solving some problems, such as the Kth-order binary optimization problem. However, no feasible architecture exists to implement high-order optimization problems (K > 2) on current quantum annealing hardware. We propose a two-dimensional quantum annealing architecture to solve the 4th-order binary optimization problem by encoding four-qubit interactions within the coupled local fields acting on a set of physical qubits. All possible four-body coupling terms for an N-qubit system can be implemented through this architecture and are readily realizable with existing superconducting circuit technologies. The overhead of the physical qubits is O(N^4), which is the same as previously proposed architectures in four-dimensional space. The equivalence between the optimization problem Hamiltonian and the executable Hamiltonian is ensured by a gauge-invariant subspace of the experimental system. A scheme to realize the local gauge constraint with a single ancillary qubit is proposed. Comment: 16 pages, 6 figures
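
    For orientation, the generic form of a fourth-order binary objective and the count of four-body terms behind the O(N^4) overhead can be written as below; this is a generic statement about Kth-order problems with K = 4, not the paper's specific embedding.

        % Generic 4th-order binary objective over spins s_i in {-1, +1}
        H_{\mathrm{prob}} = \sum_{i<j<k<l} J_{ijkl}\, s_i s_j s_k s_l
                          + \sum_{i<j<k} J_{ijk}\, s_i s_j s_k
                          + \sum_{i<j} J_{ij}\, s_i s_j
                          + \sum_{i} h_i\, s_i,
        \qquad
        \binom{N}{4} = \frac{N(N-1)(N-2)(N-3)}{24} = O(N^4).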

    A Logic of Knowing How

    Full text link
    In this paper, we propose a single-agent modal logic framework for reasoning about goal-directed "knowing how", based on ideas from linguistics, philosophy, modal logic and automated planning. We first define a modal language to express "I know how to guarantee phi given psi", with a semantics based not on standard epistemic models but on labelled transition systems that represent the agent's knowledge of its own abilities. A sound and complete proof system is given to capture the valid reasoning patterns about "knowing how", where the most important axiom suggests its compositional nature. Comment: 14 pages, a 12-page version accepted by LORI
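
    A minimal sketch of the plan-based reading of "knowing how to guarantee phi given psi" follows: some fixed action sequence, executable from every psi-state of the agent's ability model, always ends in phi-states. The labelled-transition-system encoding and the executability condition below are simplified assumptions for illustration, not the paper's exact semantics.

        # Check whether a given plan guarantees phi from every psi-state of an LTS.
        def plan_guarantees(lts, psi_states, phi_states, plan):
            """lts: dict mapping (state, action) -> set of successor states."""
            current = set(psi_states)
            for action in plan:
                # the action must be available in every state the agent
                # might currently be in
                if any((s, action) not in lts for s in current):
                    return False
                current = set().union(*(lts[(s, action)] for s in current))
            return current <= set(phi_states)

        # Toy ability model: from either starting room, "walk" then "unlock"
        # always ends at the goal.
        lts = {
            ("room1", "walk"): {"hall"},
            ("room2", "walk"): {"hall"},
            ("hall", "unlock"): {"goal"},
        }
        print(plan_guarantees(lts, {"room1", "room2"}, {"goal"}, ["walk", "unlock"]))  # True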

    Verifying Web Applications: From Business Level Specifications to Automated Model-Based Testing

    Full text link
    One of the reasons preventing a wider uptake of model-based testing in industry is the difficulty developers encounter when trying to think in terms of properties rather than linear specifications. A disparity has traditionally been perceived between the language spoken by customers who specify the system and the language required to construct models of that system. The dynamic nature of the specifications for commercial systems further aggravates this problem, in that models would need to be rechecked after every specification change. In this paper, we propose an approach for converting specifications written in the commonly used quasi-natural language Gherkin into models for use with a model-based testing tool. We have instantiated this approach using QuickCheck and demonstrate its applicability via a case study on the eHealth system, the national health portal for Maltese residents. Comment: In Proceedings MBT 2014, arXiv:1403.704
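
    The sketch below illustrates only the basic idea of deriving a test model from Gherkin-style steps: Given/When/Then clauses become a start state, a transition, and an expected state. The scenario text and the mapping are assumptions for illustration and do not reproduce the paper's QuickCheck-based translation.

        # Turn one Given/When/Then scenario into a (state, event, state) triple
        # that a model-based testing tool could use to generate test sequences.
        import re

        scenario = """\
        Given the user is logged out
        When the user submits valid credentials
        Then the user is logged in"""

        def to_transition(text):
            given = re.search(r"Given (.+)", text).group(1)
            when = re.search(r"When (.+)", text).group(1)
            then = re.search(r"Then (.+)", text).group(1)
            return (given, when, then)

        print(to_transition(scenario))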