
    RuleCNL: A Controlled Natural Language for Business Rule Specifications

    Business rules represent the primary means by which companies define their business and perform their actions in order to reach their objectives. They therefore need to be expressed unambiguously, to avoid inconsistencies between business stakeholders, and formally, so that they can be machine-processed. A promising solution is the use of a controlled natural language (CNL), which is a good mediator between natural and formal languages. This paper presents RuleCNL, a CNL for defining business rules. Its core feature is the alignment of the business rule definition with the business vocabulary, which ensures traceability and consistency with the business domain. The RuleCNL tool provides editors that assist end-users in the writing process and automatic mappings into the Semantics of Business Vocabulary and Business Rules (SBVR) standard. SBVR is grounded in first-order logic and includes constructs called semantic formulations that structure the meaning of rules.
    Comment: 12 pages, 7 figures, Fourth Workshop on Controlled Natural Language (CNL 2014) proceedings
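    As a toy illustration of the CNL-to-logic idea, the sketch below parses one invented rule pattern into a nested, SBVR-flavoured formulation. The vocabulary, verb concepts, and grammar are assumptions for the example, not RuleCNL's actual syntax; it also hints at the vocabulary alignment the abstract mentions, since only terms declared in the toy vocabulary can appear in a rule.

    import re

    # Hypothetical business vocabulary and verb concepts (invented for the example).
    TERMS = {"rental car", "branch"}
    VERBS = {"is owned by"}

    term = "|".join(map(re.escape, sorted(TERMS, key=len, reverse=True)))
    verb = "|".join(map(re.escape, VERBS))
    PATTERN = re.compile(
        rf"It is obligatory that each (?P<subject>{term}) "
        rf"(?P<verb>{verb}) at least (?P<n>\d+) (?P<object>{term})\."
    )

    def parse_rule(sentence: str) -> dict:
        """Map one CNL sentence to a nested, SBVR-flavoured formulation."""
        m = PATTERN.match(sentence)
        if m is None:
            raise ValueError("sentence is outside the toy grammar")
        return {
            "modality": "obligation",                # obligation formulation
            "for_each": m["subject"],                # universal quantification
            "at_least": (int(m["n"]), m["object"]),  # at-least-n quantification
            "verb_concept": m["verb"],
        }

    print(parse_rule("It is obligatory that each rental car is owned by at least 1 branch."))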

    Application of Semantics to Solve Problems in Life Sciences

    Thesis defense date: 10 December 2018. The amount of information generated on the Web has increased in recent years. Most of this information is available as text, with humans being the Web's primary users. However, despite all the advances in natural language processing, computers still struggle to process this textual information. In this context, there are application domains, such as the Life Sciences, in which large amounts of information are being published as structured data. The analysis of these data is vitally important, not only for the advancement of science but also for progress in healthcare. However, these data are located in different repositories and stored in different formats, which makes their integration difficult. In this context, the Linked Data paradigm has emerged as a technology built on standards proposed by the W3C community, such as HTTP URIs and the RDF and OWL standards. Building on this technology, this doctoral thesis was developed to meet the following main objectives: 1) to promote the use of Linked Data by the Life Sciences user community; 2) to facilitate the design of SPARQL queries by discovering the model underlying RDF repositories; 3) to create a collaborative environment that facilitates the consumption of Linked Data by end users; 4) to develop an algorithm that automatically discovers the OWL semantic model of an RDF repository; and 5) to develop an OWL representation of ICD-10-CM, called Dione, that offers an automatic methodology for classifying patients' diseases, with subsequent validation using an OWL reasoner.
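    In the spirit of objective 2 (discovering the model underlying an RDF repository to ease SPARQL query design), a minimal sketch follows. The endpoint URL is a placeholder, and the query is a generic schema-discovery query, not the thesis's algorithm.

    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("https://example.org/sparql")  # hypothetical endpoint
    endpoint.setReturnFormat(JSON)

    # Which classes are actually instantiated in the repository, and how often?
    endpoint.setQuery("""
        SELECT ?class (COUNT(?s) AS ?instances)
        WHERE { ?s a ?class }
        GROUP BY ?class
        ORDER BY DESC(?instances)
        LIMIT 20
    """)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["class"]["value"], row["instances"]["value"])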

    Inductive learning of the surgical workflow model through video annotations

    Surgical workflow modeling is becoming increasingly useful for training surgical residents in complex surgical procedures. Rule-based surgical workflows have been shown to be useful for creating context-aware systems. However, manually constructing production rules is a time-intensive and laborious task. With the expansion of new technologies, large video archives can be created and annotated, exploiting and storing the expert's knowledge. This paper presents a prototypical study of the automatic generation of production rules, in Horn-clause form, using the First Order Inductive Learner (FOIL) algorithm applied to annotated surgical videos of the Thoracentesis procedure, and of the feasibility of its use in a context-aware system framework. The algorithm was able to learn 18 rules for the surgical workflow model, with 0.88 precision and 0.94 F1 score on the standard video annotation data representing entities of the surgical workflow, which was used to retrieve contextual information on the Thoracentesis workflow for application to surgical training.
    Nakawala, Hirenkumar Chandrakant; De Momi, Elena; Pescatori, Erica Laura; Morelli, Anna; Ferrigno, Giancarlo
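    The sketch below illustrates, on invented predicates and frames, how one learned Horn-clause rule of this kind could be applied to annotation facts to recover workflow context; it is not the paper's FOIL implementation, and the phase and entity names are assumptions.

    # Annotation facts extracted from video, as (predicate, entity, frame) tuples.
    facts = {
        ("holds", "needle", "frame_12"),
        ("visible", "ultrasound_probe", "frame_12"),
        ("holds", "syringe", "frame_40"),
    }

    def rule_insertion_phase(frame: str) -> bool:
        """phase(F, insertion) :- holds(needle, F), visible(ultrasound_probe, F)."""
        return ("holds", "needle", frame) in facts and \
               ("visible", "ultrasound_probe", frame) in facts

    for frame in ("frame_12", "frame_40"):
        print(frame, "insertion phase" if rule_insertion_phase(frame) else "other phase")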

    A formal foundation for ontology alignment interaction models

    Ontology alignment foundations are hard to find in the literature. The abstract nature of the topic and the diverse means of practice make it difficult to capture in a universal formal foundation. We argue that such a lack of formality hinders further development and convergence of practices and, in particular, prevents us from achieving greater levels of automation. In this article we present a formal foundation for ontology alignment that is based on interaction models between heterogeneous agents on the Semantic Web. We use the mathematical notion of information flow in a distributed system to ground our three hypotheses for enabling semantic interoperability, and we use a motivating example throughout the article: how to progressively align two ontologies of research quality assessment through meaning coordination. We conclude the article with the presentation, in an executable specification language, of such an ontology-alignment interaction model.
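    A small sketch of the information-flow intuition, assuming two invented ontologies of research quality assessment: shared tokens (publications) are classified by each ontology's own types, and type correspondences are suggested when their extents coincide.

    onto_a = {  # token -> types in ontology A
        "paper1": {"JournalArticle", "PeerReviewed"},
        "paper2": {"ConferencePaper"},
        "paper3": {"JournalArticle"},
    }
    onto_b = {  # the same tokens classified by ontology B
        "paper1": {"RefereedOutput"},
        "paper2": {"ConferenceContribution"},
        "paper3": {"RefereedOutput"},
    }

    def candidate_alignments(a, b):
        """Pair types that classify exactly the same tokens in both ontologies."""
        def extent(cls, classification):
            return {t for t, types in classification.items() if cls in types}
        types_a = {c for ts in a.values() for c in ts}
        types_b = {c for ts in b.values() for c in ts}
        return [(x, y) for x in types_a for y in types_b
                if extent(x, a) == extent(y, b)]

    # Suggests JournalArticle~RefereedOutput and ConferencePaper~ConferenceContribution.
    print(candidate_alignments(onto_a, onto_b))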

    Development of an ontology for aerospace engine components degradation in service

    This paper presents the development of an ontology for component service degradation. Degradation mechanisms in gas turbine metallic components are used as a case study to explain how a taxonomy within an ontology can be validated. The validation method uses an iterative process and sanity checks. Data extracted from on-demand textual information are filtered and grouped into classes of degradation mechanisms. Various concepts are systematically and hierarchically arranged for use in the service maintenance ontology. The allocation of the mechanisms to the AS-IS ontology provides a robust data collection hub. Data integrity is guaranteed when the TO-BE ontology is introduced to analyse processes relative to various failure events. The initial evaluation reveals improvement in the performance of the TO-BE domain ontology based on iterations and updates with recognised mechanisms. The information extracted and collected is required to improve the service knowledge and performance feedback that are important for service engineers. Existing research areas such as natural language processing, knowledge management, and information extraction were also examined.
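    A minimal sketch of the kind of iterative sanity checks such a validation might apply, assuming a hand-coded toy taxonomy of degradation mechanisms rather than the paper's ontology:

    # class -> parent (None marks the root); names are illustrative.
    taxonomy = {
        "DegradationMechanism": None,
        "Corrosion": "DegradationMechanism",
        "HotCorrosion": "Corrosion",
        "Fatigue": "DegradationMechanism",
        "ThermalFatigue": "Fatigue",
    }

    def sanity_check(tax: dict) -> list[str]:
        problems = []
        roots = [c for c, p in tax.items() if p is None]
        if len(roots) != 1:
            problems.append(f"expected one root, found {roots}")
        for cls, parent in tax.items():
            if parent is not None and parent not in tax:
                problems.append(f"{cls} points to undeclared parent {parent}")
            seen, cur = {cls}, parent   # walk upwards to detect cycles
            while cur is not None:
                if cur in seen:
                    problems.append(f"cycle through {cls}")
                    break
                seen.add(cur)
                cur = tax.get(cur)
        return problems

    print(sanity_check(taxonomy) or "taxonomy passes the checks")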

    Learning from Ordinal Data with Inductive Logic Programming in Description Logic

    Here we describe a Description Logic (DL) based Inductive Logic Programming (ILP) algorithm for learning relations of order. We test our algorithm on the task of learning user preferences from pairwise comparisons. The results have implications for the development of customised recommender systems for e-commerce and, more broadly, wherever DL-based representations of knowledge, such as OWL ontologies, are used. The use of DL makes for easy integration with such data and produces hypotheses that are easy for novice users to interpret. The proposed algorithm outperforms SVM, decision trees, and Aleph on data from two domains.
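    A toy sketch of the learning task's core loop follows: a candidate hypothesis is scored by how many pairwise comparisons it explains. The items, features, and rule are invented, and a plain Python predicate stands in for a DL concept expression.

    items = {
        "hotel_a": {"stars": 4, "price": 120},
        "hotel_b": {"stars": 3, "price": 90},
        "hotel_c": {"stars": 5, "price": 300},
    }
    # (x, y) means "the user preferred x over y".
    comparisons = [("hotel_a", "hotel_b"), ("hotel_c", "hotel_a"), ("hotel_c", "hotel_b")]

    def rule_prefers(x: dict, y: dict) -> bool:
        """Candidate hypothesis: prefer the item with more stars."""
        return x["stars"] > y["stars"]

    covered = sum(rule_prefers(items[x], items[y]) for x, y in comparisons)
    print(f"rule explains {covered}/{len(comparisons)} comparisons")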

    A Semantic Information Management Approach for Improving Bridge Maintenance based on Advanced Constraint Management

    Bridge rehabilitation projects are important for transportation infrastructure. This research proposes a novel information management approach based on state-of-the-art deep learning models and ontologies. The approach can automatically extract, integrate, complete, and search for project knowledge buried in unstructured text documents. On the one hand, the approach facilitates the implementation of modern management approaches, i.e., advanced work packaging, to deliver successful bridge rehabilitation projects; on the other hand, it improves information management practices in the construction industry.
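    A minimal sketch of the pipeline shape described above, with a trivial regex standing in for the deep-learning extractor; the namespace and predicate names are invented, not the paper's model.

    import re
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/bridge#")  # hypothetical namespace
    g = Graph()

    report = "Deck repair on span 3 requires lane closure before 06:00."
    m = re.search(r"(?P<task>[\w ]+?) on (?P<loc>span \d+) requires (?P<constraint>.+)\.", report)
    if m:
        task = EX[m["task"].strip().replace(" ", "_")]
        g.add((task, RDF.type, EX.Task))
        g.add((task, EX.location, Literal(m["loc"])))
        g.add((task, EX.constraint, Literal(m["constraint"])))

    # Constraint lookup, e.g. when assembling a work package for span 3.
    for task, _, constraint in g.triples((None, EX.constraint, None)):
        print(task, "->", constraint)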

    Formal description and automatic generation of learning spaces based on ontologies

    Doctoral thesis in Informatics. A good Learning Space (LS) should convey pertinent information to visitors at the most adequate time and location to favour their knowledge acquisition. This statement justifies the relevance of virtual Learning Spaces. Considering the consolidation of the Internet and the improvement of interaction, searching, and learning mechanisms, this work proposes a generic architecture, called CaVa, to create virtual Learning Spaces building upon cultural institution documents. More precisely, the proposal is to automatically generate ontology-based virtual learning environments from document repositories. Thus, to impart relevant learning materials to the virtual LS, this proposal is based on using ontologies to represent the fundamental concepts and semantic relations in a user- and machine-understandable format. These concepts, together with the data (extracted from the real documents) stored in a digital repository, are displayed in a web-based LS that enables visitors to use the available features and tools to learn about a specific domain. According to the approach discussed here, each desired virtual LS must be specified rigorously through a Domain-Specific Language (DSL), called CaVaDSL, designed and implemented in this work. Furthermore, a set of processors (generators) was developed. These generators take a CaVaDSL specification as input and transform it into several web scripts to be recognised and rendered by a web browser, producing the final virtual LS. Aiming to validate the proposed architecture, three real case studies were used: (1) emigration documents belonging to Fafe's Archive; (2) the prosopographical repository of the Fasti Ecclesiae Portugaliae project; and (3) the collection of life stories of the Museum of the Person. These real scenarios are highly relevant, as they promote the digital preservation and dissemination of Cultural Heritage, contributing to human welfare.
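    A toy generator in the spirit of CaVa is sketched below: it reads a small DSL-like specification and emits an HTML page. The spec syntax is invented for illustration; the real CaVaDSL grammar is richer.

    spec = """
    learningSpace: Emigration Documents
    exhibit: Passports | source=fafe_archive
    exhibit: Boarding records | source=fafe_archive
    """

    def generate(spec: str) -> str:
        """Transform the toy spec into a single HTML page."""
        title, rooms = "Untitled", []
        for line in filter(None, map(str.strip, spec.splitlines())):
            key, _, value = line.partition(":")
            if key == "learningSpace":
                title = value.strip()
            elif key == "exhibit":
                name, _, opts = value.partition("|")
                rooms.append(f"<li>{name.strip()} <small>({opts.strip()})</small></li>")
        return f"<html><body><h1>{title}</h1><ul>{''.join(rooms)}</ul></body></html>"

    print(generate(spec))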

    Hierarchical Multi-Label Classification Using Web Reasoning for Large Datasets

    Extracting valuable data from large volumes of data is one of the main challenges in Big Data. In this paper, a Hierarchical Multi-Label Classification process called Semantic HMC is presented. This process aims to extract valuable data from very large data sources by automatically learning a label hierarchy and classifying data items. The Semantic HMC process is composed of five scalable steps, namely Indexation, Vectorization, Hierarchization, Resolution, and Realization. The first three steps automatically construct a label hierarchy from a statistical analysis of the data. This paper focuses on the last two steps, which perform item classification according to the label hierarchy. The process is implemented as a scalable, distributed application and deployed on a Big Data platform. A quality evaluation is described that compares the approach with state-of-the-art multi-label classification algorithms dedicated to the same goal. The Semantic HMC approach outperforms state-of-the-art approaches in some areas.
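    A small sketch of the hierarchy consistency that classification against a label hierarchy must enforce, assuming an invented hierarchy: once a label is assigned to an item, all of its ancestors are assigned too.

    # child -> parent links of a toy label hierarchy (invented for the example).
    parents = {"jazz": "music", "music": "culture", "cycling": "sport"}

    def resolve(raw_labels: set[str]) -> set[str]:
        """Close a raw label set under the label hierarchy."""
        resolved = set(raw_labels)
        for label in raw_labels:
            while label in parents:      # walk up to the root
                label = parents[label]
                resolved.add(label)
        return resolved

    print(resolve({"jazz", "cycling"}))  # jazz, music, culture, cycling, sport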