5 research outputs found

    A Semantic Web pragmatic approach to develop Clinical ontologies, and thus Semantic Interoperability, based in HL7 v2.xml messaging

    No full text
    The ISO/HL7 27931:2009 standard intends to establish a global interoperability framework for Healthcare applications. However, being a messaging-related protocol, it lacks a semantic foundation for interoperability at a machine-treatable level, as intended through the Semantic Web. There is no alignment between the HL7 V2.xml message payloads and a meaning service such as a suitable ontology. Careful application of Semantic Web tools and concepts can greatly ease the path to the fundamental concept of Shared Semantics. In this paper, the Semantic Web and Artificial Intelligence tools and techniques that allow aligned ontology population are presented and their applicability discussed. We examine the inadequacy of the HL7 RIM for ontology mapping and how to circumvent it, present NLP techniques for semi-automated ontology population, and discuss current trends in knowledge representation and reasoning that contribute to the proposed goal.

    Predicting Reasoner Performance on ABox Intensive OWL 2 EL Ontologies

    No full text
    In this article, the authors introduce the notion of ABox intensity in the context of predicting reasoner performance, to improve the representativeness of ontology metrics, and develop new metrics that focus on ABox features of OWL 2 EL ontologies. Their experiments show that accounting for intensity through the proposed metrics improves overall prediction accuracy for ABox-intensive ontologies.

    Spatio-Temporal Analysis for Human Action Detection and Recognition in Uncontrolled Environments

    No full text
    Understanding the semantic meaning of human actions captured in unconstrained environments has broad applications in fields ranging from patient monitoring and human-computer interaction to surveillance systems. However, while great progress has been made on automatic human action detection and recognition in videos captured in controlled/constrained environments, most existing approaches perform unsatisfactorily on videos with uncontrolled/unconstrained conditions (e.g., significant camera motion, background clutter, scaling, and lighting conditions). To address this issue, the authors propose a robust human action detection and recognition framework that works effectively on videos taken in either controlled or uncontrolled environments. Specifically, the authors integrate the optical flow field and the Harris3D corner detector to generate a new spatio-temporal information representation for each video sequence, from which a general Gaussian mixture model (GMM) is learned. All the mean vectors of the Gaussian components in the learned GMM are concatenated to create the GMM supervector for video action recognition. They build a boosting classifier based on a set of sparse representation classifiers and Hamming distance classifiers to improve the accuracy of action recognition. The experimental results on two widely used public data sets, KTH and UCF YouTube Action, show that the proposed framework outperforms other state-of-the-art approaches on both action detection and recognition.
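    The GMM-supervector step of the pipeline above can be illustrated with a minimal numpy sketch: fit a diagonal-covariance GMM to a clip's local spatio-temporal descriptors via EM, then concatenate the component means into one fixed-length vector. The function name, the toy descriptors, and the diagonal-EM details are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def gmm_supervector(descriptors, k=3, iters=20, seed=0):
        """Fit a diagonal-covariance GMM to local descriptors via EM,
        then concatenate the component means into one supervector."""
        rng = np.random.default_rng(seed)
        n, d = descriptors.shape
        # Initialise means from random descriptors, unit variances, uniform weights.
        means = descriptors[rng.choice(n, k, replace=False)].astype(float)
        var = np.ones((k, d))
        w = np.full(k, 1.0 / k)
        for _ in range(iters):
            # E-step: responsibilities under each diagonal Gaussian (log domain).
            diff = descriptors[:, None, :] - means[None, :, :]          # (n, k, d)
            log_p = -0.5 * np.sum(diff**2 / var + np.log(2 * np.pi * var), axis=2)
            log_p += np.log(w)
            log_p -= log_p.max(axis=1, keepdims=True)
            resp = np.exp(log_p)
            resp /= resp.sum(axis=1, keepdims=True)                     # (n, k)
            # M-step: re-estimate weights, means, and variances.
            nk = resp.sum(axis=0) + 1e-10
            w = nk / n
            means = (resp.T @ descriptors) / nk[:, None]
            diff = descriptors[:, None, :] - means[None, :, :]
            var = np.einsum('nk,nkd->kd', resp, diff**2) / nk[:, None] + 1e-6
        return means.flatten()  # supervector of length k * d

    # Toy usage: 200 random 5-D "descriptors" standing in for one clip.
    X = np.random.default_rng(1).normal(size=(200, 5))
    sv = gmm_supervector(X, k=3)
    print(sv.shape)  # (15,)
    ```

    Because every clip yields a supervector of the same length k * d regardless of how many descriptors it produced, these vectors can feed a conventional classifier, as the boosted classifier ensemble does in the paper.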