11 research outputs found

    SWI-Prolog and the Web

    Where Prolog is commonly seen as a component in a Web application that is either embedded or communicates using a proprietary protocol, we propose an architecture in which Prolog communicates with the other components of a Web application using the standard HTTP protocol. By avoiding embedding in external Web servers, development and deployment become much easier. To support this architecture, in addition to the transfer protocol, we must also support parsing, representing and generating the key Web document types such as HTML, XML and RDF. This paper motivates the design decisions in the libraries and extensions to Prolog for handling Web documents and protocols. The design has been guided by the requirement to handle large documents efficiently. The described libraries support a wide range of Web applications, from HTML and XML documents to Semantic Web RDF processing. To appear in Theory and Practice of Logic Programming (TPLP). Comment: 31 pages, 24 figures and 2 tables.
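    The "representing and generating" side of this design can be illustrated with a minimal sketch: documents are built from structured terms rather than by string concatenation, which is the style SWI-Prolog's HTML libraries take. Python is used here purely for illustration; the nested-tuple representation and all names are hypothetical, not the SWI-Prolog API.

    ```python
    from html import escape

    def render(node):
        """Render a nested (tag, attrs, children) term as HTML text."""
        if isinstance(node, str):          # leaf: text content, escaped
            return escape(node)
        tag, attrs, children = node
        attr_text = "".join(f' {k}="{escape(v, quote=True)}"' for k, v in attrs.items())
        inner = "".join(render(c) for c in children)
        return f"<{tag}{attr_text}>{inner}</{tag}>"

    page = ("html", {}, [
        ("body", {}, [
            ("h1", {}, ["SWI-Prolog & the Web"]),
            ("p", {"class": "abstract"}, ["Components talk plain HTTP."]),
        ]),
    ])
    print(render(page))
    ```

    Because the document is a term until the final rendering step, escaping is applied uniformly and malformed markup cannot be emitted, which is one reason abstracts like this one favour term-based generation over templates.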

    Missing lateral relationships in top‑level concepts of an ontology

    Background: Ontologies house various kinds of domain knowledge in formal structures, primarily in the form of concepts and the associative relationships between them. Ontologies have become integral components of many health information processing environments. Hence, quality assurance of the conceptual content of any ontology is critical. Relationships are foundational to the definition of concepts. Missing relationship errors (i.e., unintended omissions of important definitional relationships) can have a deleterious effect on the quality of an ontology. An abstraction network is a structure that overlays an ontology and provides an alternate, summarization view of its contents. One kind of abstraction network is called an area taxonomy, and a variation of it is called a subtaxonomy. A methodology based on these taxonomies for more readily finding missing relationship errors is explored. Methods: The area taxonomy and the subtaxonomy are deployed to help reveal concepts that have a high likelihood of exhibiting missing relationship errors. A specific top-level grouping unit found within the area taxonomy and subtaxonomy, when deemed to be anomalous, is used as an indicator that missing relationship errors are likely to be found among certain concepts. Two hypotheses pertaining to the effectiveness of our Quality Assurance approach are studied. Results: Our Quality Assurance methodology was applied to the Biological Process hierarchy of the National Cancer Institute thesaurus (NCIt) and SNOMED CT’s Eye/vision finding subhierarchy within its Clinical finding hierarchy. Many missing relationship errors were discovered and confirmed in our analysis. For both test-bed hierarchies, our Quality Assurance methodology yielded a statistically significantly higher number of concepts with missing relationship errors in comparison to a control sample of concepts. Two hypotheses are confirmed by these findings. 
Conclusions: Quality assurance is a critical part of an ontology's lifecycle, and automated or semi-automated tools for supporting this process are invaluable. We introduced a Quality Assurance methodology targeted at missing relationship errors. Its successful application to the NCIt's Biological Process hierarchy and SNOMED CT's Eye/vision finding subhierarchy indicates that it can be a useful addition to the arsenal of tools available to ontology maintenance personnel.
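    The core grouping idea behind an area taxonomy can be sketched in a few lines: concepts are partitioned by the exact set of relationship types they exhibit, and small or anomalous groups become audit candidates for missing relationships. This is a toy illustration under assumed data, not the authors' implementation.

    ```python
    from collections import defaultdict

    # Hypothetical concepts mapped to the relationship types they have.
    concepts = {
        "Cell Division": {"is_a", "has_location", "has_participant"},
        "Mitosis":       {"is_a", "has_location", "has_participant"},
        "Apoptosis":     {"is_a", "has_location"},   # lacks has_participant
    }

    # Group concepts into "areas" by their exact set of relationship types.
    areas = defaultdict(list)
    for concept, rel_types in concepts.items():
        areas[frozenset(rel_types)].append(concept)

    # The smallest area is structurally anomalous: its concepts may be
    # missing a relationship that their peers have, so review them first.
    smallest = min(areas.values(), key=len)
    print("review first:", smallest)
    ```

    In a real taxonomy the areas are further organized hierarchically, but the signal is the same: structurally uniform groups tend to be semantically uniform, so outlier groups concentrate likely errors.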

    Structural auditing methodologies for controlled terminologies

    Several auditing methodologies for large controlled terminologies are developed. These are applied to the Unified Medical Language System (UMLS) and the National Cancer Institute Thesaurus (NCIT). Structural auditing methodologies are based on structural aspects such as IS-A hierarchy relationships, groups of concepts assigned to semantic types, and groups of relationships defined for concepts. Structurally uniform groups of concepts tend to be semantically uniform, so structural auditing methodologies focus on concepts with unlikely or rare configurations; these concepts have a high likelihood of errors. One of the methodologies is based on comparing hierarchical relationships between the META and the SN, the two major knowledge sources of the UMLS. In general, a correspondence between them is expected, since the SN hierarchical relationships should abstract the META hierarchical relationships; a mismatch may indicate an error. The UMLS SN has 135 categories called semantic types. However, in spite of its medium size, the SN has limited use for comprehension purposes because it cannot easily be represented in pictorial form, as it has many (about 7,000) relationships. Therefore, a higher-level abstraction of the SN, called a metaschema, is constructed. Its nodes are meta-semantic types, each representing a connected group of semantic types of the SN. One of the auditing methodologies is based on a kind of metaschema called a cohesive metaschema. The focus is placed on concepts in intersections of meta-semantic types. As is shown, such concepts have a high likelihood of errors. Another auditing methodology is based on dividing the NCIT into areas according to the roles of its concepts. Moreover, each multi-rooted area is further divided into p-areas that are singly rooted. Each p-area contains a group of structurally and semantically uniform concepts. 
These groups, as well as two derived abstraction networks called taxonomies, help in focusing on concepts with potential errors. With genomic research at the forefront of bioscience, this auditing methodology is applied to the Gene hierarchy as well as the Biological Process hierarchy of the NCIT, since processes are very important for gene information. The results support the hypothesis that the occurrence of errors is related to the size of p-areas: errors are more frequent in small p-areas.
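    The META/SN correspondence audit mentioned above has a simple skeleton: for every IS-A pair in the META, check that the child's semantic type matches or specializes the parent's semantic type in the Semantic Network, and flag mismatches for human review. The sketch below uses hypothetical data and is only an illustration of that check, not the actual UMLS tooling.

    ```python
    # Hypothetical META IS-A pairs and semantic-type assignments.
    meta_is_a = [("Mitral Valve Stenosis", "Heart Valve Disease")]
    semantic_type = {"Mitral Valve Stenosis": "Finding",
                     "Heart Valve Disease": "Disease or Syndrome"}
    sn_is_a = {"Disease or Syndrome": "Pathologic Function"}  # child -> parent

    def ancestors(st):
        """Semantic types above st in the (hypothetical) SN hierarchy."""
        seen = []
        while st in sn_is_a:
            st = sn_is_a[st]
            seen.append(st)
        return seen

    # Flag META child/parent pairs whose semantic types neither match
    # nor stand in an SN ancestor relationship: candidates for audit.
    flagged = [(c, p) for c, p in meta_is_a
               if semantic_type[c] != semantic_type[p]
               and semantic_type[p] not in ancestors(semantic_type[c])]
    print(flagged)
    ```

    Here the pair is flagged because "Finding" does not specialize "Disease or Syndrome" in the toy SN, which is exactly the kind of mismatch the methodology submits for review.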

    Adding Threshold Concepts to the Description Logic EL

    We introduce a family of logics extending the lightweight Description Logic EL that allows us to define concepts in an approximate way. The main idea is to use a graded membership function m, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts C~t for ~ in {<, <=, >, >=} then collect all the individuals that belong to C with degree ~t. We further study this framework in two particular directions. First, we define a specific graded membership function deg and investigate the complexity of reasoning in the resulting Description Logic tEL(deg) w.r.t. both the empty terminology and acyclic TBoxes. Second, we show how to turn concept similarity measures into membership degree functions. It turns out that under certain conditions such functions are well defined and therefore induce a wide range of threshold logics. Finally, we present preliminary results on the computational complexity landscape of reasoning in this big family of threshold logics.
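    The threshold-concept idea is small enough to show concretely: given membership degrees m(x, C) in [0,1], the concept C~t collects the individuals whose degree stands in relation ~ to t. The membership values below are invented for illustration; they are not the deg function from the paper.

    ```python
    import operator

    # Hypothetical graded membership values m(individual, concept) in [0,1].
    m = {("a", "Tall"): 0.9, ("b", "Tall"): 0.55, ("c", "Tall"): 0.2}

    OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt, ">=": operator.ge}

    def threshold_concept(concept, op, t):
        """Individuals x with m(x, concept) ~ t, for ~ in {<, <=, >, >=}."""
        return {x for (x, c), deg in m.items() if c == concept and OPS[op](deg, t)}

    print(threshold_concept("Tall", ">=", 0.5))
    ```

    With these values, Tall>=0.5 collects a and b but not c; the paper's contribution is the logic and complexity analysis around such concepts, not this evaluation step itself.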

    Dokumentverifikation mit Temporaler Beschreibungslogik

    The thesis proposes a new formal framework for checking the content of web documents along individual reading paths. It is vital for the readability of web documents that their content is consistent and coherent along the possible browsing paths through the document. Manually ensuring the coherence of content along the possibly huge number of different browsing paths in a web document is time-consuming and error-prone. Existing methods for document validation and verification are not sufficiently expressive and efficient. The innovative core idea of this thesis is to combine the temporal logic CTL and the description logic ALC for the representation of consistency criteria. The resulting new temporal description logic ALCCTL can, in contrast to existing specification formalisms, compactly represent coherence criteria on documents. Verification of web documents is modelled as a model checking problem for ALCCTL. The decidability and polynomial complexity of the ALCCTL model checking problem are proven, and a sound, complete, and optimal model checking algorithm is presented. Case studies on real and realistic web documents demonstrate the performance and adequacy of the proposed methods. Existing methods such as symbolic model checking or XML-based document validation are outperformed in both expressiveness and speed.
    [Translated from the German abstract:] The dissertation presents a new formal framework for the automatic checking of content-related and structural consistency criteria on web documents. Much information is made accessible today in the form of web documents. Complex documents such as learning materials or technical documentation must satisfy a variety of quality criteria: the information content of the document must be current, complete, and internally consistent, and the presentation structure must serve different target groups with different information needs. Given the multitude of requirements and usage contexts of an electronic document, ensuring basic consistency properties is not trivial. In this work, model-checking techniques known from hardware/software verification are combined with methods for representing ontologies, so that both the structure of the document and relationships in its content can be taken into account when checking consistency criteria. The new temporal description logic ALCCTL is proposed as a specification language for consistency criteria. Fundamental properties such as decidability, expressiveness, and complexity are investigated. The adequacy and practical applicability of the approach are evaluated in case studies with eLearning documents. The results surpass known approaches such as symbolic model checking or methods for validating XML documents in performance, in the expressiveness of the checkable criteria, and in flexibility with respect to document type and format.
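    The kind of path-wise coherence criterion the thesis targets can be made concrete with a toy check: along every browsing path, a term must be defined before it is used. The sketch below is a plain graph traversal over a hypothetical document graph, not ALCCTL model checking; it only illustrates the class of properties being verified.

    ```python
    # Hypothetical pages with the terms they define and use, plus links.
    pages = {
        "intro":    {"defines": {"ontology"}, "uses": set()},
        "basics":   {"defines": set(), "uses": {"ontology"}},
        "advanced": {"defines": set(), "uses": {"ontology", "taxonomy"}},
    }
    links = {"intro": ["basics"], "basics": ["advanced"], "advanced": []}

    def violations(start):
        """Return (page, term) pairs where a term is used before being defined."""
        bad = []
        def walk(page, defined):
            bad.extend((page, t) for t in pages[page]["uses"] - defined)
            defined = defined | pages[page]["defines"]
            for nxt in links[page]:
                walk(nxt, defined)
        walk(start, set())
        return bad

    print(violations("intro"))
    ```

    On this graph, "taxonomy" is used on the advanced page without ever being defined along the path, which is exactly the kind of reading-path incoherence the framework is built to detect automatically.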

    Saturation-based decision procedures for extensions of the guarded fragment

    We apply the framework of Bachmair and Ganzinger for saturation-based theorem proving to derive a range of decision procedures for logical formalisms, starting with a simple terminological language EL, which allows for conjunction and existential restrictions only, and ending with extensions of the guarded fragment with equality, constants, functionality, number restrictions and compositional axioms of the form S ◦ T ⊆ H. Our procedures are derived in a uniform way using standard saturation-based calculi enhanced with simplification rules based on the general notion of redundancy. We argue that such decision procedures can be applied for reasoning in expressive description logics, where they have certain advantages over traditionally used tableau procedures, such as optimal worst-case complexity and direct correctness proofs.
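    Saturation means applying inference rules to a clause set until no new, non-redundant consequences appear. The fixpoint skeleton of that idea can be sketched with a single rule, transitivity over told subsumptions in an EL-style TBox. Real saturation calculi use ordered resolution with redundancy elimination; the axioms below are hypothetical and the sketch shows only the loop structure.

    ```python
    # Told subsumptions A ⊑ B, represented as pairs (A, B).
    axioms = {("Cat", "Mammal"), ("Mammal", "Animal"), ("Animal", "LivingThing")}

    def saturate(told):
        """Close a set of subsumption pairs under transitivity (fixpoint)."""
        derived = set(told)
        changed = True
        while changed:
            changed = False
            for (a, b) in list(derived):
                for (c, d) in list(derived):
                    if b == c and (a, d) not in derived:   # A⊑B, B⊑D ⟹ A⊑D
                        derived.add((a, d))
                        changed = True
        return derived

    closure = saturate(axioms)
    print(("Cat", "LivingThing") in closure)
    ```

    Termination here is obvious because only finitely many pairs exist; the paper's contribution is showing that, with the right calculus and redundancy criteria, the same fixpoint argument yields decision procedures of optimal worst-case complexity for far richer fragments.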

    Calibración de un algoritmo de detección de anomalías marítimas basado en la fusión de datos satelitales

    [Translated from Spanish:] The fusion of different data sources provides significant support in the decision-making process. This article describes the development of a platform that detects maritime anomalies by fusing data from the Automatic Identification System (AIS) for vessel tracking with satellite Synthetic Aperture Radar (SAR) imagery. These anomalies are presented to the operator as a set of detections that must be monitored to determine their nature. The detection process first identifies objects within the SAR images by applying CFAR algorithms, and then correlates the detected objects with the data reported through the AIS. In this work we report tests carried out with different parameter configurations for the detection and association algorithms, analyse the platform's response, and report the parameter combination that yields the best results for the images used. This is a first step toward our future goal of developing a system that adjusts the parameters dynamically depending on the available images. XVI Workshop Computación Gráfica, Imágenes y Visualización (WCGIV). Red de Universidades con Carreras en Informática (RedUNCI).
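    The CFAR step named in this abstract has a compact one-dimensional form: a cell counts as a detection when its value exceeds the average of surrounding training cells (excluding guard cells) times a scale factor. The signal and parameters below are illustrative only, not the platform's calibrated values, and real SAR processing works on 2-D imagery.

    ```python
    # Hypothetical 1-D signal with one strong return at index 4.
    signal = [1.0, 1.1, 0.9, 1.0, 9.0, 1.2, 1.0, 0.8, 1.1, 1.0]

    def ca_cfar(x, guard=1, train=2, scale=3.0):
        """Cell-averaging CFAR: flag cells exceeding scale * local noise mean."""
        detections = []
        for i in range(len(x)):
            # Training cells: within the window but outside the guard band.
            cells = [x[j] for j in range(i - guard - train, i + guard + train + 1)
                     if 0 <= j < len(x) and abs(j - i) > guard]
            if cells and x[i] > scale * (sum(cells) / len(cells)):
                detections.append(i)
        return detections

    print(ca_cfar(signal))   # [4]
    ```

    Tuning guard size, training size, and the scale factor is precisely the calibration problem the article studies: each combination trades missed vessels against false alarms, and the best setting depends on the images at hand.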

    WICC 2016 : XVIII Workshop de Investigadores en Ciencias de la Computación

    Proceedings of the XVIII Workshop de Investigadores en Ciencias de la Computación (WICC 2016), held at the Universidad Nacional de Entre Ríos on 14-15 April 2016. Red de Universidades con Carreras en Informática (RedUNCI).

    WICC 2017 : XIX Workshop de Investigadores en Ciencias de la Computación

    Proceedings of the XIX Workshop de Investigadores en Ciencias de la Computación (WICC 2017), held at the Instituto Tecnológico de Buenos Aires (ITBA) on 27-28 April 2017. Red de Universidades con Carreras en Informática (RedUNCI).

    XX Workshop de Investigadores en Ciencias de la Computación - WICC 2018 : Libro de actas

    Proceedings of the XX Workshop de Investigadores en Ciencias de la Computación (WICC 2018), held at the Facultad de Ciencias Exactas y Naturales y Agrimensura of the Universidad Nacional del Nordeste on 26-27 April 2018. Red de Universidades con Carreras en Informática (RedUNCI).