161 research outputs found

    Automatic Generation of Personalized Recommendations in eCoaching

    This thesis addresses eCoaching for personalized, real-time lifestyle support using information and communication technology. The challenge is to design, develop, and technically evaluate a prototype of an intelligent eCoach that automatically generates personalized, evidence-based recommendations for a better lifestyle. The developed solution focuses on improving physical activity. The prototype uses wearable medical activity sensors. The collected data are represented semantically, and artificial-intelligence algorithms automatically generate meaningful, personalized, context-based recommendations for reducing sedentary time. The thesis applies the well-established design science research methodology to develop theoretical foundations and practical implementations. Overall, this research focuses on technological verification rather than clinical evaluation.
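
    To make the recommendation step concrete, here is a minimal Python sketch of rule-based, context-aware recommendation generation of the kind described; the class, thresholds, and messages are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch of a context-aware recommendation rule;
# names and thresholds are illustrative, not the thesis's actual design.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ActivityObservation:
    user_id: str
    sedentary_minutes: int   # accumulated sedentary time in the current window
    context: str             # e.g. "at_work", "at_home"

def recommend(obs: ActivityObservation) -> Optional[str]:
    """Generate a personalized recommendation once sedentary time
    exceeds an (assumed) evidence-based threshold."""
    if obs.sedentary_minutes < 60:
        return None  # no intervention needed yet
    if obs.context == "at_work":
        return "You have been sitting for over an hour: take a short walk."
    return "Time to stand up and stretch for a few minutes."

print(recommend(ActivityObservation("u1", 75, "at_work")))
```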

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deal with Old Saxon or Old English. This project is a first attempt to create a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as a labeled dataset to train the model. The algorithm reached 97% accuracy and a 99% weighted average for precision, recall, and F1 score. In addition, we tested the model on verses from the Old Saxon Genesis and from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
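
    As an illustration of the model architecture named in the abstract, the following is a minimal PyTorch sketch of a BiLSTM tagger producing per-token metrical labels; the vocabulary size, tag set, and dimensions are assumptions, not the authors' hyperparameters.

```python
# Minimal BiLSTM sequence tagger; hyperparameters are illustrative only.
import torch
import torch.nn as nn

class ScansionBiLSTM(nn.Module):
    def __init__(self, vocab_size=1000, n_tags=4, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)  # 2x: forward + backward

    def forward(self, token_ids):                # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h)                        # per-token tag logits

model = ScansionBiLSTM()
logits = model(torch.randint(0, 1000, (2, 12)))  # two verses, 12 tokens each
print(logits.shape)  # torch.Size([2, 12, 4])
```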

    Ontologies Applied in Clinical Decision Support System Rules: Systematic Review

    Background: Clinical decision support systems (CDSSs) are important for the quality and safety of health care delivery. Although CDSS rules guide CDSS behavior, they are not routinely shared and reused. Objective: Ontologies have the potential to promote the reuse of CDSS rules. We therefore systematically screened the literature to elaborate on the current status of ontologies applied in CDSS rules, covering rule management, which uses captured CDSS rule usage data and user feedback data to tailor CDSS services to be more accurate, and maintenance, which updates CDSS rules. Through this systematic literature review, we aim to identify the frontiers of ontologies used in CDSS rules. Methods: The literature search focused on the intersection of ontologies, clinical decision support, and rules in PubMed, the Association for Computing Machinery (ACM) Digital Library, and the Nursing & Allied Health Database. Grounded theory and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines were followed. One author initiated the screening and literature review, while 2 authors validated the processes and results independently. The inclusion and exclusion criteria were developed and refined iteratively. Results: Across the 81 included publications, CDSSs were primarily used to manage chronic conditions, alert on medication prescriptions, remind about immunizations and preventive services, and provide diagnoses and treatment recommendations. The CDSS rules were presented in Semantic Web Rule Language, Jess, or Jena formats. Although ontologies have been used to provide medical knowledge, CDSS rules, and terminologies, they have not been used in CDSS rule management or to facilitate the reuse of CDSS rules. Conclusions: Ontologies have been used to organize and represent medical knowledge, controlled vocabularies, and the content of CDSS rules. So far, there has been little reuse of CDSS rules. More work is needed to improve the reusability and interoperability of CDSS rules. This review identified and described the ontologies that, despite their limitations, enable Semantic Web technologies and their applications in CDSS rules.
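
    To show what a CDSS rule of the kind surveyed looks like in executable form, here is a small rdflib sketch that applies a SWRL-style rule over a toy patient graph; the namespace and terms are hypothetical placeholders.

```python
# Illustrative CDSS rule in the spirit of SWRL/Jena rules, applied with rdflib.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/cdss#")  # hypothetical vocabulary
g = Graph()
g.add((EX.alice, RDF.type, EX.Patient))
g.add((EX.alice, EX.hasCondition, EX.Diabetes))

# Rule: Patient(?p) ^ hasCondition(?p, Diabetes) -> needsReminder(?p, HbA1cTest)
for p in g.subjects(RDF.type, EX.Patient):
    if (p, EX.hasCondition, EX.Diabetes) in g:
        g.add((p, EX.needsReminder, EX.HbA1cTest))

print(list(g.objects(EX.alice, EX.needsReminder)))  # [EX.HbA1cTest]
```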

    NORA: Scalable OWL reasoner based on NoSQL databases and Apache Spark

    Reasoning is the process of inferring new knowledge and identifying inconsistencies within ontologies. Traditional techniques often prove inadequate when reasoning over large Knowledge Bases containing millions or billions of facts. This article introduces NORA, a persistent and scalable OWL reasoner built on top of Apache Spark, designed to address the challenges of reasoning over extensive and complex ontologies. NORA exploits the scalability of NoSQL databases to effectively apply inference rules to Big Data ontologies with large ABoxes. To facilitate scalable reasoning, OWL data, including class and property hierarchies and instances, are materialized in the Apache Cassandra database. Spark programs are then evaluated iteratively, uncovering new implicit knowledge from the dataset and leading to enhanced performance and more efficient reasoning over large-scale ontologies. NORA has undergone a thorough evaluation with different benchmarking ontologies of varying sizes to assess the scalability of the developed solution. Funding for open access charge: Universidad de Málaga / CBUA. This work has been partially funded by grant PID2020-112540RB-C41 (funded by MCIN/AEI/10.13039/501100011033/), AETHER-UMA (A smart data holistic approach for context-aware data analytics: semantics and context exploitation). Antonio Benítez-Hidalgo is supported by Grant PRE2018-084280 (Spanish Ministry of Science, Innovation and Universities).
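
    A hedged sketch of the iterative, fixpoint-style rule evaluation such Spark-based reasoners perform, using the rdfs:subClassOf transitivity rule over a PySpark DataFrame; the schema and data are invented for illustration, not NORA's actual design.

```python
# Iterate the rule subClassOf(a,b) ^ subClassOf(b,c) -> subClassOf(a,c)
# until no new facts are derived (fixpoint).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("subclass-closure").getOrCreate()
closure = spark.createDataFrame(
    [("Dog", "Mammal"), ("Mammal", "Animal")], ["child", "parent"])

while True:
    derived = (closure.alias("l")
               .join(closure.alias("r"), col("l.parent") == col("r.child"))
               .select(col("l.child").alias("child"),
                       col("r.parent").alias("parent")))
    new_closure = closure.union(derived).distinct()
    if new_closure.count() == closure.count():  # fixpoint reached
        break
    closure = new_closure

closure.show()  # includes the inferred ("Dog", "Animal") fact
```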

    Focused categorization power of ontologies: General framework and study on simple existential concept expressions

    When reusing existing ontologies for publishing a dataset in RDF (or developing a new ontology), preference may be given to those providing extensive subcategorization for important classes (denoted as focus classes). The subcategories may consist not only of named classes but also of compound class expressions. We define the notion of focused categorization power of a given ontology, with respect to a focus class and a concept expression language, as the (estimated) weighted count of the categories that can be built from the ontology's signature, conform to the language, and are subsumed by the focus class. For the sake of tractable initial experiments we then formulate a restricted concept expression language based on existential restrictions, and heuristically map it to syntactic patterns over ontology axioms (so-called FCE patterns). The characteristics of the chosen concept expression language and associated FCE patterns are investigated using three different empirical sources derived from ontology collections: first, the concept expression pattern frequency in class definitions; second, the occurrence of FCE patterns in the TBox of ontologies; and last, for class expressions generated from the TBox of ontologies (through the FCE patterns), their 'meaningfulness' as assessed by different groups of users, yielding a 'quality ordering' of the concept expression patterns. The complementary analyses are then compared and summarized. To allow for further experimentation, a web-based prototype was also implemented, covering the whole process of ontology reuse from keyword-based ontology search through the FCP computation to the selection of ontologies and their enrichment with new concepts built from compound expressions.
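
    A toy Python sketch of the core counting idea, assuming an invented signature and weights; it only illustrates a weighted count of named and existential subcategories, not the paper's actual estimation procedure.

```python
# Toy FCP estimate: weighted count of categories under a focus class.
subclass_of = {"Wine": {"RedWine", "WhiteWine"}}           # named subcategories
existential_fces = [("hasColor", "Red"), ("hasColor", "White"),
                    ("madeFrom", "Grape")]                  # ∃p.C patterns

def fcp(focus, w_named=1.0, w_existential=0.5):
    named = len(subclass_of.get(focus, set()))
    compound = len(existential_fces)  # assume all are subsumed by the focus class
    return w_named * named + w_existential * compound

print(fcp("Wine"))  # 2*1.0 + 3*0.5 = 3.5
```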

    ICON: an Ontology for Comprehensive Artistic Interpretations

    In this work, we introduce ICON, an ontology that models artistic interpretations of artworks' subject matter (i.e. iconographies) and meanings (i.e. symbols, iconological aspects). Developed by conceptualizing authoritative knowledge and notions taken from Panofsky's levels of interpretation theory, the ICON ontology focuses on the granularity of interpretations. It can be used to describe an interpretation of an artwork at the Pre-iconographical, Iconographical, and Iconological levels. Its main classes have been aligned to ontologies from the domains of cultural descriptions (ArCo, CIDOC-CRM, VIR), semiotics (DOLCE), bibliometrics (CITO), and symbolism (Simulation Ontology), providing a robust schema that can be extended with additional classes and properties from these ontologies. The ontology was evaluated through competency questions that range from simple recognition at a specific level of interpretation to complex scenarios. Data written using this model were compared to state-of-the-art ontologies and schemas to both highlight the current lack of a domain-specific ontology on art interpretation and show how our work fills some of the current gaps. The ontology is openly available and compliant with FAIR principles. With our ontology, we hope to encourage digital art historians working for cultural institutions to create more detailed linked open data about the content of their artefacts, exploiting the full potential of the Semantic Web in linking artworks through not only subjects and common metadata but also specific symbolic interpretations, intrinsic meanings, and the motifs through which their subjects are represented. Additionally, by basing our work on theories developed by different art history scholars over the last century, we ensure that their knowledge and studies will not be lost in the transition to the digital, linked open data era.
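
    As an illustration of the kind of competency question the evaluation mentions, here is a small rdflib/SPARQL sketch; all IRIs below are placeholders, not ICON's actual vocabulary.

```python
# Hypothetical competency-question query over artwork interpretation data.
from rdflib import Graph

g = Graph()
# g.parse("icon_data.ttl")  # load artwork interpretation data here

q = """
PREFIX ex: <http://example.org/icon#>
SELECT ?artwork ?symbol WHERE {
  ?interpretation a ex:IconologicalInterpretation ;
                  ex:aboutArtwork ?artwork ;
                  ex:recognizedSymbol ?symbol .
}
"""
for row in g.query(q):
    print(row.artwork, row.symbol)
```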

    Efficient Axiomatization of OWL 2 EL Ontologies from Data by means of Formal Concept Analysis (Extended Version)

    We present an FCA-based axiomatization method that produces a complete EL TBox (the terminological part of an OWL 2 EL ontology) from a graph dataset in at most exponential time. We describe technical details that allow for efficient implementation, as well as variations that dispense with the computation of extremely large axioms, thereby rendering the approach applicable even though some completeness is lost. Moreover, we evaluate the prototype on real-world datasets. This is an extended version of an article accepted at AAAI 2024.
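
    The following toy Python sketch shows the FCA closure operator on which such axiomatization methods rest: the double-prime closure B'' over a formal context yields the implied attributes, i.e. candidate EL subsumptions. The context is invented for illustration.

```python
# Formal context: object -> set of attributes (toy data).
context = {
    "o1": {"Cat", "Mammal", "Animal"},
    "o2": {"Dog", "Mammal", "Animal"},
    "o3": {"Snake", "Animal"},
}

def closure(attrs):
    """B'': attributes shared by every object that has all attributes in B."""
    extent = [o for o, a in context.items() if attrs <= a]
    if not extent:
        return set.union(*context.values())  # empty extent: all attributes
    return set.intersection(*(context[o] for o in extent))

# {'Mammal'} closes to {'Mammal', 'Animal'}: candidate axiom Mammal ⊑ Animal.
print(closure({"Mammal"}))
```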

    One archive, many readings. Personal archives as complex networks in the semantic web

    Personal archives are the archives created by individuals for their own purposes. Among these are the library and documentary collections of writers and scholars. Only recently has archival literature begun to focus on this category of archives, emphasising how their heterogeneous nature necessitates reconciling different approaches to archival description, and calling for a broader understanding of the principle of provenance, recognising that multiple creators, including subsequent researchers, can contribute to shaping personal archives over time by adding new layers of context. Despite these advances in the theoretical debate, current architectures for archival representation lag behind. Finding aids privilege a single point of view and do not allow subsequent users to embed their own, potentially conflicting, readings. Using Semantic Web technologies, this study defines a conceptual model for writers' archives based on existing and widely adopted models in the cultural heritage and humanities domains. The model can be used to represent different types of documents at various levels of analysis, as well as record content and components. It also enables the representation of complex relationships and the incorporation of additional layers of interpretation into the finding aid, transforming it from a static search tool into a dynamic research platform. The personal archive and library of Giuseppe Raimondi serves as a case study for the creation of an archival knowledge base using the proposed conceptual model. By querying the knowledge graph through SPARQL, the effectiveness of the model is evaluated. The results demonstrate that the model addresses the primary representation challenges identified in archival literature, from both a technological and a methodological standpoint. The ultimate goal is to bring the output par excellence of archival science, i.e. the finding aid, more in line with the latest developments in archival thinking.
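
    As a sketch of the kind of SPARQL interrogation described, the query below retrieves multiple interpretive layers attached to the same document, together with the researcher who proposed each reading; every IRI is a hypothetical placeholder, not the study's actual model.

```python
# Hypothetical query over an archival knowledge base of the kind described.
from rdflib import Graph

g = Graph()
# g.parse("raimondi_archive.ttl")  # the case-study knowledge base

q = """
PREFIX ex: <http://example.org/archive#>
SELECT ?document ?reading ?researcher WHERE {
  ?reading a ex:InterpretiveLayer ;
           ex:describes ?document ;
           ex:proposedBy ?researcher .
}
ORDER BY ?document
"""
for row in g.query(q):
    print(row.document, row.reading, row.researcher)
```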

    Semantic Plug & Play - Self-Describing Hardware for Modular Robot Systems

    Modern robot systems consist of a multitude of different sensors and actuators whose interplay gives rise to various capabilities that can be made usable. An articulated-arm robot, for example, can grasp objects through the coordinated control of several motors, and a quadrocopter can determine its attitude and position via sensors. A special class is formed by modular robot systems, in which sensors and actuators can be dynamically removed, exchanged, or added, which in turn affects the available capabilities. The flexibility of modular robot systems, however, is limited by their restricted compatibility. Numerous proprietary systems exist that are easy to use but can only draw on a limited set of modular elements. Open-source projects with broad hardware support, such as the Arduino platform, or software projects such as the Robot Operating System (ROS), attempt to offer just such broad compatibility but require very extensive documentation of the hardware for integration. The central result of this dissertation is a technology stack (Semantic Plug & Play) for the simple documentation and integration of modular hardware elements through self-description mechanisms. In many applications, documentation is typically scattered across text documents, online content, and source-code documentation. Semantic Plug & Play presents a system based on Semantic Web technologies that not only unifies and collectivizes such existing documentation but, by making it machine-readable, also allows the documentation to be used in process definitions. An architecture developed in this dissertation provides an API for object-oriented programming languages for process definition, in which abstract capabilities can be used. With a particular focus on systems reconfigurable at runtime, capabilities can be expressed via requirements on the current hardware configuration. It is thus possible to define qualitative and quantitative properties as prerequisites for capabilities that are only satisfied once modular hardware elements are exchanged. Following this principle, combined capabilities are also supported, which use other capabilities across hardware boundaries for their intrinsic execution. To encapsulate the self-description on individual hardware elements, different adapters are supported in Semantic Plug & Play, such as microcontrollers or x86 and ARM systems. Semantic Plug & Play also offers extensibility toward ROS through various tools that not only allow hybrid use but also make the complexity manageable with model-driven approaches. The flexibility of Semantic Plug & Play is illustrated in six experiments with different hardware. All experiments address problems from an overarching case study in which a heterogeneous quadrocopter swarm is deployed and selectively reconfigured in highly dynamic scenarios.
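
    A minimal Python sketch of the capability/requirement principle described above, assuming invented module and property names: a capability becomes available only when the current hardware configuration satisfies its qualitative and quantitative requirements.

```python
# Hypothetical runtime check of capability requirements against modules;
# module names, properties, and thresholds are illustrative assumptions.
current_modules = {
    "imu":   {"type": "sensor",   "rate_hz": 200},
    "motor": {"type": "actuator", "max_thrust_n": 5.0},
}

def capability_available(requirements):
    """Check qualitative and quantitative requirements against the modules."""
    for name, needed in requirements.items():
        module = current_modules.get(name)
        if module is None:
            return False                        # required module not plugged in
        for key, value in needed.items():
            if isinstance(value, str):          # qualitative property: exact match
                if module.get(key) != value:
                    return False
            elif module.get(key, 0) < value:    # quantitative lower bound
                return False
    return True

hover = {"imu":   {"type": "sensor",   "rate_hz": 100},
         "motor": {"type": "actuator", "max_thrust_n": 4.0}}
print(capability_available(hover))  # True with the configuration above
```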
