27 research outputs found

    O4OA Specification

    Full text link
    This document is the reference ontology specification for the Ontology for Ontological Analysis (O4OA), version 2.6. This work has been developed under the project Digital Knowledge Graph – Adaptable Analytics API with the financial support of Accenture LTD, the Generalitat Valenciana through the CoMoDiD project (CIPROM/2021/023), the Spanish State Research Agency through the DELFOS (PDC2021-121243-I00) and SREC (PID2021-123824OB-I00) projects, MICIN/AEI/10.13039/501100011033, and co-financed with ERDF and the European Union NextGenerationEU/PRTR. Franco Martins Souza, B.; Guizzardi, R.; Pastor López, O. (2023). O4OA Specification. http://hdl.handle.net/10251/19672

    Understanding the Code of Life: Holistic Conceptual Modeling of the Genome

    Full text link
    [EN] Over the last few decades, advances in sequencing technology have produced significant amounts of genomic data, which has revolutionised our understanding of biology. However, the amount of data generated has far exceeded our ability to interpret it. Deciphering the code of life is a grand challenge. Despite our progress, our understanding of it remains minimal, and we are just beginning to uncover its full potential, for instance, in areas such as precision medicine or pharmacogenomics. The main objective of this thesis is to advance our understanding of life by proposing a holistic, model-based approach consisting of three artifacts: i) a conceptual schema of the genome, ii) a method for its application in the real world, and iii) the use of foundational ontologies to represent domain knowledge in a more unambiguous and explicit way. The first two contributions have been validated by implementing genome information systems based on conceptual models. The third contribution has been validated by empirical experiments assessing whether using foundational ontologies leads to a better understanding of the genomic domain. The artifacts generated offer significant benefits. First, more efficient data management processes were produced, leading to better knowledge extraction processes. Second, a better understanding and communication of the domain was achieved. The fruitful discussions and the results derived from the projects INNEST2021/57, MICIN/AEI/10.13039/501100011033, PID2021-123824OB-I00, CIPROM/2021/023, and PDC2021-121243-I00 have contributed greatly to the final quality of this thesis. García Simón, A. (2022). Understanding the Code of Life: Holistic Conceptual Modeling of the Genome [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/19143
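    The thesis's actual schema is not reproduced in this abstract, but a minimal sketch can illustrate the kind of structure a conceptual schema of the genome captures. The hypothetical Python dataclasses below (all names, coordinates, and annotations are invented for illustration) model genes, variants, and clinical annotations in that spirit:

```python
# A minimal sketch, not the thesis's actual conceptual schema: hypothetical
# dataclasses illustrating entities a genome schema might capture, e.g.
# genes, variants, and clinical annotations for precision medicine.
from dataclasses import dataclass, field


@dataclass
class Gene:
    symbol: str        # e.g. "BRCA1"
    chromosome: str    # e.g. "17"
    start: int         # genomic start coordinate (illustrative)
    end: int           # genomic end coordinate (illustrative)


@dataclass
class Variant:
    gene: Gene
    position: int      # position within the chromosome
    reference: str     # reference allele, e.g. "T"
    alternate: str     # observed allele, e.g. "C"
    annotations: list[str] = field(default_factory=list)  # e.g. clinical significance


# Usage: an invented variant in BRCA1 annotated as pathogenic.
brca1 = Gene("BRCA1", "17", 43044295, 43125483)
v = Variant(brca1, 43071077, "T", "C", ["pathogenic"])
print(f"{v.gene.symbol} {v.reference}>{v.alternate} at {v.position}: {v.annotations}")
```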

    Knowledge Representation in Engineering 4.0

    Get PDF
    This dissertation was developed in the context of the BMBF- and EU/ECSEL-funded projects GENIAL! and Arrowhead Tools. In these projects, the chair examines methods of specification and cooperation in the automotive value chain, from OEM to Tier 1 to Tier 2. The goal of the projects is to improve communication and collaborative planning, especially in early development stages. Besides SysML, the use of agreed vocabularies and ontologies for modeling requirements, the overall context, variants, and many other items is targeted. This thesis proposes a web database where data from collaborative requirements elicitation is combined with an ontology-based approach that uses reasoning capabilities. For this purpose, state-of-the-art ontologies have been investigated and integrated that cover domains like hardware/software, roadmapping, IoT, context, innovation, and others. New ontologies have been designed, like a HW/SW allocation ontology and a domain-specific "eFuse ontology", as well as some prototypes. The result is a modular ontology suite and the GENIAL! Basic Ontology, which allows us to model automotive and microelectronic functions, components, properties, and the dependencies among these elements based on the ISO 26262 standard. Furthermore, context knowledge that influences design decisions, such as future trends in legislation, society, the environment, etc., is included. These knowledge bases are integrated in a novel tool that allows for collaborative innovation planning and requirements communication along the automotive value chain. To start off the work of the project, an architecture and a prototype tool were developed. Designing ontologies and knowing how to use them proved to be a non-trivial task, requiring a lot of context and background knowledge. Some of this background knowledge has been selected for presentation and was utilized either in designing models or for later immersion. Examples are basic foundations, like design guidelines for ontologies, ontology categories, and a continuum of expressiveness of languages, and advanced content, like multi-level theory, foundational ontologies, and reasoning. Finally, we demonstrate the overall framework and show the ontology with reasoning, the database, APPEL/SysMD (AGILA ProPErty and Dependency Description Language / System MarkDown), and the constraints of the hardware/software knowledge base. There, by way of example, we explore and solve roadmap constraints that are coupled with a car model through a constraint solver.
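    The dissertation's tooling itself is not shown here, but its final step, solving roadmap constraints coupled with a car model via a constraint solver, can be sketched. The snippet below poses a hypothetical hardware/software allocation problem with the python-constraint package; the ECU names, functions, and constraints are invented assumptions, not the GENIAL! knowledge base:

```python
# A minimal sketch, not the GENIAL! tool: a hypothetical HW/SW allocation
# problem, in the spirit of roadmap constraints coupled with a car model,
# solved with the python-constraint package.
from constraint import Problem

problem = Problem()
# Hypothetical software functions and the ECUs they could be allocated to.
problem.addVariable("lane_keeping", ["ecu_a", "ecu_b"])
problem.addVariable("efuse_monitor", ["ecu_a", "ecu_b"])

# Assumed roadmap constraint: ecu_b lacks the required safety rating for
# now, so the safety-critical lane-keeping function must run on ecu_a.
problem.addConstraint(lambda alloc: alloc == "ecu_a", ["lane_keeping"])
# Assumed resource constraint: the two functions must not share one ECU.
problem.addConstraint(lambda a, b: a != b, ["lane_keeping", "efuse_monitor"])

for solution in problem.getSolutions():
    print(solution)  # e.g. {'lane_keeping': 'ecu_a', 'efuse_monitor': 'ecu_b'}
```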

    Foundational Ontologies meet Ontology Matching: A Survey

    Get PDF
    Ontology matching is a research area aimed at finding ways to make different ontologies interoperable. Solutions to the problem have been proposed from different disciplines, including databases, natural language processing, and machine learning. Foundational ontologies play an important role in ontology matching; that role is multifaceted and leaves room for development. This paper presents an overview of the different tasks involved in ontology matching that consider foundational ontologies. We discuss the strengths and weaknesses of existing proposals and highlight the challenges to be addressed in the future.
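    As a concrete illustration of one basic ingredient such matchers build on, the sketch below aligns class labels from two toy ontologies with plain string similarity; the labels and the 0.8 threshold are illustrative assumptions, and a foundational ontology could additionally filter out candidate pairs whose top-level categories disagree:

```python
# A minimal sketch of lexical ontology matching: align class labels from
# two toy ontologies by string similarity. Labels and threshold are invented.
from difflib import SequenceMatcher

onto_a = ["Person", "Organization", "Publication"]
onto_b = ["Human", "Organisation", "Paper"]

def similarity(x: str, y: str) -> float:
    """Similarity ratio in [0, 1] from difflib's gestalt pattern matching."""
    return SequenceMatcher(None, x.lower(), y.lower()).ratio()

# Keep candidate pairs above an arbitrary threshold.
for a in onto_a:
    for b in onto_b:
        score = similarity(a, b)
        if score >= 0.8:
            print(f"{a} <-> {b}: {score:.2f}")  # e.g. Organization <-> Organisation
```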

    JURI SAYS: An Automatic Judgement Prediction System for the European Court of Human Rights

    Get PDF
    In this paper, we present the web platform JURI SAYS, which automatically predicts decisions of the European Court of Human Rights based on communicated cases, which are published by the court early in the proceedings and are often available many years before the final decision is made. Our system therefore predicts future judgements of the court. The platform is available at jurisays.com and shows the predictions compared to the actual decisions of the court. It is automatically updated every month with the predictions for new cases. Additionally, the system highlights the sentences and paragraphs that are most important for the prediction (i.e., violation vs. no violation of human rights).
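    JURI SAYS's own model is not described in this abstract, but the underlying task, binary judgement prediction from case text, can be sketched with a standard text classifier. The toy texts, labels, and pipeline below are invented for illustration and are not the system's actual implementation:

```python
# A minimal sketch of binary judgement prediction from case text, not the
# JURI SAYS system itself; the toy texts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy communicated-case texts labelled 1 (violation) or 0 (no violation).
texts = [
    "applicant detained without judicial review for months",
    "complaint about trial length found manifestly ill-founded",
    "journalist convicted for criticising public officials",
    "domestic courts gave relevant and sufficient reasons",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

new_case = ["applicant held in pre-trial detention without review"]
print(model.predict(new_case))  # e.g. [1] -> predicted violation
```

    In a linear model like this, the per-feature coefficients also suggest which words drive a prediction, which is one plausible route to the kind of sentence highlighting the abstract describes.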

    Pitfalls in Ontologies and TIPS to Prevent Them

    Get PDF
    Abstract. A growing number of ontologies are already available thanks to development initiatives in many different fields. In such ontology development efforts, developers must tackle a wide range of difficulties and handicaps, which can result in the appearance of anomalies in the resulting ontologies. Therefore, ontology evaluation plays a key role in ontology development. OOPS! is an on-line tool that automatically detects pitfalls, considered as potential errors or problems, and thus may help ontology developers to improve their ontologies. To gain insight into the existence of pitfalls and to assess whether there are differences among ontologies developed by novices, a random set of already-scanned ontologies, and existing well-known ones, 406 OWL ontologies were analysed against OOPS!'s 21 pitfall types, 24 of which were also examined manually for the detected pitfalls. The various analyses performed show only minor differences between the three sets of ontologies, therewith providing a general landscape of pitfalls in ontologies. We also propose guidelines to avoid the inclusion of such common pitfalls in new ontologies, the Typical pItfalls Prevention Scheme (TIPS), so as to increase the baseline quality of OWL ontologies.
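    As an illustration of the kind of check such a scanner automates, the sketch below flags OWL classes that lack an rdfs:label, one plausible instance of a missing-annotations pitfall; the toy ontology is invented, and OOPS! itself covers a far wider range of pitfall types:

```python
# A minimal sketch of one automatable pitfall check in the spirit of OOPS!:
# flag OWL classes without an rdfs:label. The toy ontology is invented.
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

ttl = """
@prefix :     <http://example.org/onto#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:Vehicle a owl:Class ; rdfs:label "Vehicle" .
:Car     a owl:Class .
"""

g = Graph()
g.parse(data=ttl, format="turtle")

for cls in g.subjects(RDF.type, OWL.Class):
    if (cls, RDFS.label, None) not in g:
        print(f"Pitfall: {cls} has no rdfs:label")  # flags :Car
```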

    Comparing traditional conceptual modeling with ontology-driven conceptual modeling: An empirical study

    Full text link
    [EN] This paper conducts an empirical study that explores the differences between adopting a traditional conceptual modeling (TCM) technique and an ontology-driven conceptual modeling (ODCM) technique, with the objective of understanding and identifying the modeling situations in which an ODCM technique can prove beneficial compared to a TCM technique. More specifically, we asked ourselves whether there exist any meaningful differences, in the resulting conceptual model and in the effort spent to create such a model, between novice modelers trained in an ontology-driven conceptual modeling technique and novice modelers trained in a traditional conceptual modeling technique. To answer this question, we discuss previous empirical research efforts and distill them into two hypotheses. Next, these hypotheses are tested in a rigorously developed experiment in which a total of 100 students from two different universities participated. The findings of our empirical study confirm that there do exist meaningful differences between adopting the two techniques. We observed that novice modelers applying the ODCM technique arrived at higher-quality models compared to novice modelers applying the TCM technique. More specifically, the results of the empirical study demonstrated that it is advantageous to apply an ODCM technique over a TCM technique when having to model the more challenging and advanced facets of a certain domain or scenario. Moreover, we did not find any significant difference in effort between applying these two techniques. Finally, we distilled our results into three findings that aim to clarify the obtained results. (C) 2018 Elsevier Ltd. All rights reserved. This research has been funded by the Ghent University Special Research Fund (BOF 01N02014) and the National Bank of Belgium. Verdonck, M.; Gailly, F.; Pergl, R.; Guizzardi, G.; Franco Martins, B.; Pastor López, O. (2019). Comparing traditional conceptual modeling with ontology-driven conceptual modeling: An empirical study. Information Systems. 81:92-103. https://doi.org/10.1016/j.is.2018.11.009
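    The abstract does not restate the paper's exact statistical procedure, but the kind of hypothesis test such a between-groups experiment typically relies on can be sketched. Below, a one-sided Mann-Whitney U test (an assumption for illustration, not necessarily the paper's choice) compares invented model-quality scores of the two groups of novice modelers:

```python
# A minimal sketch of a between-groups comparison for such an experiment;
# the test choice is an assumption and the scores are invented.
from scipy.stats import mannwhitneyu

odcm_scores = [0.82, 0.75, 0.91, 0.68, 0.88, 0.79]  # ontology-driven group
tcm_scores  = [0.71, 0.64, 0.77, 0.59, 0.73, 0.66]  # traditional group

# One-sided test: do ODCM quality scores tend to be higher than TCM scores?
stat, p_value = mannwhitneyu(odcm_scores, tcm_scores, alternative="greater")
print(f"U = {stat}, p = {p_value:.4f}")  # reject H0 at alpha = 0.05 if p < 0.05
```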

    Quality Evaluation of Requirements Models: The Case of Goal Models and Scenarios

    Get PDF
    Context: Requirements Engineering approaches provide expressive modelling techniques for requirements elicitation and analysis. Yet, these approaches struggle to manage the quality of their models, which causes difficulties in understanding requirements and increases development costs. The models' quality should be a permanent concern. Objectives: We propose a mixed-method process for the quantitative evaluation of the quality of requirements models and of their modelling activities. We applied the process to goal-oriented (i* 1.0 and iStar 2.0) and scenario-based (ARNE and ALCO use case templates) models to evaluate their usability in terms of appropriateness recognisability and learnability. Using the GQM approach, we defined (bio)metrics about the models and about the way stakeholders interact with them. Methods: The (bio)metrics were evaluated through a family of 16 quasi-experiments with a total of 660 participants, who performed creation, modification, understanding, and review tasks on the models. We measured their accuracy, speed, and ease using metrics of task success, time, and effort, collected with eye tracking, electroencephalography, and electrodermal activity, and we gathered participants' opinions through NASA-TLX. We characterised the participants with GenderMag, a method for evaluating usability with a focus on gender inclusiveness. Results: For i*, participants had better performance and lower effort when using iStar 2.0, and they produced models with lower accidental complexity. For use cases, participants had better performance and lower effort when using ALCO. Participants using a textual representation of requirements had higher performance and lower effort. The results were best for ALCO, followed by ARNE, iStar 2.0, and i* 1.0. Participants with a comprehensive information-processing style and a conservative attitude towards risk (characteristics that are frequently seen in females) took longer to start the tasks but had higher accuracy; their visual and mental effort was also higher. Conclusions: A mixed-method process, with (bio)metric measurements, can provide reliable quantitative information about the success and effort of a stakeholder working on different requirements model tasks.
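    As a small illustration of one of the effort measures mentioned above, the sketch below computes a weighted NASA-TLX workload score in the standard way (six subscale ratings weighted by 15 pairwise comparisons); the ratings and weights are invented for illustration:

```python
# A minimal sketch of the standard weighted NASA-TLX computation; the
# subscale ratings and pairwise-comparison weights below are invented.
ratings = {            # subscale ratings on a 0-100 scale
    "mental":      70,
    "physical":    15,
    "temporal":    55,
    "performance": 40,
    "effort":      65,
    "frustration": 30,
}
weights = {            # wins from the 15 pairwise comparisons (sum to 15)
    "mental": 5, "physical": 0, "temporal": 3,
    "performance": 2, "effort": 4, "frustration": 1,
}

assert sum(weights.values()) == 15
overall = sum(ratings[s] * weights[s] for s in ratings) / 15
print(f"Weighted NASA-TLX workload: {overall:.1f}")  # 0 (low) .. 100 (high)
```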