
    Analytic Metaphysics versus Naturalized Metaphysics: The Relevance of Applied Ontology

    The relevance of analytic metaphysics has come under criticism: Ladyman & Ross, for instance, have suggested discontinuing the field. French & McKenzie have argued in defense of analytic metaphysics that it develops tools that could turn out to be useful for philosophy of physics. In this article, we show first that this heuristic defense of metaphysics can be extended to the scientific field of applied ontology, which uses constructs from analytic metaphysics. Second, we elaborate on a parallel drawn by French & McKenzie between mathematics and metaphysics to show that the whole field of analytic metaphysics, being useful not only for philosophy but also for science, should continue to exist as a largely autonomous field.

    Doctor of Philosophy

    Over 40 years ago, the first computer simulation of a protein was reported: the atomic motions of a 58-amino-acid protein were simulated for a few picoseconds. With today's supercomputers, simulations of large biomolecular systems with hundreds of thousands of atoms can reach biologically significant timescales. By providing dynamics information, biomolecular simulations can offer new insights into molecular structure and function to support the development of new drugs or therapies. While recent advances in high-performance computing hardware and computational methods have enabled scientists to run longer simulations, they have also created new challenges for data management. Investigators need to use local and national resources to run these simulations and store their output, which can reach terabytes of data on disk. Because of the wide variety of computational methods and software packages available to the community, no standard data representation has been established to describe the computational protocol and the output of these simulations, preventing data sharing and collaboration. Data exchange is also limited by the lack of repositories and tools to summarize, index, and search biomolecular simulation datasets. In this dissertation, a common data model for biomolecular simulations is proposed to guide the design of future databases and APIs. The data model was then extended to a controlled vocabulary that can be used in the context of the semantic web. Two different approaches to data management are also proposed. The iBIOMES repository offers a distributed environment where input and output files are indexed via common data elements. The repository includes a dynamic web interface to summarize, visualize, search, and download published data. A simpler tool, iBIOMES Lite, was developed to generate summaries of datasets hosted at remote sites where user privileges and/or IT resources might be limited. These two informatics-based approaches to data management offer new means for the community to keep track of distributed and heterogeneous biomolecular simulation data and create collaborative networks.
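
    The common data model itself is not specified in the abstract, but its role as a set of shared, searchable data elements can be sketched. The Python fragment below is a minimal illustration under assumed field names; it does not reproduce the actual iBIOMES data elements or controlled-vocabulary terms.

    # Hypothetical common data elements for a biomolecular simulation run.
    # All field names here are illustrative assumptions, not iBIOMES terms.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SimulatedSystem:
        name: str                     # e.g. a 58-residue protein in water
        atom_count: int
        force_field: str              # force-field label used by the engine

    @dataclass
    class SimulationRun:
        software: str                 # simulation package that produced the data
        method: str                   # e.g. "molecular dynamics"
        timestep_fs: float            # integration timestep in femtoseconds
        length_ns: float              # simulated time in nanoseconds
        system: SimulatedSystem
        output_files: List[str] = field(default_factory=list)

    # A repository could index runs by these elements to support search and
    # summarisation of distributed datasets.
    run = SimulationRun(
        software="ExampleMD",
        method="molecular dynamics",
        timestep_fs=2.0,
        length_ns=100.0,
        system=SimulatedSystem("58-residue protein", atom_count=90000,
                               force_field="example-ff"),
        output_files=["traj.xtc", "topol.top"],
    )
    print(run.system.atom_count)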

    Investigations Into Web Science and the Concept of Web Life

    Our increasing ability to construct large and complex computer and information systems suggests that the classical manner in which such systems are understood and architected is inappropriate for the open and unstructured manner in which they are often used. With the appearance of mathematically complex and, more importantly, high-scale, non-deterministic systems, such as the World Wide Web, there is a need to understand, construct and maintain systems in a world where their assembly and use may not be precisely predicted. In addition, few have thus far attempted to study such Web-scale systems holistically so as to understand the implications of non-programmable characteristics, like emergence and evolution – a matter of particular relevance in the new field of Web Science. This collection of prior published works and their associated commentary hence brings together a number of themes focused on Web Science and its broader application in systems and software engineering. It primarily rests on materials presented in the book The Web’s Awake, first published in April 2007.

    Ontology-oriented e-government service integration utilising the semantic web

    University of Technology, Sydney. Faculty of Engineering and Information Technology. The e-government service integration process has recently become an important research topic in the e-government domain, since many countries have developed various levels of e-government services. Non-interoperability between government agencies in service delivery implementation and platforms poses the technical challenge, and the lack of a formulated modelling framework is the main methodological obstacle on the way to achieving dynamic delivery of integrated e-government services. This research is a study of the problems associated with the integration and delivery of integrated e-government services, and proposes a novel solution to tackle them. We start by investigating the fundamentals of e-government as a field of research to build a sensible argument for the questions investigated by this research, which leads to the exposure of the methodological as well as technological problems with the mechanics of e-government in the areas of service integration and delivery. The outcomes of this study, presented in Chapters 3, 4, 5, 6, and 7 respectively, 1) suggest the most practically relevant and technically possible evolutionary pathway to e-government transformation, 2) propose a modified software engineering process to achieve such transformation, 3) develop an innovative framework for modelling service integration, 4) propose an ontology as its knowledge base, and 5) develop innovative and intelligent software to support the practice of service integration and delivery. These outcomes collectively result in the introduction of a novel, complete and coherent solution for the abovementioned problems. This research is a cross-disciplinary study of software integration engineering frameworks, e-government service delivery platforms and semantic web technology, all working to devise the most efficient and robust framework for using semantic web capabilities to enable the delivery of integrated e-government services in an intelligent platform.

    Mechanical testing ontology for digital-twins: A roadmap based on EMMO

    The enormous amount of materials data currently generated by high-throughput experiments and computations poses a significant challenge in terms of data integration and sharing. A common ontology lays the foundation for solving this issue, enabling semantic interoperability of models, experiments, software and data, which is vital for a more rational and efficient development of novel materials. This paper is based on the current efforts by the European Materials Modelling Council (EMMC) to establish common standards for materials through the European Materials & Modelling Ontology (EMMO) and demonstrates the application of EMMO to the mechanical testing field. The focus of this paper is to outline the approach to developing EMMO-compliant domain ontologies.
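
    As a rough illustration of the kind of content an EMMO-compliant domain ontology for mechanical testing might hold, the rdflib sketch below declares a tiny class hierarchy and one object property. The namespace, class names and property are hypothetical placeholders, not actual EMMO or EMMC identifiers.

    # Minimal domain-ontology sketch with rdflib; all IRIs are illustrative.
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/mechtest#")
    g = Graph()
    g.bind("ex", EX)

    # Declare classes for a tensile test and the quantity it measures.
    for cls in (EX.MechanicalTest, EX.TensileTest, EX.YieldStrength):
        g.add((cls, RDF.type, OWL.Class))
    g.add((EX.TensileTest, RDFS.subClassOf, EX.MechanicalTest))

    # Relate the test class to the measured property (a full ontology would
    # typically express this through an OWL restriction instead).
    g.add((EX.measuresProperty, RDF.type, OWL.ObjectProperty))
    g.add((EX.TensileTest, EX.measuresProperty, EX.YieldStrength))
    g.add((EX.YieldStrength, RDFS.label, Literal("yield strength (MPa)")))

    print(g.serialize(format="turtle"))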

    On the Role of Assertions for Conceptual Modeling as Enablers of Composable Simulation Solutions

    This research provides a much-needed systematic review of the roles that assertions play in model composability and simulation interoperability. In doing so, this research contributes a partial solution to one of the problems of model composability and simulation interoperability: namely, why simulation systems fail to achieve the maximum level of interoperability possible. It demonstrates the importance of the assertions that are made during model development and simulation implementation, particularly as they reflect the unique viewpoint of each developer or user. It hypothesizes that it is possible to detect composability conflicts by means of a four-step process developed by the author for capturing and comparing assertions. It demonstrates the process using a well-understood example problem, the Falling Body Problem, developing a formal model of assertion, a strategy for assertion comparison, an inventory of forces, and a catalog of significant assertions that might be made for each term in the solution to the problem. Finally, it develops a software application to implement the strategy for comparing sets of assertions. The software successfully detects potential conflicts between ontologies that were otherwise determined to be ontologically consistent, thus proving the hypothesis.
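
    The author's formal model of assertion and four-step process are not reproduced in the abstract, but the flavour of conflict detection can be sketched: record assertions as term/attribute/value triples and flag pairs that share a term and attribute but disagree on the value, as when one model of the Falling Body Problem neglects air resistance while another includes drag. The structure and names below are illustrative assumptions, not the dissertation's actual formalism.

    # Illustrative assertion-conflict check; not the author's formal model.
    from typing import NamedTuple, Set, List, Tuple

    class Assertion(NamedTuple):
        term: str        # e.g. "falling body"
        attribute: str   # e.g. "air resistance"
        value: str       # e.g. "neglected" vs "quadratic drag"

    def find_conflicts(a: Set[Assertion], b: Set[Assertion]) -> List[Tuple[Assertion, Assertion]]:
        """Pairs of assertions sharing term and attribute but differing in value."""
        return [(x, y) for x in a for y in b
                if x.term == y.term and x.attribute == y.attribute and x.value != y.value]

    model_a = {Assertion("falling body", "air resistance", "neglected"),
               Assertion("falling body", "gravity", "constant 9.81 m/s^2")}
    model_b = {Assertion("falling body", "air resistance", "quadratic drag"),
               Assertion("falling body", "gravity", "constant 9.81 m/s^2")}

    for x, y in find_conflicts(model_a, model_b):
        print("Potential composability conflict:", x, "vs", y)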

    Box²EL: Concept and Role Box Embeddings for the Description Logic EL++

    Description logic (DL) ontologies extend knowledge graphs (KGs) with conceptual information and logical background knowledge. In recent years, there has been growing interest in inductive reasoning techniques for such ontologies, which promise to complement classical deductive reasoning algorithms. Similar to KG completion, several existing approaches learn ontology embeddings in a latent space, while additionally ensuring that they faithfully capture the logical semantics of the underlying DL. However, they suffer from several shortcomings, mainly due to a limiting role representation. We propose Box²EL, which represents both concepts and roles as boxes (i.e., axis-aligned hyperrectangles), and demonstrate how it overcomes the limitations of previous methods. We theoretically prove the soundness of our model and conduct an extensive experimental evaluation, achieving state-of-the-art results across a variety of datasets. As part of our evaluation, we introduce a novel benchmark for subsumption prediction involving both atomic and complex concepts.
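
    To make the box representation concrete, the toy numpy sketch below encodes two concepts as axis-aligned boxes and scores a subsumption axiom by how far the subconcept box protrudes from the superconcept box. It illustrates the geometric idea only; it is not Box²EL's actual embedding model, loss function or role representation.

    # Toy concept boxes and a containment-based subsumption score.
    import numpy as np

    class Box:
        def __init__(self, lower, upper):
            self.lower = np.asarray(lower, dtype=float)
            self.upper = np.asarray(upper, dtype=float)

    def subsumption_violation(sub: Box, sup: Box) -> float:
        """0.0 if the sub box lies entirely inside the sup box (the axiom
        'sub is subsumed by sup' holds geometrically); otherwise the total
        amount by which the sub box sticks out of the sup box."""
        lower_gap = np.maximum(sup.lower - sub.lower, 0.0)
        upper_gap = np.maximum(sub.upper - sup.upper, 0.0)
        return float(np.sum(lower_gap + upper_gap))

    dog = Box([0.2, 0.2], [0.4, 0.4])
    animal = Box([0.0, 0.0], [1.0, 1.0])

    print(subsumption_violation(dog, animal))   # 0.0: "Dog subsumed by Animal" holds
    print(subsumption_violation(animal, dog))   # > 0: "Animal subsumed by Dog" is violated

    In Box²EL itself, roles are likewise modelled with boxes, which is the aspect the abstract identifies as overcoming the limiting role representations of earlier methods.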

    PERICLES Deliverable 4.3: Content Semantics and Use Context Analysis Techniques

    The current deliverable summarises the work conducted within task T4.3 of WP4, focusing on the extraction and the subsequent analysis of semantic information from digital content, which is imperative for its preservability. More specifically, the deliverable defines content semantic information from a visual and textual perspective, explains how this information can be exploited in long-term digital preservation and proposes novel approaches for extracting this information in a scalable manner. Additionally, the deliverable discusses novel techniques for retrieving and analysing the context of use of digital objects. Although this topic has not been extensively studied in the existing literature, we believe use context is vital in augmenting the semantic information and maintaining the usability and preservability of the digital objects, as well as their ability to be accurately interpreted as initially intended.