
    Facets, Tiers and Gems: Ontology Patterns for Hypernormalisation

    There are many methodologies and techniques for easing the task of ontology building. Here we describe the intersection of two of these: ontology normalisation and fully programmatic ontology development. The first describes a standardized organisation for an ontology, with singly inherited self-standing entities and a number of small taxonomies of refining entities. The former are described and defined in terms of the latter and used to manage the polyhierarchy of the self-standing entities. Fully programmatic development is a technique in which an ontology is developed using a domain-specific language embedded in a programming language, meaning that, as well as defining ontological entities, it is possible to add arbitrary patterns or new syntax within the same environment. We describe how new patterns can be used to enable a new style of ontology development that we call hypernormalisation.
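    The normalisation pattern the abstract describes can be sketched in a few lines of plain Python. This is a hypothetical illustration, not the paper's actual DSL syntax: `facet`, `gem`, and the `hasFacet` property name are made up for the example; the idea is that refining "facets" stay in small taxonomies, while defined classes ("gems") combine a self-standing entity with facet values, so the polyhierarchy can be inferred rather than asserted.

```python
# Hypothetical sketch of ontology normalisation patterns; the names
# facet/gem/hasFacet are illustrative, not the paper's actual API.

def facet(name, values):
    """A small taxonomy of refining entities: one parent, disjoint children."""
    return {"name": name, "values": list(values)}

def gem(self_standing, *facet_values):
    """A defined class: a self-standing entity refined by facet values.
    Multiple parents are inferred from these definitions, keeping the
    asserted hierarchy of self-standing entities singly inherited."""
    return {"equivalent_to": [self_standing]
            + [f"hasFacet some {v}" for v in facet_values]}

size = facet("Size", ["Small", "Large"])
colour = facet("Colour", ["Red", "Blue"])
small_red_widget = gem("Widget", "Small", "Red")
```

    Because the pattern is an ordinary function, it can be mapped over whole lists of entities, which is the practical payoff of the fully programmatic approach.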

    Linked Data for the Natural Sciences. Two Use Cases in Chemistry and Biology

    Wiljes C, Cimiano P. Linked Data for the Natural Sciences. Two Use Cases in Chemistry and Biology. In: Proceedings of the Workshop on Semantic Publishing (SePublica 2012). 2012: 48-59.
    The Web was designed to improve the way people work together. The Semantic Web extends the Web with a layer of Linked Data that offers new paths for scientific publishing and co-operation. Experimental raw data, released as Linked Data, could be discovered automatically, fostering its reuse and validation by scientists in different contexts and across the boundaries of disciplines. However, the technological barrier for scientists who want to publish and share their research data as Linked Data remains rather high. We present two real-life use cases in the fields of chemistry and biology and outline a general methodology for transforming research data into Linked Data. A key element of our methodology is the role of a scientific data curator, who is proficient in Linked Data technologies and works in close co-operation with the scientist.
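    At its simplest, "releasing raw data as Linked Data" means serialising measurements as RDF triples under stable URIs. The sketch below emits N-Triples with only the standard library; all URIs are hypothetical placeholders, not ones used in the paper.

```python
# Sketch: publishing an experimental measurement as Linked Data
# (N-Triples syntax). The example.org URIs are hypothetical.

def triple(s, p, o):
    """Serialise one RDF triple; URI objects start with '<', else literal."""
    obj = o if o.startswith("<") else f'"{o}"'
    return f"<{s}> <{p}> {obj} ."

EX = "http://example.org/experiment/"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

triples = [
    triple(EX + "m1", RDF + "type", f"<{EX}Measurement>"),
    triple(EX + "m1", EX + "numericValue", "7.4"),
]
print("\n".join(triples))
```

    Once data is in this form, generic RDF tooling can crawl, merge, and query it across disciplinary boundaries, which is what makes automatic discovery and reuse possible.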

    XML in Motion from Genome to Drug

    Information technology (IT) has emerged as central to solving contemporary genomics and drug discovery problems. Researchers involved in genomics, proteomics, transcriptional profiling, high-throughput structure determination, and other sub-disciplines of bioinformatics have a direct impact on this IT revolution. As the full genome sequences of many species, together with data from structural genomics, micro-arrays, and proteomics, become available, integrating these data into a common platform requires sophisticated bioinformatics tools. Organizing these data into knowledge-rich databases and developing appropriate software tools for analyzing them are major challenges. XML (eXtensible Markup Language) forms the backbone of biological data representation and exchange over the Internet, enabling researchers to aggregate data from heterogeneous resources. The present article gives a comprehensive account of the integration of XML into particular types of biological databases, mainly those dealing with sequence-structure-function relationships, and its application to drug discovery. This e-medical science approach should be applicable to other scientific domains; the latest trends in Semantic Web applications are also highlighted.
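    The data-exchange role XML plays here can be illustrated with the standard library alone. The record below is invented for the example (the element names do not come from any real schema), but the parsing pattern is the same one used against real formats such as UniProt XML.

```python
# Sketch: parsing a made-up XML sequence record with Python's
# standard library, illustrating XML-based biological data exchange.
import xml.etree.ElementTree as ET

record = """
<entry id="P12345">
  <sequence length="12">MKTAYIAKQRQI</sequence>
  <feature type="active site" position="5"/>
</entry>
"""

root = ET.fromstring(record)
seq = root.find("sequence").text
features = [(f.get("type"), int(f.get("position")))
            for f in root.findall("feature")]
print(root.get("id"), len(seq), features)
```

    Because every source exposes the same tree interface, records aggregated from heterogeneous databases can be normalised into one in-memory representation before analysis.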

    Towards a Model of Life and Cognition

    What should the ontology of the world be such that life and cognition are possible? In this essay, I undertake to outline an alternative ontological foundation that makes biological and cognitive phenomena possible. The foundation is built by defining a model, presented in the form of a description of a hypothetical but logically possible world with a defined ontological base. Biology today rests on quite a few loosely connected foundations: molecular biology on the genetic dogma; evolutionary biology on the neo-Darwinian model; ecology on the systems view; developmental biology on morphogenetic models; connectionist models for neurophysiology and cognitive biology; pervasive teleonomic explanations for goal-directed behavior across the discipline; etc. Can there be an underlying connecting theme or model that could make these seemingly disparate domains interconnected? I shall attempt to answer this question. Following the semantic view of scientific theories, I tend to believe that the models employed by the present physical sciences are not rich enough to capture biological (and some non-biological) systems. A richer theory that could capture biological reality could also capture physical and chemical phenomena as limiting cases, but not vice versa.

    Understanding Science Through Knowledge Organizers: An Introduction

    We propose, in this paper, a teaching program based on a grammar of scientific language borrowed mostly from the area of knowledge representation in computer science and logic. The paper introduces an operationalizable framework for understanding knowledge using knowledge representation (KR) methodology. We start by organizing concepts based on their cognitive function, followed by assigning valid and authentic semantic relations to the concepts. We propose that, in science education, students can understand better if they organize their knowledge using KR principles. This process, we claim, can help them align their conceptual framework with that of experts, which we assume is the goal of science education.

    Graph Kernels and Applications in Bioinformatics

    In recent years, machine learning has emerged as an important discipline. However, despite the popularity of machine learning techniques, data in the form of discrete structures are not fully exploited. For example, when data appear as graphs, the common choice is the transformation of such structures into feature vectors. This procedure, though convenient, does not always effectively capture the topological relationships inherent to the data; therefore, the power of the learning process may be insufficient. In this context, the use of kernel functions for graphs arises as an attractive way to deal with such structured objects. On the other hand, several entities in computational biology applications, such as gene products or proteins, may be naturally represented by graphs. Hence, the pressing need for algorithms that can deal with structured data raises the question of whether the use of kernels for graphs can outperform existing methods on specific computational biology problems. In this dissertation, we address the challenges involved in solving two specific problems in computational biology in which the data are represented by graphs. First, we propose a novel approach for protein function prediction by modeling proteins as graphs. For each of the vertices in a protein graph, we propose the calculation of evolutionary profiles, which are derived from multiple sequence alignments of the amino acid residues within each vertex. We then use a shortest path graph kernel in conjunction with a support vector machine to predict protein function. We evaluate our approach on two instances of protein function prediction, namely, the discrimination of proteins as enzymes and the recognition of DNA-binding proteins. In both cases, our proposed approach achieves better prediction performance than existing methods. Second, we propose two novel semantic similarity measures for proteins based on the gene ontology.
    The first measure works directly on the gene ontology by combining the pairwise semantic similarity scores between the sets of annotating terms for a pair of input proteins. The second measure estimates protein semantic similarity using a shortest path graph kernel to take advantage of the rich semantic knowledge contained within ontologies. Our comparison with other methods shows that our proposed semantic similarity measures are highly competitive, and the latter outperforms state-of-the-art methods. Furthermore, our two methods are intrinsic to the gene ontology, in the sense that they do not rely on external sources to calculate similarities.
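    The core of a shortest-path graph kernel can be sketched compactly: compute all-pairs shortest paths in each graph, then count matching path lengths across the two graphs. This is a minimal unlabelled variant for illustration only; the dissertation's kernel additionally compares the vertex evolutionary profiles, which are omitted here.

```python
# Minimal sketch of a shortest-path graph kernel (unlabelled variant).
from collections import Counter

def shortest_paths(adj):
    """All-pairs shortest path lengths via Floyd-Warshall.
    adj is an adjacency matrix (lists of 0/1)."""
    n = len(adj)
    d = [[0 if i == j else (1 if adj[i][j] else float("inf"))
          for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def sp_kernel(adj1, adj2):
    """Count pairs of shortest paths with matching lengths across graphs."""
    def lengths(adj):
        return Counter(l for row in shortest_paths(adj)
                       for l in row if 0 < l < float("inf"))
    c1, c2 = lengths(adj1), lengths(adj2)
    return sum(c1[l] * c2[l] for l in c1)

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # K3
path3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]      # P3
print(sp_kernel(triangle, path3))
```

    In a full pipeline, the pairwise kernel values form a Gram matrix that is handed to a support vector machine, as in the protein function prediction experiments described above.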

    Calling International Rescue: knowledge lost in literature and data landslide!

    We live in interesting times. Portents of impending catastrophe pervade the literature, calling us to action in the face of unmanageable volumes of scientific data. But it isn't so much data generation per se, but the systematic burial of the knowledge embodied in those data that poses the problem: there is so much information available that we simply no longer know what we know, and finding what we want is hard – too hard. The knowledge we seek is often fragmentary and disconnected, spread thinly across thousands of databases and millions of articles in thousands of journals. The intellectual energy required to search this array of data archives, and the time and money this wastes, has led several researchers to challenge the methods by which we traditionally commit newly acquired facts and knowledge to the scientific record. We present some of these initiatives here – a whirlwind tour of recent projects to transform scholarly publishing paradigms, culminating in Utopia and the Semantic Biochemical Journal experiment. With their promises to provide new ways of interacting with the literature, and new and more powerful tools to access and extract the knowledge sequestered within it, we ask what advances they make and what obstacles to progress still exist. We explore these questions and, as you read on, invite you to engage in an experiment with us: a real-time test of a new technology to rescue data from the dormant pages of published documents. We ask you, please, to read the instructions carefully. The time has come: you may turn over your papers.