The Blood Ontology: An ontology in the domain of hematology
Despite the importance of human blood to clinical practice and research, hematology and blood transfusion data remain scattered across a range of disparate sources. This lack of systematization in the use and definition of terms poses problems for physicians and biomedical professionals. We introduce here the Blood Ontology, an ongoing initiative designed to serve as a controlled vocabulary for organizing information about blood. The paper describes the scope of the Blood Ontology, its stage of development and some of its anticipated uses
Use of Wikipedia Categories in Entity Ranking
Wikipedia is a useful source of knowledge with many applications in language processing and knowledge representation. The Wikipedia category graph can be compared with the class hierarchy in an ontology; the two share some characteristics and differ in others. In this paper, we present our approach for answering entity ranking queries over Wikipedia. In particular, we explore how Wikipedia categories can be used to improve entity ranking effectiveness. Our experiments show that using the categories of example entities works significantly better than using loosely defined target categories
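The idea of ranking by the categories of example entities can be illustrated with a minimal sketch: score each candidate entity by the overlap between its category set and the union of the example entities' categories. This is an illustrative simplification, not the authors' actual system, and all entity names and categories below are hypothetical.

```python
# Illustrative sketch: rank candidate entities by Jaccard overlap between
# their Wikipedia categories and the categories of known example entities.

def rank_by_example_categories(candidates, example_categories):
    """Score each candidate by Jaccard similarity between its category
    set and the union of the example entities' category sets."""
    target = set().union(*example_categories)
    scores = {}
    for entity, cats in candidates.items():
        cats = set(cats)
        union = cats | target
        scores[entity] = len(cats & target) / len(union) if union else 0.0
    # Highest-scoring entities first
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical query: "French port cities", with two example entities
examples = [{"Cities in France", "Port cities"},
            {"Cities in France", "Communes of Brittany"}]
candidates = {
    "Marseille":    ["Cities in France", "Port cities"],
    "Lyon":         ["Cities in France"],
    "Eiffel Tower": ["Towers in Paris"],
}
ranking = rank_by_example_categories(candidates, examples)
```

A candidate sharing many categories with the examples (here "Marseille") ranks above one sharing few or none, which is the intuition behind preferring example-entity categories over loosely defined target categories.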
Towards improving web service repositories through semantic web techniques
The success of Web services technology has brought topics such as software reuse and discovery once again onto the agenda of software engineers. While there are several efforts towards automating Web service discovery and composition, many developers still search for services via online Web service repositories and then combine them manually. However, our analysis of these repositories shows that, unlike traditional software libraries, they rely on very little metadata to support service discovery. We believe that the major cause is the difficulty of automatically deriving metadata that would describe rapidly changing Web service collections. In this paper, we discuss the major shortcomings of state-of-the-art Web service repositories and, as a solution, report on ongoing work and ideas on how to use techniques developed in the context of the Semantic Web (ontology learning, mapping, metadata-based presentation) to improve the current situation
Ontology of core data mining entities
In this article, we present OntoDM-core, an ontology of core data mining entities. OntoDM-core defines the most essential data mining entities in a three-layered ontological structure comprising a specification, an implementation and an application layer. It provides a representational framework for the description of mining structured data and, in addition, provides taxonomies of datasets, data mining tasks, generalizations, data mining algorithms and constraints, based on the type of data. OntoDM-core is designed to support a wide range of applications and use cases, such as semantic annotation of data mining algorithms, datasets and results; annotation of QSAR studies in the context of drug discovery investigations; and disambiguation of terms in text mining. The ontology has been thoroughly assessed following best practices in ontology engineering, is fully interoperable with many domain resources and is easy to extend
Continuous Improvement Through Knowledge-Guided Analysis in Experience Feedback
Continuous improvement of industrial processes is increasingly a key element of competitiveness for industrial systems. The management of experience feedback in this framework is designed to build, analyze and facilitate knowledge sharing among the problem-solving practitioners of an organization in order to improve processes and product quality. During problem-solving processes, the intellectual investment of experts is often considerable, and the opportunities for exploiting expert knowledge are numerous: decision making, problem solving under uncertainty, and expert configuration. In this paper, our contribution relates to the structuring of a cognitive experience feedback framework that allows a flexible exploitation of expert knowledge during problem-solving processes and the reuse of such collected experience. To that purpose, the proposed approach uses the general principles of root cause analysis for identifying the root causes of problems or events, the conceptual graphs formalism for the semantic conceptualization of the domain vocabulary, and the Transferable Belief Model for the fusion of information from different sources. The underlying formal reasoning mechanisms (logic-based semantics) of conceptual graphs enable intelligent information retrieval for the effective exploitation of lessons learned from past projects. An example illustrates the application of the proposed formalization of experience feedback processes in the transport industry sector
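The fusion step mentioned in the abstract, based on the Transferable Belief Model, combines belief masses from several experts with the unnormalized conjunctive rule, in which the mass assigned to the empty set quantifies conflict between sources. Below is a minimal sketch of that rule under the standard formulation m(A) = Σ_{B∩C=A} m1(B)·m2(C); the root-cause hypotheses used in the example are hypothetical, not taken from the paper.

```python
# Minimal sketch of the Transferable Belief Model's unnormalized
# conjunctive combination of two basic belief assignments.
# Focal sets are frozensets over a frame of discernment; mass left
# on the empty set measures conflict between the two sources.

from itertools import product

def conjunctive_combination(m1, m2):
    """Combine two mass functions: m(A) = sum over B & C == A of m1(B)*m2(C)."""
    combined = {}
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c  # intersection of focal sets; may be empty (conflict)
        combined[a] = combined.get(a, 0.0) + mb * mc
    return combined

# Two experts assign belief over hypothetical root causes {fatigue, corrosion}
m1 = {frozenset({"fatigue"}): 0.6,
      frozenset({"fatigue", "corrosion"}): 0.4}
m2 = {frozenset({"corrosion"}): 0.5,
      frozenset({"fatigue", "corrosion"}): 0.5}
fused = conjunctive_combination(m1, m2)
```

Unlike Dempster's rule, the TBM variant does not renormalize away the conflict mass, so the total mass still sums to 1 with part of it sitting on the empty set.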
Introduction
Husserl’s philosophy, by the usual account, evolved through three stages: 1. development of an anti-psychologistic, objective foundation of logic and mathematics, rooted in Brentanian descriptive psychology; 2. development of a new discipline of "phenomenology" founded on a metaphysical position dubbed "transcendental idealism"; 3. transformation of phenomenology from a form of methodological solipsism into a phenomenology of intersubjectivity and ultimately (in his Crisis of 1936) into an ontology of the life-world, embracing the social worlds of culture and history. We show that this story of three revolutions can provide at best a preliminary orientation, and that Husserl was constantly expanding and revising his philosophical system, integrating views in phenomenology, ontology, epistemology and logic with views on the nature and tasks of philosophy and science as well as on the nature of culture and the world in ways that reveal more common elements than violent shifts of direction. We argue further that Husserl is a seminal figure in the evolution from traditional philosophy to the characteristic philosophical concerns of the late twentieth century: concerns with representation and intentionality and with problems at the borderlines of the philosophy of mind, ontology, and cognitive science
Genome-wide signatures of complex introgression and adaptive evolution in the big cats.
The great cats of the genus Panthera comprise a recent radiation whose evolutionary history is poorly understood. Their rapid diversification poses challenges to resolving their phylogeny while offering opportunities to investigate the historical dynamics of adaptive divergence. We report the sequence, de novo assembly, and annotation of the jaguar (Panthera onca) genome, a novel genome sequence for the leopard (Panthera pardus), and comparative analyses encompassing all living Panthera species. Demographic reconstructions indicated that all of these species have experienced variable episodes of population decline during the Pleistocene, ultimately leading to small effective sizes in present-day genomes. We observed pervasive genealogical discordance across Panthera genomes, caused by both incomplete lineage sorting and complex patterns of historical interspecific hybridization. We identified multiple signatures of species-specific positive selection, affecting genes involved in craniofacial and limb development, protein metabolism, hypoxia, reproduction, pigmentation, and sensory perception. There was remarkable concordance in pathways enriched in genomic segments implicated in interspecies introgression and in positive selection, suggesting that these processes were connected. We tested this hypothesis by developing exome capture probes targeting ~19,000 Panthera genes and applying them to 30 wild-caught jaguars. We found at least two genes (DOCK3 and COL4A5, both related to optic nerve development) bearing significant signatures of interspecies introgression and within-species positive selection. These findings indicate that post-speciation admixture has contributed genetic material that facilitated the adaptive evolution of big cat lineages
The Infectious Disease Ontology in the Age of COVID-19
The Infectious Disease Ontology (IDO) is a suite of interoperable ontology modules that aims to provide coverage of all aspects of the infectious disease domain, including biomedical research, clinical care, and public health. IDO Core is designed to be a disease- and pathogen-neutral ontology, covering just those types of entities and relations that are relevant to infectious diseases generally. IDO Core is then extended by a collection of ontology modules focusing on specific diseases and pathogens. In this paper we present applications of IDO Core within various areas of infectious disease research, together with an overview of all IDO extension ontologies and the methodology on the basis of which they are built. We also survey recent developments involving IDO, including the creation of IDO Virus; the Coronavirus Infectious Disease Ontology (CIDO); and an extension of CIDO focused on COVID-19 (IDO-CovID-19). We also discuss how these ontologies might assist in information-driven efforts to deal with the ongoing COVID-19 pandemic, to accelerate data discovery in the early stages of future pandemics, and to promote reproducibility of infectious disease research
Determination of competency framework for technical and vocational education and training (TVET) educators in Nigerian tertiary institutions
A lack of competent TVET educators in Nigerian institutions has led to several problems, such as low-quality graduates and unemployment. Competency is a vital element for assessing the quality of technical and vocational education and training (TVET) educators. Therefore, this research investigated TVET educators' perceptions of competency needs in Nigerian tertiary institutions, based on the Malaysian Human Resource Development Practitioners (MHRDP) competency model for workplace learning and performance (WLP). The study also investigated differences in the perception of competency elements among different TVET tertiary institutions in order to enhance their quality. The study was fully quantitative: 218 questionnaires were systematically distributed to TVET educators from five tertiary institutions using a stratified sampling technique, and 205 questionnaires were returned. Descriptive and inferential statistical methods such as means, EFA and ANOVA were used to analyse the data. The research found that Nigerian TVET educators perceived all the competency elements (25 constituents) as important; 19 out of 25 constituents of the competency framework were significantly related to Nigerian tertiary institutions. The findings also revealed no statistically significant differences in TVET educators' perceptions of competency elements across different types of TVET tertiary institutions. The developed competency framework for Nigerian TVET tertiary institutions makes an original contribution to the body of knowledge. The research recommends that government and other relevant authorities emphasize the implementation of the framework in tertiary institutions in Nigeria. Similar research should be undertaken to extend the results to other, non-TVET educators in Nigeria
Wolf outside, dog inside? The genomic make-up of the Czechoslovakian Wolfdog
Background
Genomic methods can provide extraordinary tools to explore the genetic background of wild species and domestic breeds, optimize breeding practices, monitor and limit the spread of recessive diseases, and discourage illegal crossings. In this study we analysed a panel of 170k single nucleotide polymorphisms (SNPs) with a combination of multivariate, Bayesian and outlier-gene approaches to examine genome-wide diversity and inbreeding levels in a recent wolf x dog cross-breed, the Czechoslovakian Wolfdog, which is becoming increasingly popular across Europe.
Results
Pairwise FST values, multivariate and assignment procedures indicated that the Czechoslovakian Wolfdog was significantly differentiated from all the other analysed breeds and also well distinguished from both parental populations (Carpathian wolves and German Shepherds). Consistent with the low number of founders involved in the breed selection, the individual inbreeding levels calculated from homozygosity regions were relatively high and comparable with those derived from the pedigree data. In contrast, the coefficient of relatedness between individuals estimated from the pedigrees often underestimated the identity-by-descent scores determined from genetic profiles. The timing of the admixture and the effective population size trends estimated from the LD patterns reflected the documented history of the breed. Ancestry reconstruction methods identified more than 300 genes with an excess of wolf ancestry compared to random expectations, mainly related to key morphological features, and more than 2000 genes with an excess of dog ancestry, playing important roles in lipid metabolism, the regulation of circadian rhythms, learning and memory processes, and sociability, such as the COMT gene, which has been described as a candidate gene for the latter trait in dogs.
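The homozygosity-based inbreeding estimate mentioned above is commonly computed as F_ROH: the fraction of the genome covered by runs of homozygosity. A minimal sketch follows; the segment coordinates and genome length are illustrative values, not data from this study.

```python
# Hedged sketch of an ROH-based inbreeding coefficient:
# F_ROH = (total length of runs of homozygosity) / (autosomal genome length).

def f_roh(roh_segments, genome_length_bp):
    """roh_segments: list of (start, end) base-pair intervals detected as
    runs of homozygosity; returns the fraction of the genome they cover."""
    total_roh = sum(end - start for start, end in roh_segments)
    return total_roh / genome_length_bp

# Illustrative example: ~240 Mb of ROH over a ~2.2 Gb autosomal genome
segments = [(1_000_000, 41_000_000), (90_000_000, 290_000_000)]
f = f_roh(segments, 2_200_000_000)  # roughly 0.11
```

Comparing F_ROH against a pedigree-based inbreeding coefficient is one way to detect the kind of pedigree underestimation the abstract reports, since pedigrees miss inbreeding that predates the recorded founders.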
Conclusions
In this study we successfully applied genome-wide procedures to reconstruct the history of the Czechoslovakian Wolfdog, assess individual wolf ancestry proportions and, thanks to the availability of a well-annotated reference genome, identify possible candidate genes for wolf-like and dog-like phenotypic traits typical of this breed, including commonly inherited disorders. Moreover, through the identification of ancestry-informative markers, these genomic approaches could provide tools for forensic applications to unmask illegal crossings with wolves and uncontrolled trades of recent and undeclared wolfdog hybrids