Ontology-driven conceptual modeling: A systematic literature mapping and review
Ontology-driven conceptual modeling (ODCM) is still a relatively new research domain in the field of information systems, and there is still much discussion on how research in ODCM should be performed and what its focus should be. This article therefore critically surveys the existing literature in order to assess the kind of research that has been performed over the years, analyze the nature of the research contributions, and establish the current state of the art by positioning, evaluating, and interpreting relevant research to date related to ODCM. To identify gaps and research opportunities, our literature study comprises both a systematic mapping study and a systematic review study. The mapping study structures and classifies the area under investigation in order to give a general overview of the research performed in the field. A review study, on the other hand, is a more thorough and rigorous inquiry that provides recommendations based on the strength of the evidence found. Our results indicate several research gaps that should be addressed, and we further identify several research opportunities as possible areas for future research.
Knowledge-based Biomedical Data Science 2019
Knowledge-based biomedical data science (KBDS) involves the design and
implementation of computer systems that act as if they knew about biomedicine.
Such systems depend on formally represented knowledge in computer systems,
often in the form of knowledge graphs. Here we survey the progress in the last
year in systems that use formally represented knowledge to address data science
problems in both clinical and biological domains, as well as on approaches for
creating knowledge graphs. Major themes include the relationships between
knowledge graphs and machine learning, the use of natural language processing,
and the expansion of knowledge-based approaches to novel domains, such as
Chinese Traditional Medicine and biodiversity.
Comment: Manuscript 43 pages with 3 tables; supplemental material 43 pages with 3 tables
Toward a Standardized Strategy of Clinical Metabolomics for the Advancement of Precision Medicine
Despite tremendous successes, pitfalls have been observed in every step of a clinical metabolomics workflow, which impedes the internal validity of such studies. Furthermore, the demand for logistics, instrumentation, and computational resources for metabolic phenotyping studies has far exceeded expectations. In this conceptual review, we comprehensively cover the barriers of a metabolomics-based clinical study and suggest potential solutions in the hope of enhancing study robustness, usability, and transferability. The importance of quality assurance and quality control procedures is discussed, followed by a practical rule containing five phases, including two additional "pre-pre-" and "post-post-" analytical steps. In addition, we elucidate the potential involvement of machine learning and demonstrate that the need for automated data mining algorithms to improve the quality of future research is undeniable. Consequently, we propose a comprehensive metabolomics framework, along with an appropriate checklist refined from current guidelines and our previously published assessment, in the attempt to accurately translate achievements in metabolomics into clinical and epidemiological research. Furthermore, the integration of multifaceted multi-omics approaches, with metabolomics as the pillar member, is urgently needed. When combined with other social or nutritional factors, we can gather complete omics profiles for a particular disease. Our discussion reflects the current obstacles and potential solutions toward the progressing trend of utilizing metabolomics in clinical research to create the next-generation healthcare system.
Interactive Knowledge Construction in the Collaborative Building of an Encyclopedia
One of the major challenges of applied Artificial Intelligence is to provide environments where high-level human activities, such as learning, constructing theories, or performing experiments, are enhanced by Artificial Intelligence technologies. This paper starts with the description of an ambitious project: EnCOrE. The specific real-world EnCOrE scenario, representative of a much wider class of potential application contexts, is dedicated to the building of an Encyclopedia of Organic Chemistry in the context of virtual communities of experts and students. Its description is followed by a brief survey of some major AI questions and propositions related to the problems raised by the EnCOrE project. The third part of the paper starts with definitions of a set of "primitives" for rational actions, and then integrates them into a unified conceptual framework for the interactive construction of knowledge. Finally, we sketch out protocols aimed at guiding both the collaborative construction process and the collaborative learning process in the EnCOrE project. The current major result is the emerging conceptual model supporting interaction between human agents and AI tools integrated in Grid services within a socio-constructivist approach: cycles of deductions, inductions, and abductions upon facts (the shared reality) and concepts (their subjective interpretation) are submitted to negotiation, finally converging to a socially validated consensus.
Accelerating Science: A Computing Research Agenda
The emergence of "big data" offers unprecedented opportunities for not only
accelerating scientific advances but also enabling new modes of discovery.
Scientific progress in many disciplines is increasingly enabled by our ability
to examine natural phenomena through the computational lens, i.e., using
algorithmic or information processing abstractions of the underlying processes;
and our ability to acquire, share, integrate and analyze disparate types of
data. However, there is a huge gap between our ability to acquire, store, and
process data and our ability to make effective use of the data to advance
discovery. Despite successful automation of routine aspects of data management
and analytics, most elements of the scientific process currently require
considerable human expertise and effort. Accelerating science to keep pace with
the rate of data acquisition and data processing calls for the development of
algorithmic or information processing abstractions, coupled with formal methods
and tools for modeling and simulation of natural processes as well as major
innovations in cognitive tools for scientists, i.e., computational tools that
leverage and extend the reach of human intellect, and partner with humans on a
broad range of tasks in scientific discovery (e.g., identifying, prioritizing, and
formulating questions; designing, prioritizing, and executing experiments
to answer a chosen question; drawing inferences and evaluating the
results; and formulating new questions, in a closed-loop fashion). This calls
for a concerted research agenda aimed at: development, analysis, integration,
sharing, and simulation of algorithmic or information processing abstractions
of natural processes, coupled with formal methods and tools for their analysis
and simulation; and innovations in cognitive tools that augment and extend human
intellect and partner with humans in all aspects of science.
Comment: Computing Community Consortium (CCC) white paper, 17 pages
An Introduction to Ontology
Analytical philosophy of the last one hundred years has been heavily influenced by a doctrine to the effect that one can arrive at a correct ontology by paying attention to certain superficial (syntactic) features of first-order predicate logic as conceived by Frege and Russell. More specifically, it is a doctrine to the effect that the key to the ontological structure of reality is captured syntactically in the "Fa" (or, in more sophisticated versions, in the "Rab") of first-order logic, where "F" stands for what is general in reality and "a" for what is individual. Hence "f(a)ntology". Because predicate logic has exactly two syntactically different kinds of referring expressions ("F", "G", "R", etc., and "a", "b", "c", etc.), reality must consist of exactly two correspondingly different kinds of entity: the general (properties, concepts) and the particular (things, objects), the relation between these two kinds of entity being revealed in the predicate-argument structure of atomic formulas in first-order logic.
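The two syntactic slots can be made concrete with a pair of toy formulas (illustrative examples of my own, not drawn from the abstract above):

```latex
% Monadic case: ``Socrates is wise'' -- F (the general: wisdom) applied to a (the individual)
Fa \;\equiv\; \mathrm{Wise}(\mathit{socrates})
% Dyadic case: ``Plato taught Aristotle'' -- a relation R between two individuals
Rab \;\equiv\; \mathrm{Taught}(\mathit{plato}, \mathit{aristotle})
```

On the doctrine being described, everything named on the left of the parentheses is a property or concept, and everything inside them is a thing or object; the abstract's point is that this two-sorted picture is an artifact of the notation.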
Smart Sensor Webs For Environmental Monitoring Integrating Ogc Standards
Sensor webs are the most recent generation of data acquisition systems. The research presented looks at the concept of sensor webs from three perspectives: node, user, and data. These perspectives are different but nicely complementary, and all build on an enhanced, usually wireless, sensor network. From the node perspective, sensor nodes collaborate in response to environmental phenomena in intelligent ways; this is referred to as the collaborative aspect. From the user perspective, a sensor web makes its sensor nodes and resources accessible via the WWW (World Wide Web); this is referred to as the accessible aspect. From the data perspective, sensor data is annotated with metadata to produce contextual information; this is referred to as the semantic aspect. A prototype that is a sensor web in all three senses has been developed. The prototype demonstrates the ability to manage information in different knowledge domains. From low-level weather data, information about higher-level weather concepts can be inferred and transferred to other knowledge domains, such as specific human activities. This produces an interesting viewpoint of situation awareness in the scope of traditional weather data.
Computational Toxinology
Venoms are complex mixtures of biological macromolecules and other compounds that are used for predatory and defensive purposes by hundreds of thousands of known species worldwide. Throughout human history, venoms and venom components have been used to treat a vast array of illnesses, causing them to be of great clinical, economic, and academic interest to the drug discovery and toxinology communities. In spite of major computational advances that facilitate data-driven drug discovery, most therapeutic venom effects are still discovered via tedious trial-and-error, or simply by accident. In this dissertation, I describe a body of work that aims to establish a new subdiscipline of translational bioinformatics, which I name "computational toxinology".
To accomplish this goal, I present three integrated components that span a wide range of informatics techniques: (1) VenomKB, (2) VenomSeq, and (3) VenomKB's Semantic API. To provide a platform for structuring, representing, retrieving, and integrating venom data relevant to drug discovery, VenomKB provides a database-backed web application and knowledge base for computational toxinology. VenomKB is structured according to a fully-featured ontology of venoms, and provides data aggregated from many popular web resources. VenomSeq is a biotechnology workflow that is designed to generate new high-throughput sequencing data for incorporation into VenomKB. Specifically, we expose human cells to controlled doses of crude venoms, conduct RNA-Sequencing, and build profiles of differential gene expression, which we then compare to publicly-available differential expression data for known diseases and drugs with known effects, and use those comparisons to hypothesize ways that the venoms could act in a therapeutic manner, as well. These data are then integrated into VenomKB, where they can be effectively retrieved and evaluated using existing data and known therapeutic associations. VenomKB's Semantic API further develops this functionality by providing an intelligent, powerful, and user-friendly interface for querying the complex underlying data in VenomKB in a way that reflects the intuitive, human-understandable meaning of those data. The Semantic API is designed to cater to the needs of advanced users as well as laypersons and bench scientists without previous expertise in computational biology and semantic data analysis.
In each chapter of the dissertation, I describe how we evaluated these three components through various approaches. We demonstrate the utility of VenomKB and the Semantic API by testing a number of practical use-cases for each, designed to highlight their ability to rediscover existing knowledge as well as to suggest potential areas for future exploration. We use statistics and data science techniques to evaluate VenomSeq on 25 diverse species of venomous animals, and propose biologically feasible explanations for significant findings. In evaluating the Semantic API, I show how observations on VenomSeq data can be interpreted and placed into the context of past research by members of the larger toxinology community.
Computational toxinology is a toolbox designed to be used by multiple stakeholders (toxinologists, computational biologists, and systems pharmacologists, among others) to improve the return rate of clinically significant findings from manual experimentation. It aims to achieve this goal by enabling access to data, providing means for easy validation of results, and suggesting specific hypotheses that are preliminarily supported by rigorous inferential statistics. All components of the research I describe are open-access and publicly available, to improve reproducibility and encourage widespread adoption.
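The signature-comparison step described for VenomSeq can be sketched in a few lines. This is a minimal illustration only: the gene symbols, fold-change values, and the use of Spearman rank correlation over shared genes are my assumptions for the sketch, not the dissertation's actual method or data.

```python
# Hypothetical sketch: compare a venom's differential-expression signature
# (gene -> log fold change) against a public drug signature by rank
# correlation over the genes the two signatures share.

def rank(values):
    """Map each value to its rank (1 = smallest)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(sig_a, sig_b):
    """Spearman correlation (no-ties formula) over shared genes."""
    genes = sorted(set(sig_a) & set(sig_b))
    ra = rank([sig_a[g] for g in genes])
    rb = rank([sig_b[g] for g in genes])
    n = len(genes)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Illustrative (made-up) signatures; positive = up-regulated.
venom = {"TP53": 2.1, "EGFR": -1.3, "MYC": 0.8, "BRCA1": -0.4}
drug = {"TP53": 0.3, "EGFR": -0.9, "MYC": 1.5, "BRCA1": -0.2}

print(spearman(venom, drug))  # high positive score -> similar expression profile
```

A high positive correlation would suggest the venom perturbs cells similarly to the drug, which is the kind of hypothesis the text says is then evaluated against known therapeutic associations in VenomKB.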
Systematic Analysis of COVID-19 Ontologies
This comprehensive study conducts an in-depth analysis of existing COVID-19
ontologies, scrutinizing their objectives, classifications, design
methodologies, and domain focal points. The study is conducted through a
dual-stage approach, commencing with a systematic review of relevant literature
and followed by an ontological assessment utilizing a parametric methodology.
Through this meticulous process, twenty-four COVID-19 Ontologies (CovOs) are
selected and examined. The findings highlight the scope, intended purpose,
granularity of ontology, modularity, formalism, vocabulary reuse, and extent of
domain coverage. The analysis reveals varying levels of formality in ontology
development, a prevalent preference for utilizing OWL as the representational
language, and diverse approaches to constructing class hierarchies within the
models. Noteworthy is the recurrent reuse of ontologies like OBO models (CIDO,
GO, etc.) alongside CODO. The METHONTOLOGY approach emerges as a favored design
methodology, often coupled with application-based or data-centric evaluation
methods. Our study provides valuable insights for the scientific community and
COVID-19 ontology developers, supplemented by comprehensive ontology metrics.
By meticulously evaluating and documenting COVID-19 information-driven
ontological models, this research offers a comparative cross-domain
perspective, shedding light on knowledge representation variations. The present
study significantly enhances understanding of CovOs, serving as a consolidated
resource for comparative analysis and future development, while also
pinpointing research gaps and domain emphases, thereby guiding the trajectory
of future ontological advancements.
Comment: 16 pages; accepted for publication at the 17th International Conference on Metadata and Semantics Research (MTSR 2023), University of Milano-Bicocca, Milan, Italy, October 23-27, 2023