66 research outputs found

    An ontology for formal representation of medication adherence-related knowledge : case study in breast cancer

    Indiana University-Purdue University Indianapolis (IUPUI)
    Medication non-adherence is a major healthcare problem that negatively impacts the health and productivity of individuals and society as a whole. The reasons for medication non-adherence are multi-faceted, with no clear-cut solution. Adherence to medication remains a difficult area to study because inconsistencies in representing medication-adherence behavior data challenge both humans and today’s computer technology when interpreting and synthesizing such complex information. A consistent conceptual framework for medication adherence is needed to facilitate domain understanding, sharing, and communication, and to enable researchers to formally compare the findings of studies in systematic reviews. The goal of this research is to create a common language that bridges humans and computer technology by developing a controlled, structured vocabulary of medication adherence behavior—the “Medication Adherence Behavior Ontology” (MAB-Ontology)—using breast cancer as a case study to inform and evaluate the proposed ontology and to demonstrate its application to a real-world situation. The intention is for MAB-Ontology to be developed against the background of a philosophical analysis of terms such as belief and desire, to be human- and computer-understandable, and to be interoperable with other systems that support scientific research. The design process for MAB-Ontology was carried out using the METHONTOLOGY method combined with the Basic Formal Ontology (BFO) principles of best practice. This approach introduces a novel knowledge acquisition step that guides the capture of medication-adherence-related data from different knowledge sources, including adherence assessments, adherence determinants, adherence theories, adherence taxonomies, and tacit knowledge source types.
    These sources were analyzed using a systematic approach in which a common set of questions was applied to all source types to guide data extraction and inform domain conceptualization. A set of intermediate representations involving tables and graphs allowed the domain to be evaluated before implementation. The resulting ontology includes 629 classes, 529 individuals, 51 object properties, and 2 data properties. The intermediate representations were formalized into OWL using Protégé. The MAB-Ontology was evaluated through competency questions, a use-case scenario, and face validity, and was found to satisfy the requirement specification. This study provides a unified method for developing a computer-based adherence model that can be applied across various disease groups and drug categories.
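    The abstract does not show the ontology's actual class hierarchy, but the kind of OWL artifact produced in Protégé (classes, object properties, and individuals) can be illustrated with a minimal Turtle fragment. All class, property, and individual names below are hypothetical stand-ins, not terms taken from MAB-Ontology:

    ```turtle
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix mab:  <http://example.org/mab#> .

    # A class and a subclass, as in the ontology's 629-class hierarchy
    mab:MedicationAdherenceBehavior a owl:Class .
    mab:MedicationTaking a owl:Class ;
        rdfs:subClassOf mab:MedicationAdherenceBehavior .

    # An object property relating behaviors to their determinants
    mab:hasDeterminant a owl:ObjectProperty ;
        rdfs:domain mab:MedicationAdherenceBehavior .

    # A named individual, analogous to the ontology's 529 individuals
    mab:patient_001_taking a mab:MedicationTaking .
    ```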

    The Volcanism Ontology (VO): Semantic Modeling of Volcanic Eruptions and Volcanoes

    We have modeled the complex material and process entities and properties of volcanic eruptions and volcanic structures in the Volcanism Ontology (VO), applying several top-level ontologies—Basic Formal Ontology (BFO), SWEET, and the Ontology of Physics for Biology (OPB)—within a single framework. The upper-level framework separates the entities into investigative and volcanism entities, each further organized within its own BFO framework. This makes VO a substantial step toward a domain ontology that covers the complexity found in volcanology. When deployed on the web, VO will be used to explicitly and formally annotate data and information collected by volcanologists based on domain knowledge. This will enable the integration of global volcanic data and improve the interoperability of software that deals with such data.

    Evolving and sustaining ocean best practices and standards for the next decade

    The oceans play a key role in global issues such as climate change, food security, and human health. Given their vast dimensions and internal complexity, efficient monitoring and prediction of the planet's ocean must be a collaborative effort at both regional and global scales. A first and foremost requirement for such collaborative ocean observing is the need to follow well-defined and reproducible methods across activities: from strategies for structuring observing systems, sensor deployment and usage, and the generation of data and information products, to ethical and governance aspects when executing ocean observing. To meet the urgent, planet-wide challenges we face, methods across all aspects of ocean observing should be broadly adopted by the ocean community and, where appropriate, should evolve into "Ocean Best Practices." While many groups have created best practices, they are scattered across the Web or buried in local repositories, and many have yet to be digitized. To reduce this fragmentation, we introduce a new open-access, permanent, digital repository of best practices documentation (oceanbestpractices.org) that is part of the Ocean Best Practices System (OBPS). The new OBPS provides an opportunity space for the centralized and coordinated improvement of ocean observing methods. The OBPS repository employs user-friendly software to significantly improve discovery of and access to methods, including advanced semantic search technologies that enhance repository operations. In addition to the repository, the OBPS also includes a peer-reviewed journal research topic, a forum for community discussion, and a training activity for the use of best practices. Together, these components serve to realize a core objective of the OBPS: to enable the ocean community to create superior methods for every activity in ocean observing, from research to operations to applications, that are agreed upon and broadly adopted across communities.
Using selected ocean observing examples, we show how the OBPS supports this objective. This paper lays out a future vision of ocean best practices and how OBPS will contribute to improving ocean observing in the decade to come.

    Unsupervised machine learning for event categorization in business intelligence utilization

    The data and information available for business intelligence purposes are increasing rapidly. Data quality and quantity are important for making correct business decisions, but the volume of data is becoming difficult to process. Machine learning methods are becoming an increasingly powerful tool for dealing with this volume. One such approach is the automatic annotation and location of business-intelligence-relevant actions and events in news data. A study of the literature in this field, however, made clear that there is little standardization or objectivity regarding the categories into which these events and actions are sorted; this has often been done in subjective, arduous ways. The goal of this thesis is to provide information and recommendations on how to create more objective, less time-consuming initial categorizations of actions and events by studying common unsupervised learning methods for this task. The literature and theory needed to understand the research and methodology are reviewed. The context and evolution of business intelligence up to the present are considered, especially its relationship to today's big data problem. This in turn relates to the fields of machine learning, artificial intelligence, and especially natural language processing, whose relevant methods are covered to explain the steps taken toward the goal of this thesis. All approaches aided in understanding the behaviour of unsupervised learning methods and how it should be taken into account when creating categorizations. Different natural language preprocessing steps are combined with different text vectorization methods. Specifically, three text tokenization methods—plain, N-gram, and chunk tokenization—are tested with two popular vectorization methods: bag-of-words and term frequency-inverse document frequency (TF-IDF).
Two types of unsupervised methods are tested on these vectorizations: clustering, a more traditional data subcategorization process, and topic modelling, a fuzzy, probability-based method for the same task. Across both learning methods, three different algorithms are studied for the interpretability and categorization value of their top cluster- or topic-representative terms. The top-term representations are also compared to the true contents of these topics or clusters via content analysis. Of the methods studied, plain and chunk tokenization yielded the results most comprehensible to a human reader. The choice of vectorization made no major difference to top-term interpretability or to the congruence between top terms and contents. K-means clustering and Latent Dirichlet Allocation were deemed the most useful for creating event and action categorizations: K-means clustering created a good basis for an initial categorization framework, with top terms congruent with the contents of the clusters, while Latent Dirichlet Allocation found latent topics in the text documents that provided serendipitous, fruitful insights for a category creator to take into account.
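    The vectorization-plus-learning pipeline described above can be sketched with scikit-learn. The toy corpus, the cluster and topic counts, and all parameter values below are illustrative assumptions, not the thesis's actual configuration:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
    from sklearn.cluster import KMeans
    from sklearn.decomposition import LatentDirichletAllocation

    # Hypothetical news snippets standing in for the thesis's corpus
    docs = [
        "company announces acquisition of rival firm",
        "startup raises funding in new investment round",
        "firm reports quarterly earnings and revenue growth",
        "regulator fines company over data privacy breach",
    ]

    # TF-IDF vectorization followed by K-means clustering
    tfidf = TfidfVectorizer()
    X = tfidf.fit_transform(docs)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    # Bag-of-words vectorization followed by LDA topic modelling
    bow = CountVectorizer()
    B = bow.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(B)

    # Top representative terms per topic, the representation whose
    # interpretability the thesis uses to judge categorization value
    terms = bow.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[::-1][:3]]
        print(f"topic {k}: {top}")
    ```

    An initial categorization would then be drafted by reading these top terms against the documents actually assigned to each cluster or topic, as in the thesis's content analysis step.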

    Proceedings Ocean Biodiversity Informatics: International Conference on Marine Biodiversity Data Management, Hamburg, Germany 29 November to 1 December, 2004

    The International Conference on Marine Biodiversity Data Management, ‘Ocean Biodiversity Informatics’, was held in Hamburg, Germany, from 29 November to 1 December 2004. Its objective was to offer marine biological data managers a forum to discuss the state of the field and to exchange ideas on how to further develop marine biological data systems. Many marine biologists are actively gathering knowledge, as they have been doing for a long time. What is new is that many of these scientists are willing to share their knowledge, including basic data, with others over the Internet. Our challenge now is to manage this trend, to avoid confusing users with a multitude of contradictory sources of information, and to make sure different data systems can be, and are, effectively integrated.

    A Matter of Style: How Map Thinking and Bio-Ontologies Shape Contemporary Molecular Research

    The aim of this thesis is to provide an epistemic analysis of the transformations occurring in contemporary biological research by considering the relation between molecular biology and computational biology. In particular, I focus on bio-ontologies as the tool that best embodies the new face of biomedical research. This choice is not arbitrary. By appealing to the notions of style of reasoning and way of knowing, I show that bio-ontologies exemplify the rise and success of map thinking as the signature of a new way of doing molecular biology, while the theoretical tenets established more than 30 years ago still maintain their epistemic prominence. This is not to say that experimentalism will disappear from science, nor that the power of experiments will be diminished, but rather that experiments will take on a new role in the architecture of scientific efforts, precisely because of the increasing importance of classificatory approaches. The transition within biomedical research is therefore radical and profound, but it involves not a paradigm shift but a change in practice. In this sense, it is a matter of style.


    Data Journeys in the Sciences

    This groundbreaking, open-access volume analyses and compares data practices across several fields through the analysis of specific cases of data journeys. It brings together leading scholars in the philosophy, history, and social studies of science to achieve two goals: tracking the travel of data across different spaces, times, and domains of research practice; and documenting how such journeys affect the use of data as evidence and the knowledge being produced. The volume captures the opportunities, challenges, and concerns involved in making data move from the sites in which they are originally produced to sites where they can be integrated with other data, analysed, and re-used for a variety of purposes. The in-depth study of data journeys provides the necessary ground to examine disciplinary, geographical, and historical differences and similarities in data management, processing, and interpretation, thus identifying the key conditions of possibility for the widespread data sharing associated with Big and Open Data. The chapters are ordered in sections that broadly correspond to different stages of the journeys of data, from their generation to the legitimisation of their use for specific purposes. Additionally, the preface to the volume provides a variety of alternative “roadmaps” aimed at serving the different interests and entry points of readers; and the introduction provides a substantive overview of what data journeys can teach about the methods and epistemology of research.