
    Osimani B., Poellinger R. (2020) A Protocol for Model Validation and Causal Inference from Computer Simulation. In: Bertolaso M., Sterpetti F. (eds) A Critical Reflection on Automated Science. Human Perspectives in Health Sciences and Technology, vol 1. Springer, Cham. https://doi.org/10.1007/978-3-030-25001-0_9

    The philosophical literature on modelling is increasingly vast; however, clear formal analyses of computational modelling in systems biology are still lacking. We present a general, theoretical scheme which (i) visualizes the development and repeated refinement of a computer simulation, (ii) explicates the relation between different key concepts in modelling and simulation, and (iii) facilitates tracing the epistemological dynamics of model validation. To illustrate and motivate our conceptual scheme, we analyse a case study: the discovery of the functional properties of a specific protein, E-cadherin, which seems to play a key role in metastatic processes by influencing cell growth and proliferation signalling. To this end we distinguish two types of causal claims inferred from a computer simulation: (i) causal claims as plain combinations of basic rules (capturing the causal interplay of atomic behaviour) and (ii) causal claims on the level of emergent phenomena (tracing population dynamics). In formulating a protocol for model validation and causal inference, we show how such macro-level phenomena, although they cannot be subjected to direct causal tests qua intervention (as formulated, e.g., in interventionist causal theories), can suggest further manipulation tests at the basic micro-level. We thereby elucidate the micro-macro-level interaction in systems biology.
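    To make the micro/macro distinction concrete, the following minimal Python sketch runs a toy rule-based cell simulation: the micro level is a set of per-cell rules, while the macro-level claim is read off the emergent population trajectory and can only be tested by intervening on the micro rules. This is not the authors' protocol; the rule names and parameter values are illustrative assumptions.

```python
# Toy sketch (not the paper's model): micro-level rules vs. emergent macro-level
# observables in a rule-based cell simulation. All parameters are illustrative.
import random

def step(cells, p_div_low=0.30, p_div_high=0.10, p_die=0.05):
    """Apply micro-level rules once to every cell.

    Illustrative micro rule: cells with low E-cadherin divide more often,
    mimicking weakened contact inhibition of proliferation signalling.
    """
    next_cells = []
    for ecad in cells:                      # each cell is just its E-cadherin level
        if random.random() < p_die:
            continue                        # cell death
        p_div = p_div_low if ecad < 0.5 else p_div_high
        next_cells.append(ecad)
        if random.random() < p_div:
            next_cells.append(ecad)         # division copies the E-cadherin level
    return next_cells

def run(n_steps=20, n_cells=100, low_fraction=0.5, seed=0):
    """Trace the macro-level observable (population size) over time."""
    random.seed(seed)
    cells = [0.2 if i < low_fraction * n_cells else 0.9 for i in range(n_cells)]
    history = [len(cells)]
    for _ in range(n_steps):
        cells = step(cells)
        history.append(len(cells))
    return history

if __name__ == "__main__":
    # The macro-level claim ("lower E-cadherin -> faster growth") is tested by
    # intervening on the micro-level composition and re-running the simulation.
    print(run(low_fraction=0.8)[-1], "cells when most start with low E-cadherin")
    print(run(low_fraction=0.2)[-1], "cells when most start with high E-cadherin")
```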

    Biomedical Ontologies: Examining Aspects of Integration Across Breast Cancer Knowledge Domains

    The key ideas developed in this thesis lie at the intersection of epistemology, philosophy of molecular biology, medicine, and computer science. I examine how the epistemic and pragmatic needs of agents distributed across particular scientific disciplines influence the domain-specific reasoning, classification, and representation of breast cancer. The motivation to undertake an interdisciplinary approach, while addressing the problems of knowledge integration, originates in the peculiarity of the integrative endeavour of the sciences that is fostered by information technologies and ontology engineering methods. I analyse what knowledge integration in this new field means and how it is possible to integrate diverse knowledge domains, such as the clinical and the molecular. I examine the extent and character of the integration achieved through the application of biomedical ontologies. While particular disciplines target certain aspects of breast cancer-related phenomena, biomedical ontologies target biomedical knowledge about phenomena that is often captured within diverse classificatory systems and domain-specific representations. In order to integrate dispersed pieces of knowledge, which are distributed across assorted research domains and knowledge bases, ontology engineers need to deal with the heterogeneity of terminological, conceptual, and practical aims that are not always shared among the domains. Accordingly, I analyse the specificities, similarities, and diversities across the clinical and biomedical domain conceptualisations and classifications of breast cancer. Instead of favouring a unifying approach to knowledge integration, my analysis shows that heterogeneous classifications and representations originate from different epistemic and pragmatic needs, each of which brings a fruitful insight into the problem. Thus, while embracing a pluralistic view of the ontologies that capture various aspects of knowledge, I argue that the resulting integration should be understood in terms of a coordinated social effort to bring knowledge together as needed and when needed, rather than in terms of a unity that represents domain-specific knowledge in a uniform manner. Furthermore, I characterise biomedical ontologies and knowledge bases as a novel socio-technological medium that allows representational interoperability across the domains. As an example, which also marks my own contribution to the collaborative efforts, I present an ontology for HER2+ breast cancer phenotypes that integrates clinical and molecular knowledge in an explicit way. Through this and a number of other examples, I specify how biomedical ontologies support a mutual enrichment of knowledge across the domains, thereby enabling the application of molecular knowledge in the clinic.
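    The general idea of linking a clinical phenotype to its molecular basis through explicit, machine-readable relations can be illustrated with a minimal sketch. The Python structures below stand in for OWL classes and properties; this is not the thesis' actual HER2+ ontology, and all class and relation names are illustrative assumptions rather than terms from any standard vocabulary.

```python
# Minimal sketch: plain Python objects standing in for ontology classes, showing
# how clinical and molecular knowledge about HER2+ breast cancer can be linked.
from dataclasses import dataclass, field

@dataclass
class OntologyClass:
    label: str
    parents: list = field(default_factory=list)     # is_a hierarchy
    relations: dict = field(default_factory=dict)   # named cross-domain links

# Molecular-domain classes (illustrative)
erbb2_gene = OntologyClass("ERBB2 gene")
her2_protein = OntologyClass("HER2 receptor protein",
                             relations={"encoded_by": "ERBB2 gene"})
her2_overexpression = OntologyClass("HER2 overexpression",
                                    relations={"involves": "HER2 receptor protein"})

# Clinical-domain classes (illustrative)
ihc_3plus = OntologyClass("IHC score 3+", parents=["Immunohistochemistry result"])
her2_positive_phenotype = OntologyClass(
    "HER2-positive breast cancer phenotype",
    parents=["Breast cancer phenotype"],
    relations={
        "has_molecular_basis": "HER2 overexpression",   # molecular domain
        "has_diagnostic_evidence": "IHC score 3+",      # clinical domain
    },
)

# A query that spans both domains: which molecular process underlies the phenotype?
print(her2_positive_phenotype.relations["has_molecular_basis"])
```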

    Biomedical Language Understanding and Extraction (BLUE-TEXT): A Minimal Syntactic, Semantic Method

    Clinical text understanding (CTU) is of interest to health informatics because critical clinical information, frequently represented as unconstrained text in electronic health records, is extensively used by human experts to guide clinical practice and decision making and to document the delivery of care, but is largely unusable by information systems for queries and computations. Recent initiatives advocating for translational research call for the generation of technologies that can integrate structured clinical data with unstructured data, provide a unified interface to all data, and contextualize clinical information for reuse in the multidisciplinary and collaborative environment envisioned by the CTSA program. This implies that technologies for the processing and interpretation of clinical text should be evaluated not only in terms of their validity and reliability in their intended environment, but also in light of their interoperability and ability to support information integration and contextualization in a distributed and dynamic environment. This vision adds a new layer of information representation requirements that needs to be accounted for when conceptualizing the implementation or acquisition of clinical text processing tools and technologies for multidisciplinary research. On the other hand, electronic health records frequently contain unconstrained clinical text with high variability in the use of terms and documentation practices, and without commitment to the grammatical or syntactic structure of the language (e.g. triage notes, physician and nurse notes, chief complaints, etc.). This hinders the performance of natural language processing technologies, which typically rely heavily on the syntax of the language and the grammatical structure of the text. This document introduces our method to transform unconstrained clinical text found in electronic health information systems into a formal (computationally understandable) representation that is suitable for querying, integration, contextualization, and reuse, and is resilient to the grammatical and syntactic irregularities of clinical text. We present our design rationale, method, and results of an evaluation in processing chief complaints and triage notes from 8 different emergency departments in Houston, Texas. At the end, we discuss the significance of our contribution in enabling the use of clinical text in a practical bio-surveillance setting.
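    The general strategy of mapping syntactically irregular clinical text to a formal representation without parsing can be sketched as follows. This is not the BLUE-TEXT implementation: it is a minimal lexicon-lookup example, and the lexicon entries and concept codes are illustrative assumptions (negation handling and other essentials are deliberately omitted).

```python
# Minimal sketch: map unconstrained chief-complaint text to structured concepts
# by matching a small lexicon, without relying on grammatical structure.
import re

LEXICON = {
    "shortness of breath": ("C0013404", "Dyspnea"),
    "sob":                 ("C0013404", "Dyspnea"),
    "chest pain":          ("C0008031", "Chest pain"),
    "fever":               ("C0015967", "Fever"),
    "n/v":                 ("C0027498", "Nausea and vomiting"),
}

def extract_concepts(note: str):
    """Return (code, preferred term) pairs found in a free-text note."""
    text = note.lower()
    found = []
    for phrase, concept in LEXICON.items():
        # Word-boundary match so 'sob' does not fire inside 'sober'.
        # Note: negated mentions ("denies fever") are still matched here.
        if re.search(r"\b" + re.escape(phrase) + r"\b", text):
            found.append(concept)
    return found

if __name__ == "__main__":
    triage_note = "pt c/o SOB and chest pain since am, denies fever"
    print(extract_concepts(triage_note))
```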

    31st International Conference on Information Modelling and Knowledge Bases

    Information modelling is becoming an increasingly important topic for researchers, designers, and users of information systems. The amount and complexity of information itself, the number of abstraction levels of information, and the size of databases and knowledge bases are continuously growing. Conceptual modelling is one of the sub-areas of information modelling. The aim of this conference is to bring together experts from different areas of computer science and other disciplines who have a common interest in understanding and solving problems of information modelling and knowledge bases, as well as in applying the results of research to practice. We also aim to recognize and study new areas of modelling and knowledge bases to which more attention should be paid. Therefore, philosophy and logic, cognitive science, knowledge management, linguistics, and management science are relevant areas, too. The conference features three categories of presentations: full papers, short papers, and position papers.

    Explanatory Models, Unit Standards, and Personalized Learning in Educational Measurement

    The papers by Jack Stenner included in this book document the technical details of an art and science of measurement that creates new entrepreneurial business opportunities. Jack brought theory, instruments, and data together in ways that are applicable not only in the context of a given test of reading or mathematics ability, but which more importantly catalyzed literacy and numeracy capital in new fungible expressions. Though Jack did not reflect in writing on the inferential, constructive processes in which he engaged, much can be learned by reviewing his work with his accomplishments in mind. A Foreword by Stenner's colleague and co-author on multiple works, William P. Fisher, Jr., provides key clues concerning (a) how Jack's understanding of measurement and its values aligns with social and historical studies of science and technology, and (b) how recent developments in collaborations of psychometricians and metrologists are building on and expanding Jack's accomplishments. This is an open access book.

    Models and Modelling between Digital and Humanities: A Multidisciplinary Perspective

    This Supplement of Historical Social Research stems from the contributions on the topic of modelling presented at the workshop “Thinking in Practice”, held at Wahn Manor House in Cologne on January 19-20, 2017. With Digital Humanities as the starting point, practical examples of model building from different disciplines are considered, with the aim of contributing to the dialogue on modelling from several perspectives. Combined with theoretical considerations, this collection illustrates how the process of modelling is one of coming to know, in which the purpose of each modelling activity and the form in which models are expressed have to be taken into consideration in tandem. The modelling processes presented in this volume belong to specific traditions of scholarly and practical thinking as well as to specific contexts of production and use of models. The claim that supported the project workshop was indeed that establishing connections between different traditions of and approaches toward modelling is vital, whether these connections are complementary or intersectional. The workshop proceedings address an underpinning goal of the research project itself, namely that of examining the nature of the epistemological questions in the different traditions and how they relate to the nature of the modelled objects and the models being created. This collection is an attempt to move beyond simple representational views on modelling in order to understand modelling processes as scholarly and cultural phenomena in their own right.

    Extensibility of Enterprise Modelling Languages

    The thesis addresses three research focal points. The first focus deals with BPMN extensions to be developed and presents their methodological implications within the existing language standards. On the one hand, this includes concrete language extensions such as BPMN4CP, a BPMN extension for the multi-perspective modelling of clinical pathways. On the other hand, this part also concerns methodological consequences for modelling, aimed at improving both the underlying language (i.e., the BPMN metamodel) and the method for extension development in parallel, and thus addressing the identified shortcomings. The second focus investigates language-independent questions of extensibility, which either arose during the work on the first part or were inferred inductively from its results. This research focus concentrates in particular on consolidating existing terminologies, describing generically applicable extension mechanisms, and conducting a user-oriented analysis of potential extension needs. This part thus lays the groundwork for the development of a generic extension method. It also includes a fundamental examination of enterprise modelling languages in general, since only a holistic, consistent, and integrated language definition can enable extensions and allow them to succeed. This concerns, for example, the specification of the intended semantics of a language.
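    The idea of attaching domain-specific attributes to core language elements without modifying the core metamodel can be illustrated with a minimal sketch. This is not BPMN4CP itself, only an example in the spirit of BPMN's extensionElements mechanism; the class and attribute names below are illustrative assumptions.

```python
# Minimal sketch: an extension point on a simplified core metamodel element,
# letting a domain extension add namespaced attributes without touching the core.
from dataclasses import dataclass, field

@dataclass
class FlowNode:                       # simplified core metamodel element
    id: str
    name: str
    extensions: dict = field(default_factory=dict)   # extension point

@dataclass
class Task(FlowNode):
    pass

def extend(node: FlowNode, namespace: str, attributes: dict) -> FlowNode:
    """Attach namespaced extension attributes to a core element."""
    node.extensions[namespace] = attributes
    return node

# A clinical-pathway extension adds evidence and resource information to a task,
# roughly in the spirit of a multi-perspective extension such as BPMN4CP.
task = Task(id="t1", name="Administer medication")
extend(task, "cp", {"evidenceGrade": "A", "requiredRole": "Nurse"})
print(task.extensions["cp"]["evidenceGrade"])
```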