1,025 research outputs found

    Archetype development and governance methodologies for the electronic health record

    Full text link
    Semantic interoperability of health information is an essential requirement for the sustainability of healthcare, and it is fundamental for facing the new health challenges of a globalized world. This thesis provides new methodologies to tackle some of the fundamental aspects of semantic interoperability, specifically those related to the definition and governance of clinical information models expressed in the form of archetypes.
    The contributions of the thesis are:
    - A study of existing modeling methodologies for semantic interoperability components that will influence the definition of an archetype modeling methodology.
    - A comparative analysis of existing clinical information model governance systems and initiatives.
    - A proposal of a unified Archetype Modeling Methodology that formalizes the phases of archetype development, the required participants, and the good practices to be followed.
    - Identification and definition of archetype governance principles and characteristics.
    - Design and development of tools that support archetype modeling and governance.
    The contributions of this thesis have been put into practice in multiple projects and development experiences. These experiences range from a local project inside a single organization that required the reuse of clinical data based on semantic interoperability principles, to the development of national electronic health record projects. This thesis was partially funded by the Ministerio de Economía y Competitividad, ayudas para contratos para la formación de doctores en empresas “Doctorados Industriales”, grant DI-14-06564, and by the Agencia Valenciana de la Innovación, ayudas del Programa de Promoción del Talento – Doctorados empresariales (INNODOCTO), grant INNTA3/2020/12.
    Moner Cano, D. (2021). Archetype development and governance methodologies for the electronic health record [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16491
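    The thesis above treats archetypes as formal, reusable constraint models layered over a reference model. As a purely illustrative aid (not openEHR/ADL tooling and not part of the thesis), the Python sketch below shows the general idea of validating a data instance against a small archetype-like constraint set; all field names, units, and ranges are hypothetical.

```python
# Illustrative sketch only: a highly simplified, Python-level stand-in for an
# archetype, i.e. a reusable constraint model over a generic reference model.
# Field names and constraints below are hypothetical, not taken from openEHR.

blood_pressure_archetype = {
    "concept": "Blood pressure",
    "elements": {
        "systolic":  {"type": float, "units": "mm[Hg]", "range": (0.0, 1000.0)},
        "diastolic": {"type": float, "units": "mm[Hg]", "range": (0.0, 1000.0)},
    },
}

def validate(instance: dict, archetype: dict) -> list[str]:
    """Return a list of constraint violations for a candidate data instance."""
    errors = []
    for name, rule in archetype["elements"].items():
        if name not in instance:
            errors.append(f"missing element: {name}")
            continue
        value = instance[name]
        if not isinstance(value, rule["type"]):
            errors.append(f"{name}: expected {rule['type'].__name__}")
        elif not (rule["range"][0] <= value <= rule["range"][1]):
            errors.append(f"{name}: value {value} outside {rule['range']}")
    return errors

print(validate({"systolic": 120.0, "diastolic": 80.0}, blood_pressure_archetype))  # []
```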

    The Foundational Model of Anatomy Ontology

    Get PDF
    Anatomy is the structure of biological organisms. The term also denotes the scientific discipline devoted to the study of anatomical entities and the structural and developmental relations that obtain among these entities during the lifespan of an organism. Anatomical entities are the independent continuants of biomedical reality on which physiological and disease processes depend, and which, in response to etiological agents, can transform themselves into pathological entities. For these reasons, hard copy and in silico information resources in virtually all fields of biology and medicine, as a rule, make extensive reference to anatomical entities. Because of the lack of a generalizable, computable representation of anatomy, developers of computable terminologies and ontologies in clinical medicine and biomedical research represented anatomy from their own more or less divergent viewpoints. The resulting heterogeneity presents a formidable impediment to correlating human anatomy not only across computational resources but also with the anatomy of model organisms used in biomedical experimentation. The Foundational Model of Anatomy (FMA) is being developed to fill the need for a generalizable anatomy ontology, which can be used and adapted by any computer-based application that requires anatomical information. Moreover, it is evolving into a standard reference for divergent views of anatomy and a template for representing the anatomy of animals. A distinction is made between the FMA ontology as a theory of anatomy and the implementation of this theory as the FMA artifact. In either sense of the term, the FMA is a spatial-structural ontology of the entities and relations which together form the phenotypic structure of the human organism at all biologically salient levels of granularity. Making use of explicit ontological principles and sound methods, it is designed to be understandable by human beings and navigable by computers. The FMA’s ontological structure provides for machine-based inference, enabling powerful computational tools of the future to reason with biomedical data.
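    To illustrate the kind of machine-based inference the FMA is designed to support, the following minimal Python sketch traverses a toy part-of hierarchy. The entities and relations are invented stand-ins, not actual FMA content or identifiers.

```python
# Minimal sketch, not real FMA content: a toy partonomy illustrating the kind of
# structural reasoning (transitive part-of) that an anatomy ontology enables.
PART_OF = {
    "left ventricle": "heart",
    "right ventricle": "heart",
    "heart": "thorax",
    "thorax": "body",
}

def ancestors(entity: str) -> list[str]:
    """All structures an entity is (transitively) part of."""
    chain = []
    while entity in PART_OF:
        entity = PART_OF[entity]
        chain.append(entity)
    return chain

print(ancestors("left ventricle"))  # ['heart', 'thorax', 'body']
```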

    DETAILED CLINICAL MODELS AND THEIR RELATION WITH ELECTRONIC HEALTH RECORDS

    Full text link
    Thesis by compendium of publications. The healthcare domain produces and consumes large quantities of people's health data. Although data exchange is the norm rather than the exception, being able to access all patient data is still far from achieved. Current developments such as personal health records will introduce even more data and complexity into the Electronic Health Record (EHR). Achieving semantic interoperability is one of the biggest challenges to overcome in order to benefit from all the information contained in the distributed EHR. This requires that the semantics of the information can be understood by all involved parties. It has been established that three layers are needed to achieve semantic interoperability: reference models, clinical models (archetypes), and clinical terminologies. As seen in the literature, information models (reference models and clinical models) lack methodologies and tools to improve existing EHR systems and to develop new systems that can be semantically interoperable. The purpose of this thesis is to provide methodologies and tools for advancing the use of archetypes in three different scenarios:
    - Archetype definition over specifications with no native dual-model support. Any EHR architecture that directly or indirectly has the notion of detailed clinical models (such as HL7 CDA templates) can potentially be used as a reference model for archetype definition. This allows transforming single-model architectures (which contain only a reference model) into dual-model architectures (reference model with archetypes). A set of methodologies and tools has been developed to support the definition of archetypes over multiple reference models.
    - Data transformation. A complete methodology and tools are proposed to deal with the transformation of legacy data into XML documents compliant with the archetype and the underlying reference model. If the reference model is a standard, then the transformation is a standardization process. The methodologies and tools allow both the transformation of legacy data and the transformation of data between different EHR standards.
    - Automatic generation of implementation guides and reference materials from archetypes. A methodology is provided for the automatic generation of a set of reference materials that are useful for the development and use of EHR systems, including data validators, example instances, implementation guides, human-readable formal rules, sample forms, mind maps, etc. These reference materials can be combined and organized in different ways to adapt to different types of users (clinical or information technology staff). This way, users can include the detailed clinical models in their organization's workflow and cooperate in the model definition.
    These methodologies and tools make clinical models a key part of the system. Together, they ease the achievement of semantic interoperability by providing means for the semantic description, normalization, and validation of existing and new systems.
    Boscá Tomás, D. (2016). DETAILED CLINICAL MODELS AND THEIR RELATION WITH ELECTRONIC HEALTH RECORDS [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/62174
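    One of the scenarios above is the transformation of legacy data into XML documents compliant with an archetype and its underlying reference model. The sketch below is only a hedged illustration, not the thesis tooling: it maps a flat legacy record onto a nested XML instance through a declarative field-to-path mapping, with tag names and mapping invented for the example.

```python
# Hedged sketch of the "data transformation" scenario: mapping a legacy record
# (here a plain dict) into an XML instance whose element names follow a
# hypothetical reference model; the mapping table and tag names are illustrative,
# not those of any particular EHR standard.
import xml.etree.ElementTree as ET

legacy_record = {"sys_bp": "120", "dia_bp": "80", "patient": "12345"}

# legacy field -> target XML path under the root
mapping = {
    "sys_bp": "observation/systolic",
    "dia_bp": "observation/diastolic",
    "patient": "subject/id",
}

def to_xml(record: dict, mapping: dict) -> ET.Element:
    root = ET.Element("composition")
    for field, path in mapping.items():
        parent = root
        for tag in path.split("/"):
            child = parent.find(tag)
            if child is None:
                child = ET.SubElement(parent, tag)
            parent = child
        parent.text = record[field]
    return root

print(ET.tostring(to_xml(legacy_record, mapping), encoding="unicode"))
```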

    ExaCT: automatic extraction of clinical trial characteristics from journal publications

    Get PDF
    Background: Clinical trials are one of the most important sources of evidence for guiding evidence-based practice and the design of new trials. However, most of this information is available only in free text - e.g., in journal publications - which is labour intensive to process for systematic reviews, meta-analyses, and other evidence synthesis studies. This paper presents an automatic information extraction system, called ExaCT, that assists users with locating and extracting key trial characteristics (e.g., eligibility criteria, sample size, drug dosage, primary outcomes) from full-text journal articles reporting on randomized controlled trials (RCTs).
    Methods: ExaCT consists of two parts: an information extraction (IE) engine that searches the article for text fragments that best describe the trial characteristics, and a web browser-based user interface that allows human reviewers to assess and modify the suggested selections. The IE engine uses a statistical text classifier to locate those sentences that have the highest probability of describing a trial characteristic. Then, the IE engine's second stage applies simple rules to these sentences to extract text fragments containing the target answer. The same approach is used for all 21 trial characteristics selected for this study.
    Results: We evaluated ExaCT using 50 previously unseen articles describing RCTs. The text classifier (first stage) was able to recover 88% of relevant sentences among its top five candidates (top5 recall), with the topmost candidate being relevant in 80% of cases (top1 precision). Precision and recall of the extraction rules (second stage) were 93% and 91%, respectively. Together, the two stages of the extraction engine were able to provide (partially) correct solutions in 992 out of 1050 test tasks (94%), with a majority of these (696) representing fully correct and complete answers.
    Conclusions: Our experiments confirmed the applicability and efficacy of ExaCT. Furthermore, they demonstrated that combining a statistical method with 'weak' extraction rules can identify a variety of study characteristics. The system is flexible and can be extended to handle other characteristics and document types (e.g., study protocols).
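    The two-stage design described above (a statistical sentence classifier followed by 'weak' extraction rules) can be illustrated with the toy Python sketch below. It is not the ExaCT system: the training sentences, the target characteristic (sample size), and the regular expression are invented for demonstration, and scikit-learn is assumed to be available.

```python
# Illustrative two-stage pipeline in the spirit of ExaCT (not the actual system):
# stage 1 ranks sentences with a statistical classifier, stage 2 applies a simple
# rule to the top-ranked sentence. Training data and the regex are toy examples.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training set: does a sentence describe the sample size?
train_sentences = [
    "A total of 120 patients were randomized.",
    "We enrolled 85 participants in the trial.",
    "The primary outcome was overall survival.",
    "Patients received 10 mg of the study drug daily.",
]
train_labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
classifier = LogisticRegression()
classifier.fit(vectorizer.fit_transform(train_sentences), train_labels)

def extract_sample_size(article_sentences: list[str]):
    # Stage 1: score every sentence and keep the most probable one.
    probs = classifier.predict_proba(vectorizer.transform(article_sentences))[:, 1]
    best = article_sentences[probs.argmax()]
    # Stage 2: a "weak" extraction rule over the selected sentence.
    match = re.search(r"(\d+)\s+(?:patients|participants)", best)
    return match.group(1) if match else None

print(extract_sample_size(["Mean age was 54 years.",
                           "In total, 240 patients were enrolled and randomized."]))
```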

    Pedagogically-driven Ontology Network for Conceptualizing the e-Learning Assessment Domain

    Get PDF
    The use of ontologies as tools to guide the generation, organization, and personalization of e-learning content, including e-assessment, has drawn the attention of researchers because an ontology can represent the knowledge of a given domain and support reasoning about it. Although these semantic technologies tend to enhance technology-based educational processes, the lack of validation of their effect on the quality of learning makes educators reluctant to use them. This paper presents progress in the development of an ontology network, called AONet, that conceptualizes the e-assessment domain with the aim of supporting the semi-automatic generation of assessments, taking into account not only technical aspects but also pedagogical ones.
    Author affiliations: Romero, Lucila - Universidad Nacional del Litoral, Argentina. North, Matthew - The College of Idaho, United States. Gutierrez, Milagros - Universidad Tecnológica Nacional, Facultad Regional Santa Fe, Centro de Investigación y Desarrollo de Ingeniería en Sistemas de Información, Argentina. Caliusco, Maria Laura - Universidad Tecnológica Nacional, Facultad Regional Santa Fe, Centro de Investigación y Desarrollo de Ingeniería en Sistemas de Información, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico Conicet - Santa Fe, Argentina.
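    As a loose illustration of what semi-automatic generation of assessment items from a domain conceptualization might look like (this is not AONet and it ignores the pedagogical dimension the paper emphasizes), the sketch below builds a multiple-choice question from a toy concept-definition map.

```python
# Hedged sketch: generating a simple assessment item from an ontology-like concept
# map. The concepts and definitions are toy data invented for the example.
import random

domain_ontology = {
    "ontology": "an explicit specification of a shared conceptualization",
    "archetype": "a reusable, formal model of a clinical concept",
    "taxonomy": "a hierarchical classification of concepts",
}

def multiple_choice_item(concept: str, ontology: dict, n_distractors: int = 2):
    """Build one multiple-choice question: the correct definition plus distractors."""
    correct = ontology[concept]
    distractors = random.sample(
        [d for c, d in ontology.items() if c != concept], n_distractors
    )
    options = distractors + [correct]
    random.shuffle(options)
    return {"question": f"Which statement best defines '{concept}'?",
            "options": options,
            "answer": options.index(correct)}

print(multiple_choice_item("archetype", domain_ontology))
```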

    OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia

    Get PDF
    Background: The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing unified access to toxicity data, predictive models, and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models, and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data and its automatic processing.
    Results: The following related ontologies have been developed for OpenTox: a) Toxicological ontology - listing the toxicological endpoints; b) Organs system and Effects ontology - addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology - representing a semi-automatic conversion of the ToxML schema; d) OpenTox ontology - representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models, and validation web services; e) ToxLink - ToxCast assays ontology; and f) OpenToxipedia, a community knowledge resource on toxicology terminology.
    OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation, or to initiate the associated calculations and generate new RDF-based resources.
    The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, and the seamless integration of new algorithms and scientifically sound validation routines, providing a flexible framework that allows building an arbitrary number of applications tailored to solving different problems by end users (e.g. toxicologists).
    Availability: The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1 1/opentox.owl, and the ToxML-OWL conversion utility is an open source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/
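    The access pattern described above, where every OpenTox resource has a resolvable URI whose RDF representation can be retrieved, might be exercised along the lines of the hedged Python sketch below. The example URI is hypothetical; only the content-negotiation idea is taken from the abstract.

```python
# Minimal sketch of the access pattern described above: an OpenTox-style resource
# URI is dereferenced with content negotiation to obtain its RDF representation.
# The URI below is illustrative only; consult the OpenTox documentation for live services.
import requests

def fetch_rdf(resource_uri: str) -> str:
    """Retrieve the RDF (e.g. RDF/XML) representation of a resource URI."""
    response = requests.get(resource_uri,
                            headers={"Accept": "application/rdf+xml"},
                            timeout=30)
    response.raise_for_status()
    return response.text

# Example (hypothetical dataset URI):
# print(fetch_rdf("https://example.org/opentox/dataset/1"))
```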

    The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence

    Full text link
    Recent advances in machine learning and AI, including Generative AI and LLMs, are disrupting technological innovation, product development, and society as a whole. AI's contribution to technology can come from multiple approaches that require access to large training data sets and clear performance evaluation criteria, ranging from pattern recognition and classification to generative models. Yet, AI has contributed less to fundamental science, in part because large data sets of high-quality data for scientific practice and model discovery are more difficult to access. Generative AI in general, and Large Language Models in particular, may represent an opportunity to augment and accelerate the scientific discovery of fundamental deep science with quantitative models. Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery, including self-driven hypothesis generation and open-ended autonomous exploration of the hypothesis space. Integrating AI-driven automation into the practice of science would mitigate current problems, including the replication of findings, systematic production of data, and ultimately democratisation of the scientific process. Realising these possibilities requires a vision for augmented AI coupled with a diversity of AI approaches able to deal with fundamental aspects of causality analysis and model discovery while enabling unbiased search across the space of putative explanations. These advances hold the promise to unleash AI's potential for searching and discovering the fundamental structure of our world beyond what human scientists have been able to achieve. Such a vision would push the boundaries of new fundamental science rather than merely automating current workflows, and would instead open doors for technological innovation to tackle some of the greatest challenges facing humanity today.
    Comment: 35 pages; first draft of the final report from the Alan Turing Institute on AI for Scientific Discovery.
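    A closed-loop discovery process of the kind discussed above can be reduced, very schematically, to a generate-score-select cycle. The Python sketch below is a toy stand-in under that assumption: a random generator proposes linear hypotheses and the loop retains whichever best explains synthetic observations; nothing here reflects the report's actual methods.

```python
# Schematic sketch of a closed loop: a generator proposes candidate models
# (hypotheses), each is scored against observations, and the best-supported one
# is retained. The model family and data are toy stand-ins.
import random

observations = [(x, 3 * x + 2) for x in range(10)]          # hidden law: y = 3x + 2

def propose_hypothesis():
    """Hypothesis generation step: a random linear model y = a*x + b."""
    return (random.uniform(-5, 5), random.uniform(-5, 5))

def score(hypothesis) -> float:
    """Evaluation step: mean squared error against the observed data."""
    a, b = hypothesis
    return sum((a * x + b - y) ** 2 for x, y in observations) / len(observations)

best, best_err = None, float("inf")
for _ in range(10_000):                                      # the closed loop
    candidate = propose_hypothesis()
    err = score(candidate)
    if err < best_err:
        best, best_err = candidate, err

print(f"best hypothesis: y = {best[0]:.2f}x + {best[1]:.2f} (MSE {best_err:.3f})")
```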

    Developing an electronic health record (EHR) for methadone treatment recording and decision support

    Get PDF
    Background: In this paper, we give an overview of methadone treatment in Ireland and outline the rationale for designing an electronic health record (EHR) with extensibility, interoperability, and decision support functionality. Incorporating several international standards, a conceptual model applying a problem-orientated approach in a hierarchical structure has been proposed for building the EHR.
    Methods: A set of archetypes has been designed in line with current best practice and the clinical guidelines that guide the information-gathering process. A web-based data entry system has been implemented, incorporating elements of the paper-based prescription form while at the same time facilitating the decision support function.
    Results: The use of archetypes was found to capture the ever-changing requirements in the healthcare domain and externalise them in constrained data structures. The solution is extensible, enabling the EHR to cover medicine management in general as per the programme of the HRB Centre for Primary Care Research.
    Conclusions: Since we adopted the openEHR standard, the data collected via this Irish system can be aggregated into a larger dataset, if necessary, for analysis and evidence gathering. The system will later be extended to include the prescribing of drugs other than methadone, in line with the research agenda of the HRB Centre for Primary Care Research in Ireland.
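    The decision support function mentioned above presumably checks data-entry values against the constraints captured by the archetypes. The sketch below illustrates that general pattern only; the constraint values are placeholders, not clinical guidance, and the structure is not taken from the described system.

```python
# Illustrative decision-support check only: validating a prescription entry against
# constraints carried by an archetype-like definition. The constraint values are
# placeholders, not clinical guidance and not taken from the system described above.
prescription_archetype = {
    "drug": "methadone",
    "dose_mg": {"min": 0, "max": 100},          # hypothetical bounds
    "requires_supervision_flag": True,
}

def check_prescription(entry: dict, archetype: dict) -> list[str]:
    """Return decision-support warnings for a data-entry form submission."""
    warnings = []
    dose = entry.get("dose_mg")
    bounds = archetype["dose_mg"]
    if dose is None:
        warnings.append("dose_mg is missing")
    elif not bounds["min"] <= dose <= bounds["max"]:
        warnings.append(f"dose {dose} mg outside modelled range "
                        f"{bounds['min']}-{bounds['max']} mg")
    if archetype["requires_supervision_flag"] and "supervised" not in entry:
        warnings.append("supervision status not recorded")
    return warnings

print(check_prescription({"dose_mg": 150}, prescription_archetype))
```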

    Designing a Controlled Medical Vocabulary Server: The VOSER Project

    Get PDF
    Journal article. Biomedical Informatics.

    Reconciliation of Multiple Guidelines for Decision Support: A case study on the multidisciplinary management of breast cancer within the DESIREE project

    Get PDF
    Breast cancer is the most common cancer among women. DESIREE is a European project that aims at developing web-based services for the management of primary breast cancer by multidisciplinary breast units (BUs). We describe the guideline-based decision support system (GL-DSS) of the project. Various breast cancer clinical practice guidelines (CPGs) have been selected to be applied concurrently to provide state-of-the-art, patient-specific recommendations. The aim is to reconcile CPG recommendations, with the objective of complementarity, in order to enlarge the number of clinical situations covered by the GL-DSS. Input and output data exchange with the GL-DSS is performed using FHIR. The reasoning process relies on a knowledge model of the domain expressed as an ontology and is performed by rules that encode the selected CPGs. Semantic web tools were used, notably the Euler/EYE inference engine, to implement the GL-DSS. "Rainbow boxes", a synthetic tabular display, are used to visualize the inferred recommendations.
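    Reconciling several CPGs "with the objective of complementarity" amounts, at its simplest, to letting every applicable rule fire and merging the resulting recommendations. The Python sketch below illustrates that idea under invented rules and an invented patient record; it does not reflect the DESIREE rule base, its ontology, or its FHIR interfaces.

```python
# Hedged sketch of reconciling recommendations from several guidelines: each
# guideline contributes rules, every applicable rule fires, and duplicate
# recommendations are merged. Rules and the patient record are invented examples.
patient = {"her2_positive": True, "tumour_size_mm": 25, "menopausal": False}

guideline_a = [
    (lambda p: p["her2_positive"], "Consider HER2-targeted therapy"),
    (lambda p: p["tumour_size_mm"] > 20, "Discuss neoadjuvant treatment"),
]
guideline_b = [
    (lambda p: p["her2_positive"], "Consider HER2-targeted therapy"),   # overlaps with A
    (lambda p: not p["menopausal"], "Assess fertility preservation options"),
]

def reconcile(patient: dict, *guidelines) -> list[str]:
    """Union of recommendations whose conditions hold, without duplicates."""
    merged: list[str] = []
    for rules in guidelines:
        for condition, recommendation in rules:
            if condition(patient) and recommendation not in merged:
                merged.append(recommendation)
    return merged

print(reconcile(patient, guideline_a, guideline_b))
```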