
    Joining up health and bioinformatics: e-science meets e-health

    CLEF (Co-operative Clinical e-Science Framework) is an MRC-sponsored project in the e-Science programme that aims to establish methodologies and a technical infrastructure for the next generation of integrated clinical and bioscience research. It is developing methods for managing and using pseudonymised repositories of long-term patient histories which can be linked to genetic and genomic information or used to support patient care. CLEF concentrates on removing key barriers to managing such repositories: ethical issues, information capture, integration of disparate sources into coherent 'chronicles' of events, user-oriented mechanisms for querying and displaying the information, and compiling the required knowledge resources. This paper describes the overall information flow and technical approach designed to meet these aims within a Grid framework.
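Since CLEF's repositories hinge on pseudonymisation, the sketch below shows one common way such linkage-preserving pseudonyms can be derived: keyed hashing of the patient identifier. This is a minimal illustration, not CLEF's actual mechanism; the key handling and identifier format are assumptions.

    import hashlib
    import hmac

    # Illustrative only: real deployments keep the key in a managed secret store
    # and run pseudonymisation inside a trusted boundary.
    SECRET_KEY = b"replace-with-a-managed-secret"

    def pseudonymise(patient_id: str) -> str:
        """Derive a stable, non-reversible pseudonym from a patient identifier.

        The same identifier always maps to the same pseudonym, so events can be
        linked into one longitudinal history without storing the identifier itself.
        """
        return hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

    event = {"patient": pseudonymise("NHS-1234567890"),  # hypothetical identifier
             "event": "admission", "date": "2004-03-01"}
    print(event["patient"][:16])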

    Archetype development and governance methodologies for the electronic health record

    Semantic interoperability of health information is an essential requirement for the sustainability of healthcare, and it is fundamental to facing the new health challenges of a globalized world. This thesis provides new methodologies to tackle some of the fundamental aspects of semantic interoperability, specifically those related to the definition and governance of clinical information models expressed in the form of archetypes. The contributions of the thesis are:
- A study of existing modeling methodologies for semantic interoperability components that will influence the definition of an archetype modeling methodology.
- A comparative analysis of existing clinical information model governance systems and initiatives.
- A proposal for a unified Archetype Modeling Methodology that formalizes the phases of archetype development, the required participants, and the good practices to be followed.
- The identification and definition of archetype governance principles and characteristics.
- The design and development of tools that support archetype modeling and governance.
These contributions have been put into practice in multiple projects and development experiences, varying from a local project inside a single organization that required the reuse of clinical data based on semantic interoperability principles, to the development of national electronic health record projects. This thesis was partially funded by the Ministerio de Economía y Competitividad, ayudas para contratos para la formación de doctores en empresas "Doctorados Industriales", grant DI-14-06564, and by the Agencia Valenciana de la Innovación, ayudas del Programa de Promoción del Talento – Doctorados empresariales (INNODOCTO), grant INNTA3/2020/12.
Moner Cano, D. (2021). Archetype development and governance methodologies for the electronic health record [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16491
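To make the two-level idea behind archetypes concrete, the sketch below separates a small, stable reference model from an archetype that constrains it. This is an illustrative Python reduction of the approach, not openEHR ADL; the archetype id, element names and value ranges are invented for the example.

    from dataclasses import dataclass, field

    # Level one: a small, stable reference model that rarely changes.
    @dataclass
    class Element:
        name: str
        value: object

    @dataclass
    class Entry:
        archetype_id: str
        elements: dict = field(default_factory=dict)

    # Level two: an archetype carrying the evolving domain constraints.
    # The id, element names and ranges are invented for this example.
    BLOOD_PRESSURE_ARCHETYPE = {
        "id": "local-EHR-OBSERVATION.blood_pressure.v1",
        "required": {"systolic": (int, 0, 300), "diastolic": (int, 0, 200)},
    }

    def validate(entry, archetype):
        """Return a list of constraint violations for an entry."""
        errors = []
        for name, (typ, lo, hi) in archetype["required"].items():
            el = entry.elements.get(name)
            if el is None:
                errors.append("missing element: " + name)
            elif not isinstance(el.value, typ) or not lo <= el.value <= hi:
                errors.append("mistyped or out-of-range element: " + name)
        return errors

    bp = Entry(BLOOD_PRESSURE_ARCHETYPE["id"],
               {"systolic": Element("systolic", 120),
                "diastolic": Element("diastolic", 80)})
    print(validate(bp, BLOOD_PRESSURE_ARCHETYPE))  # [] means the entry conforms

Because the reference model stays fixed while archetypes evolve, systems that only understand the reference model can still store and exchange data validated against newer archetypes.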

    A Two-Level Information Modelling Translation Methodology and Framework to Achieve Semantic Interoperability in Constrained GeoObservational Sensor Systems

    As geographical observational data capture, storage and sharing technologies such as in situ remote monitoring systems and spatial data infrastructures evolve, the vision of a Digital Earth, first articulated by Al Gore in 1998, is getting ever closer. However, there are still many challenges and open research questions. For example, data quality, provenance and heterogeneity remain an issue due to the complexity of geo-spatial data and information representation. Observational data are often inadequately semantically enriched by geo-observational information systems or spatial data infrastructures, and so they often do not fully capture the true meaning of the associated datasets. Furthermore, the data models underpinning these information systems are typically too rigid in their data representation to allow for the ever-changing and evolving nature of geo-spatial domain concepts. This impoverished approach to observational data representation reduces the ability of multi-disciplinary practitioners to share information in an interoperable and computable way. The health domain experiences similar challenges in representing complex and evolving domain information concepts. Within any complex domain (such as Earth system science or health) two categories or levels of domain concepts exist: those that remain stable over a long period of time, and those that are prone to change as the domain knowledge evolves and new discoveries are made. Health informaticians have developed a sophisticated two-level modelling systems design approach for electronic health documentation over many years and, with the use of archetypes, have shown how data, information, and knowledge interoperability among heterogeneous systems can be achieved. This research investigates whether two-level modelling can be translated from the health domain to the geo-spatial domain and applied to observing scenarios to achieve semantic interoperability within and between spatial data infrastructures, beyond what is possible with current state-of-the-art approaches. A detailed review of state-of-the-art SDIs, geo-spatial standards and the two-level modelling methodology was performed. A cross-domain translation methodology was developed, and a proof-of-concept geo-spatial two-level modelling framework was defined and implemented. The Open Geospatial Consortium's (OGC) Observations & Measurements (O&M) standard was re-profiled to aid investigation of the two-level information modelling approach. An evaluation of the method was undertaken using two specific use-case scenarios. Information modelling was performed using the two-level modelling method to show how existing historical ocean observing datasets can be expressed semantically and harmonized using two-level modelling. Also, the flexibility of the approach was investigated by applying the method to an air quality monitoring scenario using a technologically constrained monitoring sensor system. This work has demonstrated that two-level modelling can be translated to the geo-spatial domain and then further developed for use within a constrained technological sensor system, using traditional wireless sensor networks, semantic web technologies and Internet of Things based technologies. Domain-specific evaluation results show that two-level modelling presents a viable approach to achieving semantic interoperability between constrained geo-observational sensor systems and spatial data infrastructures for ocean observing and city-based air quality observing scenarios. This has been demonstrated through the re-purposing of selected existing geospatial data models and standards. However, it was found that re-using existing standards requires careful ontological analysis per domain concept, and so caution is recommended in assuming the wider applicability of the approach. While the benefits of adopting a two-level information modelling approach to geospatial information modelling are potentially great, it was found that translation to a new domain is complex. The complexity of the approach was found to be a barrier to adoption, especially in commercially based projects where standards implementation is low on implementation road maps and the perceived benefits of standards adherence are low. Arising from this work, a novel set of base software components, methods and fundamental geo-archetypes have been developed. However, during this work it was not possible to form the required rich community of supporters to fully validate geo-archetypes. Therefore, the findings of this work are not exhaustive, and the archetype models produced are only indicative. The findings of this work can be used as the basis to encourage further investigation and uptake of two-level modelling within the Earth system science and geo-spatial domains. Ultimately, the outcomes of this work are to recommend further development and evaluation of the approach, building on the positive results thus far and the base software artefacts developed to support the approach.
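As an illustration of what the re-profiled, two-level approach enables, the sketch below maps two heterogeneous inputs (a legacy ocean-dataset row and a constrained air-quality sensor packet) onto one stable O&M-style observation structure whose domain semantics are carried by a geo-archetype reference. The field names and archetype ids are assumptions for the example, not the thesis's actual geo-archetypes.

    from dataclasses import dataclass

    # A stable O&M-style structure; per-domain semantics travel in the
    # archetype reference instead of in dataset-specific schemas.
    @dataclass
    class Observation:
        archetype_id: str
        feature_of_interest: str
        observed_property: str
        result: float
        unit: str
        time: str

    def from_legacy_ocean_row(row):
        """Map one row of a hypothetical legacy ocean dataset."""
        return Observation("geo-OBSERVATION.sea_temperature.v1", row["station"],
                           "sea_water_temperature", float(row["temp_c"]),
                           "degC", row["timestamp"])

    def from_air_quality_packet(packet):
        """Map a packet from a hypothetical constrained air-quality sensor."""
        return Observation("geo-OBSERVATION.air_quality.v1", packet["sensor"],
                           "pm2_5_concentration", float(packet["pm25"]),
                           "ug/m3", packet["ts"])

    obs = from_legacy_ocean_row({"station": "M3-buoy", "temp_c": "11.4",
                                 "timestamp": "2019-06-01T00:00Z"})
    print(obs.observed_property, obs.result, obs.unit)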

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

"Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central for the exploitation of those opportunities.

The SW, as an extension of the WWW, provides an interesting set of constraints for the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

Ontologies will be a crucial tool for the SW. The AKT consortium brings a lot of expertise on ontologies together, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
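The ontology merging and querying tasks described above can be made concrete with a small sketch. The following uses the rdflib Python library to merge triples from two sources and query the result with SPARQL; the namespace and triples are invented for illustration, and real ontology mapping must also reconcile conflicting references, which this naive union does not.

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/akt/")  # invented namespace

    # Two small knowledge sources, merged by naive set union of triples.
    g1, g2 = Graph(), Graph()
    g1.add((EX.CLEF, EX.partOf, EX.eScienceProgramme))
    g2.add((EX.CLEF, EX.usesTechnology, Literal("Grid")))
    merged = g1 + g2

    # Query the merged graph with SPARQL.
    results = merged.query("""
        PREFIX ex: <http://example.org/akt/>
        SELECT ?p ?o WHERE { ex:CLEF ?p ?o }
    """)
    for p, o in results:
        print(p, o)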

    Comparative study of healthcare messaging standards for interoperability in ehealth systems

    Advances in information and communication technology have created the field of "health informatics," which amalgamates healthcare, information technology and business. The use of information systems in healthcare organisations dates back to the 1960s; however, the use of technology to manage healthcare records, referred to as Electronic Medical Records (EMR), has surged since the 1990s (Net-Health, 2017) due to advancements in internet and web technologies. An Electronic Medical Record (EMR), sometimes referred to as a Personal Health Record (PHR), contains the patient's medical history, allergy information, immunisation status, medication, radiology images and other relevant medical and billing information. There are a number of benefits for the healthcare industry in sharing the data recorded in EMR and PHR systems between medical institutions (AbuKhousa et al., 2012). These benefits include convenience for patients and clinicians, cost-effective healthcare solutions, higher quality of care, relief of resource shortages, and the collection of large volumes of data for research and educational needs. My Health Record (MyHR) is a major project funded by the Australian government, which aims to have all data relating to the health of the Australian population stored in digital format, allowing clinicians to access patient data at the point of care. Prior to 2015, MyHR was known as the Personally Controlled Electronic Health Record (PCEHR). Though the Australian government has taken consistent initiatives, there has been a significant delay (Pearce and Haikerwal, 2010) in implementing eHealth projects and related services. While this delay is caused by many factors, interoperability has been identified as the main problem impeding delivery of the project (Benson and Grieve, 2016c). To discover the current interoperability challenges in the Australian healthcare industry, this comparative study examines Health Level 7 (HL7) messaging models, namely HL7 V2, V3 and FHIR (Fast Healthcare Interoperability Resources). Interoperability, security and privacy are the main elements compared. In addition, a case study conducted in NSW hospitals was used to understand the extent to which health messaging standards are used in the healthcare sector. Predominantly, the project used the comparative study method on different HL7 messages and identified the messaging standard best suited to cover the interoperability, security and privacy requirements of electronic health records. Issues related to practical implementation, changeover and training requirements for healthcare professionals are also discussed.
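The contrast among the compared standards is easiest to see side by side. The sketch below parses an illustrative HL7 V2 PID segment (positional, pipe-and-caret delimited) and re-expresses the same demographics as a FHIR Patient resource (named fields, JSON). The message content is invented for the example and simplified relative to real conformance requirements.

    import json

    # HL7 V2: positional, pipe-and-caret delimited (invented example segment).
    v2_pid = "PID|1||12345^^^HOSP^MR||Doe^John||19800101|M"
    fields = v2_pid.split("|")
    family, given = fields[5].split("^")[:2]

    # FHIR: the same demographics as a JSON resource with named fields.
    patient = {
        "resourceType": "Patient",
        "identifier": [{"value": fields[3].split("^")[0]}],
        "name": [{"family": family, "given": [given]}],
        "gender": {"M": "male", "F": "female"}.get(fields[8], "unknown"),
        "birthDate": fields[7][:4] + "-" + fields[7][4:6] + "-" + fields[7][6:],
    }
    print(json.dumps(patient, indent=2))

The V2 form is compact but relies on positional conventions that vary between sites, which is one source of the interoperability problems the study examines; the FHIR form trades verbosity for self-describing, web-friendly structure.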

    Design and Implementation of a Collaborative Clinical Practice and Research Documentation System Using SNOMED-CT and HL7-CDA in the Context of a Pediatric Neurodevelopmental Unit

    This paper introduces a prototype for clinical research documentation using the structured information model HL7 CDA and clinical terminology (SNOMED CT). The proposed solution was integrated with the current electronic health record system (EHR-S) and aimed to implement interoperability, structure information, and create a collaborative platform between clinical and research teams. The framework also aims to overcome the limitations imposed by classical documentation strategies in real-time healthcare encounters that may require fast access to complex information. The solution was developed in the pediatric hospital (HP) of the University Hospital Center of Coimbra (CHUC), a national reference for neurodevelopmental disorders, particularly for autism spectrum disorder (ASD), which is very demanding in terms of longitudinal and cross-sectional data throughput. The platform uses a three-layer approach to reduce components' dependencies and facilitate maintenance, scalability, and security. The system was validated in the real-life context of the neurodevelopmental and autism unit (UNDA) in the HP and assessed against the functionalities model of EHR-S (EHR-S FM) regarding their successful implementation and comparison with state-of-the-art alternative platforms. A global approach to the clinical history of neurodevelopmental disorders was worked out, providing transparent healthcare data coding and structuring while preserving information quality. Thus, the platform enabled the development of user-defined structured templates and the creation of structured documents with standardized clinical terminology that can be used in many healthcare contexts. Moreover, storing structured data associated with healthcare encounters supports a longitudinal view of the patient's healthcare data and health status over time, which is critical in routine and pediatric research contexts. Additionally, it enables queries on population statistics that are key to supporting the definition of local and global policies, whose importance was recently emphasized by the COVID-19 pandemic.
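The kind of structured, terminology-bound document the platform produces can be sketched as follows: a coded observation entry whose code element is bound to SNOMED CT. The element structure is reduced far below what a conformant HL7 CDA document requires, and the concept code shown is only an illustrative binding.

    import xml.etree.ElementTree as ET

    # A heavily simplified CDA-style coded observation (not a conformant document).
    obs = ET.Element("observation", classCode="OBS", moodCode="EVN")
    ET.SubElement(obs, "code",
                  code="35919005",                      # SNOMED CT concept for autism spectrum disorder
                  codeSystem="2.16.840.1.113883.6.96",  # the SNOMED CT code system OID
                  displayName="Autism spectrum disorder")
    ET.SubElement(obs, "effectiveTime", value="20220315")  # illustrative encounter date
    print(ET.tostring(obs, encoding="unicode"))

Binding every entry to a terminology code is what makes the stored documents queryable for the population statistics the abstract mentions, rather than being free text.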

    Document Automation Architectures: Updated Survey in Light of Large Language Models

    This paper surveys the current state of the art in document automation (DA). The objective of DA is to reduce the manual effort during the generation of documents by automatically creating and integrating input from different sources and assembling documents conforming to defined templates. There have been reviews of commercial solutions of DA, particularly in the legal domain, but to date there has been no comprehensive review of the academic research on DA architectures and technologies. The current survey of DA reviews the academic literature and provides a clearer definition and characterization of DA and its features, identifies state-of-the-art DA architectures and technologies in academic research, and provides ideas that can lead to new research opportunities within the DA field in light of recent advances in generative AI and large language models.
Comment: The current paper is the updated version of an earlier survey on document automation [Ahmadi Achachlouei et al. 2021]. Updates in the current paper are as follows: we shortened almost all sections to reduce the size of the main paper (without references) from 28 pages to 10 pages, added a review of selected papers on large language models, and removed certain sections and most of the diagrams. arXiv admin note: substantial text overlap with arXiv:2109.1160
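At its core, the template-conformance half of DA described above can be illustrated in a few lines: structured input from different sources is merged into a defined template. The sketch below uses Python's string.Template as a stand-in for the far richer template languages surveyed systems use; the field names and content are invented.

    from string import Template

    # Fixed template plus structured inputs: the minimal core of document assembly.
    template = Template("REPORT for $patient\nVisit date: $date\nSummary: $summary\n")

    inputs = {  # in practice gathered from databases, forms, or an LLM-drafted summary
        "patient": "J. Doe",
        "date": "2024-05-01",
        "summary": "Routine follow-up; no changes to the care plan.",
    }
    print(template.substitute(inputs))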