
    Validating archetypes for the Multiple Sclerosis Functional Composite

    Background: Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects have not yet received sufficient attention. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models that complement the formal reference model definitions.

    Methods: A standard archetype development approach was applied to a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected, and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process.

    Results: Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming, mostly due to difficult choices between alternative modelling approaches. The archetype review itself was a straightforward team process aimed at validating the archetypes pragmatically.

    Conclusions: The quality of medical information models is crucial to guarantee standardised semantic representation and thereby improve interoperability. The validation process is a practical way to harmonise models that diverge because of the flexibility deliberately left open by the underlying formal reference model definitions. This case study provides evidence that community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic and feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model.
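    The Multiple Sclerosis Functional Composite referenced in the title combines three tests (Timed 25-Foot Walk, 9-Hole Peg Test, and PASAT). As background for the archetypes described above, the following is a minimal sketch of the commonly published MSFC scoring convention, in which each test is converted to a z-score against a reference population and the leg z-score is sign-inverted because longer walk times indicate worse function. The reference means and standard deviations below are illustrative placeholders, not values taken from the report.

    ```python
    # Illustrative sketch of MSFC composite scoring; reference means/SDs are
    # placeholder values, not taken from the report being summarised.

    def z(value, ref_mean, ref_sd):
        """Standard z-score against a reference population."""
        return (value - ref_mean) / ref_sd

    def msfc_score(t25fw_avg, hpt9_avg, pasat_correct, ref):
        """MSFC = (Z(1/9-HPT) - Z(T25FW) + Z(PASAT)) / 3.

        The T25FW z-score is subtracted because longer times mean worse
        function; the 9-HPT enters as the reciprocal of the averaged time.
        """
        z_arm = z(1.0 / hpt9_avg, ref["inv_9hpt_mean"], ref["inv_9hpt_sd"])
        z_leg = z(t25fw_avg, ref["t25fw_mean"], ref["t25fw_sd"])
        z_cog = z(pasat_correct, ref["pasat_mean"], ref["pasat_sd"])
        return (z_arm - z_leg + z_cog) / 3.0

    # Placeholder reference-population statistics (illustrative only).
    REF = {"inv_9hpt_mean": 0.05, "inv_9hpt_sd": 0.01,
           "t25fw_mean": 5.0, "t25fw_sd": 1.5,
           "pasat_mean": 45.0, "pasat_sd": 10.0}

    print(msfc_score(5.0, 20.0, 45.0, REF))  # -> 0.0 (all at reference mean)
    ```

    A subject who walks faster than the reference mean (e.g. 3.5 s instead of 5.0 s) obtains a positive composite, consistent with the sign inversion of the leg component.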

    Information sharing performance management: a semantic interoperability assessment in the maritime surveillance domain

    Information Sharing (IS) is essential for organizations to obtain information in a cost-effective way. If existing information is not shared among the organizations that hold it, the alternative is to develop the capabilities needed to acquire, store, process and manage it, which leads to duplicated costs, something especially unwanted where governmental organizations are concerned. The European Commission has made IS among public administrations a priority, has launched several IS initiatives, such as the EUCISE2020 project within the roadmap for developing the maritime Common Information Sharing Environment (CISE), and has defined the levels of interoperability essential for IS, which include Semantic Interoperability (SI). An open question is how IS performance can be managed: specifically, how can the as-is and to-be states and targets of IS be defined, and how can organizations' progress be monitored and controlled? In this paper, we propose 11 indicators for assessing SI that contribute to answering these questions. They have been demonstrated and evaluated with data collected through a questionnaire, based on the CISE information model proposed during the CoopP project, which was answered by five public authorities that require maritime surveillance information and are committed to sharing it with each other.
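    The abstract does not enumerate the 11 indicators themselves. As a purely hypothetical illustration of the kind of measure an SI assessment might use, the sketch below computes a "semantic coverage" ratio: the share of an organization's local data elements that have a mapping to a common information model. The element names and `cise:`-style identifiers are invented placeholders, not actual CISE model terms.

    ```python
    # Hypothetical semantic-coverage indicator: the share of an organization's
    # data elements that have a mapping to a common information model.
    # All element and model names below are invented for illustration.

    def semantic_coverage(local_elements, common_model_mappings):
        """Fraction of local data elements mapped to the common model (0..1)."""
        if not local_elements:
            return 0.0
        mapped = sum(1 for e in local_elements if e in common_model_mappings)
        return mapped / len(local_elements)

    local = ["vessel_id", "position", "speed", "cargo_manifest"]
    mappings = {"vessel_id": "cise:Vessel.IMONumber",
                "position": "cise:Location.geometry",
                "speed": "cise:Vessel.speedOverGround"}

    print(semantic_coverage(local, mappings))  # 3 of 4 elements mapped -> 0.75
    ```

    Indicators of this shape can be computed per authority from questionnaire answers and tracked over time to monitor progress from an as-is towards a to-be state.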

    BlogForever D3.2: Interoperability Prospects

    This report evaluates the interoperability prospects of the BlogForever platform. To this end, existing interoperability models are reviewed; a Delphi study is conducted to identify aspects crucial to the interoperability of web archives and digital libraries; technical interoperability standards and protocols are reviewed with regard to their relevance for BlogForever; a simple approach for considering interoperability in specific usage scenarios is proposed; and a tangible approach is presented for developing a succession plan that would allow a reliable transfer of content from the current digital archive to other digital repositories.

    A global approach to digital library evaluation towards quality interoperability

    This paper describes some of the key research work related to my PhD thesis. The goal is the development of a global approach to digital library (DL) evaluation towards quality interoperability. DL evaluation has a vital role to play in building DLs and in understanding and enhancing their role in society. Responding to two parallel research needs, the project is organised around two tracks. Track one covers the theoretical approach and provides an integrated evaluation model that overcomes the fragmentation of quality assessments; track two covers the experimental side, undertaken through a comparative analysis of different DL evaluation methodologies, relating them to the conceptual framework. After presenting the problem definition, current background and related work, this paper enumerates a set of research questions and hypotheses that I would like to address, and outlines the research methodology, focusing on a proposed evaluation framework and on the lessons learned from the case studies.