
    Evaluation of a clinical decision support system for rare diseases: a qualitative study

    Background: Rare Diseases (RDs) are difficult to diagnose. Clinical Decision Support Systems (CDSS) could support the diagnosis of RDs. The Medical Informatics in Research and Medicine (MIRACUM) consortium developed a CDSS for RDs based on distributed clinical data from eight German university hospitals. To support the diagnosis of difficult patient cases, the CDSS uses data from the different hospitals to perform a patient similarity analysis and obtain an indication of a possible diagnosis. To optimize the CDSS, we conducted a qualitative study investigating its usability and functionality. Methods: We performed a Thinking Aloud Test (TA-Test) with RD experts working in Rare Diseases Centers (RDCs) at MIRACUM locations, which are specialized in the diagnosis and treatment of RDs. An instruction sheet listed the tasks the participants were to perform with the CDSS during the study. The TA-Test was recorded on audio and video, and the resulting transcripts were analysed using qualitative content analysis, a rule-guided procedure for analysing text-based data. Furthermore, a questionnaire including the System Usability Scale (SUS) was handed out at the end of the study. Results: A total of eight experts from the eight MIRACUM locations with an established RDC were included in the study. The results indicate that more detailed information about patients, such as descriptive attributes or findings, can help the system perform better. The system was rated positively in terms of functionality, for example functions that give the user an overview of similar patients or of a patient's medical history. However, the results of the CDSS patient similarity analysis lack transparency: the study participants often stated that the system should present an overview of the exact symptoms, diagnoses, and other characteristics that define two patients as similar.
In the usability section, the CDSS received a SUS score of 73.21 points, which is rated as good usability. Conclusions: This qualitative study investigated the usability and functionality of a CDSS for RDs. Despite positive feedback on the functionality of the system, the CDSS still requires some revisions and improved transparency of the patient similarity analysis.
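Several entries in this list report System Usability Scale (SUS) scores (73.21 here; 81.25 and 55.83 below). As background, the standard SUS scoring procedure can be sketched as follows; the function name is illustrative:

```python
def sus_score(responses):
    """Compute a System Usability Scale (SUS) score from the ten
    1-5 Likert responses of one participant.

    Standard SUS scoring: odd-numbered items are positively worded
    and contribute (response - 1); even-numbered items are negatively
    worded and contribute (5 - response). The sum (range 0-40) is
    multiplied by 2.5 to yield a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5
```

Per-participant scores are then averaged across the cohort; a mean around 68 is commonly treated as average usability, which is consistent with 73.21 being described as good and 55.83 as moderate.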

    Development and Evaluation of a Web-Based Paediatric Drug Information System for Germany

    Background: Off-label use is frequent in paediatrics, but that does not necessarily mean that the risk-benefit ratio is negative. Nevertheless, evidence-based data are essential for safe drug therapy. In Germany, there is to date no publicly available compendium providing transparent, evidence-based information for paediatric pharmacotherapy. This work describes the development of a web-based paediatric drug information system (PDIS) for Germany and its evaluation by health care professionals (HCP). Methods: A PDIS has been under development by the authors since 2012 and has been supported by the Federal Ministry of Health since 2016. Dosing recommendations were established based on systematic literature reviews and subsequent evaluation by clinical experts. The prototype was evaluated by HCP, and conclusions for further development were drawn from the results. Results: 92% of HCP believed that the PDIS could improve the quality of prescribing, as currently available information is deficient. Besides the license and formulations, dosing recommendations were the most relevant modules. A dosage calculator was the most requested improvement. To facilitate sustainability of future development, a collaboration with the Dutch Kinderformularium was established. As of 2021, the database will be available to German HCP. Conclusion: The fundamentals for a German PDIS were established, and vital steps were taken towards successful continuation.

    Empowering Researchers to Query Medical Data and Biospecimens by Ensuring Appropriate Usability of a Feasibility Tool: Evaluation Study

    Background: The Aligning Biobanking and Data Integration Centers Efficiently project aims to harmonize technologies and governance structures of German university hospitals and their biobanks to facilitate searching for patient data and biospecimens. The central element will be a feasibility tool with which researchers can query the availability of samples and data to determine the feasibility of a study project. Objectives: The objectives of the study were as follows: an evaluation of the overall usability of the feasibility tool's user interface; the identification of critical usability issues; the comprehensibility and operability of the underlying ontology; and an analysis of user feedback on additional functionalities. From these, recommendations for quality-of-use optimization, focusing on more intuitive usability, were derived. Methods: To achieve the study goal, an exploratory usability test consisting of 2 main parts was conducted. In the first part, the thinking aloud method (test participants express their thoughts aloud throughout their use of the tool) was complemented by a quantitative questionnaire. In the second part, the interview method was combined with supplementary mock-ups to collect users' opinions on possible additional features. Results: The study cohort rated the global usability of the feasibility tool, based on the System Usability Scale, with a good score of 81.25. The tasks assigned posed certain challenges: no participant was able to solve all tasks correctly. A detailed analysis showed that this was mostly because of minor issues, an impression confirmed by the recorded statements, which described the tool as intuitive and user friendly. The feedback also provided useful insights into which critical usability problems occur and need to be addressed promptly. Conclusions: The findings indicate that the prototype of the Aligning Biobanking and Data Integration Centers Efficiently feasibility tool is headed in the right direction. Nevertheless, we see potential for optimization primarily in the display of the search functions, the unambiguous distinguishability of criteria, and the visibility of their associated classification system. Overall, the combination of different tools used to evaluate the feasibility tool provided a comprehensive picture of its usability.

    User Satisfaction Evaluation of the EHR4CR Query Builder: A Multisite Patient Count Cohort System

    The Electronic Health Records for Clinical Research (EHR4CR) project aims to develop services and technology to leverage the reuse of Electronic Health Records with the purpose of improving the efficiency of clinical research processes. A pilot program was implemented to generate evidence of the value of using the EHR4CR platform. Since user acceptance of the platform is a key success factor in driving its adoption, it was decided to evaluate user satisfaction. In this paper, we present the results of a user satisfaction evaluation of the EHR4CR multisite patient count cohort system. This study examined the ability of testers (n=22 and n=16, from 5 countries) to perform three main tasks (around 20 minutes per task) after a 30-minute period of self-training. The System Usability Scale score obtained was 55.83 (SD: 15.37), indicating moderate user satisfaction. The responses to an additional satisfaction questionnaire were positive about the design of the interface and the procedure required to design a query. Nevertheless, the most complex of the three tasks proposed in this test was rated as difficult, indicating a need to improve the system's handling of complicated queries.

    Development of a Standardized Rating Tool for Drug Alerts to Reduce Information Overload

    Background: A well-known problem with current clinical decision support systems (CDSS) is the high number of alerts, which are often medically incorrect or irrelevant. This may lead to so-called alert fatigue, an overriding of alerts, including those that are clinically relevant, and underuse of CDSS in general. Objectives: The aim of our study was to develop and apply a standardized tool that allows its users to evaluate the quality of system-generated drug alerts. The users’ ratings can subsequently be used to derive recommendations for developing a filter function to reduce irrelevant alerts. Methods: We developed a rating tool for drug alerts and performed a web-based evaluation study that also included a user review of alerts. The following categories were evaluated: “data linked correctly”, “medically correct”, “action required”, “medication change”, “critical alert”, “information gained” and “show again”. For this purpose, 20 anonymized clinical cases were randomly selected and displayed in our customized CDSS research prototype, which used the summary of product characteristics (SPC) for alert generation. All generated alerts were evaluated by 13 physicians, and the users’ ratings were used to derive a filtering algorithm to reduce overalerting. Results: In total, our CDSS research prototype generated 399 alerts. In 98% of all alerts, medication data were rated as linked correctly to the drug information; 93% of the alerts were assessed as “medically correct”; 19.5% of all alerts were rated as “show again”. The inter-rater agreement was, on average, 68.4%. After the application of our filtering algorithm, the rate of alerts that should be shown again decreased to 14.8%. Conclusions: The new standardized rating tool supports standardized feedback on the user-perceived clinical relevance of CDSS alerts. Overall, the results indicate that physicians may consider the majority of alerts formally correct but clinically irrelevant and override them. Filtering may help to reduce overalerting and increase the specificity of a CDSS.
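The abstract does not describe the derived filtering algorithm itself. As a minimal sketch of the general idea of rating-based alert suppression, assuming a per-type "show again" vote fraction with an arbitrary threshold (all names, the data layout, and the threshold are assumptions, not the study's method):

```python
def filter_alerts(alerts, show_again_votes, threshold=0.2):
    """Illustrative rating-based alert filter.

    `alerts` is a list of dicts, each with a "type" key.
    `show_again_votes` maps an alert type to a list of booleans,
    one per rater, recording whether that rater marked the alert
    type as "show again". An alert is kept only if the fraction
    of "show again" votes meets the threshold.
    """
    kept = []
    for alert in alerts:
        votes = show_again_votes.get(alert["type"], [])
        if votes and sum(votes) / len(votes) >= threshold:
            kept.append(alert)
    return kept
```

With such a scheme, alert types that nearly all raters considered not worth showing again are suppressed, while broadly endorsed alerts pass through, which mirrors the reported drop of "show again" alerts from 19.5% to 14.8% in spirit only.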