    The experiences of 33 national COVID-19 dashboard teams during the first year of the pandemic in the World Health Organization European Region: A qualitative study

    Background: Governments across the World Health Organization (WHO) European Region have prioritised dashboards for reporting COVID-19 data. The ubiquitous use of dashboards for public reporting is a novel phenomenon. Objective: This study explores the development of COVID-19 dashboards during the first year of the pandemic and identifies common barriers, enablers and lessons from the experiences of teams responsible for their development. Methods: We applied multiple methods to identify and recruit COVID-19 dashboard teams, using a purposive, quota sampling approach. Semi-structured group interviews were conducted from April to June 2021. Using elaborative coding and thematic analysis, we derived descriptive and explanatory themes from the interview data. A validation workshop was held with study participants in June 2021. Results: Eighty informants participated, representing 33 national COVID-19 dashboard teams across the WHO European Region. Most dashboards were launched swiftly during the first months of the pandemic, February to May 2020. Urgency, intense workload, limited human resources, data and privacy constraints, and public scrutiny were common challenges in the initial development stage. Themes related to barriers or enablers pertained to the pre-pandemic context, the pandemic itself, people and processes, and software, data and users. Lessons emerged around the themes of simplicity, trust, partnership, software and data, and change. Conclusions: COVID-19 dashboards were developed through a learning-by-doing approach. The experiences of teams reveal that initial underpreparedness was offset by high-level political endorsement, the professionalism of teams, accelerated data improvements and immediate support with commercial software solutions. To leverage the full potential of dashboards for health data reporting, investments are needed at the team, national and pan-European levels.

    Features Constituting Actionable COVID-19 Dashboards: Descriptive Assessment and Expert Appraisal of 158 Public Web-Based COVID-19 Dashboards

    Background: Since the outbreak of COVID-19, the development of dashboards as dynamic, visual tools for communicating COVID-19 data has surged worldwide. Dashboards can inform decision-making and support behavior change. To do so, they must be actionable. The features that constitute an actionable dashboard in the context of the COVID-19 pandemic have not been rigorously assessed. Objective: The aim of this study is to explore the characteristics of public web-based COVID-19 dashboards by assessing their purpose and users (“why”), content and data (“what”), and analyses and displays (“how” they communicate COVID-19 data), and ultimately to appraise the common features of highly actionable dashboards. Methods: We conducted a descriptive assessment and scoring using nominal group technique with an international panel of experts (n=17) on a global sample of COVID-19 dashboards in July 2020. The sequence of steps included multimethod sampling of dashboards; development and piloting of an assessment tool; data extraction and an initial round of actionability scoring; a workshop based on a preliminary analysis of the results; and reconsideration of actionability scores followed by joint determination of common features of highly actionable dashboards. We used descriptive statistics and thematic analysis to explore the findings by research question. Results: A total of 158 dashboards from 53 countries were assessed. Dashboards were predominantly developed by government authorities (100/158, 63.3%) and were national (93/158, 58.9%) in scope. We found that only 20 of the 158 dashboards (12.7%) stated both their primary purpose and intended audience. Nearly all dashboards reported epidemiological indicators (155/158, 98.1%), followed by health system management indicators (85/158, 53.8%), whereas indicators on social and economic impact and behavioral insights were the least reported (7/158, 4.4% and 2/158, 1.3%, respectively). Approximately a quarter of the dashboards (39/158, 24.7%) did not report their data sources. The dashboards predominantly reported time trends and disaggregated data by two geographic levels and by age and sex. The dashboards used an average of 2.2 types of displays (SD 0.86); these were mostly graphs and maps, followed by tables. To support data interpretation, color-coding was common (93/158, 89.4%), although only one-fifth of the dashboards (31/158, 19.6%) included text explaining the quality and meaning of the data. In total, 20/158 dashboards (12.7%) were appraised as highly actionable, and seven common features were identified among them. Actionable COVID-19 dashboards (1) know their audience and information needs; (2) manage the type, volume, and flow of displayed information; (3) report data sources and methods clearly; (4) link time trends to policy decisions; (5) provide data that are “close to home”; (6) break down the population into relevant subgroups; and (7) use storytelling and visual cues. Conclusions: COVID-19 dashboards are diverse in the why, what, and how by which they communicate insights on the pandemic and support data-driven decision-making. To leverage their full potential, dashboard developers should consider adopting the seven actionability features identified.
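    The percentages above are simple proportions of the 158-dashboard sample. As a quick arithmetic check, the Python sketch below recomputes a few of them from the counts quoted in the abstract; the dictionary labels are shorthand added here for illustration only.

```python
# Recompute selected descriptive percentages from the 158-dashboard sample.
# Counts are taken from the abstract; labels are illustrative shorthand.
TOTAL = 158

counts = {
    "developed by government authorities": 100,
    "national in scope": 93,
    "stated both purpose and audience": 20,
    "reported epidemiological indicators": 155,
    "reported health system management indicators": 85,
    "did not report data sources": 39,
    "appraised as highly actionable": 20,
}

for label, n in counts.items():
    print(f"{label}: {n}/{TOTAL} = {100 * n / TOTAL:.1f}%")
```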

    The contribution of benchmarking to quality improvement in healthcare. A systematic literature review

    BACKGROUND: Benchmarking has been recognised as a valuable method to help identify strengths and weaknesses at all levels of the healthcare system. Despite a growing interest in the practice and study of benchmarking, its contribution to quality of care has not been well elucidated. As such, we conducted a systematic literature review with the aim of synthesizing the evidence regarding the relationship between benchmarking and quality improvement. We also sought to provide evidence on the associated strategies that can be used to further stimulate quality improvement. METHODS: We searched three databases (PubMed, Web of Science and Scopus) for articles studying the impact of benchmarking on quality of care (processes and outcomes). Following assessment of the articles for inclusion, we conducted data analysis, quality assessment and critical synthesis according to the PRISMA guidelines for systematic literature reviews. RESULTS: A total of 17 articles were identified. All studies reported a positive association between the use of benchmarking and quality improvement in terms of processes (N = 10), outcomes (N = 13) or both (N = 7). In the majority of studies (N = 12), at least one intervention, complementary to benchmarking, was undertaken to stimulate quality improvement. The interventions ranged from meetings between participants to quality improvement plans and financial incentives. A combination of multiple interventions was present in over half of the studies (N = 10). CONCLUSIONS: The results generated from this review suggest that the practice of benchmarking in healthcare is a growing field, and more research is needed to better understand its effects on quality improvement. Furthermore, our findings indicate that benchmarking may stimulate quality improvement, and that interventions complementary to benchmarking seem to reinforce this improvement. Although this study points towards the benefit of combining performance measurement with interventions in terms of quality, future research should further analyse the impact of these interventions individually. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12913-022-07467-8.

    Exploring changes to the actionability of COVID-19 dashboards over the course of 2020 in the Canadian context: Descriptive assessment and expert appraisal study

    Background: Public web-based COVID-19 dashboards are in use worldwide to communicate pandemic-related information. Actionability of dashboards, as a predictor of their potential use for data-driven decision-making, was assessed in a global study during the early stages of the pandemic. It revealed a widespread lack of features needed to support actionability. In view of the inherently dynamic nature of dashboards and their unprecedented speed of creation, the evolution of dashboards and changes to their actionability merit exploration. Objective: We aimed to explore how COVID-19 dashboards evolved in the Canadian context during 2020 and whether the presence of actionability features changed over time. Methods: We conducted a descriptive assessment of a pan-Canadian sample of COVID-19 dashboards (N=26), followed by an appraisal of changes to their actionability by a panel of expert scorers (N=8). Scorers assessed the dashboards at two points in time, July and November 2020, using an assessment tool informed by communication theory and health care performance intelligence. Applying the nominal group technique, scorers were grouped in panels of three and evaluated the presence of the seven defined features of highly actionable dashboards at each time point. Results: Improvements had been made to the dashboards over time. These predominantly involved data provision (specificity of geographic breakdowns, range of indicators reported, and explanations of data sources or calculations) and advancements enabled by the technologies employed (customization of time trends and interactive or visual chart elements). Further improvements in actionability were noted especially in features involving local-level data provision, time-trend reporting, and indicator management. No improvements were found in communicative elements (clarity of purpose and audience), while the use of storytelling techniques to narrate trends remained largely absent from the dashboards. Conclusions: Improvements to COVID-19 dashboards in the Canadian context during 2020 were seen mostly in data availability and dashboard technology. Further improving the actionability of dashboards for public reporting will require attention to both technical and organizational aspects of dashboard development. Such efforts would include better skill-mixing across disciplines, continued investment in data standards, and clearer mandates for their developers to ensure accountability and the development of purpose-driven dashboards.
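    The appraisal protocol described above, in which expert panels recorded at two time points whether each of the seven actionability features was present, can be pictured as a simple presence table per dashboard. The Python sketch below is a hypothetical illustration of that tabulation: the feature labels paraphrase the seven features identified in the global assessment, while the dashboard names and True/False values are placeholders, not data from the study.

```python
# Hypothetical tabulation of a two-time-point feature appraisal.
# Feature labels paraphrase the seven actionability features; the dashboard
# entries and True/False values below are placeholders, not study data.
FEATURES = [
    "knows audience and information needs",
    "manages information type, volume, and flow",
    "reports data sources and methods clearly",
    "links time trends to policy decisions",
    "provides data close to home",
    "breaks population into relevant subgroups",
    "uses storytelling and visual cues",
]

# presence[dashboard][feature] = (present in July 2020, present in November 2020)
presence = {
    "dashboard_A": {f: (False, True) for f in FEATURES},   # placeholder scores
    "dashboard_B": {f: (True, True) for f in FEATURES},    # placeholder scores
}

for feature in FEATURES:
    gained = sum(
        1
        for scores in presence.values()
        if not scores[feature][0] and scores[feature][1]
    )
    print(f"{feature}: newly present in {gained} of {len(presence)} dashboards")
```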
