Accounting for quality improvement during the conduct of embedded pragmatic clinical trials within healthcare systems: NIH Collaboratory case studies
Embedded pragmatic clinical trials (ePCTs) and quality improvement (QI) activities often occur simultaneously within healthcare systems (HCSs). Embedded PCTs within HCSs are conducted to test interventions and provide evidence that may impact public health, health system operations, and quality of care. They are larger and more broadly generalizable than QI initiatives, and may generate what is considered high-quality evidence for potential use in care and clinical practice guidelines. QI initiatives often co-occur with ePCTs and address the same high-impact health questions, and this co-occurrence may dilute or confound the ability to detect change as a result of the ePCT intervention. During the design, pilot, and conduct phases of the large-scale NIH Collaboratory Demonstration ePCTs, many QI initiatives occurred at the same time within the HCSs. Although the challenges varied across the projects, some common, generalizable strategies and solutions emerged, and we share these as case studies.
KEY LESSONS: Study teams often need to monitor, adapt, and respond to QI during the design phase and the course of the trial. Routine collaboration between ePCT researchers and health system stakeholders throughout the trial can help ensure research and QI are optimally aligned to support high-quality, patient-centered care.
Pragmatic clinical trials embedded in healthcare systems: generalizable lessons from the NIH Collaboratory
Background: The clinical research enterprise is not producing the evidence decision makers arguably need in a timely and cost-effective manner; research currently involves the use of labor-intensive parallel systems that are separate from clinical care. The emergence of pragmatic clinical trials (PCTs) poses a possible solution: these large-scale trials are embedded within routine clinical care and often involve cluster randomization of hospitals, clinics, primary care providers, etc. Interventions can be implemented by health system personnel through usual communication channels and quality improvement infrastructure, and data can be collected as part of routine clinical care. However, experience with these trials is nascent, and best practices regarding design, operational, analytic, and reporting methodologies are undeveloped. Methods: To strengthen the national capacity to implement cost-effective, large-scale PCTs, the Common Fund of the National Institutes of Health created the Health Care Systems Research Collaboratory (Collaboratory) to support the design, execution, and dissemination of a series of demonstration projects using a pragmatic research design. Results: In this article, we describe the Collaboratory, highlight some of the challenges encountered and solutions developed thus far, and discuss remaining barriers and opportunities for large-scale evidence generation using PCTs. Conclusion: A planning phase is critical, and even with careful planning, new challenges arise during execution; comparisons between arms can be complicated by unanticipated changes. Early and ongoing engagement with both health care system leaders and front-line clinicians is critical for success. There is also marked uncertainty when applying existing ethical and regulatory frameworks to PCTs, and using existing electronic health records for data capture adds complexity.
A matching activity when entering higher education: ongoing guidance for the students or efficiency instrument for the school?
To lower dropout rates and stimulate student success in higher education, the Dutch government implemented a new law requiring every higher education institute to offer a matching activity to applying students. This article evaluates how students and teachers experience this matching activity. Data were collected in a Dutch university of applied sciences through questionnaires (students: n = 1711; teachers: n = 52) and interviews (students: n = 136; teachers: n = 36). The results provide insights into useful and improvable aspects of the matching procedure. They also reveal a tension related to a ‘conflicting perspective’: the matching activity can be used as a selection-oriented instrument or as a guidance instrument, which leads to different perceptions of the effectiveness of the instrument.
Effects of a data-based decision making intervention on student achievement
Data-based decision making (DBDM) is becoming increasingly important for teachers due to growing amounts of digital feedback on student performance. In the quasi-experimental study reported here, teachers, principals, and academic coaches from 42 schools were trained for two years in using the results of half-year interim assessments to provide students with tailor-made instruction. Our results did not show any main effects of this DBDM training trajectory on student achievement but did indicate interaction effects with students’ low prior achievement levels and socioeconomic status. Teachers experienced difficulties in translating student progress data into adaptive instruction in the classroom. Implications of our findings for teacher professionalization are discussed.
Oversight on the borderline: Quality improvement and pragmatic research
Pragmatic research that compares interventions to improve the organization and delivery of health care may overlap, in both goals and methods, with quality improvement (QI) activities. When activities have attributes of both research and QI, confusion often arises about what ethical oversight is, or should be, required. For routine QI, in which the delivery of health care is modified in minor ways that create only minimal risks, oversight by local clinical or administrative leaders utilizing institutional policies may be sufficient. However, additional consideration should be given to activities that go beyond routine, local QI to first determine whether such non-routine activities constitute research or QI and, in either case, to ensure that independent oversight will occur. This should promote rigor, transparency, and protection of patients’ and clinicians’ rights, well-being, and privacy in all such activities. Specifically, we recommend: (1) health care organizations should have systematic policies and processes for designating activities as routine QI, non-routine QI, or QI research, and for determining what oversight each will receive; (2) health care organizations should have formal and explicit oversight processes for non-routine QI activities that may include input from institutional QI experts, health services researchers, administrators, clinicians, patient representatives, and those experienced in the ethics review of health care activities; (3) QI research requires review by an IRB, and for such review to be effective, IRBs should develop particular expertise in assessing QI research; and (4) stakeholders should be included in the review of non-routine QI and QI-related research proposals. Only by doing so will we optimally leverage both pragmatic research on health care delivery and local implementation through QI as complementary activities for improving health.
Addressing guideline and policy changes during pragmatic clinical trials.
While conducting a set of large-scale, multi-site pragmatic clinical trials involving high-impact public health issues such as end-stage renal disease, opioid use, and colorectal cancer, we encountered substantial changes to both policies and guidelines relevant to the trials. These external changes gave rise to unexpected challenges for the trials, including decisions regarding how to respond to new clinical practice guidelines, increased difficulty in implementing trial interventions and achieving separation between treatment groups, and differential responses across sites. In this article, we describe these challenges and the approaches used to address them. When deliberating appropriate action in the face of external changes during a pragmatic clinical trial, we recommend considering the well-being of the participants, clinical equipoise, and the strength and quality of the evidence associated with the change; involving those charged with data and safety monitoring; and, where possible, planning for potential external changes as the trial is being designed. Any solution must balance the primary obligation to protect the well-being of participants with the secondary obligation to protect the integrity of the trial in order to gain meaningful answers to important public health questions.