
    Numbers and narratives: How qualitative methods can strengthen the science of paediatric antimicrobial stewardship

    Antimicrobial and diagnostic stewardship initiatives have become increasingly important in paediatric settings. The value of qualitative approaches to conduct stewardship work in paediatric patients is being increasingly recognized. This article seeks to provide an introduction to basic elements of qualitative study designs and provide an overview of how these methods have successfully been applied to both antimicrobial and diagnostic stewardship work in paediatric patients. A multidisciplinary team of experts in paediatric infectious diseases, paediatric critical care and qualitative methods has written a perspective piece introducing readers to qualitative stewardship work in children, intended as an overview to highlight the importance of such methods and as a starting point for further work. We describe key differences between qualitative and quantitative methods, and the potential benefits of qualitative approaches. We present examples of qualitative research in five discrete topic areas of high relevance for paediatric stewardship work: provider attitudes; provider prescribing behaviours; stewardship in low-resource settings; parents' perspectives on stewardship; and stewardship work focusing on select high-risk patients. Finally, we explore the opportunities for multidisciplinary academic collaboration, incorporation of innovative scientific disciplines and young investigator growth through the use of qualitative research in paediatric stewardship. Qualitative approaches can bring rich insights and critically needed new information to antimicrobial and diagnostic stewardship efforts in children. Such methods are an important tool in the armamentarium against worsening antimicrobial resistance, and a major opportunity for investigators interested in moving the needle forward for stewardship in paediatric patients.

    Analysis of seasonal variation of antibiotic prescribing for respiratory tract diagnoses in primary care practices

    Objective: To determine the appropriateness of antibiotic prescribing for respiratory tract diagnoses (RTD) by season. Design: Retrospective cohort study. Setting: Primary care practices in a university health system. Patients: Patients seen at an office visit with a diagnostic code for RTD. Methods: Office visits for the entire cohort were categorized based on ICD-10 codes by the likelihood that an antibiotic was indicated (tier 1: always indicated; tier 2: sometimes indicated; tier 3: rarely indicated). Medical records were reviewed for 1,200 randomly selected office visits to determine appropriateness. Based on this reference standard, metrics and prescriber characteristics associated with inappropriate antibiotic prescribing were determined. Characteristics of antibiotic prescribing were compared between winter and summer months. Results: A significantly greater proportion of RTD visits had an antibiotic prescribed in winter [20,558/51,090 (40.2%)] compared to summer months [11,728/38,537 (30.4%)] [standardized difference (SD) = 0.21]. A significantly greater proportion of winter than summer visits was associated with tier 2 RTDs (29.4% vs 23.4%, SD = 0.14), but fewer with tier 3 RTDs (68.4% vs 74.4%, SD = 0.13). A greater proportion of visits in winter than in summer months had an antibiotic prescribed for tier 2 RTDs (80.2% vs 74.2%, SD = 0.14) and tier 3 RTDs (22.9% vs 16.2%, SD = 0.17). The proportion of inappropriate antibiotic prescribing was higher in winter than in summer months (72.4% vs 62.0%, P < .01). Conclusions: Increases in antibiotic prescribing for RTD visits from summer to winter were likely driven by shifts in diagnoses as well as increases in prescribing for certain diagnoses. At least some of this increased prescribing was inappropriate.
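The standardized differences quoted in the abstract above are consistent with the conventional pooled-variance definition for comparing two proportions. This is a minimal illustrative sketch, not the authors' code, and it assumes that standard definition:

```python
import math

def standardized_difference(p1: float, p2: float) -> float:
    """Standardized difference between two proportions:
    the raw difference divided by the pooled standard deviation."""
    pooled_sd = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / 2)
    return (p1 - p2) / pooled_sd

# Figures from the abstract: 40.2% of winter RTD visits vs 30.4% of
# summer RTD visits received an antibiotic prescription.
sd = standardized_difference(0.402, 0.304)
print(round(sd, 2))  # 0.21, matching the reported SD
```

The same formula reproduces the other reported values (e.g. 29.4% vs 23.4% gives SD = 0.14), which supports this reading of the abstract's notation.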

    The complexity of simple things: An ethnographic study of the challenge of preventing hospital-acquired infections

    Hospital-acquired infections (HAIs) are one of the most common complications of medical care in the United States (U.S.). They exact a tremendous toll in morbidity, mortality, and costs. Despite increased attention, knowledge of why they occur, the availability of evidence-based, inexpensive practices to prevent their spread, and policies meant to incentivize hospitals to reduce them, HAIs remain common. Many U.S. hospitals do not adopt basic practices to prevent HAIs, and the ones that do adopt these practices have considerable difficulty in getting frontline clinical staff to reliably comply with them. This dissertation investigates why implementation and compliance in HAI prevention are so challenging. The findings are based on 22 months of ethnographic observation at one hospital that undertook an organization-wide initiative to prevent HAIs. I combine observational data with 103 in-depth interviews with a sample of hospital staff that varied by occupational group, location in the organizational hierarchy, and work area. In three empirical chapters, the study illustrates how the social context in which HAI prevention efforts occur shapes the enactment of simple and high-impact infection prevention practices. First, I find that while frontline clinical staff recognize that preventing HAIs is important, they describe this goal as occasionally coming into conflict with other equally important goals involved in providing patient care. Second, I find that the formal criteria for what counts as an HAI are frequently ambiguous and poorly aligned with what frontline clinical staff believe to be truly hospital acquired. Tightly coupling accountability sanctions, like nonpayment for HAIs or public reporting of HAI rates, may, via reactivity mechanisms, have the unintended consequence of making infection prevention more difficult by threatening its credibility as an organizational goal. Third, I find that despite significant resources devoted to improving the hospital's safety culture in order to change healthcare worker behavior to reduce HAIs, this formal improvement program was limited in its ability to transform organizational practice. I argue that this is because it did not address politically uncomfortable, yet crucial, social dynamics such as organizational politics and inequalities in power and authority that contributed to infection risks.

    Expert Perspectives on the Performance of Explosive Detection Canines: Performance Degrading Factors

    The explosive detection canine (EDC) team is currently the best available mobile sensor capability in the fight against explosive threats. While the EDC can perform at a high level, the EDC team faces numerous factors during the search process that may degrade performance. Understanding these factors is key to effective selection, training, assessment, deployment, and operationalizable research. A systematic description of these factors is absent from the literature. This qualitative study leveraged the perspectives of expert EDC handlers, trainers, and leaders (n = 17) to determine the factors that degrade EDC performance. The participants revealed factors specific to utilization, the EDC team, and the physical, climate, operational, and explosive odor environments. Key results were the reality of performance degradation, the impact of the handler, and the importance of preparation. This study's results can help improve EDC selection, training, assessment, and deployment, and can further research into sustaining EDC performance.

    Expert Perspectives on the Performance of Explosive Detection Canines: Operational Requirements

    Explosive detection canines (EDCs) play an important role in protecting people and property. The utilization of and research on EDCs are often based on personal experience or incomplete knowledge. EDC practitioners (handlers, trainers, and leaders) possess the institutional knowledge necessary to understand EDC operational requirements. This study utilized a qualitative approach with semi-structured interviews of EDC experts (n = 17) from across the employment spectrum. The interviews elicited EDC expert perceptions of the performance of the EDC team and the operational requirements in the physical, climate, operational, and explosive odor environments. Analysis of the data revealed commonalities across all EDCs and utilization-specific differences. To be effective, the EDC team must function well on both ends of the leash, and the handler likely has the greatest impact on the EDC's performance. Common requirements include expectations to perform at a high level in a variety of manmade and natural physical environments and under a range of climate conditions. EDCs must work through the visual, olfactory, and auditory challenges of the operational environment and the countermeasure efforts of those utilizing explosive devices. Utilization-specific differences, like patrol or assault training and utilization, add further requirements for some EDCs. The results of this study can be used to inform EDC selection, training, assessment, and deployment, and further research into EDC performance.

    An innovative sequential mixed-methods approach to evaluating clinician acceptability during implementation of a standardized labor induction protocol

    Background: Implementation outcomes, including acceptability, are of critical importance in both implementation research and practice. The gold standard measure of acceptability, the Acceptability of Intervention Measure (AIM), skews positively with a limited range. In an ongoing hybrid effectiveness-implementation trial, we aimed to evaluate clinician acceptability of induction standardization. Here, we describe an innovative mixed-methods approach to maximize the interpretability of the AIM using a case study in maternal health. Methods: In this explanatory sequential mixed methods study, we distributed the validated, 4-question AIM (total 4–20) to labor and delivery clinicians 6 months post-implementation at 2 sites (Site 1: 3/2021; Site 2: 6/2021). Respondents were grouped by total score into tertiles. The top ("High" Acceptability) and bottom ("Low" Acceptability) tertiles were invited to participate in a 30-minute semi-structured qualitative interview from 6/2021 to 10/2021 until thematic saturation was reached in each acceptability group. Participants were purposively sampled by role and site. Interviews were coded using an integrated approach, incorporating a priori attributes (Consolidated Framework for Implementation Research constructs) into a modified content analysis approach. Results: In total, 104 clinicians completed the initial survey; 24 were interviewed (12 "High" and 12 "Low" Acceptability). Median total AIM scores were 20/20 (IQR 20–20) in the High and 12.5/20 (IQR 11–14) in the Low Acceptability groups. In both groups, clinicians were enthusiastic about efforts to standardize labor induction, believing it reduces inter-clinician variability and improves equitable, evidence-based care. In the Low Acceptability group, clinicians stated the need for flexibility and consideration for patient uniqueness. Rarely, clinicians felt labor induction could not or should not be standardized, citing discomfort with medicalization of labor and concerns with "bulldozing" the patient with interventions. Suggested strategies for overcoming negative sentiment included comprehensive clinician education, as well as involving patients as active participants in the protocol prenatally. Conclusions: This study utilized the AIM in an innovative sequential mixed-methods approach to characterize clinician acceptability, which may be generalizable across implementation endeavors. By performing this work during a hybrid trial, implementation strategies to improve acceptability emerged (clinician education focusing on respect for flexibility; involving patients as active participants prenatally) for year 2, which will inform future multi-site work.

    Factors that contribute to disparities in time to acute leukemia diagnosis in young people: an in-depth qualitative interview study.

    Background: Racial and ethnic disparities in outcomes for Black and Hispanic children with acute leukemia have been well documented; however, little is known about the determinants of diagnostic delays in pediatric leukemia in the United States. The primary objective of this study is to identify factors contributing to delays preceding a pediatric leukemia diagnosis. Methods: This qualitative study utilized in-depth semi-structured interviews. Parents and/or patients within two years of receiving a new acute leukemia diagnosis were asked to reflect upon their family's experiences preceding the patient's diagnosis. Subjects were purposively sampled for maximum variation in race, ethnicity, income, and language. Interviews were analyzed using inductive theory-building and the constant comparative method to understand the process of diagnosis. Chart review was conducted to complement qualitative data. Results: Thirty-two interviews were conducted with a diverse population of English- and Spanish-speaking participants from two tertiary care pediatric cancer centers. Parents reported feeling frustrated when their intuition conflicted with providers' management decisions. Many felt laboratory testing was not performed soon enough. Additional contributors to delays included misattribution of vague symptoms to more common diagnoses, difficulties in obtaining appointments, and financial disincentives to seek urgent or emergent care. Reports of difficulty obtaining timely appointments and financial concerns were disproportionately raised among low-income Black and Hispanic participants. Comparatively, parents with prior healthcare experience felt better able to navigate the system and advocate for additional testing at symptom onset. Conclusions: While there are disease-related factors contributing to delays in diagnosis, it is important to recognize that multiple non-disease-related factors also contribute to delays. Evidence-based approaches to reduce outcome disparities in pediatric cancer likely need to start in the primary care setting, where timeliness of diagnosis can be addressed.

    Validation of a modified Berger HIV stigma scale for use among patients with hepatitis C virus (HCV) infection.

    BACKGROUND: Stigma around hepatitis C virus (HCV) infection is an important and understudied barrier to HCV prevention, treatment, and elimination. To date, no validated instrument exists to measure patients' experiences of HCV stigma. This study aimed to revise the Berger (2001) HIV stigma scale and evaluate its psychometric properties among patients with HCV infection. METHODS: The Berger HIV stigma scale was revised to ask about HCV and administered to patients with HCV (n = 270) in Philadelphia, Pennsylvania. Scale reliability was evaluated as internal consistency by calculating Cronbach's alpha. Exploratory factor analysis was performed to evaluate construct validity by comparing item clustering to the Berger HIV stigma scale subscales. Item response theory was employed to further evaluate individual items and to calibrate items for simulated computer adaptive testing sessions in order to identify potential shortened instruments. RESULTS: The revised HCV Stigma Scale was found to have good reliability (α = 0.957). After excluding items for low loadings, the exploratory factor analysis indicated good construct validity, with 85% of items loading on pre-defined factors. Analyses strongly suggested the predominance of an underlying unidimensional factor solution, which yielded a 33-item scale after items were removed for low loading and differential item functioning. Adaptive simulations indicated that the scale could be substantially shortened without detectable information loss. CONCLUSIONS: The 33-item HCV Stigma Scale showed sufficient reliability and construct validity. We also conducted computer adaptive testing simulations and identified shortened six- and three-item scale alternatives that performed comparably to the original 40-item scale.
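The internal-consistency statistic reported above (Cronbach's alpha) has a simple closed form: α = k/(k−1) × (1 − Σ item variances / variance of total score). This is an illustrative sketch with synthetic data, not the study's analysis code; the dataset and parameters are invented for demonstration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 200 respondents, 4 items all tapping one latent trait,
# with modest item-specific noise, so alpha should be high.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
scores = trait + 0.3 * rng.normal(size=(200, 4))
alpha = cronbach_alpha(scores)  # close to 1 for strongly correlated items
```

With weakly correlated items the same function returns a much lower value, which is why scale revisions like the one described above re-check alpha after items are dropped.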