45 research outputs found

    Infection prevention and control compliance in Tanzanian outpatient facilities: a cross-sectional study with implications for the control of COVID-19.

    BACKGROUND: As coronavirus disease 2019 (COVID-19) spreads, weak health systems must not become a vehicle for transmission through poor infection prevention and control practices. We assessed the compliance of health workers with infection prevention and control practices relevant to COVID-19 in outpatient settings in Tanzania, before the pandemic. METHODS: This study was based on a secondary analysis of cross-sectional data collected as part of a randomised controlled trial in private for-profit dispensaries and health centres and in faith-based dispensaries, health centres, and hospitals, in 18 regions. We observed provider-patient interactions in outpatient consultation rooms, laboratories, and dressing rooms, and categorised infection prevention and control practices into four domains: hand hygiene, glove use, disinfection of reusable equipment, and waste management. We calculated compliance as the proportion of indications (infection risks) in which a health worker performed a correct action, and examined associations between compliance and health worker and facility characteristics using multilevel mixed-effects logistic regression models. FINDINGS: Between Feb 7 and April 5, 2018, we visited 228 health facilities, and observed at least one infection prevention and control indication in 220 facilities (118 [54%] dispensaries, 66 [30%] health centres, and 36 [16%] hospitals). 18 710 indications were observed across 734 health workers (49 [7%] medical doctors, 214 [29%] assistant medical officers or clinical officers, 106 [14%] nurses or midwives, 126 [17%] clinical assistants, and 238 [32%] laboratory technicians or assistants). Compliance was 6·9% for hand hygiene (n=8655 indications), 74·8% for glove use (n=4915), 4·8% for disinfection of reusable equipment (n=841), and 43·3% for waste management (n=4299). Facility location was not associated with compliance in any of the infection prevention and control domains. 
Facility level and ownership were also not significantly associated with compliance, except for waste management. For hand hygiene, nurses and midwives (odds ratio 5·80 [95% CI 3·91-8·61]) and nursing and medical assistants (2·65 [1·67-4·20]) significantly outperformed the reference category of assistant medical officers or clinical officers. For glove use, nurses and midwives (10·06 [6·68-15·13]) and nursing and medical assistants (5·93 [4·05-8·71]) also significantly outperformed the reference category. Laboratory technicians performed significantly better in glove use (11·95 [8·98-15·89]), but significantly worse in hand hygiene (0·27 [0·17-0·43]) and waste management (0·25 [0·14-0·44]) than the reference category. Health worker age was negatively associated with correct glove use, and female health workers were more likely to comply with hand hygiene. INTERPRETATION: Health worker infection prevention and control compliance, particularly for hand hygiene and disinfection, was inadequate in these outpatient settings. Improvements in provision of supplies and health worker behaviours are urgently needed in the face of the current pandemic. FUNDING: UK Medical Research Council, Economic and Social Research Council, Department for International Development, Global Challenges Research Fund, Wellcome Trust
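The compliance metric used above (the proportion of observed infection-risk indications in which a correct action was performed) can be sketched in Python. The numerator counts below are hypothetical, chosen only to illustrate the calculation against the denominators and percentages reported in the abstract:

```python
def compliance(correct_actions: int, indications: int) -> float:
    """Compliance as the percentage of observed infection-risk
    indications in which the health worker performed a correct action."""
    if indications == 0:
        raise ValueError("no indications observed")
    return round(100 * correct_actions / indications, 1)

# Hypothetical numerators for illustration only; the study reports
# proportions per domain, not raw counts of correct actions.
print(compliance(597, 8655))   # hand-hygiene-like denominator → 6.9
print(compliance(3676, 4915))  # glove-use-like denominator → 74.8
```

The multilevel mixed-effects logistic regression used for the association analysis then models each indication-level correct/incorrect outcome, with random effects for health workers nested in facilities.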

    How much healthcare is wasted? A cross-sectional study of outpatient overprovision in private-for-profit and faith-based health facilities in Tanzania.

    Overprovision-healthcare whose harm exceeds its benefit-is of increasing concern in low- and middle-income countries, where the growth of the private-for-profit sector may amplify incentives for providing unnecessary care, and achieving universal health coverage will require efficient resource use. Measurement of overprovision has conceptual and practical challenges. We present a framework to conceptualize and measure overprovision, comparing for-profit and not-for-profit private outpatient facilities across 18 of mainland Tanzania's 22 regions. We developed a novel conceptualization of three harms of overprovision: economic (waste of resources), public health (unnecessary use of antimicrobial agents risking development of resistant organisms) and clinical (high risk of harm to individual patients). Standardized patients (SPs) visited 227 health facilities (99 for-profit and 128 not-for-profit) between May 3 and June 12, 2018, completing 909 visits and presenting 4 cases: asthma, non-malarial febrile illness, tuberculosis and upper respiratory tract infection. Tests and treatments prescribed were categorized as necessary or unnecessary, and unnecessary care was classified by type of harm(s). Fifty-three percent of 1995 drugs prescribed and 43% of 891 tests ordered were unnecessary. At the patient-visit level, 81% of SPs received unnecessary care, 67% received care harmful to public health (prescription of unnecessary antibiotics or antimalarials) and 6% received clinically harmful care. Thirteen percent of SPs were prescribed an antibiotic defined by WHO as 'Watch' (high priority for antimicrobial stewardship). Although overprovision was common in all sectors and geographical regions, clinically harmful care was more likely in for-profit than faith-based facilities and less common in urban than rural areas. 
Overprovision was widespread in both for-profit and not-for-profit facilities, suggesting considerable waste in the private sector that is not solely driven by profit. Unnecessary antibiotic or antimalarial prescriptions are of concern for the development of antimicrobial resistance. Options for policymakers to address overprovision include the use of strategic purchasing arrangements, provider training and patient education

    Effect of a multifaceted intervention to improve clinical quality of care through stepwise certification (SafeCare) in health-care facilities in Tanzania: a cluster-randomised controlled trial.

    BACKGROUND: Quality of care is consistently shown to be inadequate in health-care settings in many low-income and middle-income countries, including in private facilities, which are rapidly growing in number but often do not have effective quality stewardship mechanisms. The SafeCare programme aims to address this gap in quality of care, using a standards-based approach adapted to low-resource settings, involving assessments, mentoring, training, and access to loans, to improve clinical quality and facility business performance. We assessed the effect of the SafeCare programme on quality of patient care in faith-based and private for-profit facilities in Tanzania. METHODS: In this cluster-randomised controlled trial, health facilities were eligible if they were dispensaries, health centres, or hospitals in the faith-based or private for-profit sectors in Tanzania. We randomly assigned facilities (1:1) using computer-generated stratified randomisation to receive the full SafeCare package (intervention) or an assessment only (control). Implementing staff and participants were masked to outcome measurement and the primary outcomes were measured by fieldworkers who had no knowledge of the study group allocation. The primary outcomes were health worker compliance with infection prevention and control (IPC) practices as measured by observation of provider-patient interactions, and correct case management of undercover standardised patients at endline (after a minimum of 18 months). Analyses were by modified intention to treat. The trial is registered with ISRCTN, ISRCTN93644888. FINDINGS: Between March 7 and Nov 30, 2016, we enrolled and randomly assigned 237 health facilities to the intervention (n=118) or control (n=119). Nine facilities (seven intervention facilities and two control facilities) closed during the trial and were not included in the analysis. We observed 29 608 IPC indications in 5425 provider-patient interactions between Feb 7 and April 5, 2018. 
Health facilities received visits from 909 standardised patients between May 3 and June 12, 2018. Intervention facilities had a 4·4 percentage point (95% CI 0·9-7·7; p=0·015) higher mean SafeCare standards assessment score at endline than control facilities. However, there was no evidence of a difference in clinical quality between intervention and control groups at endline. Compliance with IPC practices was observed in 8181 (56·9%) of 14 366 indications in intervention facilities and 8336 (54·7%) of 15 242 indications in control facilities (absolute difference 2·2 percentage points, 95% CI -0·2 to 4·7; p=0·071). Correct management occurred in 120 (27·0%) of 444 standardised patients in the intervention group and in 136 (29·2%) of 465 in the control group (absolute difference -2·8 percentage points, 95% CI -8·6 to 3·1; p=0·36). INTERPRETATION: SafeCare did not improve clinical quality as assessed by compliance with IPC practices and correct case management. The absence of effect on clinical quality could reflect a combination of insufficient intervention intensity, insufficient links between structural quality and care processes, scarcity of resources for quality improvement, and inadequate financial and regulatory incentives for improvement. FUNDING: UK Health Systems Research Initiative (Medical Research Council, Economic and Social Research Council, UK Department for International Development, Global Challenges Research Fund, and Wellcome Trust)

    How to do (or not to do) … using the standardized patient method to measure clinical quality of care in LMIC health facilities.

Standardized patients (SPs), i.e. mystery shoppers for healthcare providers, are increasingly used as a tool to measure quality of clinical care, particularly in low- and middle-income countries where medical record abstraction is unlikely to be feasible. The SP method allows care to be observed without the provider's knowledge, removing concerns about the Hawthorne effect, and means that providers can be directly compared against each other. However, their undercover nature means that there are methodological and ethical challenges beyond those found in normal fieldwork. We draw on a systematic review and our own experience of implementing such studies to discuss six key steps in designing and executing SP studies in healthcare facilities, which are more complex than those in retail settings. Researchers must carefully choose the symptoms or conditions the SPs will present in order to minimize potential harm to fieldworkers, reduce the risk of detection and ensure that there is a meaningful measure of clinical care. They must carefully define the types of outcomes to be documented, develop the study scripts and questionnaires, and adopt an appropriate sampling strategy. Particular attention is required to ethical considerations and to assessing detection by providers. Such studies require thorough planning, piloting and training, and a dedicated and engaged field team. With sufficient effort, SP studies can provide uniquely rich data, offering insights into how care is provided that are of great value to both researchers and policymakers

    Bioelectrical impedance phase angle in clinical practice: implications for prognosis in stage IIIB and IV non-small cell lung cancer

<p>Abstract</p> <p>Background</p> <p>A frequent manifestation of advanced lung cancer is malnutrition, timely identification and treatment of which can lead to improved patient outcomes. Bioelectrical impedance analysis (BIA) is an easy-to-use and non-invasive technique to evaluate changes in body composition and nutritional status. We investigated the prognostic role of BIA-derived phase angle in advanced non-small cell lung cancer (NSCLC).</p> <p>Methods</p> <p>We evaluated a case series of 165 stage IIIB and IV NSCLC patients treated at our center. The Kaplan Meier method was used to calculate survival. Cox proportional hazard models were constructed to evaluate the prognostic effect of phase angle, independent of stage at diagnosis and prior treatment history.</p> <p>Results</p> <p>Of these patients, 93 were male and 72 female. 61 had stage IIIB disease at diagnosis, while 104 had stage IV. The median phase angle was 5.3 degrees (range = 2.9 – 8). Patients with phase angle <= 5.3 had a median survival of 7.6 months (95% CI: 4.7 to 9.5; n = 81), while those with > 5.3 had a median survival of 12.4 months (95% CI: 10.5 to 18.7; n = 84) (p = 0.02). After adjusting for age, stage at diagnosis and prior treatment history, we found that every one degree increase in phase angle was associated with a relative risk of 0.79 (95% CI: 0.64 to 0.97, P = 0.02).</p> <p>Conclusion</p> <p>We found BIA-derived phase angle to be an independent prognostic indicator in patients with stage IIIB and IV NSCLC. Nutritional interventions targeted at improving phase angle could potentially lead to improved survival in patients with advanced NSCLC.</p>
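Phase angle is derived from the resistance (R) and capacitive reactance (Xc) measured by BIA, as the arctangent of Xc/R expressed in degrees. A minimal sketch of the standard formula; the example R and Xc values are illustrative, not taken from this study:

```python
import math

def phase_angle(resistance_ohm: float, reactance_ohm: float) -> float:
    """BIA phase angle in degrees: arctan(Xc / R) * 180 / pi."""
    if resistance_ohm <= 0:
        raise ValueError("resistance must be positive")
    return math.degrees(math.atan(reactance_ohm / resistance_ohm))

# Illustrative whole-body measurement: R = 500 ohm, Xc = 50 ohm.
print(round(phase_angle(500, 50), 2))  # → 5.71
```

A higher phase angle reflects relatively greater reactance, generally interpreted as an indicator of cell membrane integrity and body cell mass.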

    Bioelectrical impedance phase angle as a prognostic indicator in breast cancer

    <p>Abstract</p> <p>Background</p> <p>Bioelectrical impedance analysis (BIA) is an easy-to-use, non-invasive, and reproducible technique to evaluate changes in body composition and nutritional status. Phase angle, determined by bioelectrical impedance analysis (BIA), detects changes in tissue electrical properties and has been hypothesized to be a marker of malnutrition. Since malnutrition can be found in patients with breast cancer, we investigated the prognostic role of phase angle in breast cancer.</p> <p>Methods</p> <p>We evaluated a case series of 259 histologically confirmed breast cancer patients treated at Cancer Treatment Centers of America. Kaplan Meier method was used to calculate survival. Cox proportional hazard models were constructed to evaluate the prognostic effect of phase angle independent of stage at diagnosis and prior treatment history. Survival was calculated as the time interval between the date of first patient visit to the hospital and the date of death from any cause or date of last contact/last known to be alive.</p> <p>Results</p> <p>Of 259 patients, 81 were newly diagnosed at our hospital while 178 had received prior treatment elsewhere. 56 had stage I disease at diagnosis, 110 had stage II, 46 had stage III and 34 had stage IV. The median age at diagnosis was 49 years (range 25 – 74 years). The median phase angle score was 5.6 (range = 1.5 – 8.9). Patients with phase angle <= 5.6 had a median survival of 23.1 months (95% CI: 14.2 to 31.9; n = 129), while those > 5.6 had 49.9 months (95% CI: 35.6 to 77.8; n = 130); the difference being statistically significant (p = 0.031). Multivariate Cox modeling, after adjusting for stage at diagnosis and prior treatment history found that every one unit increase in phase angle score was associated with a relative risk of 0.82 (95% CI: 0.68 to 0.99, P = 0.041). 
Stage at diagnosis (p = 0.006) and prior treatment history (p = 0.001) were also predictive of survival independent of each other and of phase angle.</p> <p>Conclusion</p> <p>This study demonstrates that BIA-derived phase angle is an independent prognostic indicator in patients with breast cancer. Nutritional interventions targeted at improving phase angle could potentially lead to improved survival in patients with breast cancer.</p>
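Because the Cox model above treats phase angle as a continuous covariate, the reported per-unit relative risk compounds multiplicatively over larger increases. A small sketch of this interpretation, using the per-unit ratio of 0.82 reported in the abstract:

```python
def risk_ratio_for_increase(per_unit_rr: float, delta_units: float) -> float:
    """Under a Cox model with a continuous covariate, the relative risk
    for an increase of delta units is the per-unit ratio raised to delta."""
    return per_unit_rr ** delta_units

# Each one-unit increase in phase angle ~ RR 0.82 (per the abstract),
# so a two-unit increase corresponds to roughly:
print(round(risk_ratio_for_increase(0.82, 2), 4))  # → 0.6724
```

This exponential scaling follows from the log-linear form of the Cox proportional hazards model; it assumes the linearity of the covariate effect holds across the range considered.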

    RNA-seq Analysis Reveals That an ECF σ Factor, AcsS, Regulates Achromobactin Biosynthesis in Pseudomonas syringae pv. syringae B728a

Iron is an essential micronutrient for Pseudomonas syringae pv. syringae strain B728a and many other microorganisms; therefore, B728a has evolved methods of iron acquisition, including the use of iron-chelating siderophores. In this study, an extracytoplasmic function (ECF) sigma factor, AcsS, encoded within the achromobactin gene cluster, is shown to be a major regulator of genes involved in the biosynthesis and secretion of this siderophore. However, production of achromobactin was not completely abrogated in the deletion mutant, implying that other regulators may be involved, such as PvdS, the sigma factor that regulates pyoverdine biosynthesis. RNA-seq analysis identified 287 genes that are differentially expressed between the AcsS deletion mutant and the wild-type strain. These genes are involved in iron response, secretion, extracellular polysaccharide production, and cell motility. Thus, the transcriptome analysis supports a role for AcsS in the regulation of achromobactin production and the potential activity of both AcsS and achromobactin in the plant-associated lifestyle of strain B728a

    Enhanced Characterization of the Smell of Death by Comprehensive Two-Dimensional Gas Chromatography-Time-of-Flight Mass Spectrometry (GCxGC-TOFMS)

Soon after death, the decay process of mammalian soft tissues begins, releasing cadaveric volatile compounds into the surrounding environment. The study of postmortem decomposition products is an emerging field in forensic science, and a better knowledge of the smell of death and its volatile constituents may have many applications in forensic sciences. Domestic pigs are the most widely used human body analogues in forensic experiments, mainly due to ethical restrictions: decomposition trials on human corpses are restricted in many countries worldwide. This article reports on the use of comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GCxGC-TOFMS) for thanatochemistry applications. A total of 832 VOCs released by a decaying pig carcass in a terrestrial ecosystem (a forest biotope) were identified by GCxGC-TOFMS. These postmortem compounds span many chemical classes, mainly oxygen compounds (alcohols, acids, ketones, aldehydes, esters), sulfur and nitrogen compounds, aromatic compounds such as phenolic molecules, and hydrocarbons. The use of GCxGC-TOFMS instead of conventional GC-MS in the study of postmortem volatile compounds was successful

    Surgical site infection after gastrointestinal surgery in high-income, middle-income, and low-income countries: a prospective, international, multicentre cohort study

Background: Surgical site infection (SSI) is one of the most common infections associated with health care, but its importance as a global health priority is not fully understood. We quantified the burden of SSI after gastrointestinal surgery in countries in all parts of the world. Methods: This international, prospective, multicentre cohort study included consecutive patients undergoing elective or emergency gastrointestinal resection within 2-week time periods at any health-care facility in any country. Countries with participating centres were stratified into high-income, middle-income, and low-income groups according to the UN's Human Development Index (HDI). Data variables from the GlobalSurg 1 study and other studies that have been found to affect the likelihood of SSI were entered into risk adjustment models. The primary outcome measure was the 30-day SSI incidence (defined by US Centers for Disease Control and Prevention criteria for superficial and deep incisional SSI). Relationships with explanatory variables were examined using Bayesian multilevel logistic regression models. This trial is registered with ClinicalTrials.gov, number NCT02662231. Findings: Between Jan 4, 2016, and July 31, 2016, 13 265 records were submitted for analysis. 12 539 patients from 343 hospitals in 66 countries were included. 7339 (58·5%) patients were from high-HDI countries (193 hospitals in 30 countries), 3918 (31·2%) patients were from middle-HDI countries (82 hospitals in 18 countries), and 1282 (10·2%) patients were from low-HDI countries (68 hospitals in 18 countries). In total, 1538 (12·3%) patients had SSI within 30 days of surgery. The incidence of SSI varied between countries with high (691 [9·4%] of 7339 patients), middle (549 [14·0%] of 3918 patients), and low (298 [23·2%] of 1282 patients) HDI (p < 0·001). 
The highest SSI incidence in each HDI group was after dirty surgery (102 [17·8%] of 574 patients in high-HDI countries; 74 [31·4%] of 236 patients in middle-HDI countries; 72 [39·8%] of 181 patients in low-HDI countries). Following risk factor adjustment, patients in low-HDI countries were at greatest risk of SSI (adjusted odds ratio 1·60, 95% credible interval 1·05–2·37; p=0·030). 132 (21·6%) of 610 patients with an SSI and a microbiology culture result had an infection that was resistant to the prophylactic antibiotic used. Resistant infections were detected in 49 (16·6%) of 295 patients in high-HDI countries, in 37 (19·8%) of 187 patients in middle-HDI countries, and in 46 (35·9%) of 128 patients in low-HDI countries (p < 0·001). Interpretation: Countries with a low HDI carry a disproportionately greater burden of SSI than countries with a middle or high HDI and might have higher rates of antibiotic resistance. In view of WHO recommendations on SSI prevention that highlight the absence of high-quality interventional research, urgent, pragmatic, randomised trials based in LMICs are needed to assess measures aiming to reduce this preventable complication
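The per-group incidences above follow directly from the reported counts; a brief sketch using only numbers stated in the abstract:

```python
def incidence_pct(cases: int, patients: int) -> float:
    """30-day SSI incidence as a percentage, one decimal place."""
    return round(100 * cases / patients, 1)

# SSI incidence by Human Development Index (HDI) group,
# from the counts reported in the Findings section.
high_hdi   = incidence_pct(691, 7339)   # → 9.4
middle_hdi = incidence_pct(549, 3918)   # → 14.0
low_hdi    = incidence_pct(298, 1282)   # → 23.2
print(high_hdi, middle_hdi, low_hdi)
```

The adjusted odds ratio of 1·60 for low-HDI countries comes from the study's Bayesian multilevel model, not from these raw proportions, which is why the unadjusted gradient looks steeper than the modelled effect.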