4 research outputs found

    Evaluation of the weekly disease surveillance system in Matabeleland South Province, Zimbabwe, 2018

    Background: The weekly disease surveillance system (WDSS) acts as an early warning of potential threats to public health. In 2018, reporting rates in Matabeleland South Province were below the 100% target, with overall timeliness of 61.7% and completeness of 67.3%. Low reporting rates may imply late detection of outbreaks in the province. This study was conducted to evaluate the WDSS in Matabeleland South Province. Methods: We conducted a descriptive cross-sectional study using the updated Centers for Disease Control guidelines for evaluating public health surveillance systems. Interviewer-administered questionnaires and key informant interviews were used to collect data from health workers. Resource availability was assessed using checklists. Epi Info 7™ was used to generate frequencies, medians and proportions. Results: Fifty health workers were interviewed, 28 (56%) of whom were female. The majority of the health workers, 41 (82%), were nurses. Thirty-two (64%) respondents knew the timelines for submission of data to the next level, whilst only 16 (32%) knew the objectives of the WDSS. Eight (16%) respondents had been trained to operate the WDSS. Forty-two (84%) respondents reported analyzing WDSS data, and willingness to continue participating in the WDSS was indicated by 46 (92%) respondents. Six (85%) health facilities indicated experiencing problems with the District Health Information System. Conclusion: The WDSS was found to be simple, acceptable and flexible. However, it was unstable and untimely. We recommend training of health care workers on Integrated Disease Surveillance and Response in the province.
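    The timeliness and completeness figures above follow standard weekly-reporting indicator definitions (reports received, and reports received on time, as a share of reports expected). A minimal sketch of those computations, assuming the conventional formulas rather than anything stated in the abstract itself, with purely illustrative facility counts:

```python
# Sketch of standard weekly surveillance reporting indicators.
# Assumed formulas (conventional practice, not taken from the study):
#   completeness = reports received / reports expected
#   timeliness   = reports received by the deadline / reports expected

def reporting_rates(expected, received, on_time):
    """Return (completeness, timeliness) as percentages."""
    if expected <= 0:
        raise ValueError("expected report count must be positive")
    completeness = 100.0 * received / expected
    timeliness = 100.0 * on_time / expected
    return completeness, timeliness

# Hypothetical example: 52 weekly reports expected from a facility in a
# year, 35 received, of which 32 arrived by the submission deadline.
completeness, timeliness = reporting_rates(expected=52, received=35, on_time=32)
print(f"completeness={completeness:.1f}%, timeliness={timeliness:.1f}%")
```

    Timeliness can never exceed completeness under these definitions, since an on-time report is necessarily a received report.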

    Implementation of Antibody Rapid Diagnostic Testing versus Real-Time Reverse Transcription-PCR Sample Pooling in the Screening of COVID-19: a Case of Different Testing Strategies in Africa

    COVID-19 has wreaked havoc across the globe. Although cases in Africa remain lower than in other regions, they are gradually on an upward trajectory, and to date COVID-19 cases have been reported in 54 countries. However, due to limited SARS-CoV-2 rRT-PCR testing capacity and scarcity of testing reagents, it is probable that the total number of cases far exceeds published statistics. In this viewpoint, using Ghana, Malawi, South Africa and Zimbabwe as examples of countries that have implemented different testing strategies, we argue that implementing sample pooling for rRT-PCR rather than antibody rapid diagnostic testing could have a greater impact in assessing disease burden. Sample pooling offers huge advantages over single-test rRT-PCR: it lowers experimental costs and personnel time, reduces burnout, and shortens analytical run-times. Africa is already strained in terms of testing resources for COVID-19, hence cheaper alternatives need to be implemented to conserve resources, maximise mass testing and reduce transmission in the wider population.
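    The cost argument for pooling can be made concrete with the classic two-stage (Dorfman) scheme, one common pooling protocol; the viewpoint does not specify which protocol each country used, and independence of infections at a fixed prevalence is assumed here for illustration:

```python
# Sketch of two-stage (Dorfman) sample pooling efficiency.
# Assumption: infections are independent with prevalence p; the
# abstract does not commit to a specific pooling protocol.

def expected_tests_per_sample(p, k):
    """Expected rRT-PCR tests per sample with pools of size k.

    Each pool of k samples gets one test; if the pool is positive,
    all k members are retested individually.
    """
    if not 0 <= p <= 1 or k < 2:
        raise ValueError("need 0 <= p <= 1 and pool size k >= 2")
    p_pool_positive = 1 - (1 - p) ** k
    # 1/k tests per sample for the pooled round, plus one retest per
    # sample whenever the pool comes back positive.
    return 1 / k + p_pool_positive

# At 2% prevalence, pools of 10 need roughly 0.28 tests per sample,
# versus 1.0 for individual testing.
print(expected_tests_per_sample(0.02, 10))
```

    The savings shrink as prevalence rises, since more pools test positive and trigger individual retests, which is one reason pooling suits low-prevalence mass screening.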

    Field-based Evaluation of Malaria Outbreak Detection & Response, Mudzi and Goromonzi

    Objective: To conduct a field-based assessment of the malaria outbreak surveillance system in Mashonaland East, Zimbabwe.
    Introduction: Infectious disease outbreaks, such as the Ebola outbreak in West Africa, highlight the need for surveillance systems to quickly detect outbreaks and provide data to prevent future pandemics.1–3 The World Health Organization (WHO) developed the Joint External Evaluation (JEE) tool to conduct country-level assessments of surveillance capacity.4 However, considering that outbreaks begin and are first detected at the local level, national-level evaluations may fail to identify capacity improvements for outbreak detection. The gaps in local surveillance system processes illuminate a need for investment in on-the-ground surveillance improvements that may be lower cost than traditional surveillance improvement initiatives, such as enhanced training or strengthening data transfer mechanisms before building new laboratory facilities.5 To explore this premise, we developed a methodology for assessing surveillance systems with special attention to the local level and applied it to the malaria outbreak surveillance system in Mashonaland East, Zimbabwe.
    Methods: In a collaboration between the Zimbabwe Field Epidemiology Training Program and the University of Washington, an interview guide was developed based on the Centers for Disease Control and Prevention's (CDC) Updated Guidelines for Surveillance Evaluations and WHO's JEE tool.4,6 The guide was tailored in country with input from key stakeholders from the Ministry of Health and Child Care and the National Malaria Control Program. Interview guides included questions focused on outbreak detection, response, and control procedures, and on surveillance system attributes (preparedness, data quality, timeliness, stability) and functionality (usefulness).
    The team utilized the tool to evaluate surveillance capacity in eleven clinics across Mudzi and Goromonzi, two malaria-burdened districts of Mashonaland East. Twenty-one interviews were conducted with key informants from the provincial (n=2), district (n=7), and clinic (n=12) levels. Main themes present in interviews were captured using standard qualitative data analysis methods.
    Results: The majority of key informants interviewed were nurses, nurse aides, or nurse officers (57%, 12/21). This evaluation identified clinic-level surveillance system barriers that may be driving malaria outbreak detection and response challenges. Clinics reported little opportunity for cross-training of staff, with 81% (17/21) mentioning that additional staff training support was needed. Only one clinic (10%, 1/11) had malaria emergency preparedness and response guidelines present, a resource recommended by the National Malaria Control Program for all clinics encountering malaria cases. A third of interviewees (33%, 7/21) reported having a standard protocol for validating malaria case data, and 29% (6/21) reported challenges with data quality and validation, such as duplication of case counts. While the surveillance system detects malaria outbreaks at all levels, clinics experience barriers to timely and reliable reporting of cases and outbreaks to the district level. Stability of resources, including transportation and staff capacity, presented barriers, with about half (48%, 10/21) of interviewees reporting that their clinics were under-staffed. Additionally, the assessment revealed that the electronic case reporting system (a WHO-developed SMS application, Frontline) used to report malaria cases to the district was not functioning in either district, which was unknown at the provincial and national levels.
    To detect malaria outbreaks, clinics and districts use graphs showing weekly malaria case counts against threshold limit values (TLVs) based on historic five-year malaria case count averages; however, because TLVs are based on five years of historic data, they are only relevant for clinics that have been in existence for at least five years. Only 30% (3/10) of interviewees asked about outbreak detection graphs reported that TLV graphs were up to date.
    Conclusions: This surveillance assessment revealed several barriers to system performance at the clinic level, including challenges with staff cross-training, data quality of malaria case counts, timeliness of updating outbreak detection graphs, stability of transportation, prevention, treatment, and human resources, and usefulness of TLVs for outbreak detection among new clinics. Addressing these barriers may improve staff readiness to detect and respond to malaria outbreaks, resulting in timelier outbreak response and decreased malaria mortality. This evaluation has some limitations. We interviewed key informants from a non-random sample covering 30% of all clinics in Mudzi and Goromonzi districts; thus, the barriers identified may not be representative of all clinics in these districts. Secondly, evaluators did not interview individuals who may have been involved in outbreak detection and response but were not present at the clinic when interviews were conducted. Lastly, many of the evaluation indicators were based on self-reported information from key informants.
    Despite these limitations, convenience sampling is common in public health practice, and we reached saturation of key informant themes with the 21 key informants included in this evaluation.7 By designing evaluation tools that focus on local-level knowledge and priorities, our assessment approach provides a framework for identifying and addressing gaps that may be overlooked when utilizing multi-national tools that evaluate surveillance capacity and improvement priorities at the national level.
    References:
    1. World Health Organization. International Health Regulations, Third Edition. Geneva, Switzerland; 2005. doi:10.1017/CBO9781107415324.004.
    2. Global Health Security Agenda. Implementing the Global Health Security Agenda: Progress and Impact from U.S. Government Investments; 2018. https://www.ghsagenda.org/docs/default-source/default-document-library/global-health-security-agenda-2017-progress-and-impact-from-u-s-investments.pdf?sfvrsn=4.
    3. McNamara LA, Schafer IJ, Nolen LD, et al. Ebola Surveillance — Guinea, Liberia, and Sierra Leone. MMWR Suppl. 2016;65(3):35-43. doi:10.15585/mmwr.su6503a6.
    4. World Health Organization (WHO). Joint External Evaluation Tool: International Health Regulations (2005). Geneva; 2016. http://apps.who.int/iris/bitstream/10665/204368/1/9789241510172_eng.pdf.
    5. Groseclose SL, Buckeridge DL. Public Health Surveillance Systems: Recent Advances in Their Use and Evaluation. Annu Rev Public Health. 2017;38(1):57-79. doi:10.1146/annurev-publhealth-031816-044348.
    6. Centers for Disease Control and Prevention. Updated guidelines for evaluating public health surveillance systems: recommendations from the guidelines working group. MMWR. 2001;50(No. RR-13).
    7. Dworkin SL. Sample size policy for qualitative studies using in-depth interviews. Arch Sex Behav. 2012;41(6):1319-1320. doi:10.1007/s10508-012-0016-6.
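    The TLV check described in this abstract, comparing a clinic's weekly case count against a threshold derived from five years of same-week history, can be sketched as follows. The abstract does not state the exact threshold rule (mean, mean plus two standard deviations, and third quartile are all in common use), so mean plus two standard deviations is assumed here purely for illustration, as are the example counts:

```python
# Sketch of threshold-limit-value (TLV) outbreak detection for weekly
# malaria case counts. Assumed rule: TLV = mean + 2 * SD of the same
# epidemiological week's counts over the previous five years; the
# abstract does not specify which threshold formula is used.
from statistics import mean, stdev

def tlv_alert(current_cases, historical_same_week):
    """Return (alert_flag, tlv) for one epidemiological week."""
    if len(historical_same_week) < 5:
        # Mirrors the abstract's point: TLVs need five years of history,
        # so newer clinics cannot use this method.
        raise ValueError("TLVs require at least five years of history")
    tlv = mean(historical_same_week) + 2 * stdev(historical_same_week)
    return current_cases > tlv, tlv

# Hypothetical clinic: counts for the same week across five prior years.
alert, tlv = tlv_alert(current_cases=48, historical_same_week=[12, 15, 9, 14, 11])
print(alert, round(tlv, 1))
```

    Keeping such graphs current is exactly the timeliness gap the evaluation found: a correct threshold is useless if the weekly counts plotted against it are stale.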