Clinical Performance of an Automated Reader in Interpreting Malaria Rapid Diagnostic Tests in Tanzania.
Parasitological confirmation of malaria is now recommended by the World Health Organization (WHO) in all febrile patients to reduce inappropriate use of anti-malarial drugs. Widespread implementation of rapid diagnostic tests (RDTs) is regarded as an effective strategy to achieve this goal. However, the quality of diagnosis provided by RDTs in remote rural dispensaries and health centres is not ideal, feasible RDT quality control programmes in these settings are challenging, and collection of information on diagnostic events is very deficient in low-resource countries. A prospective cohort of consecutive patients aged more than one year, of both genders, seeking routine care for febrile episodes at dispensaries in the Bagamoyo district of Tanzania was enrolled into the study after signing an informed consent form. Blood samples were taken for thick blood smear (TBS) microscopic examination and a malaria RDT (SD Bioline Malaria Antigen Pf/Pan™ (SD RDT)). RDT results were interpreted both visually and with the DekiReader™ device; results of visual interpretation were used for case management purposes. Microscopy was considered the "gold standard test" to assess the sensitivity and specificity of the DekiReader interpretation and to compare it to visual interpretation. In total, 1,346 febrile subjects were included in the final analysis. Against the gold standard, the SD RDT had sensitivities of 95.3% (95% CI, 90.6-97.7) with the DekiReader and 94.7% (95% CI, 89.8-97.3) on visual interpretation, and specificities of 94.6% (95% CI, 93.5-96.1) and 95.6% (95% CI, 94.2-96.6), respectively. There was a high percentage of overall agreement between the two methods of interpretation. The sensitivity and specificity of the DekiReader in interpreting SD RDTs were comparable to previous reports and showed high agreement with visual interpretation (>98%). The results of the study reflect the situation in real practice and show good performance of the DekiReader in interpreting malaria RDTs in the hands of local laboratory technicians. They also suggest that a system like this could provide great benefits to the health care system. Further studies on ease of use by community health workers and on the cost-benefit of the system are warranted.
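For context, the sensitivity and specificity figures quoted above come from a standard 2x2 comparison of each reading method against microscopy. A minimal Python sketch of that calculation follows, using hypothetical counts (not the study data) and a simple Wald approximation for the 95% CIs rather than whatever exact method the authors used:

```python
# Minimal sketch of deriving sensitivity and specificity against a microscopy
# "gold standard" from a 2x2 table. Counts are hypothetical placeholders, not
# the study data, and a simple Wald approximation is used for the 95% CIs.
from math import sqrt

def prop_ci(x: int, n: int):
    """Proportion x/n with an approximate 95% Wald confidence interval."""
    p = x / n
    se = sqrt(p * (1 - p) / n)
    return p, (p - 1.96 * se, p + 1.96 * se)

# Hypothetical cross-tabulation of reader result vs microscopy
tp, fn = 90, 10    # microscopy-positive cases: reader positive / negative
tn, fp = 880, 20   # microscopy-negative cases: reader negative / positive

sens, sens_ci = prop_ci(tp, tp + fn)
spec, spec_ci = prop_ci(tn, tn + fp)
print(f"sensitivity {sens:.1%} (95% CI {sens_ci[0]:.1%}-{sens_ci[1]:.1%})")
print(f"specificity {spec:.1%} (95% CI {spec_ci[0]:.1%}-{spec_ci[1]:.1%})")
```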
The pain experiences of powered wheelchair users
Purpose: To explore the experience of pain and discomfort in users of electric-powered indoor/outdoor wheelchairs (EPIOCs) provided by a National Health Service. Methods: EPIOC users receiving their chair between February and November 2002 (N = 74) were invited to participate in a telephone questionnaire/interview and 64 (aged 10-81 years) agreed. Both specific and open-ended questions examined the presence of pain/discomfort, its severity, and minimizing and aggravating factors, particularly in relation to the EPIOC and its use. Results: Most EPIOC users described experiences of pain, with 17% reporting severe pain. Over half felt their pain was influenced by the wheelchair and few (25%) considered that their chair eased their symptoms. The most common strategy for pain relief was taking medication. Other self-help strategies included changing position, exercise and complementary therapies. Respondents emphasized the provision of backrests, armrests, footrests and cushions, which might alleviate or exacerbate pain, highlighting the importance of appropriate assessment for this high-dependency group. Conclusions: Users related pain to their underlying medical condition, their wheelchair or a combination of the two. User feedback is essential to ensure that the EPIOC meets health needs with minimal pain. This becomes more important as the health condition of users changes over time.
Persistence of the immune response induced by BCG vaccination.
BACKGROUND: Although BCG vaccination is recommended in most countries of the world, little is known about the persistence of BCG-induced immune responses. As novel TB vaccines may be given to boost the immunity induced by neonatal BCG vaccination, evidence concerning the persistence of the BCG vaccine-induced response would help inform decisions about when such boosting would be most effective. METHODS: A randomised controlled study of UK adolescents was carried out to investigate persistence of BCG immune responses. Adolescents were tested for interferon-gamma (IFN-gamma) response to Mycobacterium tuberculosis purified protein derivative (M.tb PPD) in a whole blood assay before, and at 3 months, 12 months (n = 148) and 3 years (n = 19) after, teenage BCG vaccination, or 14 years after infant BCG vaccination (n = 16). RESULTS: A gradual reduction in magnitude of response was evident from 3 months to 1 year and from 1 year to 3 years following teenage vaccination, but responses 3 years after vaccination were still, on average, 6 times higher among vaccinees than before vaccination. Some individuals (11/86; 13%) failed to make a detectable antigen-specific response three months after vaccination, or lost the response after 1 (11/86; 13%) or 3 (3/19; 16%) years. IFN-gamma response to Ag85 was measured in a subgroup of adolescents and appeared to be better maintained, with no decline from 3 to 12 months. A smaller group of adolescents was tested 14 years after infant BCG vaccination: 13/16 (81%) made a detectable IFN-gamma response to M.tb PPD, compared to 6/16 (38%) matched unvaccinated controls (p = 0.012); teenagers vaccinated in infancy were 19 times more likely to make an IFN-gamma response of >500 pg/ml than unvaccinated teenagers. CONCLUSION: BCG vaccination in infancy and adolescence induces immunological memory to mycobacterial antigens that is still present and measurable for at least 14 years in the majority of vaccinees, although the magnitude of the peripheral blood response wanes from 3 months to 12 months and from 12 months to 3 years post-vaccination. The data presented here suggest that, because of such waning in the response, there may be scope for boosting anti-tuberculous immunity in BCG-vaccinated children at any time from 3 months post-vaccination. This supports the prime-boost strategies being employed for some new TB vaccines currently under development.
Comparing research investment to United Kingdom institutions and published outputs for tuberculosis, HIV and malaria: A systematic analysis across 1997-2013
Background: The "Unfinished Agenda" of infectious diseases is of great importance to policymakers and research funding agencies, which require ongoing research evidence on their effective management. Journal publications help share and disseminate research results to inform policy and practice. We assess research investments to United Kingdom institutions in HIV, tuberculosis and malaria, and analyse these by numbers of publications and citations and by disease and type of science. Methods: Information on infection-related research investments awarded to United Kingdom institutions across 1997-2010 was sourced from funding agencies and individually categorised by disease and type of science. Publications were sourced from the Scopus database via keyword searches and filtered to include only publications relating to human disease and containing a United Kingdom-based first and/or last author. Data were matched by disease and type of science categories. Investment (in UK pounds) and publications were compared to generate an 'investment per publication' metric; similarly, an 'investment per citation' metric was developed as a measure of the usefulness of research. Results: Total research investment for all three diseases was £1.4 billion, and was greatest for HIV (£651.4 million), followed by malaria (£518.7 million) and tuberculosis (£239.1 million). There were 17,271 included publications: 9,322 for HIV, 4,451 for malaria and 3,498 for tuberculosis. HIV publications received the most citations (254,949), followed by malaria (148,559) and tuberculosis (100,244). By UK pounds per publication, tuberculosis (£50,691) appeared the most productive area for investment, compared to HIV (£61,971) and malaria (£94,483). By type of science, public health research was most productive for HIV (£27,296) and tuberculosis (£22,273), while phase I-III trials were most productive for malaria (£60,491). By UK pounds per citation, tuberculosis (£1,797) was the most productive area for investment, compared to HIV (£2,265) and malaria (£2,834). Public health research was the most productive type of science for HIV (£2,265) and tuberculosis (£1,797), whereas phase I-III trials were most productive for malaria (£1,713). Conclusions: When comparing total publications and citations with research investment to United Kingdom institutions, tuberculosis research appears to perform best in terms of efficiency. There were more public health-related publications and citations for HIV and tuberculosis than for other types of science. These findings demonstrate the diversity of research funding and outputs, and provide new evidence to inform research investment strategies for policymakers, funders, academic institutions and healthcare organizations.
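The 'investment per publication' and 'investment per citation' metrics reduce to dividing categorised investment by matched output counts. A minimal sketch with hypothetical figures (the study matched investment and outputs by disease and type of science, so its published figures cannot be reproduced from the headline totals alone):

```python
# Sketch of the 'investment per publication' and 'investment per citation'
# metrics: categorised investment divided by matched publication and citation
# counts. All figures below are hypothetical, not the study's data.
def investment_metrics(investment_gbp: float, publications: int, citations: int):
    return {
        "gbp_per_publication": investment_gbp / publications,
        "gbp_per_citation": investment_gbp / citations,
    }

example = investment_metrics(investment_gbp=100_000_000,
                             publications=2_000,
                             citations=40_000)
print(example)  # {'gbp_per_publication': 50000.0, 'gbp_per_citation': 2500.0}
```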
A systematic review of the impact of stroke on social support and social networks: associated factors and patterns of change
Objective: To identify which factors are associated with functional social support and social networks post stroke, and to explore stroke survivors’ perspectives on what changes occur and how they are perceived.
Data sources: The following electronic databases were systematically searched up to May 2015: Academic Search Complete; CINAHL Plus; E-journals; Health Policy Reference Centre; MEDLINE; PsycARTICLES; PsycINFO; and SocINDEX.
Review methods: PRISMA guidelines were followed in the conduct and reporting of this review. All included studies were critically appraised using the Critical Appraisal Skills Program tools. Meta-ethnographic techniques were used to integrate findings from the qualitative studies. Given the heterogeneous nature of the quantitative studies, data synthesis was narrative.
Results: 70 research reports met the eligibility criteria: 22 qualitative and 48 quantitative reporting on 4,816 stroke survivors. The qualitative studies described a contraction of the social network, with non-kin contact being vulnerable. Although family were more robust network members, significant strain was observed within the family unit. In the quantitative studies, poor functional social support was associated with depression (13/14 studies), reduced quality of life (6/6 studies) and worse physical recovery (2/2 studies). Reduced social network was associated with depression (7/8 studies), severity of disability (2/2 studies) and aphasia (2/2 studies). Although most indicators of social network reduced post stroke (for example, contact with friends, 5/5 studies), the perception of feeling supported remained relatively stable (4/4 studies).
Conclusion: Following a stroke, non-kin contact is vulnerable, strain is observed within the family unit, and poor social support is associated with depressive symptoms.
Options for early breast cancer follow-up in primary and secondary care: a systematic review
Background
Both the incidence of breast cancer and survival have increased in recent years, and there is a need to review follow-up strategies. This study aims to assess the evidence for benefits of follow-up in different settings for women who have had treatment for early breast cancer.
Method
A systematic review was conducted to identify key criteria for follow-up and then to address the research questions. Key criteria were: 1) risk of second breast cancer over time, with incidence compared to the general population; 2) incidence and method of detection of local recurrence and second ipsilateral and contralateral breast cancer; 3) level 1-4 evidence of the benefits of hospital or alternative-setting follow-up for survival and well-being. Data sources used to identify criteria were MEDLINE, EMBASE, AMED, CINAHL, PsycINFO, ZETOC, Health Management Information Consortium and Science Direct. For the systematic review addressing the research questions, searches were performed using MEDLINE (2011). Studies included were population studies using cancer registry data for incidence of new cancers, cohort studies with long-term follow-up for recurrence and detection of new primaries, and RCTs not restricted to special populations for trials of alternative follow-up and lifestyle interventions.
Results
Women who have had breast cancer have an increased risk of a second primary breast cancer for at least 20 years compared to the general population. Local recurrences detected mammographically or by women themselves gave better survival than those detected by clinical examination. Follow-up in settings other than the specialist clinic is acceptable to women, but trials are underpowered for survival.
Conclusions
Long-term support, surveillance mammography and fast access to medical treatment at the point of need may be better than hospital-based surveillance limited to five years, but further large randomised controlled trials are needed.
Utilisation of sexual health services by female sex workers in Nepal
Background
The Nepal Demographic Health Survey (NDHS) in 2006 showed that more than half (56%) of women with sexually transmitted infections (STIs), including HIV, in Nepal sought sexual health services. There are no such data for female sex workers (FSWs), and the limited studies on this group suggest that they do not even use routine health services. This study explores FSWs’ use of sexual health services and the factors associated with their use and non-use of services.
Methods
This study aimed to explore the factors associated with utilisation of sexual health services by FSWs in the Kathmandu Valley of Nepal. It used a mixed-method approach consisting of an interviewer-administered questionnaire-based survey and in-depth interviews.
Results
The questionnaire survey, completed with 425 FSWs, showed that 90% of FSWs self-reported sickness, and 30.8% reported symptoms of STIs. A quarter (25%) of those reporting STIs had never visited any health facility specifically for sexual health services, with respondents preferring to use non-governmental clinics (72%), private clinics (50%), hospitals (27%) and health centres (13%). Multiple regression analysis showed that separated, married and street-based FSWs were more likely to seek health services from clinics or hospitals. In-depth interviews with 15 FSWs revealed that they perceived personal, structural and socio-cultural factors, such as inappropriate clinic opening hours, discrimination, the judgemental attitude of service providers, lack of confidentiality, fear of public exposure, and higher fees for services, as barriers to their access to and utilisation of sexual health services.
Conclusion
FSWs have limited access to information and to health services, and operate under personal, structural and socio-cultural constraints. ‘Education’ to change individual behaviour and health worker and community perceptions, as well as the training of health workers, is necessary.
Educating novice practitioners to detect elder financial abuse: A randomised controlled trial
Background - Health and social care professionals are well positioned to identify and intervene in cases of elder financial abuse. An evidence-based educational intervention was developed to advance practitioners’ decision-making in this domain. The objective was to test the effectiveness of a decision-training educational intervention on novices’ ability to detect elder financial abuse. The research was funded by an ESRC grant (reference RES-189-25-0334).
Methods - A parallel-group, randomised controlled trial was conducted using a judgement analysis approach. Each participant used the World Wide Web to judge case sets at pre-test and post-test. The intervention group was provided with training after pre-test, whereas the control group was simply instructed to continue with the task. 154 pre-registration health and social care practitioners were randomly allocated to intervention (n = 78) or control (n = 76). The intervention comprised written and graphical descriptions of an expert consensus standard explaining how case information should be used to identify elder financial abuse. Participants’ ratings of certainty of abuse occurring (detection) were correlated with the experts’ ratings of the same cases at both stages of testing.
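The judgement-analysis scoring described above amounts to correlating each participant's case ratings with the expert consensus ratings for the same cases. A minimal sketch with hypothetical ratings (the scale and values are assumptions, not the study's data):

```python
# Sketch of the judgement-analysis scoring: each participant's certainty
# ratings are correlated (Pearson's r) with the expert consensus ratings for
# the same cases. Ratings and scale below are hypothetical, not the study's.
from statistics import correlation  # Python 3.10+

expert_ratings = [5, 2, 4, 1, 3, 5, 2]       # expert certainty-of-abuse ratings
participant_ratings = [4, 2, 5, 1, 2, 4, 3]  # one participant's ratings

score = correlation(expert_ratings, participant_ratings)
print(f"agreement with experts: r = {score:.2f}")
```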
Results - At pre-test, no differences were found between the control and intervention groups in rating capacity. Comparison of mean scores for the control and intervention groups from pre-test to immediate post-test showed a statistically significant difference. The intervention had a positive moderate effect: at immediate post-test, the intervention group’s ratings had become more similar to those of the experts, whereas the control group’s capacity did not improve. The results of this study indicate that the decision-training intervention had a positive effect on detection ability.
Conclusions - This freely available, web-based decision-training aid is an effective evidence-based educational resource. Health and social care professionals can use the resource to enhance their ability to detect elder financial abuse. It has been embedded in a web resource at http://www.elderfinancialabuse.co.uk.
Accuracy of Malaria Rapid Diagnostic Tests in Community Studies and their Impact on Treatment of Malaria in an Area with Declining Malaria Burden in North-Eastern Tanzania.
Despite some problems related to the accuracy and applicability of malaria rapid diagnostic tests (RDTs), they are currently the best option in areas with limited laboratory services for improving case management through parasitological diagnosis and reducing over-treatment. This study was conducted in areas with declining malaria burden to assess: 1) the accuracy of RDTs when used in different community settings, 2) the impact of using RDTs on anti-malarial dispensing by community-owned resource persons (CORPs), and 3) adherence of CORPs to treatment guidelines when providing treatment based on RDT results. Data were obtained from: 1) a longitudinal study of passive case detection of fevers using CORPs in six villages in Korogwe; and 2) cross-sectional surveys (CSS) in six villages of Korogwe and Muheza districts, north-eastern Tanzania. Performance of RDTs was compared with microscopy as a gold standard, and factors affecting their accuracy were explored using a multivariate logistic regression model. Overall sensitivity and specificity of RDTs in the longitudinal study (of 23,793 febrile cases, 18,154 with both microscopy and RDT results) were 88.6% and 88.2%, respectively. In the CSS, sensitivity was significantly lower (63.4%; χ2 = 367.7, p < 0.001), while specificity was significantly higher (94.3%; χ2 = 143.1, p < 0.001) compared to the longitudinal study. Among the determinants of sensitivity in both studies, parasite density of <200 asexual parasites/μl was significantly associated with a high risk of false negative RDTs (OR ≥ 16.60, p < 0.001), while the risk of a false negative test was significantly lower among cases with fever (axillary temperature ≥37.5°C) (OR ≤ 0.63, p ≤ 0.027). The risk of a false positive RDT (as a determinant of specificity) was significantly higher in cases with fever compared to afebrile cases (OR ≥ 2.40, p < 0.001). Using RDTs reduced anti-malarial dispensing from 98.9% to 32.1% in cases aged ≥5 years. Although RDTs had low sensitivity and specificity, which varied widely depending on fever and parasite density, their use significantly reduced over-treatment with anti-malarials. Thus, with declining malaria prevalence, RDTs will potentially identify the majority of febrile cases with parasites and lead to improved management of malaria and non-malaria fevers.
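The chi-square comparisons of sensitivity between the longitudinal study and the CSS quoted above can be illustrated with a simple two-proportion test. The counts below are hypothetical placeholders (the abstract gives only percentages), so the statistic will not match the reported values:

```python
# Sketch of a chi-square comparison of RDT sensitivity between two settings,
# as in the comparison above. Counts are hypothetical (the abstract reports
# only percentages), so the statistic will not match the reported values.
from scipy.stats import chi2_contingency

# rows: setting (longitudinal, cross-sectional survey)
# cols: true positives, false negatives among microscopy-positive cases
table = [[886, 114],   # ~88.6% sensitivity on a hypothetical n = 1,000
         [634, 366]]   # ~63.4% sensitivity on a hypothetical n = 1,000
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")
```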
Do acute elevations of serum creatinine in primary care engender an increased mortality risk?
Background: The significant impact of Acute Kidney Injury (AKI) on patient morbidity and mortality emphasizes the need for early recognition and effective treatment. AKI presenting to or occurring during hospitalisation has been widely studied, but little is known about the incidence and outcomes of patients experiencing acute elevations in serum creatinine in the primary care setting who are not subsequently admitted to hospital. The aim of this study was to define this incidence and explore its impact on mortality. Methods: The study cohort was identified using hospital databases over a six-month period. Inclusion criteria: people with a serum creatinine request during the study period, aged 18 or over and not on renal replacement therapy. Patients were stratified by a rise in serum creatinine corresponding to the Acute Kidney Injury Network (AKIN) criteria for comparison purposes. Descriptive and survival data were then analysed. Ethical approval was granted by the National Research Ethics Service (NRES) Committee South East Coast and by the National Information Governance Board. Results: The total study population was 61,432, including 57,300 subjects with ‘no AKI’ (mean age 64). The numbers (mean ages) of acute serum creatinine rises were: ‘AKI 1’ 3,798 (72), ‘AKI 2’ 232 (73) and ‘AKI 3’ 102 (68), which equates to an overall incidence of 14,192 per million population (pmp)/year in adults. Unadjusted 30-day survival was 99.9% in subjects with ‘no AKI’, compared to 98.6%, 90.1% and 82.3% in those with ‘AKI 1’, ‘AKI 2’ and ‘AKI 3’, respectively. After multivariable analysis adjusting for age, gender, baseline kidney function and co-morbidity, the odds ratios for 30-day mortality were 5.3 (95% CI 3.6, 7.7), 36.8 (95% CI 21.6, 62.7) and 123 (95% CI 64.8, 235), respectively, compared to those without acute serum creatinine rises as defined. Conclusions: People who develop acute elevations of serum creatinine in primary care without being admitted to hospital have significantly worse outcomes than those with stable kidney function.
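The AKIN stratification referred to above classifies acute creatinine rises relative to baseline. A minimal sketch of such a classifier follows; the thresholds are the commonly cited AKIN criteria and are an assumption here, since the abstract does not enumerate them:

```python
# Sketch of stratifying an acute serum creatinine rise by AKIN stage, as the
# study's comparison groups were defined. The thresholds below (in micromol/L)
# are the commonly cited AKIN criteria and are an assumption here; the
# abstract itself does not enumerate them.
def akin_stage(baseline_umol_l: float, peak_umol_l: float) -> int:
    """Return 0 (no AKI) or AKIN stage 1-3 for a given baseline and peak creatinine."""
    ratio = peak_umol_l / baseline_umol_l
    rise = peak_umol_l - baseline_umol_l
    if ratio > 3 or (peak_umol_l >= 354 and rise >= 44):
        return 3
    if ratio > 2:
        return 2
    if ratio >= 1.5 or rise >= 26.5:
        return 1
    return 0

print(akin_stage(80, 130))   # 1: roughly a 1.6-fold rise
print(akin_stage(80, 260))   # 3: more than a 3-fold rise
```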
