
    Metabolically healthy and unhealthy obesity: differential effects on myocardial function according to metabolic syndrome, rather than obesity.

    BACKGROUND: The term 'metabolically healthy obese (MHO)' is defined using body mass index (BMI), yet BMI is a poor index of adiposity. Some epidemiological data suggest that MHO carries a lower risk of cardiovascular disease (CVD) or mortality than being normal weight yet metabolically unhealthy. OBJECTIVES: We aimed to undertake detailed phenotyping of individuals with MHO by using imaging techniques to examine ectopic fat (visceral and liver fat deposition) and myocardial function. We hypothesised that metabolically unhealthy individuals (irrespective of BMI) would have adverse levels of ectopic fat and myocardial dysfunction compared with MHO individuals. SUBJECTS: Individuals were categorised as non-obese or obese (BMI ≥30 kg m⁻²) and as metabolically healthy or unhealthy according to the presence or absence of metabolic syndrome. METHODS: Sixty-seven individuals (mean±s.d. age 49±11 years) underwent measurement of (i) visceral, subcutaneous and liver fat using magnetic resonance imaging and proton magnetic resonance spectroscopy, (ii) components of metabolic syndrome, (iii) cardiorespiratory fitness and (iv) indices of systolic and diastolic function using tissue Doppler echocardiography. RESULTS: Cardiorespiratory fitness was similar across all groups; abdominal and visceral fat were highest in the obese groups. Compared with age- and BMI-matched metabolically healthy counterparts, the unhealthy (lean or obese) individuals had higher liver fat and decreased early diastolic strain rate, early diastolic tissue velocity and systolic strain, indicative of subclinical systolic and diastolic dysfunction. The magnitude of dysfunction correlated with the number of components of metabolic syndrome but not with BMI or with the degree of ectopic (visceral or liver) fat deposition. CONCLUSIONS: Myocardial dysfunction appears to be related to poor metabolic health rather than simply BMI or fat mass. These data may partly explain the epidemiological evidence on CVD risk relating to the different obesity phenotypes.
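    The four-group classification described above reduces to two binary rules. A minimal sketch, assuming the commonly used threshold of ≥3 of 5 metabolic syndrome components (the abstract states only presence or absence of the syndrome); the group labels are illustrative:

```python
# Hypothetical phenotype assignment following the two cut-offs in the
# abstract: obesity (BMI >= 30 kg/m^2) and metabolic syndrome, here
# assumed to mean >= 3 of 5 components (an assumption, not stated above).
def classify_phenotype(bmi: float, n_mets_components: int) -> str:
    obese = bmi >= 30.0                     # BMI cut-off from the abstract
    healthy = n_mets_components < 3         # assumed component threshold
    if obese:
        return "MHO" if healthy else "MUO"  # metabolically (un)healthy obese
    return "MHNO" if healthy else "MUNO"    # metabolically (un)healthy non-obese

print(classify_phenotype(bmi=32.5, n_mets_components=1))  # -> MHO
```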

    Risk factors for exacerbations and pneumonia in patients with chronic obstructive pulmonary disease: a pooled analysis.

    BACKGROUND: Patients with chronic obstructive pulmonary disease (COPD) are at risk of exacerbations and pneumonia; how these risk factors interact is unclear. METHODS: This post-hoc, pooled analysis included studies of COPD patients treated with inhaled corticosteroid (ICS)/long-acting β2-agonist (LABA) combinations and comparator arms of ICS, LABA, and/or placebo. Backward elimination via Cox's proportional hazards regression modelling evaluated which combination of risk factors best predicts time to first (a) pneumonia, and (b) moderate/severe COPD exacerbation. RESULTS: Five studies contributed: NCT01009463, NCT01017952, NCT00144911, NCT00115492, and NCT00268216. Low body mass index (BMI), exacerbation history, worsening lung function (Global Initiative for Chronic Obstructive Lung Disease [GOLD] stage), and ICS treatment were identified as factors increasing pneumonia risk. BMI was the only pneumonia risk factor influenced by ICS treatment, with ICS further increasing risk for those with BMI <25 kg/m². The modelled probability of pneumonia varied between 3 and 12% during the first year. Higher exacerbation risk was associated with a history of exacerbations, poorer lung function (GOLD stage), female sex, and absence of ICS treatment. The influence of the other exacerbation risk factors was not modified by ICS treatment. Modelled probabilities of an exacerbation varied between 31 and 82% during the first year. CONCLUSIONS: The probability of an exacerbation was considerably higher than that of pneumonia. ICS reduced exacerbations but did not influence the effect of risks associated with prior exacerbation history, GOLD stage, or female sex. The only identified risk factor for ICS-induced pneumonia was BMI <25 kg/m². Analyses of this type may help the development of COPD risk equations.
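    The backward-elimination procedure described above can be sketched as an iterative refit that drops the least significant covariate until every remaining term passes a retention threshold. A minimal sketch using the lifelines package; the column handling and the 0.05 threshold are assumptions, not the study's actual specification:

```python
# Hypothetical backward elimination over a Cox proportional hazards model.
# Requires: pip install lifelines pandas
import pandas as pd
from lifelines import CoxPHFitter

def backward_eliminate(df: pd.DataFrame, duration_col: str,
                       event_col: str, alpha: float = 0.05):
    covariates = [c for c in df.columns if c not in (duration_col, event_col)]
    while covariates:
        cph = CoxPHFitter()
        cph.fit(df[covariates + [duration_col, event_col]],
                duration_col=duration_col, event_col=event_col)
        pvals = cph.summary["p"]            # Wald p-value per covariate
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:           # every remaining term significant
            return cph, covariates
        covariates.remove(worst)            # drop the weakest term and refit
    return None, []
```

Applied to illustrative columns such as bmi, gold_stage, ics, and exacerbation_history, with time to first pneumonia as the outcome, this reproduces the general shape of the modelling rather than its exact implementation.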

    "Even if the test result is negative, they should be able to tell us what is wrong with us": a qualitative study of patient expectations of rapid diagnostic tests for malaria.

    BACKGROUND: The debate on rapid diagnostic tests (RDTs) for malaria has begun to shift from whether RDTs should be used to how and under what circumstances their use can be optimized. This has increased the need for a better understanding of the complexities surrounding the role of RDTs in appropriate treatment of fever. Studies have focused on clinician practices, but few have sought to understand patient perspectives beyond notions of acceptability. METHODS: This qualitative study aimed to explore patient and caregiver perceptions and experiences of RDTs following a trial to assess the introduction of the tests into routine clinical care at four health facilities in one district in Ghana. Six focus group discussions and one in-depth interview were carried out with those who had received an RDT with a negative test result. RESULTS: Patients had high expectations of RDTs. They welcomed the tests as aiding clinical diagnoses and as tools that could communicate their problem better than they could verbally. However, respondents also believed the tests could identify any cause of illness, beyond malaria. Patients' experiences suggested that RDTs were adopted into an existing system where patients are both physically and intellectually removed from diagnostic processes and where clinicians retain authority that supersedes tests and their results. In this situation, patients did not feel able to articulate a demand for test-driven diagnosis. CONCLUSIONS: Improvements in communication between health workers and patients, particularly to explain the capabilities of the test and the management of RDT-negative cases, may both manage patient expectations and promote patient demand for test-driven diagnoses.

    Diagnostic Testing of Pediatric Fevers: Meta-Analysis of 13 National Surveys Assessing Influences of Malaria Endemicity and Source of Care on Test Uptake for Febrile Children under Five Years.

    In 2010, the World Health Organization revised its guidelines to recommend diagnosis of all suspected malaria cases prior to treatment. There has been no systematic assessment of malaria test uptake for pediatric fevers at the population level as countries start implementing the guidelines. We examined test use for pediatric fevers in relation to malaria endemicity and treatment-seeking behavior in multiple sub-Saharan African countries in the initial years of implementation. We compiled data from national population-based surveys reporting fever prevalence, care-seeking and diagnostic use for children under five years in 13 sub-Saharan African countries in 2009-2011/12 (n = 105,791). Mixed-effects logistic regression models quantified the influence of source of care and malaria endemicity on test use after adjusting for socioeconomic covariates. Results were stratified by malaria endemicity category: low (PfPR₂₋₁₀ <5%), moderate (PfPR₂₋₁₀ 5-40%), and high (PfPR₂₋₁₀ >40%). Among febrile under-fives surveyed, 16.9% (95% CI: 11.8%-21.9%) were tested. Compared to hospitals, febrile children attending non-hospital sources (OR: 0.62, 95% CI: 0.56-0.69) and community health workers (OR: 0.31, 95% CI: 0.23-0.43) were less often tested. Febrile children in high-risk areas had reduced odds of testing compared to low-risk settings (OR: 0.51, 95% CI: 0.42-0.62). Febrile children in the least poor households were more often tested than those in the poorest (OR: 1.63, 95% CI: 1.39-1.91), as were children with better-educated mothers compared to the least educated (OR: 1.33, 95% CI: 1.16-1.54). Diagnostic testing of pediatric fevers was low and inequitable at the outset of the new guidelines. Greater testing is needed at lower-level or less formal sources where pediatric fevers are commonly managed, particularly to reach the poorest. Lower test uptake in high-risk settings merits further investigation given the potential implications for diagnostic scale-up in these areas. Findings could inform continued implementation of the new guidelines to improve access to, and equity in, point-of-care diagnostics use for pediatric fevers.
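    The adjusted odds ratios above are exponentiated logistic-regression coefficients, and the 95% CIs follow from each coefficient's standard error. A minimal sketch of that back-transformation (the beta and SE values are reverse-engineered for illustration, not the study's fitted estimates):

```python
# OR = exp(beta); 95% CI = exp(beta +/- 1.96 * SE(beta)).
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    return (math.exp(beta),              # point estimate
            math.exp(beta - z * se),     # lower 95% bound
            math.exp(beta + z * se))     # upper 95% bound

# Reproducing the shape of the high- vs low-endemicity result
# (OR 0.51, 95% CI 0.42-0.62) with an assumed SE of 0.097:
or_, lo, hi = odds_ratio_ci(beta=math.log(0.51), se=0.097)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR 0.51 (0.42-0.62)
```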

    Ingestion of micronutrient fortified breakfast cereal has no influence on immune function in healthy children: A randomized controlled trial

    BACKGROUND: This study investigated the influence of 2 months' ingestion of an "immune" nutrient-fortified breakfast cereal on immune function and upper respiratory tract infection (URTI) in healthy children during the winter season. METHODS: Subjects included 73 children (N = 42 males, N = 31 females) ranging in age from 7 to 13 years (mean ± SD age, 9.9 ± 1.7 years); 65 completed all phases of the study. Subjects were randomized to one of three groups (low, moderate, or high fortification), with breakfast cereals administered in double-blinded fashion. The "medium" fortified cereal contained B-complex vitamins, vitamins A and C, iron, zinc, and calcium, with the addition of vitamin E and higher amounts of vitamins A and C, and zinc in the "high" group. Immune measures included delayed-type hypersensitivity, global IgG antibody response over four weeks to pneumococcal vaccination, salivary IgA concentration, natural killer cell activity, and granulocyte phagocytosis and oxidative burst activity. Subjects, under parental supervision, filled in a daily log using URTI symptom codes. RESULTS: Subjects ingested 3337 ± 851 g of cereal during the 2-month study, which represented 14% of total dietary energy intake and 20-85% of selected vitamins and minerals. Despite significant increases in nutrient intake, URTI rates and pre- to post-study changes in all immune function measures did not differ between groups. CONCLUSIONS: Data from this study indicate that ingestion of breakfast cereal fortified with a micronutrient blend for two winter months by healthy, growing children does not significantly influence biomarkers of immune function or URTI rates.

    Reduction of anti-malarial consumption after rapid diagnostic tests implementation in Dar es Salaam: a before-after and cluster randomized controlled study

    BACKGROUND: Presumptive treatment of all febrile patients with anti-malarials leads to massive over-treatment. The aim was to assess the effect of implementing malaria rapid diagnostic tests (mRDTs) on prescription of anti-malarials in urban Tanzania. METHODS: The design was a prospective collection of routine statistics from ledger books and cross-sectional surveys before and after the intervention in randomly selected health facilities (HF) in Dar es Salaam, Tanzania. The participants were all clinicians and their patients in these health facilities. The intervention consisted of training and introduction of mRDTs in all three hospitals and in six HF. Three HF without mRDTs were selected as matched controls. Routine mRDT use and treatment based on the result were advised for all patients complaining of fever, including children under five years of age. The main outcome measures were: (1) anti-malarial consumption recorded from routine statistics in ledger books of all HF before and after the intervention; (2) anti-malarial prescription recorded during observed consultations in cross-sectional surveys conducted in all HF before and 18 months after mRDT implementation. RESULTS: Based on routine statistics, the amount of artemether-lumefantrine blisters used post-intervention was reduced by 68% (95%CI 57-80) in intervention and 32% (9-54) in control HF. For quinine vials, there was a reduction of 63% (54-72) in intervention HF and a 2.49-fold increase (1.62-3.35) in control HF. Before-and-after cross-sectional surveys showed a similar decrease, from 75% to 20%, in the proportion of patients receiving anti-malarial treatment (risk ratio 0.23, 95%CI 0.20-0.26). The cluster randomized analysis showed a considerable difference in anti-malarial prescription between intervention HF (22%) and control HF (60%) (risk ratio 0.30, 95%CI 0.14-0.70). Adherence to the test result was excellent, since only 7% of negative patients received an anti-malarial. However, antibiotic prescription increased from 49% before to 72% after the intervention (risk ratio 1.47, 95%CI 1.37-1.59). CONCLUSIONS: Programmatic implementation of mRDTs in a moderately endemic area drastically reduced over-treatment with anti-malarials. Properly trained clinicians with adequate support complied with the recommendation not to treat patients with negative results. Implementation of mRDTs should be integrated hand-in-hand with training on the management of other causes of fever to prevent irrational use of antibiotics.
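    The risk ratios quoted above compare event proportions between two groups, with the 95% CI conventionally built on the log scale. A minimal sketch with hypothetical counts (the published RR of 0.23 reflects the study's own denominators, which the abstract does not give):

```python
# RR = (a/n1) / (b/n2); SE(ln RR) = sqrt(1/a - 1/n1 + 1/b - 1/n2).
import math

def risk_ratio(a: int, n1: int, b: int, n2: int, z: float = 1.96):
    """a of n1 treated post-intervention vs b of n2 pre-intervention."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical counts mirroring the 75% -> 20% drop in prescriptions:
rr, lo, hi = risk_ratio(a=200, n1=1000, b=750, n2=1000)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR 0.27 (0.23-0.30)
```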

    Strict adherence to malaria rapid test results might lead to a neglect of other dangerous diseases: a cost benefit analysis from Burkina Faso

    BACKGROUND: Malaria rapid diagnostic tests (RDTs) have generally been found reliable and cost-effective. In Burkina Faso, the adherence of prescribers to negative test results was found to be poor. Moreover, test accuracy for malaria-attributable fever (MAF) is not the same as for malaria infection. This paper aims at determining the costs and benefits of two competing strategies for the management of MAF: presumptive treatment for all, or use of RDTs. METHODS: A cost-benefit analysis was carried out using a decision tree, based on data previously obtained, including a randomized controlled trial (RCT) recruiting 852 febrile patients during the dry season and 1,317 in the rainy season. Costs and benefits were calculated using both the real adherence found in the RCT and an assumed ideal adherence of 90% to the negative result. The main parameters were submitted to sensitivity analysis. RESULTS AND DISCUSSION: At real adherence, the test-based strategy was dominated. Assuming ideal adherence, at a value of 525 € for a death averted, the total cost of managing 1,000 febrile children was 1,747 vs. 1,862 € in the dry season and 1,372 vs. 2,138 € in the rainy season for the presumptive vs. the test-based strategy. For adults, it was 2,728 vs. 1,983 € and 2,604 vs. 2,225 €, respectively. Under the subsidized policy adopted locally, assuming ideal adherence, the RDT would be the winning strategy for adults in both seasons and for children in the dry season. In the sensitivity analysis, the factors most influencing the choice of the better strategy were the value assigned to a death averted and the proportion of potentially severe non-malaria febrile illnesses (NMFI) treated with antibiotics in patients with false-positive RDT results. The test-based strategy appears advantageous for adults if satisfactory adherence can be achieved. For children, the presumptive strategy remains the best choice across a wide range of scenarios. CONCLUSIONS: For RDTs to be preferred, a positive result should not influence the decision to treat a potentially severe NMFI with antibiotics. In the rainy season, the presumptive strategy always remains the better choice for children.
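    A decision-tree comparison of this kind reduces, at its simplest, to the expected cost per patient of each strategy as a function of malaria prevalence, test cost, treatment cost, and prescriber adherence. A toy sketch; every parameter value below is a placeholder, not one of the study's Burkina Faso inputs:

```python
# Toy expected test + drug cost per febrile patient for the two strategies.
def test_based_cost(p_malaria: float, adherence: float,
                    c_test: float, c_act: float) -> float:
    # All patients are tested; positives are treated, and negatives are
    # treated only when the prescriber ignores the result (1 - adherence).
    treated = p_malaria + (1 - p_malaria) * (1 - adherence)
    return c_test + treated * c_act

C_TEST, C_ACT = 0.6, 1.2                   # placeholder unit costs (EUR)
presumptive = C_ACT                        # everyone treated, no test
test_based = test_based_cost(p_malaria=0.3, adherence=0.9,
                             c_test=C_TEST, c_act=C_ACT)
print(f"presumptive {presumptive:.2f} EUR vs test-based {test_based:.2f} EUR")
```

The full analysis additionally values deaths averted and the downstream management of non-malaria febrile illness, which is why adherence and the antibiotic-treatment proportion dominate the sensitivity analysis.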