Getting science to the citizen : 'food addiction' at the British Science Festival as a case study of interactive public engagement with high profile scientific controversy
Serial interferon-gamma release assays during treatment of active tuberculosis in young adults
Background: The role of the interferon-γ release assay (IGRA) in monitoring responses to anti-tuberculosis (TB) treatment is not clear. We evaluated the results of the QuantiFERON-TB Gold In-Tube (QFT-GIT) assay over time during the anti-TB treatment of adults with no underlying disease.
Methods: We enrolled soldiers who were newly diagnosed with active TB and admitted to the central referral military hospital in South Korea between May 1, 2008 and September 30, 2009. For each participant, we performed the QFT-GIT assay before treatment (baseline) and at 1, 3, and 6 months after initiating anti-TB medication.
Results: Of 67 eligible patients, 59 (88.1%) completed the study protocol. All participants were males who were human immunodeficiency virus (HIV)-negative and had no chronic diseases. Their median age was 21 years (range, 20-48). Initially, 57 (96.6%) patients had positive QFT-GIT results, and 53 (89.8%), 42 (71.2%), and 39 (66.1%) had positive QFT-GIT results at 1, 3, and 6 months, respectively. The IFN-γ level at baseline was 5.31 ± 5.34 IU/ml, and the levels at 1, 3, and 6 months were 3.95 ± 4.30, 1.82 ± 2.14, and 1.50 ± 2.12 IU/ml, respectively. All patients had clinical and radiologic improvements after treatment and were cured. A lower IFN-γ level, C-reactive protein ≥ 3 mg/dl, and the presence of fever (≥ 38.3°C) at diagnosis were associated with negative reversion of the QFT-GIT assay.
Conclusion: Although the IFN-γ level measured by the QFT-GIT assay decreased after successful anti-TB treatment in most participants, less than half of them exhibited QFT-GIT reversion. Thus, reversion to negativity of the QFT-GIT assay may not be a good surrogate for treatment response in otherwise healthy young patients with TB.
Can adverse maternal and perinatal outcomes be predicted when blood pressure becomes elevated? Secondary analyses from the CHIPS (Control of Hypertension In Pregnancy Study) randomized controlled trial.
INTRODUCTION: For women with chronic or gestational hypertension in CHIPS (Control of Hypertension In Pregnancy Study, NCT01192412), we aimed to examine whether clinical predictors collected at randomization could predict adverse outcomes. MATERIAL AND METHODS: This was a planned, secondary analysis of data from the 987 women in the CHIPS Trial. Logistic regression was used to examine the impact of 19 candidate predictors on the probability of adverse perinatal (pregnancy loss or high-level neonatal care for >48 h, or birthweight <10th percentile) or maternal outcomes (severe hypertension, preeclampsia, or delivery at <34 or <37 weeks). A model containing all candidate predictors was used to start the stepwise regression process based on goodness of fit as measured by the Akaike information criterion. For face validity, these variables were forced into the model: treatment group ("less tight" or "tight" control), antihypertensive type at randomization, and blood pressure within 1 week before randomization. Continuous variables were represented continuously or dichotomized based on the smaller p-value in univariate analyses. An area under the receiver operating characteristic curve (AUC ROC) of ≥0.70 was taken to reflect a potentially useful model. RESULTS: Point estimates for AUC ROC were <0.70 for all outcomes but severe hypertension (0.70, 95% CI 0.67-0.74) and delivery at <34 weeks (0.71, 95% CI 0.66-0.75). Therefore, no model warranted further assessment of performance. CONCLUSIONS: CHIPS data suggest that when women with chronic hypertension develop an elevated blood pressure in pregnancy, or formerly normotensive women develop new gestational hypertension, maternal and current pregnancy clinical characteristics cannot predict adverse outcomes in the index pregnancy.
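The "potentially useful" criterion above is a simple screen on the AUC ROC. As an illustrative sketch only (not the study's code, and on toy data rather than CHIPS data), a rank-based AUC estimate and the ≥0.70 screen could be computed like this:

```python
def roc_auc(y_true, scores):
    """Rank-based (Mann-Whitney) estimate of the area under the ROC curve:
    the probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case (ties count half)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def potentially_useful(auc, threshold=0.70):
    # The abstract's rule: an AUC ROC >= 0.70 flags a model worth
    # assessing further; anything below is not pursued.
    return auc >= threshold

# toy example: higher score -> predicted adverse outcome
y = [0, 0, 0, 1, 1, 1]
s = [0.1, 0.4, 0.35, 0.8, 0.7, 0.9]
auc = roc_auc(y, s)
print(auc, potentially_useful(auc))  # perfect separation here -> 1.0 True
```

Under this rule, a model with AUC 0.69 would be discarded even if close to the cutoff, which is why only the severe-hypertension and <34-week-delivery models cleared the bar.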
The Cost Implications of Less Tight Versus Tight Control of Hypertension in Pregnancy (CHIPS Trial).
The CHIPS randomized controlled trial (Control of Hypertension in Pregnancy Study) found no difference in the primary perinatal or secondary maternal outcomes between planned "less tight" (target diastolic 100 mm Hg) and "tight" (target diastolic 85 mm Hg) blood pressure management strategies among women with chronic or gestational hypertension. This study examined which of these management strategies is more or less costly from a third-party payer perspective. A total of 981 women with singleton pregnancies and nonsevere, nonproteinuric chronic or gestational hypertension were randomized at 14 to 33 weeks to less tight or tight control. Resources used were collected from 94 centers in 15 countries and costed as if the trial took place in each of 3 Canadian provinces as a cost-sensitivity analysis. Eleven hospital ward and 24 health service costs were obtained from a similar trial and provincial government health insurance schedules of medical benefits. The mean total cost per woman-infant dyad was higher in less tight versus tight control, but the difference in mean total cost (DM) was not statistically significant in any province, including Ontario (P=0.0725) and Alberta (P=0.0637). Tight control may benefit women without increasing risk to neonates (as shown in the main CHIPS trial), without additional (and possibly lower) cost to the healthcare system. CLINICAL TRIAL REGISTRATION: URL: http://www.clinicaltrials.gov. Unique identifier: NCT01192412
Genome-Wide Transcriptomic Analysis of Intestinal Tissue to Assess the Impact of Nutrition and a Secondary Nematode Challenge in Lactating Rats
Gastrointestinal nematode infection is a major challenge to the health and welfare of mammals. Although mammals eventually acquire immunity to nematodes, this breaks down around parturition, which renders periparturient mammals susceptible to re-infection and an infection source for their offspring. Nutrient supplementation reduces the extent of periparturient parasitism, but the underlying mechanisms remain unclear. Here, we use a genome-wide approach to assess the effects of protein supplementation on gene expression in the small intestine of periparturient rats following nematode re-infection. The use of a rat whole genome expression microarray (Affymetrix Gene 1.0ST) showed significant differential regulation of 91 genes in the small intestine of lactating rats re-infected with Nippostrongylus brasiliensis compared to controls; affected functions included immune cell trafficking, cell-mediated responses and antigen presentation. Genes with a previously described role in immune response to nematodes, such as mast cell proteases and intelectin, and others newly associated with nematode expulsion, such as anterior gradient homolog 2, were identified. Protein supplementation resulted in significant differential regulation of 64 genes; affected functions included protein synthesis, cellular function and maintenance. It increased cell metabolism, evident from the high number of non-coding RNAs and the increased synthesis of ribosomal proteins. It regulated immune responses through T-cell activation and proliferation. The up-regulation of the transcription factor forkhead box P1 in unsupplemented, parasitised hosts may be indicative of a delayed immune response in these animals. This study provides the first evidence for nutritional regulation of genes related to immunity to nematodes at the site of parasitism, during expulsion. Additionally, it reveals genes induced following secondary parasite challenge in lactating mammals not previously associated with parasite expulsion. This work is a first step towards defining disease predisposition, identifying markers for nutritional imbalance and developing sustainable measures for parasite control in domestic mammals
Integrated HIV Testing, Malaria, and Diarrhea Prevention Campaign in Kenya: Modeled Health Impact and Cost-Effectiveness
Efficiently delivered interventions to reduce HIV, malaria, and diarrhea are essential to accelerating global health efforts. A 2008 community integrated prevention campaign in Western Province, Kenya, reached 47,000 individuals over 7 days, providing HIV testing and counseling, water filters, insecticide-treated bed nets, condoms, and, for HIV-infected individuals, cotrimoxazole prophylaxis and referral for ongoing care. We modeled the potential cost-effectiveness of a scaled-up integrated prevention campaign. We estimated averted deaths and disability-adjusted life years (DALYs) based on published data on baseline mortality and morbidity and on the protective effect of interventions, including antiretroviral therapy. We incorporated a previously estimated scaled-up campaign cost. We used published costs of medical care to estimate savings from averted illness (for all three diseases) and the added costs of initiating treatment earlier in the course of HIV disease. Per 1000 participants, projected reductions in cases of diarrhea, malaria, and HIV infection avert an estimated 16.3 deaths and 359 DALYs. A mass, rapidly implemented campaign for HIV testing, safe water, and malaria control appears economically attractive
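The core cost-effectiveness arithmetic described above (campaign cost, minus savings from averted illness, divided by DALYs averted) can be sketched as follows. The 359 DALYs averted per 1,000 participants is taken from the abstract; the cost figures below are illustrative placeholders, not the study's numbers:

```python
# Sketch of a cost-per-DALY-averted calculation, per 1,000 participants.
# Only dalys_averted comes from the abstract; the two cost inputs are
# hypothetical values chosen to illustrate the arithmetic.
campaign_cost = 50_000.0       # hypothetical campaign cost (USD)
averted_care_costs = 30_000.0  # hypothetical medical costs saved (USD)
dalys_averted = 359.0          # from the abstract, per 1,000 participants

net_cost = campaign_cost - averted_care_costs
cost_per_daly = net_cost / dalys_averted
print(f"net cost per DALY averted: ${cost_per_daly:.2f}")
```

When averted-care savings approach or exceed the campaign cost, the net cost per DALY falls toward zero or becomes negative (cost-saving), which is the sense in which the abstract calls the campaign economically attractive.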
Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy
Background
A reliable system for grading operative difficulty of laparoscopic cholecystectomy would standardise description of findings and reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets.
Methods
Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and a single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall's tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis.
Results
A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6 to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was found to be most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was found to be a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001).
Conclusion
We have shown that an operative difficulty scale can standardise the description of operative findings by multiple grades of surgeons to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty
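The Methods correlate outcomes with the ordinal difficulty grade using Kendall's tau. As a minimal sketch of that kind of association measure (using the simple tau-a variant on hypothetical data, not the CholeS data or the study's exact statistic), one could write:

```python
from itertools import combinations

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs.
    Positive values mean y tends to rise with x (e.g. worse outcomes
    at higher difficulty grades); tied pairs count as neither."""
    conc = disc = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    n = len(x)
    return (conc - disc) / (n * (n - 1) / 2)

# hypothetical data: Nassar grade (1-5) vs. conversion to open (0/1)
grades    = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
converted = [0, 0, 0, 0, 0, 1, 0, 1, 1, 1]
print(round(kendall_tau_a(grades, converted), 3))  # → 0.444
```

A published analysis with many ties would typically use a tie-corrected variant (tau-b) and an associated p-value, but the concordant-versus-discordant pair count above is the idea underlying the reported correlations.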
The identification of QTL controlling ergot sclerotia size in hexaploid wheat implicates a role for the Rht dwarfing alleles
The fungal pathogen Claviceps purpurea infects ovaries of a broad range of temperate grasses and cereals, including hexaploid wheat, causing a disease commonly known as ergot. Sclerotia produced in place of seed carry a cocktail of harmful alkaloid compounds that result in a range of symptoms in humans and animals, causing ergotism. Following a field assessment of C. purpurea infection in winter wheat, two varieties ‘Robigus’ and ‘Solstice’ were selected which consistently produced the largest differential effect on ergot sclerotia weights. They were crossed to produce a doubled haploid mapping population, and a marker map, consisting of 714 genetic loci and a total length of 2895 cM was produced. Four ergot reducing QTL were identified using both sclerotia weight and size as phenotypic parameters; QCp.niab.2A and QCp.niab.4B being detected in the wheat variety ‘Robigus’, and QCp.niab.6A and QCp.niab.4D in the variety ‘Solstice’. The ergot resistance QTL QCp.niab.4B and QCp.niab.4D peaks mapped to the same markers as the known reduced height (Rht) loci on chromosomes 4B and 4D, Rht-B1 and Rht-D1, respectively. In both cases, the reduction in sclerotia weight and size was associated with the semi-dwarfing alleles, Rht-B1b from ‘Robigus’ and Rht-D1b from ‘Solstice’. Two-dimensional, two-QTL scans identified significant additive interactions between QTL QCp.niab.4B and QCp.niab.4D, and between QCp.niab.2A and QCp.niab.4B when looking at sclerotia size, but not between QCp.niab.2A and QCp.niab.4D. The two plant height QTL, QPh.niab.4B and QPh.niab.4D, which mapped to the same locations as QCp.niab.4B and QCp.niab.4D, also displayed significant genetic interactions