
    Can adverse maternal and perinatal outcomes be predicted when blood pressure becomes elevated? Secondary analyses from the CHIPS (Control of Hypertension In Pregnancy Study) randomized controlled trial.

    INTRODUCTION: For women with chronic or gestational hypertension in CHIPS (Control of Hypertension In Pregnancy Study, NCT01192412), we aimed to examine whether clinical predictors collected at randomization could predict adverse outcomes. MATERIAL AND METHODS: This was a planned, secondary analysis of data from the 987 women in the CHIPS Trial. Logistic regression was used to examine the impact of 19 candidate predictors on the probability of adverse perinatal (pregnancy loss or high-level neonatal care for >48 h, or birthweight <10th percentile) or maternal outcomes (severe hypertension, preeclampsia, or delivery at <34 or <37 weeks). A model containing all candidate predictors was used to start the stepwise regression process based on goodness of fit as measured by the Akaike information criterion (AIC). For face validity, these variables were forced into the model: treatment group ("less tight" or "tight" control), antihypertensive type at randomization, and blood pressure within 1 week before randomization. Continuous variables were represented continuously or dichotomized based on the smaller p-value in univariate analyses. An area under the receiver operating characteristic curve (AUC ROC) of ≥0.70 was taken to reflect a potentially useful model. RESULTS: Point estimates for AUC ROC were <0.70 for all outcomes but severe hypertension (0.70, 95% CI 0.67-0.74) and delivery at <34 weeks (0.71, 95% CI 0.66-0.75). Therefore, no model warranted further assessment of performance. CONCLUSIONS: CHIPS data suggest that when women with chronic hypertension develop an elevated blood pressure in pregnancy, or formerly normotensive women develop new gestational hypertension, maternal and current-pregnancy clinical characteristics cannot predict adverse outcomes in the index pregnancy.
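
    The modeling approach described above (AIC-guided stepwise logistic regression with forced-in covariates, judged by AUC ROC) can be sketched as follows. This is an illustrative reconstruction, not the trial's analysis code; the data frame, outcome column, and predictor names are assumptions, and predictors are taken to be already numeric or dummy-coded.

```python
# Illustrative sketch, not the CHIPS analysis code: backward stepwise logistic
# regression guided by AIC, with forced-in covariates, evaluated by AUC ROC.
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Assumed column names for the three forced-in variables.
FORCED = ["treatment_group", "antihypertensive_type", "bp_week_before_rand"]

def fit(df, cols, outcome):
    X = sm.add_constant(df[cols])  # predictors assumed numeric/dummy-coded
    return sm.Logit(df[outcome], X).fit(disp=0)

def backward_stepwise(df, candidates, outcome="adverse_outcome"):
    selected = list(candidates)
    best = fit(df, FORCED + selected, outcome)
    improved = True
    while improved and selected:
        improved = False
        for col in selected:
            trial = [c for c in selected if c != col]
            model = fit(df, FORCED + trial, outcome)
            if model.aic < best.aic:  # dropping this predictor improves AIC
                best, selected, improved = model, trial, True
                break  # rescan with the reduced predictor set
    return best, selected

# model, kept = backward_stepwise(df, CANDIDATES)
# auc = roc_auc_score(df["adverse_outcome"], model.predict())
# The trial treated AUC ROC >= 0.70 as a potentially useful model.
```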

    Serial interferon-gamma release assays during treatment of active tuberculosis in young adults

    Background: The role of the interferon-γ release assay (IGRA) in monitoring responses to anti-tuberculosis (TB) treatment is not clear. We evaluated the results of the QuantiFERON-TB Gold In-Tube (QFT-GIT) assay over time during the anti-TB treatment of adults with no underlying disease. Methods: We enrolled soldiers who were newly diagnosed with active TB and admitted to the central referral military hospital in South Korea between May 1, 2008 and September 30, 2009. For each participant, we performed the QFT-GIT assay before treatment (baseline) and at 1, 3, and 6 months after initiating anti-TB medication. Results: Of 67 eligible patients, 59 (88.1%) completed the study protocol. All participants were males who were human immunodeficiency virus (HIV)-negative and had no chronic diseases. Their median age was 21 years (range, 20-48). Initially, 57 (96.6%) patients had positive QFT-GIT results, and 53 (89.8%), 42 (71.2%), and 39 (66.1%) had positive QFT-GIT results at 1, 3, and 6 months, respectively. The IFN-γ level at baseline was 5.31 ± 5.34 IU/ml, and the levels at 1, 3, and 6 months were 3.95 ± 4.30, 1.82 ± 2.14, and 1.50 ± 2.12 IU/ml, respectively. All patients had clinical and radiologic improvements after treatment and were cured. A lower IFN-γ level, C-reactive protein ≥ 3 mg/dl, and the presence of fever (≥ 38.3°C) at diagnosis were associated with reversion of the QFT-GIT assay to negative. Conclusion: Although the IFN-γ level measured by the QFT-GIT assay decreased after successful anti-TB treatment in most participants, less than half of them exhibited QFT-GIT reversion. Thus, reversion to negativity of the QFT-GIT assay may not be a good surrogate for treatment response in otherwise healthy young patients with TB.
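
    As a rough sketch of how the reversion associations reported above might be tested, the following uses Fisher's exact test for a dichotomised predictor (CRP ≥ 3 mg/dl) and a Mann-Whitney U test for baseline IFN-γ level. This is not the study's code; the per-patient array layout is assumed.

```python
# Rough sketch, assumed data layout (not the study's code): testing whether
# baseline factors are associated with QFT-GIT reversion to negative by month 6.
import numpy as np
from scipy.stats import fisher_exact, mannwhitneyu

def reversion_associations(reverted, crp_high, baseline_ifng):
    # reverted, crp_high: 0/1 arrays; baseline_ifng: IU/ml; one entry per patient
    table = [[np.sum((crp_high == 1) & (reverted == 1)),
              np.sum((crp_high == 1) & (reverted == 0))],
             [np.sum((crp_high == 0) & (reverted == 1)),
              np.sum((crp_high == 0) & (reverted == 0))]]
    _, p_crp = fisher_exact(table)  # CRP >= 3 mg/dl versus reversion
    _, p_ifng = mannwhitneyu(baseline_ifng[reverted == 1],
                             baseline_ifng[reverted == 0])  # lower IFN-γ in reverters?
    return p_crp, p_ifng
```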

    The Cost Implications of Less Tight Versus Tight Control of Hypertension in Pregnancy (CHIPS Trial).

    The CHIPS randomized controlled trial (Control of Hypertension in Pregnancy Study) found no difference in the primary perinatal or secondary maternal outcomes between planned "less tight" (target diastolic 100 mm Hg) and "tight" (target diastolic 85 mm Hg) blood pressure management strategies among women with chronic or gestational hypertension. This study examined which of these management strategies is more or less costly from a third-party payer perspective. A total of 981 women with singleton pregnancies and nonsevere, nonproteinuric chronic or gestational hypertension were randomized at 14 to 33 weeks to less tight or tight control. Resources used were collected from 94 centers in 15 countries and costed as if the trial took place in each of 3 Canadian provinces as a cost-sensitivity analysis. Eleven hospital ward and 24 health service costs were obtained from a similar trial and provincial government health insurance schedules of medical benefits. The mean total cost per woman-infant dyad was higher in less tight versus tight control, but the difference in mean total cost (DM) was not statistically significant in any province: Ontario ($30 191.62 versus $24 469.06; DM $5723, 95% confidence interval, -$296 to $12 272; P=0.0725); British Columbia ($30 593.69 versus $24 776.51; DM $5817; 95% confidence interval, -$385 to $12 349; P=0.0725); or Alberta ($31 510.72 versus $25 510.49; DM $6000.23; 95% confidence interval, -$154 to $12 781; P=0.0637). Tight control may benefit women without increasing risk to neonates (as shown in the main CHIPS trial), without additional (and possibly lower) cost to the healthcare system. CLINICAL TRIAL REGISTRATION: URL: http://www.clinicaltrials.gov. Unique identifier: NCT01192412.
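
    The headline comparison is a difference in mean total cost per woman-infant dyad between arms, with a confidence interval. A minimal sketch of one common way to obtain such an interval, a percentile bootstrap, is shown below; the trial may well have used a different interval method, and the per-dyad cost arrays are assumed.

```python
# Minimal sketch, assuming per-dyad total-cost arrays for each arm (not trial code).
import numpy as np

def mean_cost_difference(less_tight, tight, n_boot=10_000, seed=0):
    rng = np.random.default_rng(seed)
    dm = less_tight.mean() - tight.mean()  # difference in mean total cost (DM)
    boot = np.array([
        rng.choice(less_tight, less_tight.size).mean()
        - rng.choice(tight, tight.size).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boot, [2.5, 97.5])  # percentile bootstrap 95% CI
    return dm, (lo, hi)  # e.g. Ontario: DM $5723, 95% CI -$296 to $12 272
```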

    Genome-Wide Transcriptomic Analysis of Intestinal Tissue to Assess the Impact of Nutrition and a Secondary Nematode Challenge in Lactating Rats

    Gastrointestinal nematode infection is a major challenge to the health and welfare of mammals. Although mammals eventually acquire immunity to nematodes, this breaks down around parturition, which renders periparturient mammals susceptible to re-infection and an infection source for their offspring. Nutrient supplementation reduces the extent of periparturient parasitism, but the underlying mechanisms remain unclear. Here, we use a genome-wide approach to assess the effects of protein supplementation on gene expression in the small intestine of periparturient rats following nematode re-infection. The use of a rat whole-genome expression microarray (Affymetrix Gene 1.0ST) showed significant differential regulation of 91 genes in the small intestine of lactating rats re-infected with Nippostrongylus brasiliensis compared with controls; affected functions included immune cell trafficking, cell-mediated responses and antigen presentation. Genes with a previously described role in immune response to nematodes, such as mast cell proteases and intelectin, and others newly associated with nematode expulsion, such as anterior gradient homolog 2, were identified. Protein supplementation resulted in significant differential regulation of 64 genes; affected functions included protein synthesis, cellular function and maintenance. It increased cell metabolism, evident from the high number of non-coding RNA and the increased synthesis of ribosomal proteins. It regulated immune responses, through T-cell activation and proliferation. The up-regulation of transcription factor forkhead box P1 in unsupplemented, parasitised hosts may be indicative of a delayed immune response in these animals. This study provides the first evidence for nutritional regulation of genes related to immunity to nematodes at the site of parasitism, during expulsion. Additionally, it reveals genes induced following secondary parasite challenge in lactating mammals, not previously associated with parasite expulsion. This work is a first step towards defining disease predisposition, identifying markers for nutritional imbalance and developing sustainable measures for parasite control in domestic mammals.
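
    A typical analysis pipeline for this kind of microarray comparison, per-gene tests with false-discovery-rate control, can be sketched as below. This is a generic illustration, not the study's actual pipeline; the expression matrices (log2 intensities) and gene list are assumptions.

```python
# Generic illustration, not the study's pipeline: per-gene differential
# expression between re-infected and control animals, with Benjamini-Hochberg
# FDR control as is typical for microarray data.
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def differentially_expressed(expr_infected, expr_control, genes, alpha=0.05):
    # expr_*: arrays of shape (n_genes, n_samples), log2 intensities (assumed)
    pvals = ttest_ind(expr_infected, expr_control, axis=1).pvalue
    reject, qvals, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return [g for g, keep in zip(genes, reject) if keep]
```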

    Integrated HIV Testing, Malaria, and Diarrhea Prevention Campaign in Kenya: Modeled Health Impact and Cost-Effectiveness

    Efficiently delivered interventions to reduce HIV, malaria, and diarrhea are essential to accelerating global health efforts. A 2008 community integrated prevention campaign in Western Province, Kenya, reached 47,000 individuals over 7 days, providing HIV testing and counseling, water filters, insecticide-treated bed nets, condoms, and, for HIV-infected individuals, cotrimoxazole prophylaxis and referral for ongoing care. We modeled the potential cost-effectiveness of a scaled-up integrated prevention campaign. We estimated averted deaths and disability-adjusted life years (DALYs) based on published data on baseline mortality and morbidity and on the protective effect of interventions, including antiretroviral therapy. We incorporated a previously estimated scaled-up campaign cost. We used published costs of medical care to estimate savings from averted illness (for all three diseases) and the added costs of initiating treatment earlier in the course of HIV disease. Per 1000 participants, projected reductions in cases of diarrhea, malaria, and HIV infection avert an estimated 16.3 deaths, 359 DALYs, and $85,113 in medical care costs. Earlier care for HIV-infected persons adds an estimated 82 DALYs averted (to a total of 442), at a cost of $37,097 (reducing total averted costs to $48,015). Accounting for the estimated campaign cost of $32,000, the campaign saves an estimated $16,015 per 1000 participants. In multivariate sensitivity analyses, 83% of simulations result in net savings, and 93% in a cost per DALY averted of less than $20. A mass, rapidly implemented campaign for HIV testing, safe water, and malaria control appears economically attractive.
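
    The per-1000-participant arithmetic can be reproduced directly from the figures quoted above; the sketch below uses only those published numbers (small rounding differences aside), while the actual model behind them is more detailed.

```python
# Back-of-the-envelope check of the per-1000-participant figures quoted above;
# all inputs come straight from the abstract, so rounding differs slightly.
averted_medical_costs = 85_113  # $ saved from averted diarrhea/malaria/HIV care
earlier_hiv_care_cost = 37_097  # $ added by starting HIV care earlier
campaign_cost = 32_000          # $ estimated scaled-up campaign cost
dalys_averted = 359 + 82        # = 441; abstract reports 442 after rounding

net_averted_costs = averted_medical_costs - earlier_hiv_care_cost  # = 48 016
net_savings = net_averted_costs - campaign_cost                    # = 16 016
# The abstract reports $48,015 and $16,015; the $1 gaps reflect rounding of
# the published inputs.
print(dalys_averted, net_averted_costs, net_savings)
```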

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and the single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall’s tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6% to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion We have shown that an operative difficulty scale can standardise the description of operative findings by multiple grades of surgeons to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty, and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
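
    The core of the validation, an ordinal association test plus ROC analysis of the grade against each dichotomous outcome, might look like the sketch below. The column names are assumptions and this is not the study's code; the Jonckheere–Terpstra test for continuous outcomes is omitted because SciPy has no built-in implementation.

```python
# Illustrative sketch with assumed column names, not the study's code.
from scipy.stats import kendalltau
from sklearn.metrics import roc_auc_score

def validate_grade(df, outcome="converted_to_open"):
    # ordinal Nassar grade (1 to 4/5) versus a dichotomous 30-day outcome
    tau, p = kendalltau(df["nassar_grade"], df[outcome])
    # the grade itself is used as the score in the ROC analysis
    auc = roc_auc_score(df[outcome], df["nassar_grade"])
    return tau, p, auc  # CholeS reported AUROC 0.903 for conversion to open
```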