
    A randomized, controlled trial of everolimus-based dual immunosuppression versus standard of care in de novo kidney transplant recipients

    Kidney transplant recipients receiving calcineurin inhibitor (CNI)-based immunosuppression incur increased long-term risks of cancer and kidney fibrosis. Switching to mammalian target of rapamycin (mTOR) inhibitors may reduce these risks. Steroid or Cyclosporin Removal After Transplant using Everolimus (SOCRATES), a 36-month, prospective, multinational, open-label, randomized controlled trial in de novo kidney transplant recipients, assessed whether switching to everolimus could enable elimination of mycophenolate plus either steroids or CNI without compromising efficacy. Patients received cyclosporin, mycophenolate and steroids for the first 14 days, then either everolimus with mycophenolate and CNI withdrawal (CNI-WD); everolimus with mycophenolate and steroid withdrawal (steroid-WD); or cyclosporin, mycophenolate and steroids (control). 126 patients were randomized. The steroid-WD arm was terminated prematurely because of excess discontinuations. Mean eGFR at month 12 for CNI-WD versus control was 65.1 ml/min/1.73 m2 vs. 67.1 ml/min/1.73 m2 by ITT, which met the predefined noninferiority criterion (P = 0.026). The CNI-WD group experienced a higher rate of BPAR (31% vs. control 13%, P = 0.048) and showed a trend towards higher composite treatment failure (BPAR, graft loss, death, loss to follow-up). The 12-month results from SOCRATES show noninferiority in eGFR, but a significant excess of acute rejection, when everolimus was commenced at week 2 to enable progressive withdrawal of mycophenolate and cyclosporin in kidney transplant recipients.
    Steven J. Chadban, Josette Marie Eris, John Kanellis, Helen Pilmore, Po Chang Lee, Soo Kun Lim, Chad Woodcock, Nicol Kurstjens, Graeme Rus

    Effect of intensive structured care on individual blood pressure targets in primary care: Multicentre randomised controlled trial

    Objective: To determine the effectiveness of intensive structured care to optimise blood pressure control based on individual absolute risk targets in primary care. Design: Pragmatic multicentre randomised controlled trial. Setting: General practices throughout Australia, except the Northern Territory, 2009-11. Participants: Of 2185 patients from 119 general practices who were eligible for drug treatment for hypertension according to national guidelines, 416 (19.0%) achieved their individual blood pressure target during a 28 day run-in period of monotherapy. After exclusions, 1562 participants not at target blood pressure (systolic 150 (SD 17) mm Hg, diastolic 88 (SD 11) mm Hg) were randomised (1:2 ratio) to usual care (n=524) or the intervention (n=1038). Intervention: Computer assisted clinical profiling and risk target setting (all participants), with intensified follow-up and stepwise drug titration (initial angiotensin receptor blocker monotherapy or two forms of combination therapy using angiotensin receptor blockers) for those randomised to the intervention. The control group received usual care. Main outcome measures: The primary outcome was the individual blood pressure target achieved at 26 weeks. Secondary outcomes were change in mean sitting systolic and diastolic blood pressure, absolute risk of cardiovascular disease within five years based on the Framingham risk score, and the proportion and rate of adverse events. Results: On an intention to treat basis, there was an 8.8% absolute difference in individual blood pressure target achieved at 26 weeks in favour of the intervention group compared with the usual care group (358/988 (36.2%) v 138/504 (27.4%)): adjusted relative risk 1.28 (95% confidence interval 1.10 to 1.49, P=0.0013). There was also a 9.5% absolute difference in favour of the intervention group for achieving the classic blood pressure target of ≤140/90 mm Hg (627/988 (63.5%) v 272/504 (54.0%)): adjusted relative risk 1.18 (1.07 to 1.29, P<0.001). The intervention group achieved a mean adjusted reduction in systolic blood pressure of 13.2 mm Hg (95% confidence interval −12.3 to −14.2 mm Hg) and diastolic blood pressure of 7.7 mm Hg (−7.1 to −8.3 mm Hg), versus 10.1 mm Hg (−8.8 to −11.3 mm Hg) and 5.5 mm Hg (−4.7 to −6.2 mm Hg) in the usual care group (P<0.001). Among 1141 participants in whom five year absolute cardiovascular risk scores were calculated from baseline to the 26 week follow-up, the reduction in risk scores was greater in the intervention group than in the usual care group (14.7% (SD 9.3%) to 10.9% (SD 8.0%), difference −3.7% (SD 4.5%), versus 15.0% (SD 10.1%) to 12.4% (SD 9.4%), difference −2.6% (SD 4.5%)): adjusted mean difference −1.13% (95% confidence interval −0.69% to −1.63%; P<0.001). Owing to adverse events, 82 (7.9%) participants in the intervention group and 10 (1.9%) in the usual care group had their drug treatment modified. Conclusions: In a primary care setting, intensive structured care resulted in higher levels of blood pressure control, with clinically lower blood pressure and absolute risk of future cardiovascular events overall, and with more people achieving their target blood pressure. An important gap in treatment remains, though, and applying intensive management to achieve currently advocated risk-based blood pressure targets is challenging.
    Simon Stewart, Melinda J Carrington, Carla H Swemmer, Craig Anderson, Nicol P Kurstjens, John Amerena, Alex Brown, Louise M Burrell, Ferdinandus J de Looze, Mark Harris, Joseph Hung, Henry Krum, Mark Nelson, Markus Schlaich, Nigel P Stocks, Garry L Jennings, on behalf of the VIPER-BP study investigator
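
    As a quick plausibility check of the primary result quoted above, the unadjusted relative risk and its 95% confidence interval can be recomputed directly from the reported counts (358/988 intervention v 138/504 usual care). The sketch below is not the trial's adjusted analysis (the paper reports an adjusted relative risk of 1.28); it only illustrates the standard log-relative-risk calculation.

    # Unadjusted relative risk and 95% CI from the counts reported in the abstract.
    # Note: the trial's primary estimate (1.28, 1.10 to 1.49) is covariate-adjusted,
    # so this back-of-the-envelope figure is expected to differ slightly.
    from math import exp, log, sqrt

    a, n1 = 358, 988   # intervention: at individual BP target / analysed
    b, n0 = 138, 504   # usual care:   at individual BP target / analysed

    rr = (a / n1) / (b / n0)
    se = sqrt(1/a - 1/n1 + 1/b - 1/n0)          # standard error of log(RR)
    lo, hi = (exp(log(rr) + z * se) for z in (-1.96, 1.96))
    print(f"unadjusted RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # ~1.32 (1.12 to 1.56)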

    Effects of lay support for pregnant women with social risk factors on infant development and maternal psychological health at 12 months postpartum

    Background The ELSIPS (Evaluation of Lay Support in Pregnant Women with Social Risk) RCT showed that lay support for women with social risk had a positive effect on maternal mental health and mother-infant bonding. This exploratory study examined whether these observed benefits would impact infant development at 1 year. Methods A sub-sample of women who had participated in the ELSIPS RCT, which randomised women to receive either standard care or the services of a Pregnancy Outreach Worker (POW), and whose infants were under one year old and who were contactable, were eligible to participate in the follow-up. At home visits, the Bayley Scales of Infant Development (3rd Edition) and standardised measures of depression, self-efficacy, mind-mindedness and bonding were completed. Results 486 women were eligible for follow-up, of whom 154 agreed to participate. 61/273 were successfully followed up in the standard maternity care arm and 51/213 in the POW arm. Women who completed follow-up were less depressed and had higher self-efficacy scores at 8–12 weeks postpartum than those who did not complete follow-up. There were no significant differences in maternal outcomes, infant cognitive development, receptive communication, expressive communication, fine motor development or social/emotional functioning between groups at the 12 month follow-up. Infants of mothers who received the POW intervention had significantly better gross motor development than infants whose mothers received standard care (p<0.03).

    Maternal common mental disorders and infant development in Ethiopia: the P-MaMiE Birth Cohort

    Background: Chronicity and severity of early exposure to maternal common mental disorders (CMD) has been associated with poorer infant development in high-income countries. In low- and middle-income countries (LAMICs), perinatal CMD is inconsistently associated with infant development, but the impact of severity and persistence has not been examined. Methods: A nested population-based cohort of 258 pregnant women was identified from the Perinatal Maternal Mental Disorder in Ethiopia (P-MaMiE) study, and 194 (75.2%) were successfully followed up until the infants were 12 months of age. Maternal CMD was measured in pregnancy and at two and 12 months postnatal using the WHO Self-Reporting Questionnaire, validated for use in this setting. Infant outcomes were evaluated using the Bayley Scales of Infant Development. Results: Antenatal maternal CMD symptoms were associated with poorer infant motor development (β̂ = -0.20; 95% CI: -0.37 to -0.03), but this became non-significant after adjusting for confounders. Postnatal CMD symptoms were not associated with any domain of infant development. There was evidence of a dose-response relationship between the number of time-points at which the mother had high levels of CMD symptoms (SRQ ≥ 6) and impaired infant motor development (β̂ = -0.80; 95% CI: -2.24 to 0.65 for ante- or postnatal CMD only, β̂ = -4.19; 95% CI: -8.60 to 0.21 for ante- and postnatal CMD, compared to no CMD; test for trend χ²(1) = 13.08, p < 0.001). Although this association became non-significant in the fully adjusted model, the β̂ coefficients were unchanged, indicating that the relationship was not confounded. In multivariable analyses, lower socio-economic status and lower infant weight-for-age were associated with significantly lower scores on both motor and cognitive developmental scales. Maternal experience of physical violence was significantly associated with impaired cognitive development. Conclusions: The study supports the hypothesis that it is the accumulation of risk exposures across time, rather than early exposure to maternal CMD per se, that is more likely to affect child development. Further investigation of the impact of chronicity of maternal CMD upon child development in LAMICs is indicated. In the Ethiopian setting, poverty, interpersonal violence and infant undernutrition should be targets for interventions to reduce the loss of child developmental potential.
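
    For readers unfamiliar with the test for trend used above, the sketch below shows one common way to compute it: code the ordered exposure (no CMD, ante- or postnatal only, both ante- and postnatal) as 0/1/2 and test the linear slope in a regression of the developmental score. The data are simulated and the variable names are illustrative only; this is not the P-MaMiE analysis.

    # Illustrative dose-response ("test for trend") across ordered CMD exposure
    # categories, on simulated data. Not the study's actual model or data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    exposure = np.repeat([0, 1, 2], 60)                       # 0=none, 1=ante- or postnatal only, 2=both
    motor = 100 - 2.0 * exposure + rng.normal(0, 10, exposure.size)  # hypothetical motor scores

    fit = sm.OLS(motor, sm.add_constant(exposure.astype(float))).fit()
    slope, se = fit.params[1], fit.bse[1]
    wald_chi2 = (slope / se) ** 2                             # 1-df Wald test for linear trend
    print(f"trend beta = {slope:.2f}, chi2(1) = {wald_chi2:.2f}, p = {fit.pvalues[1]:.3g}")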

    The implementation of the Water Framework Directive: a focused comparison of governance arrangements to improve water quality.


    Assessing non-chemical weeding strategies through mechanistic modelling of blackgrass (Alopecurus myosuroides Huds.) dynamics

    Because of environmental and health safety issues, it is necessary to develop strategies that do not rely on herbicides to manage weeds. Introducing temporary grassland into annual crop rotations and mechanical weeding are the two main levers frequently used for this purpose in integrated and organic cropping systems. To evaluate the contribution of these two factors in interaction with other cropping system components and environmental conditions, the present study updated an existing biophysical model (i.e. AlomySys) that quantifies the effects of the cropping system on weed dynamics. Based on previous experiments, new sub-models were built to describe the effects on plant survival and growth reduction of mechanical weeding, resulting from weed seedling uprooting and covering by soil, and of grassland mowing, resulting from tiller destruction. Additional modifications described the effect of the multi-year crop canopy of grassland on weed survival, growth, development and seed return to the soil. The improved model was used to evaluate weed dynamics over 27 years in the conventional herbicide-based cropping system most frequently observed in farm surveys (i.e. oilseed rape/winter wheat/winter barley rotation with superficial tillage) and then to test prospective non-chemical scenarios. Preliminary simulations tested a large range of mechanical weeding and mowing strategies, varying operation frequencies, dates and, in the case of mechanical weeding, characteristics (i.e. tool, working depth, tractor speed). For mechanical weeding soon after sowing, harrowing was better than hoeing for controlling weed seed production; the later the operation, the more efficient the hoeing and the less efficient the harrowing. Tractor speed had little influence. Increasing tilling depth increased plant mortality but also increased weed seed production, because the weeding tool triggered additional seed germination. Decreasing the inter-row width for hoeing was detrimental to weed control. The best combinations were triple hoeing in oilseed rape and sextuple harrowing in cereals. The best mowing strategy was to mow three times, every 4–6 weeks, starting in mid-May. The best individual options were combined, simulated over 27 years and compared to the herbicide-based reference system. If herbicide applications were replaced solely by mechanical weeding, blackgrass infestation could not be satisfactorily controlled. If a three-year lucerne crop was introduced into the rotation, weed infestations were reduced tenfold. Replacing chisel tillage with mouldboard ploughing before winter wheat reduced weed infestations in the short, medium and long term to a level comparable to that of the herbicide-based reference system.
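
    To make the structure of this kind of mechanistic weed-dynamics model concrete, the toy sketch below iterates an annual blackgrass seed-bank budget (germination, seedling mortality including mechanical weeding, seed return, seed decay) over a 27-year horizon. It is not AlomySys, and every parameter value is a hypothetical placeholder; it only illustrates the bookkeeping such models perform.

    # Toy annual seed-bank recursion for a blackgrass-like weed. All parameters are
    # hypothetical placeholders; AlomySys itself resolves far more processes
    # (emergence cohorts, tillage effects, crop competition, mowing, etc.).
    def simulate_seedbank(years=27, seedbank0=1000.0, germination=0.3, seed_decay=0.2,
                          weeding_survival=0.1, natural_survival=0.3, fecundity=30.0):
        seedbank, history = seedbank0, []
        for _ in range(years):
            seedlings = germination * seedbank                        # seeds germinating this season
            plants = seedlings * weeding_survival * natural_survival  # plants surviving to maturity
            new_seeds = plants * fecundity                            # seed return to the soil
            seedbank = (seedbank - seedlings) * (1.0 - seed_decay) + new_seeds
            history.append(seedbank)
        return history

    # Compare an unweeded scenario with a mechanically weeded one (toy units).
    print("no mechanical weeding  :", round(simulate_seedbank(weeding_survival=1.0)[-1]))
    print("with mechanical weeding:", round(simulate_seedbank(weeding_survival=0.1)[-1]))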

    Clinical decision support system supported interventions in hospitalized older patients: a matter of natural course and adequate timing

    Background: Drug-related problems (DRPs) and potentially inappropriate prescribing (PIP) are associated with adverse patient and health care outcomes. In the setting of hospitalized older patients, Clinical Decision Support Systems (CDSSs) could reduce PIP and therefore improve clinical outcomes. However, prior research showed a low proportion of adherence to CDSS recommendations by clinicians, with possible explanatory factors such as limited clinical relevance and alert fatigue. Objective: To investigate the use of a CDSS in a real-life setting of hospitalized older patients. We aimed to (I) report the natural course of, and interventions based on, the top 20 rule alerts (the 20 most frequently generated alerts per clinical rule) among the generated red CDSS alerts (those requiring action) over days 1 to 7 of hospitalization; and (II) explore whether an optimal timing can be defined (in terms of day per rule). Methods: All hospitalized patients aged ≥ 60 years admitted to Zuyderland Medical Centre (the Netherlands) were included. The CDSS was evaluated using a database used for standard care. The CDSS was run daily and evaluated on days 1 to 7 of hospitalization. We collected demographic and clinical data, as well as the total number of CDSS alerts, the total number of top 20 rule alerts, the alerts that resulted in an action by the pharmacist, and the outcome of the alerts on days 1 to 7 of hospitalization. Results: In total, 3574 unique hospitalized patients (mean age 76.7 (SD 8.3) years; 53% female) were included. In these patients 8073 alerts were generated; the top 20 rule alerts covered roughly 90% of this total. For most rules in the top 20, the highest percentage of resolved alerts was reached between days 4 and 5 of hospitalization, after which it levelled off or decreased, although for some rules resolved alerts increased gradually until day 7. The level of resolved rule alerts varied between clinical rules, ranging from >50–70% (potassium levels, anticoagulation, renal function) to less than 25%. Conclusion: This study reports the course of the 20 most frequently generated alerts of a CDSS in a setting of hospitalized older patients. We have shown that for most rules, irrespective of an intervention by the pharmacist, the highest percentage of resolved alerts occurs between days 4 and 5 of hospitalization. The difference in the level of resolved alerts between rules could point to differing clinical relevance, and warrants further research to explore ways of optimizing CDSSs by adjusting the timing and number of alerts to prevent alert fatigue.
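
    The per-day summary described in the Results would, in practice, reduce to a small aggregation over an alert log. The sketch below shows one way to compute the percentage of resolved red alerts per rule and hospitalization day for the 20 most frequent rules; the file name and column names ('rule', 'hospital_day', 'resolved') are assumptions for illustration, not the study's actual data model.

    # Percentage of resolved red alerts per clinical rule and hospitalization day,
    # restricted to the 20 most frequently generated rules. Input schema is assumed:
    # one row per generated red alert with columns 'rule', 'hospital_day' (1-7),
    # and 'resolved' (True/False).
    import pandas as pd

    alerts = pd.read_csv("cdss_alerts.csv")
    top20 = alerts["rule"].value_counts().nlargest(20).index

    summary = (
        alerts[alerts["rule"].isin(top20)]
        .groupby(["rule", "hospital_day"])["resolved"]
        .mean()                          # fraction resolved per rule-day
        .mul(100)                        # express as a percentage
        .unstack("hospital_day")         # rules as rows, days 1-7 as columns
    )
    print(summary.round(1))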