
    Weight-bearing or non-weight-bearing after surgical treatment of ankle fractures: a multicenter randomized controlled trial

    PURPOSE: The goal of this study was to assess whether unprotected weight-bearing as tolerated is superior to protected weight-bearing and unprotected non-weight-bearing in terms of functional outcome and complications after surgical fixation of Lauge-Hansen supination external rotation stage 2-4 ankle fractures. METHODS: A multicenter randomized controlled trial was conducted in patients ranging from 18 to 65 years of age without severe comorbidities. Patients were randomized to unprotected non-weight-bearing, protected weight-bearing, and unprotected weight-bearing as tolerated. The primary endpoint of the study was the Olerud Molander Ankle Score (OMAS) 12 weeks after randomization. The secondary endpoints were health-related quality of life using the SF-36v2, time to return to work, time to return to sports, and the number of complications. RESULTS: The trial was terminated early as advised by the Data and Safety Monitoring Board after interim analysis. A total of 115 patients were randomized. The O'Brien-Fleming threshold for statistical significance for this interim analysis was 0.008 at 12 weeks. The OMAS was higher in the unprotected weight-bearing group after 6 weeks (61.2 ± 19.0) compared to the protected weight-bearing (51.8 ± 20.4) and unprotected non-weight-bearing groups (45.8 ± 22.4) (p = 0.011). No other follow-up time points showed significant differences between the groups. Unprotected weight-bearing showed a significantly earlier return to work (p = 0.028) and earlier return to sports (p = 0.005). There were no differences in quality of life scores or number of complications. CONCLUSIONS: Unprotected weight-bearing and mobilization as tolerated as a postoperative care regimen improved short-term functional outcomes and led to earlier return to work and sports without increasing the number of complications.
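    As an illustration of the three-arm comparison reported above (OMAS at 6 weeks against an O'Brien-Fleming interim boundary of 0.008), here is a minimal sketch on simulated data. The group sizes and the use of a one-way ANOVA are assumptions for illustration only; the trial's actual analysis plan is not described in the abstract.

```python
# Illustrative sketch only: simulated OMAS data using the group means/SDs quoted in the
# abstract as simulation parameters; the trial's real analysis may have differed
# (e.g., ANCOVA or mixed models). Group sizes are assumed, not taken from the paper.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n_per_group = 38                      # assumed split of the 115 randomized patients
groups = {
    "unprotected_wb": (61.2, 19.0),   # mean, SD of OMAS at 6 weeks
    "protected_wb":   (51.8, 20.4),
    "non_wb":         (45.8, 22.4),
}
samples = [rng.normal(mean, sd, n_per_group) for mean, sd in groups.values()]

f_stat, p_value = f_oneway(*samples)  # one-way ANOVA across the three arms
alpha_interim = 0.008                 # O'Brien-Fleming boundary quoted for the interim look
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, "
      f"significant at interim boundary: {p_value < alpha_interim}")
```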

    Performance and long-term stability of the barley hordothionin gene in multiple transgenic apple lines

    The introduction of sustainable scab resistance into elite apple cultivars is of high importance for apple cultivation aimed at reducing the use of chemical crop protectants. Genetic modification (GM) allows the rapid introduction of resistance genes directly into high-quality apple cultivars. Resistance genes can be derived from apple itself, but genetic modification also opens up the possibility of using other, non-host resistance genes. A prerequisite for application is the long-term performance and stability of the gene and the associated trait in the field. For this study, we produced and selected a series of transgenic apple lines of two cultivars, ‘Elstar’ and ‘Gala’, into which the barley hordothionin gene (hth) was introduced. After multiplication, the GM hth lines, non-GM susceptible and resistant controls, and GM non-hth controls were planted in a randomized block design in a field trial with 40 replicates. Scab resistance was monitored after artificial inoculation (first year) and after natural infection (subsequent years). After the trial period, the level of expression of the hth gene was checked by quantitative RT-PCR. Four of the six GM hth apple lines proved to be significantly less susceptible to apple scab, and this trait was stable over the entire four-year period. Hth expression at the mRNA level was also stable.
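    The field trial is described as a randomized block design with 40 replicates per line. A hedged sketch of how scab scores from such a design might be analysed is shown below; the line names, the scoring scale, and the use of a two-way ANOVA adjusting for block effects are assumptions, not the authors' reported analysis.

```python
# Hypothetical sketch of a randomized-block analysis of scab scores; line names, block
# count and the scoring scale are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
lines = ["hth_line_1", "hth_line_2", "non_GM_susceptible", "resistant_control"]
blocks = range(1, 41)                         # 40 replicates in a randomized block design
records = [
    {"line": ln, "block": b,
     "scab_score": rng.normal(loc=3.0 if "susceptible" in ln else 1.0, scale=0.5)}
    for ln in lines for b in blocks
]
df = pd.DataFrame(records)

# Two-way ANOVA: genotype effect adjusted for block (replicate) effects
model = smf.ols("scab_score ~ C(line) + C(block)", data=df).fit()
print(anova_lm(model, typ=2))
```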

    The ethics of ‘Trials within Cohorts’ (TwiCs): 2nd international symposium - London, UK. 7-8 November 2016

    On 7-8 November 2016, 60 people with an interest in the ‘Trials within Cohorts’ (TwiCs) approach to randomised controlled trial design met in London. The purpose of this 2nd TwiCs international symposium was to share perspectives and experiences on ethical aspects of the TwiCs design, discuss how TwiCs relate to the current ethical framework, provide a forum in which to discuss and debate ethical issues, and identify future directions for conceptual and empirical research. The symposium was supported by the Wellcome Trust and the NIHR CLAHRC Yorkshire and Humber, organised by members of the TwiCs network led by Clare Relton, and attended by people from the UK, the Netherlands, Norway, Canada and the USA. The two-day symposium enabled an international group to meet and share experiences of the TwiCs design (also known as the ‘cohort multiple RCT design’), and to discuss plans for future research. Over the two days, invited plenary talks were interspersed with discussions, posters and mini presentations from bioethicists, triallists and health research regulators. Key findings of the symposium were: (1) it is possible to make a compelling case to ethics committees that TwiCs designs are appropriate and ethical; (2) wider considerations around the ethics of inefficient trial designs are important; and (3) some questions about the ethical requirements for the content and timing of informed consent for a study using the TwiCs design need to be decided on a case-by-case basis.

    Automated Detection of External Ventricular and Lumbar Drain-Related Meningitis Using Laboratory and Microbiology Results and Medication Data

    OBJECTIVE: Monitoring of healthcare-associated infection rates is important for infection control and hospital benchmarking. However, manual surveillance is time-consuming and susceptible to error. The aim was, therefore, to develop a prediction model to retrospectively detect drain-related meningitis (DRM), a frequently occurring nosocomial infection, using routinely collected data from a clinical data warehouse. METHODS: As part of the hospital infection control program, all patients receiving an external ventricular drain (EVD) or external lumbar drain (ELD) (2004 to 2009; n = 742) had been evaluated for the development of DRM through chart review and standardized diagnostic criteria by infection control staff; this was the reference standard. Children, patients dying <24 hours after drain insertion or with <1 day of follow-up, and patients with infection at the time of insertion or multiple simultaneous drains were excluded. Logistic regression was used to develop a model predicting the occurrence of DRM. Missing data were imputed using multiple imputation. Bootstrapping was applied to increase generalizability. RESULTS: 537 patients remained after application of the exclusion criteria, of whom 82 developed DRM (13.5/1000 days at risk). The automated model to detect DRM included the number of drains placed, drain type, blood leukocyte count, C-reactive protein, cerebrospinal fluid leukocyte count and culture result, number of antibiotics started during admission, and empiric antibiotic therapy. Discriminatory power of this model was excellent (area under the ROC curve 0.97). The model achieved 98.8% sensitivity (95% CI 88.0% to 99.9%) and specificity of 87.9% (84.6% to 90.8%). Positive and negative predictive values were 56.9% (50.8% to 67.9%) and 99.9% (98.6% to 99.9%), respectively. Predicted yearly infection rates concurred with observed infection rates. CONCLUSION: A prediction model based on multi-source data stored in a clinical data warehouse could accurately quantify rates of DRM. Automated detection using this statistical approach is feasible and could be applied to other nosocomial infections.
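    A minimal sketch of the kind of logistic-regression detection model described above, fitted to simulated data and evaluated with the same metrics the abstract reports (ROC AUC, sensitivity, specificity). Feature names mirror the listed predictors; the simulated outcome, the 0.5 probability cut-off, and the omission of multiple imputation and bootstrapping are simplifying assumptions.

```python
# Illustrative sketch on simulated data; not the study's fitted model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(42)
n = 537
X = pd.DataFrame({
    "n_drains":         rng.integers(1, 4, n),
    "drain_type_evd":   rng.integers(0, 2, n),      # 1 = EVD, 0 = ELD
    "blood_leukocytes": rng.normal(10, 3, n),
    "crp":              rng.exponential(30, n),
    "csf_leukocytes":   rng.exponential(50, n),
    "csf_culture_pos":  rng.integers(0, 2, n),
    "n_antibiotics":    rng.integers(0, 6, n),
    "empiric_abx":      rng.integers(0, 2, n),
})
# Simulated outcome loosely driven by CSF culture result and antibiotic exposure
logit = -4 + 2.5 * X["csf_culture_pos"] + 0.4 * X["n_antibiotics"]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
probs = model.predict_proba(X)[:, 1]
print("AUC:", round(roc_auc_score(y, probs), 3))

tn, fp, fn, tp = confusion_matrix(y, probs > 0.5).ravel()
print("sensitivity:", round(tp / (tp + fn), 3),
      "specificity:", round(tn / (tn + fp), 3))
```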

    Mapping Cumulative Environmental Risks: Examples from The EU NoMiracle Project

    We present examples of cumulative chemical risk mapping methods developed within the NoMiracle project. The examples illustrate the application of the concentration addition (CA) approach to pesticides at different scales, the spatial integration of cumulative risks to individual organisms under the CA assumption, and two techniques to (1) integrate risks using data-driven, parametric statistical methods and (2) cluster together areas with similar occurrence of different risk factors. The examples are used to discuss some general issues, particularly the conventional nature of cumulative risk maps, and may provide some suggestions for the practice of cumulative risk mapping.
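    The concentration addition (CA) approach referred to above amounts to summing toxic units, TU_i = C_i / EC_i, across substances in each map cell. A short sketch with illustrative (non-NoMiracle) numbers:

```python
# Sketch of the CA idea underpinning cumulative risk maps: per map cell, the toxic
# units of individual pesticides are summed into a single mixture risk index.
# Concentrations and effect concentrations below are invented for illustration.
import numpy as np

# predicted environmental concentrations (rows = map cells, columns = substances), e.g. ug/L
concentrations = np.array([
    [0.10, 0.02, 0.30],
    [0.05, 0.08, 0.01],
    [0.40, 0.00, 0.12],
])
effect_concentrations = np.array([0.50, 0.10, 0.60])  # e.g. EC50 per substance, ug/L

toxic_units = concentrations / effect_concentrations   # C_i / EC_i per cell and substance
mixture_risk = toxic_units.sum(axis=1)                  # CA mixture risk index per cell
print(mixture_risk)   # values > 1 flag cells where the mixture exceeds the effect level
```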

    Functional analysis and expression profiling of HcrVf1 and HcrVf2 for development of scab resistant cisgenic and intragenic apples

    The apple scab resistance genes HcrVf1 and HcrVf2 were isolated together with their native promoter, coding and terminator sequences. Two fragment lengths (short and long) of the native gene promoters and the strong apple rubisco gene promoter (PMdRbc) were used for both HcrVf genes to test their effect on expression and phenotype. The scab-susceptible cultivar ‘Gala’ was used for plant transformation, and after selection the transformants were micrografted onto apple seedling rootstocks for scab disease tests. The apple transformants were also tested for HcrVf expression by quantitative RT-PCR (qRT-PCR). For HcrVf1, the long native promoter gave significantly higher expression than the short one; in the case of HcrVf2, the difference between the two was not significant. The apple rubisco gene promoter gave the highest expression of both HcrVf1 and HcrVf2. The top four expanding leaves were initially inoculated with the monoconidial isolate EU-B05, which belongs to race 1 of V. inaequalis. Later, six other V. inaequalis isolates were used to study the resistance spectra of the individual HcrVf genes. The scab disease assays showed that HcrVf1 did not confer resistance against any of the isolates tested, regardless of the expression level. The HcrVf2 gene appeared to be the only functional gene for resistance against Vf-avirulent isolates of V. inaequalis. HcrVf2 did not provide any resistance to Vf-virulent strains, not even in the case of overexpression. In conclusion, transformants carrying the apple-derived HcrVf2 gene in a cisgenic as well as an intragenic configuration reached scab resistance levels comparable to the Vf-resistant control cultivar obtained by classical breeding, cv. ‘Santana’.
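    Relative expression by qRT-PCR is often quantified with the 2^-ΔΔCt method; the abstract does not state which quantification method was used, so the sketch below, including the reference gene and Ct values, is purely illustrative.

```python
# Hedged sketch of relative expression by the 2^-ddCt method; the quantification method,
# reference gene and Ct values are assumptions for illustration, not study data.
def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Fold change of the target gene versus a calibrator sample, normalised to a reference gene."""
    delta_ct_sample = ct_target - ct_reference
    delta_ct_calibrator = ct_target_cal - ct_reference_cal
    return 2 ** -(delta_ct_sample - delta_ct_calibrator)

# Example: HcrVf2 under the rubisco promoter vs. a calibrator line with the short native promoter
fold = relative_expression(ct_target=22.1, ct_reference=18.0,
                           ct_target_cal=26.5, ct_reference_cal=18.2)
print(f"HcrVf2 fold change vs. calibrator line: {fold:.1f}")
```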

    Trait anxiety mediates the effect of stress exposure on post-traumatic stress disorder and depression risk in cardiac surgery patients

    BACKGROUND: Post-traumatic stress disorder (PTSD) and depression are common after cardiac surgery. Lifetime stress exposure and personality traits may influence the development of these psychiatric conditions. METHODS: Self-reported rates of PTSD and depression and potential determinants (i.e., trait anxiety and stress exposure) were established 1.5 to 4 years after cardiac surgery. Data were available for 1125 out of 1244 (90.4%) participants. Multivariable linear regressions were conducted to investigate mediating and/or moderating effects of trait anxiety on the relationship between stress exposure and PTSD and depression. Pre-planned subgroup analyses were performed for both sexes. RESULTS: PTSD and depression symptoms were present in 10.2% and 13.1% of the participants, respectively. Trait anxiety was a full mediator of the association between stress exposure and depression in the total cohort and in the female and male subgroups. Moreover, trait anxiety partially mediated the relationship between stress exposure and PTSD in the full cohort and the male subgroup, whereas it fully mediated this relationship in female patients. Trait anxiety did not play a moderating role in the total patient sample, nor after stratification by sex. LIMITATIONS: The unequal distribution of male (78%) and female (22%) patients might limit the generalizability of our findings. Furthermore, risk factors were investigated retrospectively and with variable follow-up time. CONCLUSIONS: In cardiac surgery patients, trait anxiety was found to be an important mediator of postoperative PTSD and depression. Prospective research is necessary to verify whether these factors are reliable screening measures of individuals' vulnerability to psychopathology after cardiac surgery.
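    A regression-based mediation check of the kind described above can be sketched as three linear models (total effect, mediator model, and adjusted outcome model). The data below are simulated, and the study's covariates and exact model specifications are not reproduced.

```python
# Illustrative Baron & Kenny style mediation sketch on simulated data; variable names
# follow the abstract, effect sizes are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 1125
stress = rng.normal(0, 1, n)
trait_anxiety = 0.6 * stress + rng.normal(0, 1, n)                 # mediator driven by exposure
depression = 0.5 * trait_anxiety + 0.05 * stress + rng.normal(0, 1, n)
df = pd.DataFrame({"stress": stress, "trait_anxiety": trait_anxiety,
                   "depression": depression})

total    = smf.ols("depression ~ stress", df).fit()                 # path c
a_path   = smf.ols("trait_anxiety ~ stress", df).fit()              # path a
adjusted = smf.ols("depression ~ stress + trait_anxiety", df).fit() # paths c' and b

print("total effect c :", round(total.params["stress"], 3))
print("direct effect c':", round(adjusted.params["stress"], 3))
print("indirect a*b   :", round(a_path.params["stress"] * adjusted.params["trait_anxiety"], 3))
```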

    Erratum to: Methods for evaluating medical tests and biomarkers

    [This corrects the article DOI: 10.1186/s41512-016-0001-y.]

    Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: a methodological systematic review of health technology assessments

    Background: Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. Methods: We assessed all UK NIHR HTA reports published between May 2009 and July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation, and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised; 2) which methods were used to synthesise test accuracy evidence and how the results informed the economic model; 3) how, and whether, threshold effects were explored; 4) how the potential dependency between multiple tests in a pathway was accounted for; and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. Results: The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling were obtained entirely from meta-analyses in four reports, partially in fourteen reports, and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters, but most of those that evaluated multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care, but most of these reports found limited evidence on test accuracy in primary care settings. Conclusions: The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests, and the impact of multiple diagnostic tests.
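    The bivariate and HSROC models mentioned above jointly model logit sensitivity and specificity with a between-study correlation. As a simplified, hedged sketch of the underlying idea, the snippet below pools logit sensitivity and specificity separately with a DerSimonian-Laird random-effects model on invented study counts; it deliberately ignores the correlation that the full bivariate model would estimate.

```python
# Simplified random-effects pooling of logit sensitivity and specificity; the 2x2 counts
# are invented for illustration and this is not the full bivariate/HSROC analysis.
import numpy as np

# per-study 2x2 counts: true positives, false negatives, true negatives, false positives
tp = np.array([45, 30, 60, 22])
fn = np.array([5, 8, 10, 3])
tn = np.array([80, 55, 90, 40])
fp = np.array([10, 12, 15, 6])

def pool_logit(events, non_events):
    """DerSimonian-Laird random-effects pooling on the logit scale (0.5 continuity correction)."""
    p = (events + 0.5) / (events + non_events + 1.0)
    y = np.log(p / (1 - p))                          # logit proportions per study
    v = 1 / (events + 0.5) + 1 / (non_events + 0.5)  # approximate within-study variances
    w = 1 / v
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    return 1 / (1 + np.exp(-pooled))                 # back-transform to a proportion

print("pooled sensitivity:", round(pool_logit(tp, fn), 3))
print("pooled specificity:", round(pool_logit(tn, fp), 3))
```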