
    The antimicrobial polymer PHMB enters cells and selectively condenses bacterial chromosomes

    To combat infection and antimicrobial resistance, it is helpful to elucidate drug mechanism(s) of action. Here we examined how the widely used antimicrobial polyhexamethylene biguanide (PHMB) kills bacteria selectively over host cells. Contrary to the accepted model of microbial membrane disruption by PHMB, we observed cell entry into a range of bacterial species, and treated bacteria displayed cell division arrest and chromosome condensation, suggesting DNA binding as an alternative antimicrobial mechanism. A DNA-level mechanism was confirmed by observations that PHMB formed nanoparticles when mixed with isolated bacterial chromosomal DNA and that its effects on growth were suppressed by pairwise combination with the DNA-binding ligand Hoechst 33258. PHMB also entered mammalian cells, but was trapped within endosomes and excluded from nuclei. Therefore, PHMB displays differential access to bacterial and mammalian cellular DNA and selectively binds and condenses bacterial chromosomes. Because acquired resistance to PHMB has not been reported, selective chromosome condensation provides an unanticipated paradigm for antimicrobial action that may not succumb to resistance.

    A pragmatic cluster randomised trial evaluating three implementation interventions

    Background Implementation research is concerned with bridging the gap between evidence and practice through the study of methods to promote the uptake of research into routine practice. Good-quality evidence has been summarised into guideline recommendations showing that peri-operative fasting times could be considerably shorter than patients currently experience. The objective of this trial was to evaluate the effectiveness of three strategies for implementing recommendations about peri-operative fasting. Methods A pragmatic cluster randomised trial underpinned by the PARIHS framework was conducted during 2006 to 2009 with a national sample of UK hospitals, using time series with a mixed-methods process evaluation and cost analysis. Hospitals were randomised to one of three interventions: standard dissemination (SD) of a guideline package, SD plus a web-based resource championed by an opinion leader, and SD plus plan-do-study-act (PDSA). The primary outcome was the duration of fluid fast prior to induction of anaesthesia. Secondary outcomes included duration of food fast, patients' experiences, and stakeholders' experiences of implementation, including influences. ANOVA was used to test differences over time and between interventions. Results Nineteen acute NHS hospitals participated. Across timepoints, 3,505 fasting-duration observations were recorded. No significant effect of the interventions was observed for either fluid or food fasting times. The effect size was 0.33 for the web-based intervention compared to SD alone for the change in fluid fasting, and 0.12 for PDSA compared to SD alone. The process evaluation showed different types of impact, including changes to practices, policies, and attitudes. A rich picture of the implementation challenges emerged, including inter-professional tensions and a lack of clarity about decision-making authority and responsibility.
Conclusions This was a large, complex study and one of the first national randomised controlled trials conducted within acute care in implementation research. The evidence base for fasting practice was accepted by those participating in this study, and the messages from it were simple; however, implementation and practical challenges influenced the interventions' impact. A set of conditions for implementation emerges from the findings of this study, presented as theoretically transferable propositions with international relevance. Trial registration: ISRCTN18046709 - Peri-operative Implementation Study Evaluation (POISE).
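The standardised effect sizes reported above (0.33 and 0.12 versus standard dissemination) are commonly computed as Cohen's d with a pooled standard deviation; the sketch below assumes that convention (the trial's exact formula is not stated here), and the numbers are purely illustrative, not trial data:

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardised mean difference between two groups, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * statistics.variance(group_a)
                  + (nb - 1) * statistics.variance(group_b)) / (na + nb - 2)
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_var ** 0.5

# illustrative changes in fluid-fasting hours, intervention arm vs. SD alone
d = cohens_d([2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
```

An effect size of 0.33 would correspond to a mean difference of about a third of the pooled standard deviation.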

    Simplified R-Symmetry Breaking and Low-Scale Gauge Mediation

    We argue that some of the difficulties in constructing realistic models of low-scale gauge mediation are artifacts of the narrow set of models that have been studied. In particular, much attention has been paid to the scenario in which the Goldstino superfield in an O'Raifeartaigh model is responsible for both supersymmetry breaking and R-symmetry breaking. In such models, the competing problems of generating sufficiently massive gauginos while preserving an acceptably light gravitino can be quite challenging. We show that by sharing the burdens of breaking supersymmetry and R-symmetry with a second field, these problems are easily solved even within the O'Raifeartaigh framework. We present explicit models realizing minimal gauge mediation with a gravitino mass in the eV range that are both calculable and falsifiable. Comment: 31 pages, 4 figures; references added, minor changes

    Symbols in engineering drawings (SiED): an imbalanced dataset benchmarked by convolutional neural networks.

    Engineering drawings are common across domains such as oil and gas, construction, and mechanical engineering. Automatic processing and analysis of these drawings is a challenging task, partly due to the complexity of the documents and partly due to the lack of publicly available datasets that could help push research in this area. In this paper, we present a multiclass imbalanced dataset for the research community comprising 2432 instances of engineering symbols. These symbols were extracted from a collection of complex engineering drawings known as Piping and Instrumentation Diagrams (P&IDs). By providing this dataset to the research community, we anticipate that it will attract more attention to an important yet overlooked industrial problem and advance research on this timely topic. We discuss the dataset's characteristics in detail and show how Convolutional Neural Networks (CNNs) perform on such an extremely imbalanced dataset. Finally, conclusions and future directions are discussed.
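A standard first step when training a CNN on such an extremely imbalanced symbol dataset is to weight the loss by inverse class frequency, so rare symbol classes contribute as much as common ones. A minimal sketch of the weighting step (the class names are hypothetical and the paper's exact training setup is not specified here):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by total / (n_classes * count), so rare symbols count more.
    This matches the common 'balanced' class-weight heuristic."""
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    return {cls: total / (n_classes * n) for cls, n in counts.items()}

# hypothetical P&ID symbol labels: one class nine times more frequent than the other
weights = inverse_frequency_weights(["valve"] * 90 + ["pump"] * 10)
# such weights would typically be passed to a weighted cross-entropy loss
```

With 90 "valve" and 10 "pump" instances, the rare class receives a weight of 5.0 against roughly 0.56 for the common one.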

    Use-Exposure Relationships of Pesticides for Aquatic Risk Assessment

    Field-scale environmental models have been widely used in aquatic exposure assessments of pesticides. Those models usually require a large set of input parameters and separate simulations for each pesticide under evaluation. In this study, a simple use-exposure relationship is developed based on regression analysis of stochastic simulation results generated from the Pesticide Root-Zone Model (PRZM). The developed mathematical relationship estimates edge-of-field peak concentrations of pesticides from the aerobic soil metabolism half-life (AERO), the organic carbon-normalized soil sorption coefficient (KOC), and the application rate (RATE). In a case study of California crop scenarios, the relationships explained 90–95% of the variance in the peak concentrations of dissolved pesticides as predicted by PRZM simulations over a 30-year period. KOC was identified as the governing parameter in determining the relative magnitudes of pesticide exposures in a given crop scenario. The results of model application also indicated that the effects of chemical fate processes such as partitioning and degradation on pesticide exposure were similar among crop scenarios, while the cross-scenario variations were mainly associated with landscape characteristics such as organic carbon content and curve numbers. With a minimum set of input data, the use-exposure relationships proposed in this study could be used in screening procedures for potential water quality impacts from the off-site movement of pesticides.
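A regression from three chemical-use inputs (RATE, KOC, AERO) to a peak concentration is often expressed in log-linear form. The sketch below assumes such a form with placeholder coefficients chosen only for their signs (higher sorption should lower the dissolved peak; higher application rate should raise it); the paper's fitted values are not reproduced here:

```python
import math

def peak_concentration(rate, koc, aero_half_life,
                       b0=0.0, b_rate=1.0, b_koc=-0.6, b_aero=0.3):
    """Hypothetical log-linear use-exposure relationship:
    ln(C_peak) = b0 + b_rate*ln(RATE) + b_koc*ln(KOC) + b_aero*ln(AERO).
    All coefficients are placeholders, not the fitted PRZM regression values."""
    return math.exp(b0 + b_rate * math.log(rate)
                    + b_koc * math.log(koc)
                    + b_aero * math.log(aero_half_life))

# stronger sorption (higher KOC) lowers the dissolved edge-of-field peak
low_koc = peak_concentration(rate=1.0, koc=100.0, aero_half_life=30.0)
high_koc = peak_concentration(rate=1.0, koc=10000.0, aero_half_life=30.0)
```

Once fitted, such a closed-form relationship replaces a full PRZM run per pesticide, which is what makes it usable as a screening procedure.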

    Attention modulates adaptive motor learning in the ‘broken escalator’ paradigm

    The physical stumble caused by stepping onto a stationary (broken) escalator represents a locomotor aftereffect (LAE) that attests to a process of adaptive motor learning. Whether such learning is primarily explicit (requiring attentional resources) or implicit (independent of attention) is unknown. To address this question, we diverted attention in the adaptation (MOVING) and aftereffect (AFTER) phases of the LAE by loading these phases with a secondary cognitive task (sequentially naming a vegetable, a fruit and a colour). Thirty-six healthy adults were randomly assigned to three equally sized groups. They performed 5 trials stepping onto a stationary sled (BEFORE), 5 with the sled moving (MOVING) and 5 with the sled stationary again (AFTER). A 'Dual-Task-MOVING (DTM)' group performed the dual task in the MOVING phase, the 'Dual-Task-AFTEREFFECT (DTAE)' group in the AFTER phase, and a control group performed no dual task. We recorded trunk displacement, gait velocity and gastrocnemius muscle EMG of the left (leading) leg. The DTM, but not the DTAE group, showed larger trunk displacement during the MOVING phase and a smaller trunk-displacement aftereffect compared with controls. Gait velocity was unaffected by the secondary cognitive task in either group. Thus, adaptive locomotor learning involves explicit learning, whereas the expression of the aftereffect is automatic (implicit). During rehabilitation, patients should be actively encouraged to maintain maximal attention when learning new or challenging locomotor tasks.

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and the reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods Patient- and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and a single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall's tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results A higher operative difficulty grade was consistently associated with worse outcomes in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6% to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion We have shown that an operative difficulty scale can standardise the description of operative findings by surgeons of multiple grades to facilitate audit, training assessment and research.
It provides a tool for reporting operative findings, disease severity and technical difficulty, and can be used in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
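An AUROC such as the 0.903 quoted for conversion to open surgery can be read as the probability that a randomly chosen converted case received a higher difficulty grade than a randomly chosen non-converted case. A minimal rank-based computation of that quantity (the data below are illustrative, not the study's):

```python
def auroc(labels, scores):
    """AUROC as the Mann-Whitney probability that a positive case outranks
    a negative case; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# e.g. difficulty grades as scores, conversion to open surgery as the label
area = auroc([0, 0, 0, 1, 1], [1, 2, 3, 3, 5])
```

This rank formulation makes clear why an ordinal grade like the Nassar scale can be assessed with AUROC directly, without first fitting a probabilistic model.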

    Different SO(10) Paths to Fermion Masses and Mixings

    Recently, SO(10) models with type-II see-saw dominance have been proposed as a promising framework for obtaining Grand Unification theories with approximate tri-bimaximal (TB) mixing in the neutrino sector. We make a general study of SO(10) models with type-II see-saw dominance and show that an excellent fit can be obtained for fermion masses and mixings, including the neutrino sector. To make this statement more significant, we compare the performance of type-II see-saw dominance models in fitting the fermion masses and mixings with that of more conventional models which have no built-in TB mixing in the neutrino sector. For a fair comparison, the same input data and fitting procedure are adopted for all theories. We find that the type-II dominance models lead to an excellent fit, comparable with the best among the available models, but the tight structure of this framework implies a significantly larger amount of fine-tuning with respect to other approaches. Comment: 24 pages; references and minor wording changes added