    The “ebb and flow” of student learning on placement

    There is rising interest in work-based learning as part of student choice at subject level in the UK (DOE 2017), but there remains an absence of specific guidance on how best to support higher education students learning on placement. The degree apprenticeship, an alternative HE experience in England, underlines the continued policy focus on securing placement experiences for students without stipulating the type of support that is required at the ‘coal face’ of work-based learning. Policy documents (UUK 2016) that urge universities to enter into partnership agreements with both employers and FE colleges to plug skills shortages are noticeably lacking in their appreciation of the unique qualities of work-based learning and of how best to support students in this setting (Morley 2017a). Unfortunately, this is not unusual, as placements have predominantly been an enriching ‘add-on’ to the real business of academic learning in more traditional university programmes. Support initiatives, such as that described in chapter 9, are a rare acknowledgement of the importance of this role. Undergraduate nursing programmes currently operate a 50:50 split between practice learning in clinical placements and theory delivered at universities. Vocational degrees such as this provide an interesting case study in how students can be supported in the practice environment through an appreciation of how students really learn on placement and of how hidden resources can be utilised more explicitly for practice learning. During 2013-2015, a professional doctorate research project (Morley 2015) used a grounded theory approach with 21 first-year student nurses on their first placement to discover how they learnt ‘at work’ and the strategies they enlisted to be successful work-based learners.

    An experimental study of the intrinsic stability of random forest variable importance measures

    BACKGROUND: The stability of Variable Importance Measures (VIMs) based on random forest has recently received increased attention. Despite extensive attention to traditional stability under data perturbations or parameter variations, few studies consider influences coming from the intrinsic randomness in generating VIMs, i.e. bagging, randomization and permutation. To address these influences, in this paper we introduce a new concept of intrinsic stability of VIMs, which is defined as the self-consistency among feature rankings in repeated runs of VIMs without data perturbations and parameter variations. Two widely used VIMs, i.e., Mean Decrease Accuracy (MDA) and Mean Decrease Gini (MDG), are comprehensively investigated. The motivation of this study is two-fold. First, we empirically verify the prevalence of intrinsic stability of VIMs over many real-world datasets to highlight that the instability of VIMs does not originate exclusively from data perturbations or parameter variations, but also stems from the intrinsic randomness of VIMs. Second, through Spearman and Pearson tests we comprehensively investigate how different factors influence the intrinsic stability. RESULTS: The experiments are carried out on 19 benchmark datasets with diverse characteristics, including 10 high-dimensional and small-sample gene expression datasets. Experimental results demonstrate the prevalence of intrinsic stability of VIMs. Spearman and Pearson tests on the correlations between intrinsic stability and different factors show that #feature (number of features) and #sample (sample size) have a coupling effect on the intrinsic stability. The synthetic indicator, #feature/#sample, shows both negative monotonic correlation and negative linear correlation with the intrinsic stability, while OOB accuracy has monotonic correlations with intrinsic stability. This indicates that high-dimensional, small-sample and high-complexity datasets may suffer more from intrinsic instability of VIMs. Furthermore, with respect to parameter settings of random forest, a large number of trees is preferred. No significant correlations can be seen between intrinsic stability and other factors. Finally, the magnitude of intrinsic stability is always smaller than that of traditional stability. CONCLUSION: First, the prevalence of intrinsic stability of VIMs demonstrates that the instability of VIMs not only comes from data perturbations or parameter variations, but also stems from the intrinsic randomness of VIMs. This finding gives a better understanding of VIM stability, and may help reduce the instability of VIMs. Second, by investigating the potential factors of intrinsic stability, users would be more aware of the risks and hence more careful when using VIMs, especially on high-dimensional, small-sample and high-complexity datasets.
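    To make the concept concrete, here is a minimal sketch (not the authors' code) of measuring intrinsic stability with scikit-learn: both VIMs are computed in repeated runs on identical data with identical parameters, only the random seed changing between runs, and the self-consistency of the resulting rankings is scored with Spearman correlation. Note that scikit-learn's permutation importance is computed on the supplied data rather than on out-of-bag samples, so it only approximates the MDA studied in the paper; the dataset and parameter values below are placeholders.

        import numpy as np
        from scipy.stats import spearmanr
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance

        X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset

        def vim_scores(seed, n_trees=500):
            """One run of both VIMs; only the RNG seed varies between runs."""
            rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed,
                                        oob_score=True, n_jobs=-1).fit(X, y)
            mdg = rf.feature_importances_  # Mean Decrease Gini
            mda = permutation_importance(rf, X, y, n_repeats=5,  # approximates MDA
                                         random_state=seed).importances_mean
            return mdg, mda, rf.oob_score_

        # Identical data and parameters; the only difference is the seed.
        mdg1, mda1, oob = vim_scores(seed=0)
        mdg2, mda2, _ = vim_scores(seed=1)
        print("MDG intrinsic stability (Spearman):", spearmanr(mdg1, mdg2)[0])
        print("MDA intrinsic stability (Spearman):", spearmanr(mda1, mda2)[0])
        print("OOB accuracy:", oob)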

    The search for transient astrophysical neutrino emission with IceCube-DeepCore

    We present the results of a search for astrophysical sources of brief transient neutrino emission using IceCube and DeepCore data acquired between 2012 May 15 and 2013 April 30. While the search methods employed in this analysis are similar to those used in previous IceCube point source searches, the data set being examined consists of a sample of predominantly sub-TeV muon neutrinos from the Northern Sky (−5° < δ < 90°) obtained through a novel event selection method. This search represents a first attempt by IceCube to identify astrophysical neutrino sources in this relatively unexplored energy range. The reconstructed directions and arrival times of neutrino events are used to search for any significant self-correlation in the data set. The data revealed no significant source of transient neutrino emission. This result has been used to construct limits, at timescales ranging from roughly 1 s to 10 days, for generic soft-spectra transients. We also present limits on a specific model of neutrino emission from soft jets in core-collapse supernovae.
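    As an illustration of the basic idea behind a self-correlation search (a hypothetical sketch, not the IceCube analysis, which uses likelihood methods and detector-specific reconstructions), one can count event pairs that are close in both arrival direction and arrival time and compare the count against time-scrambled background realizations; the window sizes and event counts below are invented for the example.

        import numpy as np

        def angular_sep(ra1, dec1, ra2, dec2):
            """Great-circle separation in radians between two sky directions."""
            return np.arccos(np.clip(
                np.sin(dec1) * np.sin(dec2) +
                np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2), -1.0, 1.0))

        rng = np.random.default_rng(0)
        n = 1000                                                  # toy event sample
        ra = rng.uniform(0.0, 2.0 * np.pi, n)                     # right ascension [rad]
        dec = np.arcsin(rng.uniform(np.sin(np.radians(-5)), 1.0, n))  # -5 to 90 deg
        t = rng.uniform(0.0, 350.0 * 86400.0, n)                  # arrival times [s]

        max_sep, max_dt = np.radians(3.0), 10.0 * 86400.0         # 3 deg, 10-day window
        pairs = sum(1 for i in range(n) for j in range(i + 1, n)
                    if abs(t[i] - t[j]) < max_dt
                    and angular_sep(ra[i], dec[i], ra[j], dec[j]) < max_sep)
        print("space-time correlated pairs:", pairs)  # compare with scrambled times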

    Targeted sequencing of lung function loci in chronic obstructive pulmonary disease cases and controls

    Chronic obstructive pulmonary disease (COPD) is the third leading cause of death worldwide; smoking is the main risk factor for COPD, but genetic factors are also relevant contributors. Genome-wide association studies (GWAS) of the lung function measures used in the diagnosis of COPD have identified a number of loci; however, association signals are often broad, and collectively these loci explain only a small proportion of the heritability. In order to examine the association with COPD risk of genetic variants down to low allele frequencies, to aid fine-mapping of association signals, and to explain more of the missing heritability, we undertook a targeted sequencing study in 300 COPD cases and 300 smoking controls for 26 loci previously reported to be associated with lung function. We used a pooled sequencing approach, with 12 pools of 25 individuals each, enabling high-depth (30x) coverage per sample to be achieved. This pooled design maximised sample size and therefore power, but led to challenges during variant calling, since sequencing error rates and minor allele frequencies for rare variants can be very similar. For this reason we employed a rigorous quality-control pipeline for variant detection which included the use of 3 independent calling algorithms. In order to avoid false positive associations, we also developed tests to detect variants with potential batch effects and removed them before undertaking association testing. We tested for the effects of single variants and the combined effect of rare variants within a locus. We followed up the top signals for which data were available (only 67% of the signals from collapsing methods) in 4,249 COPD cases and 11,916 smoking controls from UK Biobank. We provide suggestive evidence for the combined effect of rare variants on COPD risk in TNXB and in sliding windows within MECOM and upstream of HHIP. These findings can lead to an improved understanding of the molecular pathways involved in the development of COPD.
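    To illustrate the collapsing idea (a minimal sketch, not the study's pipeline; the genotype data, allele frequency, and locus size are simulated placeholders), a simple burden test codes each individual by whether they carry any rare allele within a locus and tests that indicator against case/control status:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n_cases, n_controls, n_variants = 300, 300, 40  # mirrors the 300/300 design
        genotypes = rng.binomial(2, 0.005, size=(n_cases + n_controls, n_variants))
        status = np.concatenate([np.ones(n_cases), np.zeros(n_controls)])  # 1 = case

        # Collapse the locus: does the individual carry any rare allele at all?
        carrier = (genotypes.sum(axis=1) > 0).astype(float)
        fit = sm.Logit(status, sm.add_constant(carrier)).fit(disp=0)
        print("burden odds ratio:", np.exp(fit.params[1]), "p-value:", fit.pvalues[1])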

    Soluble CD36 Ectodomain Binds Negatively Charged Diacylglycerol Ligands and Acts as a Co-Receptor for TLR2

    BACKGROUND: Cluster of differentiation 36 (CD36) is a transmembrane glycoprotein involved in many biological processes, such as platelet biology, angiogenesis and the aetiopathology of atherosclerosis and cardiovascular diseases. Toll-like receptors (TLRs) are among the most important receptors of the innate immune system. Their main function is the recognition of conserved structures of microorganisms. This recognition triggers signaling pathways that activate transcription of cytokines and co-stimulatory molecules which participate in the generation of an immune response against microbes. In particular, TLR2 has been shown to recognize a broad range of ligands. Recently, we showed that CD36 serves as a co-receptor for TLR2 and enhances recognition of specific diacylglycerides derived from bacteria. METHODOLOGY/PRINCIPAL FINDINGS: Here, we investigate the mechanism by which CD36 contributes to ligand recognition and activation of the TLR2 signaling pathway. We show that the ectodomain of murine CD36 (mCD36ED) directly interacts with negatively charged diacylglycerol ligands, which explains the specificity and selectivity of CD36 as a TLR2 co-receptor. We also show that mCD36ED amplifies the pro-inflammatory response to lipoteichoic acid in macrophages of wild-type mice and restores the pro-inflammatory response of macrophages from mice deficient in CD36 (oblivious), but not from mice deficient in cluster of differentiation 14 (CD14) (heedless). CONCLUSION/SIGNIFICANCE: These data indicate that the CD36 ectodomain is the only domain relevant for activation of the TLR2 signaling pathway and that CD36 and CD14 have non-redundant roles in loading ligands onto TLR2 in the plasma membrane. The pro-inflammatory role of soluble CD36 may be relevant in the activation of the immune response against pathogens, as well as in the progression of chronic diseases. Therefore, increased levels of soluble forms of CD36, which have been reported in type II diabetic patients, could accelerate atherosclerosis by increasing the pro-inflammatory response to diacylglycerol ligands.

    Zinc Coordination Is Required for and Regulates Transcription Activation by Epstein-Barr Nuclear Antigen 1

    Epstein-Barr Nuclear Antigen 1 (EBNA1) is essential for Epstein-Barr virus to immortalize naïve B-cells. Upon binding a cluster of 20 cognate binding-sites termed the family of repeats, EBNA1 transactivates promoters for EBV genes that are required for immortalization. A small domain termed UR1, 25 amino acids in length, has previously been identified as essential for EBNA1 to activate transcription. In this study, we have elucidated how UR1 contributes to EBNA1's ability to transactivate. We show that zinc is necessary for EBNA1 to activate transcription, and that UR1 coordinates zinc through a pair of essential cysteines contained within it. UR1 dimerizes upon coordinating zinc, indicating that EBNA1 contains a second dimerization interface in its amino-terminus. There is a strong correlation between UR1-mediated dimerization and EBNA1's ability to transactivate cooperatively. Point mutants of EBNA1 that disrupt zinc coordination also prevent self-association and do not activate transcription cooperatively. Further, we demonstrate that UR1 acts as a molecular sensor that regulates the ability of EBNA1 to activate transcription in response to changes in redox and oxygen partial pressure (pO2). Mild oxidative stress mimicking such environmental changes decreases EBNA1-dependent transcription in a lymphoblastoid cell line (LCL). Coincident with this reduction in EBNA1-dependent transcription, reductions are observed in EBNA2 and LMP1 protein levels. Although these changes do not affect LCL survival, treated cells accumulate in G0/G1. These findings are discussed in the context of EBV latency in body compartments that differ strikingly in their pO2 and redox potential.

    Trace elements in hemodialysis patients: a systematic review and meta-analysis

    BACKGROUND: Hemodialysis patients are at risk for deficiency of essential trace elements and excess of toxic trace elements, both of which can affect health. We conducted a systematic review to summarize the existing literature on trace element status in hemodialysis patients. METHODS: All studies which reported relevant data for chronic hemodialysis patients and a healthy control population were eligible, regardless of language or publication status. We included studies which measured at least one of the following elements in whole blood, serum, or plasma: antimony, arsenic, boron, cadmium, chromium, cobalt, copper, fluorine, iodine, lead, manganese, mercury, molybdenum, nickel, selenium, tellurium, thallium, vanadium, and zinc. We calculated differences between hemodialysis patients and controls as the difference in mean trace element level divided by the pooled standard deviation. RESULTS: We identified 128 eligible studies. Available data suggested that levels of cadmium, chromium, copper, lead, and vanadium were higher, and that levels of selenium, zinc, and manganese were lower, in hemodialysis patients compared with controls. Pooled standardized mean differences exceeded 0.8 standard deviation units (a large difference) higher than controls for cadmium, chromium, and vanadium, and lower than controls for selenium, zinc, and manganese. No studies reported data on antimony, iodine, tellurium, or thallium concentrations. CONCLUSION: Average blood levels of biologically important trace elements were substantially different in hemodialysis patients compared with healthy controls. Since both deficiency and excess of trace elements are potentially harmful yet amenable to therapy, the hypothesis that trace element status influences the risk of adverse clinical outcomes is worthy of investigation.
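    The effect size described in the Methods is the standardized mean difference (the difference in group means divided by the pooled standard deviation); a minimal worked sketch follows, with made-up zinc values purely for illustration:

        import math

        def pooled_sd(sd1, n1, sd2, n2):
            """Pooled standard deviation of two groups."""
            return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

        def smd(mean_hd, sd_hd, n_hd, mean_ctrl, sd_ctrl, n_ctrl):
            """Standardized mean difference: (mean difference) / (pooled SD)."""
            return (mean_hd - mean_ctrl) / pooled_sd(sd_hd, n_hd, sd_ctrl, n_ctrl)

        # Hypothetical example: serum zinc (umol/L) lower in hemodialysis patients.
        print(round(smd(10.5, 2.0, 50, 13.0, 2.2, 60), 2))  # -1.18, "large" (>0.8)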