116 research outputs found

    Enabling the transfer of skills and knowledge across classroom and work contexts

    Increasingly, contemporary work means graduates will operate in multiple workplace settings during their careers, catalysing the need for successful transfer of capabilities across diverse contexts. The transfer of skills and knowledge, however, is a complex area of learning theory which is often assumed and lacks empirical analysis. Facilitating transfer is critical for preparing students for effective transition to the workplace. Work Integrated Learning (WIL) provides an opportunity for tertiary education students to ‘practice’ transfer across classroom and work settings. Building on existing scholarship and using a mixed-methods design, this study aimed to explore the nature of transfer across these contexts during WIL, the factors that influence it, and the WIL design principles that optimise it. Survey data were collected from WIL students (N = 151) and interview data from WIL industry supervisors (N = 24) across different disciplines/professions in three universities (Australia and New Zealand). Findings indicate that students practice transfer during WIL, yet often during less complex tasks involving discipline-specific rather than generic skills. WIL thus augments transfer, yet certain program and workplace characteristics enhance student confidence and capabilities in this process, highlighting the need for careful curricula design. Findings also highlight the important role of paid work and volunteering, and emphasise the importance of educators taking a holistic approach to developing students’ transfer ability, drawing on practical and authentic learning in curricular, co-curricular and extra-curricular activities, particularly those that engage industry. Implications for stakeholders are discussed, and strategies identified to enhance skills and knowledge transfer from classrooms to the workplace.

    Certified Computation from Unreliable Datasets

    A wide range of learning tasks require human input in labeling massive data. The collected data, though, are usually of low quality and contain inaccuracies and errors. As a result, modern science and business face the problem of learning from unreliable data sets. In this work, we provide a generic approach that is based on verification of only a few records of the data set to guarantee high-quality learning outcomes for various optimization objectives. Our method identifies small sets of critical records and verifies their validity. We show that many problems only need poly(1/ε) verifications to ensure that the output of the computation is at most a factor of (1 ± ε) away from the truth. For any given instance, we provide an instance-optimal solution that verifies the minimum possible number of records to approximately certify correctness. Using this instance-optimal formulation of the problem, we prove our main result: "every function that satisfies some Lipschitz continuity condition can be certified with a small number of verifications". We show that the required Lipschitz continuity condition is satisfied even by some NP-complete problems, which illustrates the generality and importance of this theorem. In case this certification step fails, an invalid record will be identified. Removing these records and repeating until success guarantees that the result will be accurate and will depend only on the verified records. Surprisingly, as we show, for several computation tasks more efficient methods are possible. These methods always guarantee that the produced result is not affected by the invalid records, since any invalid record that affects the output will be detected and verified.
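    The verify-and-repeat loop described in the abstract can be sketched for a simple objective such as the mean. This is an illustrative reading, not the paper's actual algorithm: the function names, the influence-based ordering of which records to check first, and the tolerance rule are assumptions.

```python
def certify_mean(records, verify, eps):
    """Sketch of a verify-and-repeat certification loop for the mean.

    records: reported numeric values, possibly containing invalid entries
    verify:  expensive oracle, verify(i) -> True iff record i is valid
    eps:     target relative accuracy of the certified result

    Assumed heuristic: check records in decreasing order of influence on
    the current estimate; once the most influential unverified record is
    within the (1 +/- eps) tolerance, so are all remaining ones.
    """
    data = list(enumerate(records))
    while True:
        n = len(data)
        est = sum(v for _, v in data) / n
        by_influence = sorted(data, key=lambda iv: abs(iv[1] - est), reverse=True)
        invalid = None
        for rec in by_influence:
            if abs(rec[1] - est) <= eps * abs(est) * n:
                break  # remaining records cannot move the mean past tolerance
            if not verify(rec[0]):
                invalid = rec  # certification failed: an invalid record found
                break
        if invalid is None:
            return est  # certified: result depends only on verified records
        data.remove(invalid)  # drop the invalid record and repeat
```

    For example, with records [1, 1, 1, 1, 1000] where only the last record is invalid, a single verification exposes the outlier and the loop re-certifies the mean of the remaining records.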

    Talking and thinking together at Key Stage 1

    In this paper, we describe an innovative approach to promoting effective classroom-based groupwork and the development of children's speaking and listening at Key Stage 1. This approach, known as Thinking Together, was initially developed for use with Key Stage 2 children. The work reported here explains how this approach has now been applied to the teaching of speaking and listening at Key Stage 1. The approach is founded on contemporary sociocultural theory and research. At the heart of the Thinking Together approach is a concern to help children build and develop their knowledge and understanding together, through enabling them to practise and develop ways of reasoning with language.

    A spatio-temporal framework for modelling wastewater concentration during the COVID-19 pandemic

    The potential utility of wastewater-based epidemiology as an early warning tool has been explored widely across the globe during the current COVID-19 pandemic. Methods to detect the presence of SARS-CoV-2 RNA in wastewater were developed early in the pandemic, and extensive work has been conducted to evaluate the relationship between viral concentration and COVID-19 case numbers at the catchment areas of sewage treatment works (STWs) over time. However, no attempt has been made to develop a model that predicts wastewater concentration at fine spatio-temporal resolutions covering an entire country, a necessary step towards using wastewater monitoring for the early detection of local outbreaks. We consider weekly averages of flow-normalised viral concentration, reported as the number of SARS-CoV-2 N1 gene copies per litre (gc/L) of wastewater, available at 303 STWs over the period between 1 June 2021 and 30 March 2022. We specify a spatially continuous statistical model that quantifies the relationship between weekly viral concentration and a collection of covariates covering socio-demographics, land cover and virus-associated genomic characteristics at STW catchment areas, while accounting for spatial and temporal correlation. We evaluate the model’s predictive performance at the catchment level through 10-fold cross-validation. We predict the weekly viral concentration at the population-weighted centroid of the 32,844 lower super output areas (LSOAs) in England, then aggregate these LSOA predictions to the Lower Tier Local Authority (LTLA) level, a geography that is more relevant to public health policy-making. We also use the model outputs to quantify the probability of local changes of direction (increases or decreases) in viral concentration over short periods (e.g. two consecutive weeks). The proposed statistical framework can predict SARS-CoV-2 viral concentration in wastewater at high spatio-temporal resolution across England. Additionally, the probabilistic quantification of local changes can be used as an early warning tool for public health surveillance.
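    The aggregation step from LSOA-level predictions up to LTLAs amounts to a population-weighted average within each LTLA. A minimal sketch follows; the area codes, populations and predicted concentrations are hypothetical placeholder values, and the schema is an assumption, not the paper's actual data structure.

```python
from collections import defaultdict

# Hypothetical inputs: LSOA code -> (parent LTLA code, population,
# predicted log10 SARS-CoV-2 N1 gene copies per litre).
lsoa_pred = {
    "E01000001": ("E06000001", 1500, 4.2),
    "E01000002": ("E06000001", 2500, 4.8),
    "E01000003": ("E06000002", 1800, 3.9),
}

def aggregate_to_ltla(pred):
    """Population-weighted mean of LSOA predictions within each LTLA."""
    totals = defaultdict(lambda: [0.0, 0.0])  # ltla -> [weighted sum, population]
    for ltla, pop, conc in pred.values():
        totals[ltla][0] += pop * conc
        totals[ltla][1] += pop
    return {ltla: wsum / pop for ltla, (wsum, pop) in totals.items()}
```

    Weighting by population rather than averaging LSOAs equally means the LTLA value reflects the concentration the typical resident's area is predicted to see, which is the quantity relevant to public health decision-making.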

    Toward Defining the Preclinical Stages of Alzheimer's Disease: Recommendations from the National Institute on Aging-Alzheimer's Association Workgroups on Diagnostic Guidelines for Alzheimer's Disease

    The pathophysiological process of Alzheimer's disease (AD) is thought to begin many years before the diagnosis of AD dementia. This long "preclinical" phase of AD would provide a critical opportunity for therapeutic intervention; however, we need to further elucidate the link between the pathological cascade of AD and the emergence of clinical symptoms. The National Institute on Aging and the Alzheimer's Association convened an international workgroup to review the biomarker, epidemiological, and neuropsychological evidence, and to develop recommendations to determine the factors that best predict the risk of progression from "normal" cognition to mild cognitive impairment and AD dementia. We propose a conceptual framework and operational research criteria, based on the prevailing scientific evidence to date, to test and refine these models with longitudinal clinical research studies. These recommendations are solely intended for research purposes and do not have any clinical implications at this time. It is hoped that these recommendations will provide a common rubric to advance the study of preclinical AD, and ultimately, aid the field in moving toward earlier intervention at a stage of AD when some disease-modifying therapies may be most efficacious.

    clag9 Is Not Essential for PfEMP1 Surface Expression in Non-Cytoadherent Plasmodium falciparum Parasites with a Chromosome 9 Deletion

    BACKGROUND: The expression of the clonally variant virulence factor PfEMP1 mediates the sequestration of Plasmodium falciparum infected erythrocytes in the host vasculature and contributes to chronic infection. Non-cytoadherent parasites with a chromosome 9 deletion lack clag9, a gene linked to cytoadhesion in previous studies. Here we present new clag9 data that challenge this view and show that the non-cytoadherence phenotype is linked to the surface expression of a non-functional PfEMP1. METHODOLOGY/PRINCIPAL FINDINGS: Loss of adhesion in P. falciparum D10, a parasite line with a large chromosome 9 deletion, was investigated. Surface iodination analysis of non-cytoadherent D10 parasites and COS-7 surface expression of the CD36-binding PfEMP1 CIDR1α domain were performed and showed that these parasites express an unusual trypsin-resistant, non-functional PfEMP1 at the erythrocyte surface. However, the CIDR1α domain of this var gene expressed in COS-7 cells showed strong binding to CD36. Atomic Force Microscopy showed a slightly modified D10 knob morphology compared to adherent parasites. Trafficking of PfEMP1 and KAHRP remained functional in D10. We link the non-cytoadherence phenotype to a chromosome 9 breakage and healing event resulting in the loss of 25 subtelomeric genes including clag9. In contrast to previous studies, knockout of the clag9 gene from 3D7 did not interfere with parasite adhesion to CD36. CONCLUSIONS/SIGNIFICANCE: Our data show the surface expression of non-functional PfEMP1 in D10, strongly indicating that genes other than clag9 deleted from chromosome 9 are involved in this virulence process, possibly via post-translational modifications.

    Exhaled Aerosol Transmission of Pandemic and Seasonal H1N1 Influenza Viruses in the Ferret

    Person-to-person transmission of influenza viruses occurs by contact (direct and fomites) and non-contact (droplet and small particle aerosol) routes, but the quantitative dynamics and relative contributions of these routes are incompletely understood. The transmissibility of influenza strains estimated from secondary attack rates in closed human populations is confounded by large variations in population susceptibilities. An experimental method to phenotype strains for transmissibility in an animal model could provide relative efficiencies of transmission. We developed an experimental method to detect exhaled viral aerosol transmission between unanesthetized infected and susceptible ferrets, measured aerosol particle size and number, and quantified the viral genomic RNA in the exhaled aerosol. During brief 3-hour exposures to exhaled viral aerosols in airflow-controlled chambers, three pandemic 2009 H1N1 strains were frequently transmitted to susceptible ferrets. In contrast, one seasonal H1N1 strain was not transmitted despite higher levels of viral RNA in the exhaled aerosol. Among the three pandemic strains, the two strains causing weight loss and illness in the intranasally infected ‘donor’ ferrets were transmitted less efficiently from the donor than the strain causing no detectable illness, suggesting that the mucosal inflammatory response may attenuate viable exhaled virus. Although exhaled viral RNA remained constant, transmission efficiency diminished from day 1 to day 5 after donor infection. Thus, aerosol transmission between ferrets may be dependent on at least four characteristics of virus-host relationships: the level of exhaled virus, infectious particle size, mucosal inflammation, and viral replication efficiency in susceptible mucosa.

    Considerations on the use of video playbacks as visual stimuli: The Lisbon workshop consensus

    This paper is the consensus of a workshop that critically evaluated the utility and problems of video playbacks as stimuli in studies of visual behavior. We suggest that video playback is probably suitable for studying motion, shape, texture, size, and brightness. Studying color is problematic because video systems are specifically designed to match human color perception; animals whose color vision differs from ours will therefore experience the reproduced colors differently. Another potentially problematic limitation of video images is that they lack depth cues derived from stereopsis, accommodation, and motion parallax. Nonetheless, when used appropriately, video playback allows an unprecedented range of questions in visual communication to be addressed. It is important to note that most of the potential limitations of video playback are not unique to this technique but are relevant to all studies of visual signaling in animals.