
    DNA looping by two-site restriction endonucleases: heterogeneous probability distributions for loop size and unbinding force

    Proteins interacting at multiple sites on DNA via looping play an important role in many fundamental biochemical processes. Restriction endonucleases that must bind at two recognition sites for efficient activity are a useful model system for studying such interactions. Here we used single-molecule DNA manipulation to study sixteen known or suspected two-site endonucleases. In eleven cases (BpmI, BsgI, BspMI, Cfr10I, Eco57I, EcoRII, FokI, HpaII, NarI, Sau3AI and SgrAI) we found that substitution of Ca²⁺ for Mg²⁺ blocked cleavage and enabled us to observe stable DNA looping. Forced disruption of these loops allowed us to measure the frequency of looping and probability distributions for loop size and unbinding force for each enzyme. In four cases we observed bimodal unbinding force distributions, indicating conformational heterogeneity and/or complex binding energy landscapes. Measured unlooping events ranged in size from 7 to 7500 bp, and the most probable loop size ranged from less than 75 bp to nearly 500 bp, depending on the enzyme. In most cases the size distributions agreed much more closely with theoretical models that postulate sharp DNA kinking than with classical models of DNA elasticity. Our findings indicate that DNA looping varies strongly with the specific protein and does not depend solely on the mechanical properties of DNA.
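    The qualitative contrast between kinked and classical elasticity models can be illustrated with a toy Boltzmann-weight calculation. The Python sketch below is only a sketch under assumed parameters (the persistence length, kink energy and closure-entropy scaling are round illustrative numbers, not the models fitted in the study); it shows how allowing a sharp kink shifts the most probable loop size toward smaller loops.

```python
# Toy comparison (illustrative assumptions only, not the authors' fitted
# models): relative loop-size probabilities for a smoothly bent DNA loop
# versus a loop whose bend is partly absorbed by one sharp kink.
import numpy as np

LP = 150.0                        # persistence length in bp (~50 nm), assumed
sizes = np.arange(50, 4000, 10)   # candidate loop sizes in bp

# Classical elasticity: bending DNA into a circle of length L costs
# E = 2*pi^2 * LP / L (in units of k_B*T), so short loops are punished.
e_smooth = 2 * np.pi**2 * LP / sizes

# Kinked loop: assume a sharp protein-induced kink relaxes half of the
# smooth bending energy at a fixed kink cost E_K (assumed value).
E_K = 5.0
e_kink = np.pi**2 * LP / sizes + E_K

def prob(energy):
    # Boltzmann weight times an ~L^-3/2 ideal-chain closure-entropy factor.
    w = sizes**-1.5 * np.exp(-energy)
    return w / w.sum()

p_smooth, p_kink = prob(e_smooth), prob(e_kink)
print("most probable loop, smooth model:", sizes[p_smooth.argmax()], "bp")
print("most probable loop, kinked model:", sizes[p_kink.argmax()], "bp")
```

    In this toy picture the kink absorbs half the smooth bending energy at a fixed cost, which is enough to move the distribution's peak to roughly half the loop size; the real models compared in the paper are more detailed but make the same qualitative prediction.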

    A narrative review of adaptive testing and its application to medical education

    Adaptive testing has a long but largely unrecognized history. The advent of computer-based testing has created new opportunities to incorporate adaptive testing into conventional programmes of study. Relatively recently, software has been developed that can automate the delivery of summative assessments that adapt by difficulty or by content. Both types of adaptive testing require a large, suitably quality-assured item bank. Adaptive testing by difficulty enables more reliable evaluation of individual candidate performance, although at the expense of transparency in decision making and with the requirement of unidirectional navigation. Adaptive testing by content reduces compensation and enables targeted individual support, assuring performance in all the required outcomes, although at the expense of discovery learning. With both types of adaptive testing, candidates are presented with different sets of items, and there is the potential for this to be perceived as unfair. However, when candidates of different abilities receive the same items, they may receive too many they can answer with ease, or too many that are too difficult; both situations may be considered unfair, as neither gives candidates the opportunity to demonstrate what they know. Adapting by difficulty addresses this. Similarly, when everyone is presented with the same items but answers different items incorrectly, withholding individualized support and the opportunity to revisit content previously answered incorrectly could also be considered unfair; adapting by content addresses that point. We review the educational rationale behind the evolution of adaptive testing and consider its inherent strengths and limitations. We explore the continuous pursuit of improved examination methodology and how software can facilitate personalized assessment. We highlight how this can serve as a catalyst for learning and for the refinement of curricula, fostering engagement of learner and educator alike.
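    To make "adapting by difficulty" concrete, the sketch below simulates a minimal item-selection loop under a Rasch (1PL) model: each item is drawn from the bank to match the current ability estimate, which is updated after every response. The item bank, step size and update rule are illustrative assumptions, not a description of any particular assessment software.

```python
# Minimal sketch of adaptive testing by difficulty under a Rasch (1PL)
# model. The bank, the true ability and the update rule are all assumed
# values for illustration.
import math
import random

random.seed(1)
bank = [{"id": i, "b": random.uniform(-3, 3), "used": False} for i in range(200)]

def p_correct(theta, b):
    """Rasch model: probability that ability theta answers item b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

true_theta = 1.2   # the candidate's (unknown) ability
theta = 0.0        # running estimate, started at the bank average

for step in range(20):
    # Pick the unused item whose difficulty best matches the estimate.
    item = min((i for i in bank if not i["used"]),
               key=lambda i: abs(i["b"] - theta))
    item["used"] = True
    correct = random.random() < p_correct(true_theta, item["b"])
    # Gradient-style update: move the estimate toward the observed evidence.
    theta += 0.4 * ((1.0 if correct else 0.0) - p_correct(theta, item["b"]))

print(f"estimated ability {theta:.2f} vs true {true_theta:.2f}")
```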

    A general method for manipulating DNA sequences from any organism with optical tweezers

    Mechanical manipulation of single DNA molecules can provide novel information about DNA properties and protein–DNA interactions. Here we describe and characterize a useful method for manipulating desired DNA sequences from any organism with optical tweezers. Molecules are produced from either genomic or cloned DNA by PCR using labeled primers and are tethered between two optically trapped microspheres. We demonstrate that human, insect, plant, bacterial and viral sequences ranging from ∼10 to 40 kilobasepairs can be manipulated. Force-extension measurements show that these constructs exhibit uniform elastic properties in accord with the expected contour lengths for the targeted sequences. Detailed protocols for preparing and manipulating these molecules are presented, and tethering efficiency is characterized as a function of DNA concentration, ionic strength and pH. Attachment strength is characterized by measuring the unbinding time as a function of applied force. An alternative stronger attachment method using an amino–carboxyl linkage, which allows for reliable DNA overstretching, is also described.
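    Force-extension checks of this kind are commonly made against the worm-like chain model. The sketch below applies the standard Marko-Siggia interpolation formula with typical literature parameters (assumed here for illustration, not taken from the paper) to estimate the force at a given fractional extension of a construct's expected contour length.

```python
# A sketch of the worm-like chain (Marko-Siggia) interpolation formula,
# often used to check force-extension data against an expected contour
# length. Parameter values are typical literature numbers, assumed here
# for illustration.

KBT = 4.11           # thermal energy at room temperature, pN*nm
LP = 50.0            # dsDNA persistence length, nm
RISE = 0.34          # helical rise, nm per basepair

def wlc_force(extension_nm, contour_nm, lp=LP):
    """Force (pN) needed to hold a WLC at a given end-to-end extension."""
    x = extension_nm / contour_nm            # fractional extension
    return (KBT / lp) * (0.25 / (1.0 - x)**2 - 0.25 + x)

# Expected contour length of a 10 kbp construct, and the force at 90% of it.
contour = 10_000 * RISE                      # ~3400 nm
print(f"force at 90% extension: {wlc_force(0.9 * contour, contour):.1f} pN")
```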

    Multi-institutional evaluation of a Pareto navigation guided automated radiotherapy planning solution for prostate cancer

    Background: Current automated planning solutions are calibrated using trial and error or machine learning on historical datasets. Neither method allows for the intuitive exploration of differing trade-off options during calibration, which may aid in ensuring automated solutions align with clinical preference. Pareto navigation provides this functionality and offers a potential calibration alternative. The purpose of this study was to validate an automated radiotherapy planning solution with a novel multi-dimensional Pareto navigation calibration interface across two external institutions for prostate cancer. Methods: The implemented ‘Pareto Guided Automated Planning’ (PGAP) methodology was developed in RayStation using scripting and consisted of a Pareto navigation calibration interface built upon a ‘Protocol Based Automatic Iterative Optimisation’ planning framework. 30 previous patients were randomly selected by each institution (IA and IB), 10 for calibration and 20 for validation. Utilising the Pareto navigation interface, automated protocols were calibrated to the institutions’ clinical preferences. A single automated plan (VMATAuto) was generated for each validation patient, with plan quality compared against the previously treated clinical plan (VMATClinical) both quantitatively, using a range of DVH metrics, and qualitatively, through blind review at the external institution. Results: PGAP led to marked improvements across the majority of rectal dose metrics, with Dmean reduced by 3.7 Gy and 1.8 Gy for IA and IB respectively (p < 0.001). For bladder, results were mixed, with low and intermediate dose metrics reduced for IB but increased for IA; these differences, whilst statistically significant (p < 0.05), were small and not considered clinically relevant. The reduction in rectum dose was not at the expense of PTV coverage (D98% was generally improved with VMATAuto) but was somewhat detrimental to PTV conformality. The prioritisation of rectum sparing over conformality was, however, aligned with preferences expressed during calibration and was a key driver in both institutions demonstrating a clear preference for VMATAuto, with 31/40 plans considered superior to VMATClinical upon blind review. Conclusions: PGAP enabled intuitive adaptation of automated protocols to an institution’s planning aims and yielded plans more congruent with the institution’s clinical preference than the locally produced manual clinical plans.
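    The idea behind Pareto navigation can be sketched in a few lines: generate candidate plans, keep only the non-dominated ones, and expose a single preference weight that walks along the resulting front. The toy example below uses two made-up objectives (rectum mean dose and a conformity index) and random candidates; it illustrates the navigation concept only, not the PGAP implementation or the RayStation scripting API.

```python
# A toy two-objective Pareto navigation sketch: candidate "plans" trade a
# rectum mean dose (lower is better) against a conformity index (higher is
# better). Objective names, ranges and the scalarisation are illustrative
# assumptions.
import random

random.seed(7)
plans = [(random.uniform(15, 40), random.uniform(0.6, 1.0)) for _ in range(200)]

def pareto_front(plans):
    """Keep only plans that no other plan beats on both objectives."""
    return sorted((dose, ci) for dose, ci in plans
                  if not any(d <= dose and c >= ci and (d, c) != (dose, ci)
                             for d, c in plans))

front = pareto_front(plans)

def navigate(front, w):
    """w in [0, 1]: 0 favours conformity, 1 favours rectum sparing."""
    doses = [d for d, _ in front]
    cis = [c for _, c in front]
    d_span = (max(doses) - min(doses)) or 1.0
    c_span = (max(cis) - min(cis)) or 1.0

    def badness(plan):
        # Normalise both objectives to [0, 1] and blend; lower is better.
        d_n = (plan[0] - min(doses)) / d_span
        c_n = (plan[1] - min(cis)) / c_span
        return w * d_n + (1 - w) * (1.0 - c_n)

    return min(front, key=badness)

for w in (0.2, 0.5, 0.8):
    dose, ci = navigate(front, w)
    print(f"w={w}: rectum Dmean {dose:.1f} Gy, conformity {ci:.2f}")
```

    Sliding the weight w is the "navigation" step: each setting lands on a different non-dominated plan, which is what lets a calibrating institution express its preferred trade-off directly rather than by trial and error.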

    Rainfall intensity and catchment size control storm runoff in a gullied blanket peatland

    Upland blanket peat is widespread in the headwaters of UK catchments, but much of it has been degraded through atmospheric pollution, vegetation change and erosion. Runoff generation in these headwaters is an important element of downstream flood risk, and these areas are increasingly the focus of interventions to restore the peat ecosystem and potentially mitigate downstream flooding. Here we use a series of multivariate analysis techniques to examine controls on storm runoff behavior within and between ten blanket peat catchments, all within 5 km of one another and ranging in size from 0.2 to 3.9 ha. We find that: 1) for all ten catchments, rainfall intensity is the dominant driver of both the magnitude and timing of peak discharge, and total and antecedent rainfall are important for peak discharge only in small storms; 2) there is considerable inter-catchment variability in runoff coefficient, lag time, peak runoff, and their predictability from rainfall; however, 3) a significant fraction of this inter-catchment variability can be explained by catchment characteristics, particularly catchment area; and 4) catchment controls on peak discharge and runoff coefficient for small storms highlight the importance of storage and connectivity, while those for large events suggest that surface flow attenuation dominates. Together these results suggest a switching rainfall-runoff behavior in which catchment storage, connectivity and antecedent conditions control small discharge peaks but become increasingly irrelevant for larger storms. In the context of Natural Flood Management, our results suggest that expanding depression storage (e.g. distributed shallow water pools), in addition to existing restoration methods, could increase the range of storms within which connectivity and storage remain important, and that for larger storms, measures which target surface runoff velocities are likely to be important.
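    The per-storm metrics discussed above are straightforward to compute from paired rainfall-discharge series. The sketch below derives a runoff coefficient and lag time from synthetic data; the catchment area, time step and event shape are illustrative assumptions, not the study's processing pipeline.

```python
# Minimal sketch of two per-storm metrics: runoff coefficient (event runoff
# depth / event rainfall depth) and lag time (peak rainfall to peak
# discharge). The data below are synthetic and the parameters assumed.
import numpy as np

DT_MIN = 5                      # time step of the series, minutes
AREA_M2 = 2.0 * 10_000          # a 2 ha catchment, in m^2

t = np.arange(0, 48)                                  # 48 five-minute steps
rain_mm = np.where((t >= 5) & (t < 11), 4.0, 0.0)     # 30 min burst, 4 mm/step
q_m3s = 0.02 * np.exp(-0.5 * ((t - 14) / 4.0) ** 2)   # delayed discharge peak

# Runoff coefficient: total discharge volume converted to a depth over the
# catchment, divided by total rainfall depth.
runoff_mm = q_m3s.sum() * DT_MIN * 60 / AREA_M2 * 1000.0
coeff = runoff_mm / rain_mm.sum()

# Lag time: peak rainfall step to peak discharge step.
lag_min = (q_m3s.argmax() - rain_mm.argmax()) * DT_MIN

print(f"runoff coefficient {coeff:.2f}, lag time {lag_min} min")
```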

    Wood machining with a focus on French research in the last 50 years


    Treatment of cutaneous staphylococcal infections and eczema in dogs with penicillin therapy alone or combined with specific anatoxin therapy

    Richou Rémy, Millin J. Treatment of cutaneous staphylococcal infections and eczema in dogs with penicillin therapy alone or combined with specific anatoxin therapy. In: Bulletin de l'Académie Vétérinaire de France, vol. 100, no. 6, 1947, pp. 247-250.

    Unpacking the efficacy of Reading to Learn using Cognitive Load Theory

    This paper synthesises the key findings of two separate past studies conducted by the same authors, which sought to assess the efficacy of the Reading to Learn (RtL) literacy intervention on students' academic writing performance. Both studies of RtL were implemented in response to growing concerns about the academic under-preparedness of undergraduate students at universities across South Africa. The first study aimed to support mostly first-generation, first-year English Additional Language (EAL) learners in their transition to higher education. The second study aimed to support EAL students' academic writing development at senior secondary school level, prior to the school-to-university transition. In both studies, the cohorts of students examined came from low socioeconomic communities, where linguistic marginalisation arguably imposes significant barriers to successful university completion. The novel contribution of this paper is to use a Cognitive Load Theory lens to explicate why RtL might improve the academic writing skills of under-prepared students making the transition to university.