
    From evidence-base to practice: implementation of the Nurse Family Partnership programme in England

    The aims of this article are to highlight the issues relevant to the implementation of a rigorously evidence-based programme of support, the Nurse Family Partnership programme, into a national system of care. Methods comprised semi-structured interviews with families receiving the programme in the first 10 sites, with nursing staff, with members of the central team guiding the initiative, and with other professionals. Analyses of data collected during programme delivery evaluate fidelity of delivery. The results indicate that the programme is perceived in a positive light and take-up is high, with delivery close to the stated US objectives. Issues pertaining to sustainability are highlighted, in particular local concerns about cost set against long-term rather than immediate gains. However, local investment is predominantly strong, with creative methods being planned for the future. Overall, the study shows that it is possible to deliver a targeted evidence-based programme within an NHS system of care.

    Electronic transport in polycrystalline graphene

    Most materials available in macroscopic quantities are polycrystalline. Graphene, a recently discovered two-dimensional form of carbon with strong potential for replacing silicon in future electronics, is no exception. There is growing evidence of the polycrystalline nature of graphene samples obtained using various techniques. Grain boundaries, intrinsic topological defects of polycrystalline materials, are expected to dramatically alter electronic transport in graphene. Here, we develop a theory of charge-carrier transmission through grain boundaries composed of a periodic array of dislocations in graphene, based on the momentum conservation principle. Depending on the grain boundary structure, we find two distinct transport behaviours: either high transparency or perfect reflection of charge carriers over remarkably large energy ranges. First-principles quantum transport calculations are used to verify and further investigate this striking behaviour. Our study sheds light on the transport properties of large-area graphene samples. Furthermore, purposeful engineering of periodic grain boundaries with tunable transport gaps would allow charge currents to be controlled without the need to introduce bulk band gaps in otherwise semimetallic graphene. The proposed approach can be regarded as a means towards building practical graphene electronics. (Comment: accepted in Nature Materials)
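
    The momentum-conservation argument admits a compact statement. Below is a minimal sketch of the selection rule, assuming only that the boundary is periodic with period d along its line; the notation is chosen here for illustration and is not necessarily the paper's.

        % Sketch of the selection rule behind the two transport regimes,
        % assuming a grain boundary that is periodic (period d) along its line.
        % Translational symmetry conserves the momentum component parallel to
        % the boundary up to a boundary reciprocal-lattice vector:
        \[
          k^{\text{out}}_{\parallel} \;=\; k^{\text{in}}_{\parallel} + \frac{2\pi n}{d},
          \qquad n \in \mathbb{Z}.
        \]
        % A carrier at energy E is transmitted only if the opposite grain hosts
        % a propagating state with the same E and a compatible k_parallel. If
        % the projections of the two grains' Dirac cones onto k_parallel are
        % disjoint over an energy window, the boundary reflects perfectly in
        % that window, which is one way to read the "transport gap" above.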

    Single Gene Deletions of Orexin, Leptin, Neuropeptide Y, and Ghrelin Do Not Appreciably Alter Food Anticipatory Activity in Mice

    Timing activity to match resource availability is a widely conserved ability in nature. Scheduled feeding of a limited amount of food induces increased activity prior to feeding time in animals as diverse as fish and rodents. Typically, food anticipatory activity (FAA) is induced by restricted feeding (RF), in which unlimited food access is restricted to several hours in the middle of the light cycle, a time of day when rodents are not normally active. We compared this model to calorie restriction (CR), giving the mice 60% of their normal daily calorie intake at the same time each day. Measurement of body temperature and home cage behaviors suggests that the RF and CR models are very similar, but CR has the advantage of a clearly defined food intake and a more stable mean body temperature. Using the CR model, we then attempted to verify the published result that orexin deletion diminishes FAA, but observed little to no diminution in the response to CR and, surprisingly, found that orexin KO mice are refractory to body weight loss on a CR diet. Next we tested the orexigenic peptides neuropeptide Y (NPY) and ghrelin and the anorexigenic hormone leptin using mouse mutants. NPY deletion did not alter the behavioral or physiological response to CR. Leptin deletion impaired FAA in terms of some activity measures, such as walking and rearing, but did not substantially diminish hanging behavior preceding feeding time, suggesting that leptin knockout mice do anticipate daily meal time but do not manifest the full spectrum of activities that typify FAA. Ghrelin knockout mice do not have impaired FAA on a CR diet. Collectively, these results suggest that the individual hormones and neuropeptides tested do not regulate FAA by acting individually, but this does not rule out the possibility of their concerted action in mediating FAA.

    Association between proton pump inhibitor therapy and clostridium difficile infection: a contemporary systematic review and meta-analysis.

    Introduction: Emerging epidemiological evidence suggests that proton pump inhibitor (PPI) acid-suppression therapy is associated with an increased risk of Clostridium difficile infection (CDI). Methods: Ovid MEDLINE, EMBASE, ISI Web of Science, and Scopus were searched from 1990 to January 2012 for analytical studies that reported an adjusted effect estimate of the association between PPI use and CDI. We performed random-effects meta-analyses and used the GRADE framework to interpret the findings. Results: We identified 47 eligible citations (37 case-control and 14 cohort studies) with 51 corresponding effect estimates. The pooled OR was 1.65 (95% CI 1.47 to 1.85; I² = 89.9%), with evidence of publication bias suggested by a contour funnel plot. A novel regression-based method was used to adjust for publication bias and resulted in an adjusted pooled OR of 1.51 (95% CI 1.26 to 1.83). In a speculative analysis that assumes this association is causal, and based on published baseline CDI incidence, the risk of CDI would be very low in the general population taking PPIs, with an estimated number needed to harm (NNH) of 3925 at 1 year. Conclusions: In this rigorously conducted systematic review and meta-analysis, we found very low quality evidence (GRADE class) for an association between PPI use and CDI that does not support a cause-effect relationship.
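
    To make the speculative NNH figure concrete, here is a back-of-the-envelope sketch in Python. The baseline incidence is an assumed value chosen for illustration (the paper's exact input is not stated above), and the rare-outcome approximation OR ≈ RR is an assumption.

        # Back-of-the-envelope NNH from a pooled OR. Assumes the outcome is
        # rare enough that the OR approximates the relative risk; the baseline
        # CDI incidence below is an illustrative assumption, not the paper's.
        def nnh(baseline_risk: float, odds_ratio: float) -> float:
            """Number needed to harm = 1 / absolute risk increase."""
            risk_exposed = baseline_risk * odds_ratio  # rare-outcome approximation
            return 1.0 / (risk_exposed - baseline_risk)

        # An assumed baseline incidence of ~0.039% per year together with the
        # pooled OR of 1.65 lands near the reported NNH of 3925.
        print(round(nnh(baseline_risk=0.00039, odds_ratio=1.65)))  # ~3945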

    Planck intermediate results. XLI. A map of lensing-induced B-modes

    The secondary cosmic microwave background (CMB) B-modes stem from the post-decoupling distortion of the polarization E-modes due to the gravitational lensing effect of large-scale structures. These lensing-induced B-modes constitute both a valuable probe of the dark matter distribution and an important contaminant for the extraction of the primary CMB B-modes from inflation. Planck provides accurate nearly all-sky measurements of both the polarization E-modes and the integrated mass distribution via the reconstruction of the CMB lensing potential. By combining these two data products, we have produced an all-sky template map of the lensing-induced B-modes using a real-space algorithm that minimizes the impact of sky masks. The cross-correlation of this template with an observed (primordial and secondary) B-mode map can be used to measure the lensing B-mode power spectrum at multipoles up to 2000. In particular, when cross-correlating with the B-mode contribution directly derived from the Planck polarization maps, we obtain a lensing-induced B-mode power spectrum measurement at a significance level of 12σ, which agrees with the theoretical expectation derived from the Planck best-fit ΛCDM model. This unique nearly all-sky secondary B-mode template, which includes the lensing-induced information from intermediate to small (10 ≲ ℓ ≲ 1000) angular scales, is delivered as part of the Planck 2015 public data release. It will be particularly useful for experiments searching for primordial B-modes, such as BICEP2/Keck Array or LiteBIRD, since it will enable an estimate to be made of the lensing-induced contribution to the measured total CMB B-modes. (Comment: 20 pages, 12 figures; accepted for publication in A&A. The B-mode map is part of the PR2-2015 Cosmology Products, available as Lensing Products in the Planck Legacy Archive http://pla.esac.esa.int/pla/#cosmology and described in the 'Explanatory Supplement' https://wiki.cosmos.esa.int/planckpla2015/index.php/Specially_processed_maps#2015_Lensing-induced_B-mode_ma)
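
    For orientation, the relation such a template exploits can be written down at leading order in the lensing potential. The flat-sky expression below is the standard first-order result, not the paper's actual real-space pipeline.

        % Leading-order (flat-sky) lensing B-mode in terms of the unlensed
        % E-modes and the lensing potential phi; shown for orientation only,
        % since the Planck template is built with a real-space algorithm.
        \[
          \tilde{B}(\boldsymbol{\ell}) =
          \int \frac{\mathrm{d}^2\boldsymbol{\ell}'}{(2\pi)^2}\,
          \bigl[\boldsymbol{\ell}' \cdot (\boldsymbol{\ell}-\boldsymbol{\ell}')\bigr]\,
          \phi(\boldsymbol{\ell}-\boldsymbol{\ell}')\,
          E(\boldsymbol{\ell}')\,
          \sin 2(\varphi_{\boldsymbol{\ell}'}-\varphi_{\boldsymbol{\ell}})
        \]
        % varphi_l denotes the azimuthal angle of the multipole vector l;
        % combining measured E-modes with the reconstructed phi in this way
        % yields a template for the lensing-induced B-modes.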

    Systematic review and meta-analysis of the diagnostic accuracy of ultrasonography for deep vein thrombosis

    Background: Ultrasound (US) has largely replaced contrast venography as the definitive diagnostic test for deep vein thrombosis (DVT). We aimed to derive a definitive estimate of the diagnostic accuracy of US for clinically suspected DVT and to identify study-level factors that might predict accuracy. Methods: We undertook a systematic review, meta-analysis and meta-regression of diagnostic cohort studies that compared US to contrast venography in patients with suspected DVT. We searched Medline, EMBASE, CINAHL, Web of Science, Cochrane Database of Systematic Reviews, Cochrane Controlled Trials Register, Database of Reviews of Effectiveness, the ACP Journal Club, and citation lists (1966 to April 2004). Random-effects meta-analysis was used to derive pooled estimates of sensitivity and specificity. Random-effects meta-regression was used to identify study-level covariates that predicted diagnostic performance. Results: We identified 100 cohorts comparing US to venography in patients with suspected DVT. Overall sensitivity (95% confidence interval) was 94.2% (93.2 to 95.0) for proximal DVT and 63.5% (59.8 to 67.0) for distal DVT; specificity was 93.8% (93.1 to 94.4). Duplex US had pooled sensitivity of 96.5% (95.1 to 97.6) for proximal DVT and 71.2% (64.6 to 77.2) for distal DVT, and specificity of 94.0% (92.8 to 95.1). Triplex US had pooled sensitivity of 96.4% (94.4 to 97.1) for proximal DVT and 75.2% (67.7 to 81.6) for distal DVT, and specificity of 94.3% (92.5 to 95.8). Compression US alone had pooled sensitivity of 93.8% (92.0 to 95.3) for proximal DVT and 56.8% (49.0 to 66.4) for distal DVT, and specificity of 97.8% (97.0 to 98.4). Sensitivity was higher in more recently published studies and in cohorts with a higher prevalence of DVT and more proximal DVT, and was lower in cohorts that reported interpretation by a radiologist. Specificity was higher in cohorts that excluded patients with previous DVT. No studies were identified that compared repeat US to venography in all patients. Repeat US appears to have a positive yield of 1.3%, with 89% of these positives confirmed by venography. Conclusion: Combined colour Doppler US techniques have optimal sensitivity, while compression US has optimal specificity for DVT. However, all estimates are subject to substantial unexplained heterogeneity. The role of repeat scanning is very uncertain and based upon limited data.
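
    A minimal sketch of the random-effects pooling step, using the DerSimonian-Laird estimator on the logit scale; the per-study counts below are invented for illustration, and the real analysis pools many more cohorts.

        # DerSimonian-Laird random-effects pooling on the logit scale, the kind
        # of computation behind pooled sensitivity estimates. Data are made up.
        import math

        def expit(x: float) -> float:
            """Inverse logit, to back-transform pooled estimates."""
            return 1.0 / (1.0 + math.exp(-x))

        def dersimonian_laird(estimates, variances):
            """Pool per-study logit estimates with DL random-effects weights."""
            w = [1.0 / v for v in variances]
            fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
            q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
            c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
            tau2 = max(0.0, (q - (len(estimates) - 1)) / c)  # between-study var
            w_re = [1.0 / (v + tau2) for v in variances]
            pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
            return pooled, math.sqrt(1.0 / sum(w_re))

        # Illustrative (TP, FN) counts per study; logit variance ~ 1/TP + 1/FN.
        studies = [(92, 6), (188, 10), (45, 4)]
        logits = [math.log(tp / fn) for tp, fn in studies]
        variances = [1.0 / tp + 1.0 / fn for tp, fn in studies]
        pooled, se = dersimonian_laird(logits, variances)
        print(f"pooled sensitivity = {expit(pooled):.3f} "
              f"(95% CI {expit(pooled - 1.96 * se):.3f} "
              f"to {expit(pooled + 1.96 * se):.3f})")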

    How do we create, and improve, the evidence base? 

    Providing best clinical care involves using the best available evidence of effectiveness to inform treatment decisions. Producing this evidence begins with trials and continues through synthesis of their findings towards incorporation of the evidence within comprehensible, usable guidelines for clinicians and patients at the point of care. However, there is enormous wastage in this evidence production process, with less than 50% of the published biomedical literature considered sufficient in conduct and reporting to be fit for purpose. Over the last 30 years, independent collaborative initiatives have evolved to optimise the evidence to improve patient care. These collaborations each recommend how to improve research quality in a small way at many different stages of the evidence production and distillation process. Viewed from an 'aggregation of marginal gains' perspective, these small enhancements accumulate, greatly improving the final product of 'best available evidence'. The myriad tools to reduce research quality leakage and evidence loss should be routinely used by all those with responsibility for ensuring that research benefits patients, that is, those who pay for research (funders), produce it (researchers), take part in it (patients/participants) and use it (clinicians, policy makers and service commissioners).
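
    The 'aggregation of marginal gains' logic is easy to quantify. A toy calculation, with the stage count and retention rates invented purely for illustration:

        # Toy illustration of marginal gains compounding across an evidence
        # pipeline; the stage count and retention rates are invented.
        stages = 10
        for retention in (0.90, 0.95, 0.99):
            print(f"per-stage quality retention {retention:.0%} "
                  f"-> end-to-end {retention ** stages:.0%}")
        # 90% per stage over 10 stages preserves only ~35% of potential value,
        # while 99% preserves ~90%: small per-stage improvements aggregate.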

    Using informative behavior to increase engagement while learning from human reward

    In this work, we address a relatively unexplored aspect of designing agents that learn from human reward. We investigate how an agent's non-task behavior can affect a human trainer's training and the agent's learning. We use the TAMER framework, which facilitates the training of agents by human-generated reward signals, i.e., judgements of the quality of the agent's actions, as the foundation for our investigation. Then, starting from the premise that the interaction between the agent and the trainer should be bi-directional, we propose two new training interfaces to increase a human trainer's active involvement in the training process and thereby improve the agent's task performance. One provides information on the agent's uncertainty, a metric calculated from data coverage; the other provides information on its performance. Our results from a 51-subject user study show that these interfaces can induce trainers to train longer and give more feedback. The agent's performance, however, increases only in response to the addition of performance-oriented information, not to the sharing of uncertainty levels. These results suggest that the organizational maxim about human behavior, "you get what you measure" (i.e., sharing metrics with people causes them to focus on optimizing those metrics while de-emphasizing other objectives), also applies to the training of agents. Using principal component analysis, we show how trainers in the two conditions train agents differently. In addition, by simulating the influence of the agent's uncertainty-informative behavior on a human's training behavior, we show that trainers could be distracted by the agent sharing its uncertainty levels about its actions, giving poor feedback for the sake of reducing the agent's uncertainty without improving the agent's performance.
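
    A minimal sketch of what an uncertainty-as-data-coverage metric could look like; the kernel form, bandwidth, and names below are assumptions for illustration, not the paper's exact formulation.

        # Illustrative uncertainty metric based on data coverage: high when a
        # query point is far from all previously labeled examples. The kernel
        # form and bandwidth are assumptions, not the paper's exact method.
        import numpy as np

        def coverage_uncertainty(query, labeled_points, bandwidth=0.2):
            """Map local density of labeled data to an uncertainty in (0, 1]."""
            if len(labeled_points) == 0:
                return 1.0
            d = np.linalg.norm(np.asarray(labeled_points) - np.asarray(query),
                               axis=1)
            coverage = np.exp(-(d / bandwidth) ** 2).sum()  # kernel density proxy
            return 1.0 / (1.0 + coverage)

        # The agent could surface this value so the trainer sees where its
        # experience (and hence its learned reward model) is thin.
        seen = [[0.10, 0.20], [0.15, 0.25], [0.90, 0.80]]
        print(coverage_uncertainty([0.12, 0.22], seen))  # low: well covered
        print(coverage_uncertainty([0.50, 0.50], seen))  # high: sparse region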