
    Delta excitation in K^+-nucleus collisions

    We present calculations for \Delta excitation in the (K^+, K^+) reaction in nuclei. The background from quasielastic K^+ scattering in the \Delta region is also evaluated and shown to be quite small in some kinematical regions, allowing a clean identification of the \Delta excitation strength. Nuclear effects tied to the \Delta renormalization in the nucleus are considered, and the reaction is shown to provide new elements that enrich our knowledge of the \Delta properties in a nuclear medium. Comment: 11 pages, 6 figures, LaTeX

    The joint evaluated fission and fusion nuclear data library, JEFF-3.3

    The joint evaluated fission and fusion nuclear data library 3.3 is described. New evaluations are presented for neutron-induced interactions with the major actinides ^{235}U, ^{238}U and ^{239}Pu, and with ^{241}Am, ^{23}Na, ^{59}Ni, Cr, Cu, Zr, Cd, Hf, W, Au, Pb and Bi. The library includes new fission yields, prompt fission neutron spectra and average numbers of neutrons per fission. In addition, new data for radioactive decay, thermal neutron scattering, gamma-ray emission, neutron activation, delayed neutrons and displacement damage are presented. JEFF-3.3 is complemented by files from the TENDL project: the libraries for photon-, proton-, deuteron-, triton-, helion- and alpha-particle-induced reactions are taken from TENDL-2017. The demand for uncertainty quantification in modeling led to many new covariance data in the evaluations. A comparison between results from model calculations using the JEFF-3.3 library and those from benchmark experiments for criticality, delayed neutron yields, shielding and decay heat shows that JEFF-3.3 performs very well for a wide range of nuclear technology applications, in particular nuclear energy.

    The practice of 'doing' evaluation: Lessons learned from nine complex intervention trials in action

    Background: There is increasing recognition among trialists of the challenges in understanding how particular 'real-life' contexts influence the delivery and receipt of complex health interventions. Evaluations of interventions to change health worker and/or patient behaviours in health service settings exemplify these challenges. When interpreting evaluation data, deviation from intended intervention implementation is accounted for through process evaluations of fidelity, reach, and intensity. However, no such systematic approach has been proposed to account for the ways in which evaluation activities themselves may deviate in practice from the assumptions made when data are interpreted. Methods: A collective case study was conducted to explore experiences of undertaking evaluation activities in the real-life contexts of nine complex intervention trials seeking to improve appropriate diagnosis and treatment of malaria in varied health service settings. Multiple sources of data were used, including in-depth interviews with investigators, participant observation of studies, and rounds of discussion and reflection. Results and discussion: From our experiences of the realities of conducting these evaluations, we identified six key 'lessons learned' about ways to become aware of and manage aspects of the fabric of trials (the interface of researchers, fieldworkers, participants and data collection tools) that may affect the intended production of data and interpretation of findings. These lessons are: foster a shared understanding across the study team of how individual practices contribute to the study goals; promote and facilitate within-team communication for ongoing reflection on the progress of the evaluation; establish processes for ongoing collaboration and dialogue between sub-study teams; appoint a field research coordinator to bridge everyday project management with scientific oversight; collect and review reflective field notes on the progress of the evaluation to aid interpretation of outcomes; and identify and reflect on possible overlaps between the evaluation and the intervention. Conclusion: The lessons we have drawn point to the principle of reflexivity, which, we argue, needs to become part of standard practice in the conduct of evaluations of complex interventions, both to promote more meaningful interpretations of the effects of an intervention and to better inform future implementation and decision-making. © 2014 Reynolds et al.; licensee BioMed Central Ltd.

    Stress and breast cancer: from epidemiology to molecular biology

    Stress exposure has been proposed to contribute to the etiology of breast cancer. However, the validity of this assertion and the possible mechanisms involved are not well established. Epidemiologic studies differ in their assessment of the relative contribution of stress to breast cancer risk, while physiological studies propose a clear connection but lack knowledge of the intracellular pathways involved. The present review consolidates findings from different fields of research (including epidemiology, physiology, and molecular biology) in order to present a comprehensive picture of what we know to date about the role of stress in breast cancer development.

    Procedure versus process: ethical paradigms and the conduct of qualitative research


    The babel of drugs: On the consequences of evidential pluralism in pharmaceutical regulation and regulatory data journeys

    This is the final version, available on open access from Springer via the DOI in this record. Throughout the last century, pharmaceutical regulators all over the world have used various methods to test medical treatments. From 1962 until 2016, the Randomized Clinical Trial (RCT) was the reference test for most regulatory agencies. Today, the standards are about to change, and in this chapter we draw on the idea of the data journey to illuminate the trade-offs involved. The 21st Century Cures Act (21CCA) allows the use of Electronic Health Records (EHRs) for the assessment of different treatment indications for already approved drugs. This may shorten the testing period, bringing treatments to patients faster. Yet EHRs are not generated for testing purposes, and no amount of standardization and curation can fully make up for their potential flaws as evidence of safety and efficacy. The more noise in the data, the more mistakes regulators are likely to make in granting market access to new drugs. In this paper we discuss the different dimensions of this journey: the different sources and levels of curation involved, the speed at which the data can travel, and the risk of regulatory error involved as compared with the RCT standard. We argue that what counts as evidence, at the end of the journey, depends on the risk definition and threshold regulators work with. Funding: European Research Council (ERC); Engineering and Physical Sciences Research Council (EPSRC).