
    The effectiveness of full actinide recycle as a nuclear waste management strategy when implemented over a limited timeframe - Part I: Uranium fuel cycle

    Disposal of spent nuclear fuel is a major political and public-perception problem for nuclear energy. From a radiological standpoint, the long-lived component of spent nuclear fuel primarily consists of transuranic (TRU) isotopes. Full recycling of TRU isotopes can, in theory, reduce repository radiotoxicity to a reference level, corresponding to the radiotoxicity of the unburned natural U required to fuel a conventional LWR, in as little as ∌500 years, provided reprocessing and fuel fabrication losses are limited. This strategy forms part of many envisaged ‘sustainable’ nuclear fuel cycles. However, over a limited timeframe, the radiotoxicity of the ‘final’ core can dominate over reprocessing losses, leading to a much lower reduction in radiotoxicity than is achievable at equilibrium. The importance of low reprocessing losses and minor actinide (MA) recycling also depends on the timeframe over which actinides are recycled. In this paper, the fuel cycle code ORION is used to model the recycling of light water reactor (LWR)-produced TRUs in LWRs and sodium-cooled fast reactors (SFRs) over 1–5 generations of reactors, which is sufficient to infer general conclusions for higher numbers of generations. Here, a generation is defined as a fleet of reactors operating for 60 years before being retired and potentially replaced. For up to ∌5 generations of full actinide recycle in SFR burners, the final core inventory tends to dominate over reprocessing losses; beyond that, the radiotoxicity rapidly becomes sensitive to reprocessing losses. For a single generation of SFRs, there is little or no advantage to recycling MAs. However, for multiple generations, the reduction in repository radiotoxicity is severely limited without MA recycling, and repository radiotoxicity converges on equilibrium after around 3 generations of SFRs.
With full actinide recycling, at least 6 generations of SFRs are required in a gradual phase-out of nuclear power to achieve transmutation performance approaching the theoretical equilibrium performance, which appears challenging from an economic and energy security standpoint. TRU recycle in pressurized water reactors (PWRs) with zero net actinide production provides similar performance to low-enriched-uranium (LEU)-fueled LWRs in equilibrium with a fleet of burner SFRs. However, it is not possible to reduce the TRU inventory over multiple generations of PWRs. TRU recycle in break-even SFRs is much less effective from the point of view of reducing spent nuclear fuel radiotoxicity. The first author would like to acknowledge the UK Engineering and Physical Sciences Research Council (EPSRC) and the Institution of Mechanical Engineers for providing funding towards this work. This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.pnucene.2015.07.02

    The effectiveness of full actinide recycle as a nuclear waste management strategy when implemented over a limited timeframe - Part II: Thorium fuel cycle

    Full recycling of transuranic (TRU) isotopes can in theory lead to a reduction in repository radiotoxicity to reference levels in as little as ∌500 years provided reprocessing and fuel fabrication losses are limited. However, over a limited timeframe, the radiotoxicity of the ‘final’ core can dominate over reprocessing losses, leading to a much lower reduction in radiotoxicity compared to that achievable at equilibrium. In Part I of this paper, TRU recycle over up to 5 generations of light water reactors (LWRs) or sodium-cooled fast reactors (SFRs) is considered for uranium (U) fuel cycles. With full actinide recycling, at least 6 generations of SFRs are required in a gradual phase-out of nuclear power to achieve transmutation performance approaching the theoretical equilibrium performance. U-fuelled SFRs operating a break-even fuel cycle are not particularly effective at reducing repository radiotoxicity as the final core load dominates over a very long timeframe. In this paper, the analysis is extended to the thorium (Th) fuel cycle. Closed Th-based fuel cycles are well known to have lower equilibrium radiotoxicity than U-based fuel cycles but the time taken to reach equilibrium is generally very long. Th burner fuel cycles with SFRs are found to result in very similar radiotoxicity to U burner fuel cycles with SFRs for one less generation of reactors, provided that protactinium (Pa) is recycled. Th-fuelled reduced-moderation boiling water reactors (RBWRs) are also considered, but for burner fuel cycles their performance is substantially worse, with the waste taking ∌3–5 times longer to decay to the reference level than for Th-fuelled SFRs with the same number of generations. Th break-even fuel cycles require ∌3 generations of operation before their waste radiotoxicity benefits result in decay to the reference level in ∌1000 years. 
While this is a very long timeframe, it is roughly half that required for waste from the Th or U burner fuel cycle to decay to the reference level, and less than a tenth that required for the U break-even fuel cycle. The improved performance over burner fuel cycles is due to a more substantial contribution of energy generated by 233U, leading to lower radiotoxicity per unit energy generated. To some extent this is an argument based on how the radiotoxicity is normalised: operating a break-even fuel cycle rather than phasing out nuclear power using a burner fuel cycle results in higher repository radiotoxicity in absolute terms. The advantage of Th break-even fuel cycles is also contingent on recycling Pa, and reprocessing losses are also significant for a small number of generations due to the need to effectively burn down the TRU. The integrated decay heat over the scenario timeframe is almost twice as high for a break-even Th fuel cycle as for a break-even U fuel cycle when using SFRs, as a result of much higher 90Sr production, which subsequently decays into 90Y. The peak decay heat is comparable. As decay heat at vitrification and repository decay heat affect repository sizing, this may weaken the argument for the Th cycle. The first author would like to acknowledge the UK Engineering and Physical Sciences Research Council (EPSRC) and the Institution of Mechanical Engineers for providing funding towards this work. This is the final version of the article. It first appeared from Elsevier at http://dx.doi.org/10.1016/j.pnucene.2014.11.01

    Subduction Duration and Slab Dip

    The dip angles of slabs are among the clearest characteristics of subduction zones, but the factors that control them remain obscure. Here, slab dip angles and subduction parameters, including subduction duration, the nature of the overriding plate, slab age, and convergence rate, are determined for 153 transects along subduction zones for the present day. We present a comprehensive tabulation of subduction duration based on isotopic ages of arc initiation and stratigraphic, structural, plate tectonic and seismic indicators of subduction initiation. We present two ages for subduction zones, a long-term age and a reinitiation age. Using cross correlation and multivariate regression, we find that (1) subduction duration is the primary parameter controlling slab dips, with slabs tending to have shallower dips at subduction zones that have been in existence longer; (2) the long-term age of subduction duration better explains variation of shallow dip than reinitiation age; (3) overriding plate nature could influence shallow dip angle, with slabs below continents tending to have shallower dips; (4) slab age contributes to slab dip, with younger slabs having steeper shallow dips; and (5) the relations between slab dip and subduction parameters are depth dependent, with the ability of subduction duration and overriding plate nature to explain observed variation decreasing with depth. The analysis emphasizes the importance of subduction history and the long-term regional state of a subduction zone in determining slab dip, and is consistent with mechanical models of subduction.

    Harold Jeffreys's Theory of Probability Revisited

    Published exactly seventy years ago, Jeffreys's Theory of Probability (1939) has had a unique impact on the Bayesian community and is now considered to be one of the main classics in Bayesian Statistics, as well as the initiator of the objective Bayes school. In particular, its advances on the derivation of noninformative priors and on the scaling of Bayes factors have had a lasting impact on the field. However, the book reflects the characteristics of its time, especially in terms of mathematical rigor. In this paper we point out the fundamental aspects of this reference work, especially its thorough coverage of testing problems and its construction of both estimation and testing noninformative priors based on functional divergences. Our major aim here is to help modern readers navigate this difficult text and concentrate on the passages that are still relevant today. Comment: This paper is commented on in [arXiv:1001.2967], [arXiv:1001.2968], [arXiv:1001.2970], [arXiv:1001.2975], [arXiv:1001.2985] and [arXiv:1001.3073], with a rejoinder in [arXiv:0909.1008]. Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/09-STS284

    The Impact of a Graded Maximal Exercise Protocol on Exhaled Volatile Organic Compounds: A Pilot Study

    Exhaled volatile organic compounds (VOCs) are of interest due to their minimally invasive sampling procedure. Previous studies have investigated the impact of exercise, with evidence suggesting that breath VOCs reflect exercise-induced metabolic activity. However, these studies have yet to investigate the impact of maximal exercise to exhaustion on breath VOCs, which was the main aim of this study. Two-litre breath samples were collected onto thermal desorption tubes using a portable breath collection unit. Samples were collected pre-exercise, and at 10 and 60 min following a maximal exercise test (VO2MAX). Breath VOCs were analysed by thermal desorption-gas chromatography-mass spectrometry using a non-targeted approach. Data showed a tendency for reduced isoprene in samples at 10 min post-exercise, with a return to baseline by 60 min. However, inter-individual variation meant differences between baseline and 10 min could not be confirmed, although the 10 and 60 min timepoints were different (p = 0.041). In addition, baseline samples showed a tendency for both acetone and isoprene to be reduced in those with higher absolute VO2MAX scores (mL(O2)/min), although with restricted statistical power. Baseline samples could not differentiate between relative VO2MAX scores (mL(O2)/kg/min). In conclusion, these data indicate that isoprene levels are dynamic in response to exercise.

    The power of Bayesian evidence in astronomy

    We discuss the use of the Bayesian evidence ratio, or Bayes factor, for model selection in astronomy. We treat the evidence ratio as a statistic and investigate its distribution over an ensemble of experiments, considering both simple analytical examples and some more realistic cases, which require numerical simulation. We find that the evidence ratio is a noisy statistic, and thus it may not be sensible to accept or reject a model based solely on whether the evidence ratio reaches some threshold value. The odds suggested by the evidence ratio bear no obvious relationship to the power or Type I error rate of a test based on the evidence ratio. The general performance of such tests is strongly affected by the signal-to-noise ratio in the data, the assumed priors, and the threshold in the evidence ratio that is taken as ‘decisive’. The comprehensiveness of the model suite under consideration is also very important. The usefulness of the evidence ratio approach in a given problem can be assessed in advance of the experiment, using simple models and numerical approximations. In many cases, this approach can be as informative as a much more costly full-scale Bayesian analysis of a complex problem. Comment: 11 pages; MNRAS in press
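The noisiness of the evidence ratio is easy to exhibit in a toy ensemble. The sketch below is our own illustration, not taken from the paper: a nested Gaussian model (M0: mu = 0 versus M1: mu ~ N(0, tau^2)) for which the evidence is analytic, with the sample size, prior scale, and 'decisive' threshold chosen arbitrarily.

```python
# Toy ensemble study of the log Bayes factor (our illustration, with
# assumed parameter values).  Data: x_i ~ N(mu, 1), i = 1..n.
# M0: mu = 0; M1: mu ~ N(0, tau^2).  The sample mean is sufficient, with
# xbar ~ N(0, 1/n) under M0 and xbar ~ N(0, tau^2 + 1/n) under M1,
# so the evidence ratio reduces to a ratio of two normal densities.
import numpy as np

rng = np.random.default_rng(0)
n, tau2, trials = 25, 1.0, 20_000

def log_bf10(xbar):
    """log evidence ratio p(xbar | M1) / p(xbar | M0) for the toy model."""
    v0, v1 = 1.0 / n, tau2 + 1.0 / n
    return 0.5 * (np.log(v0 / v1) + xbar**2 * (1.0 / v0 - 1.0 / v1))

# Ensemble of experiments generated under each model.
xbar_m0 = rng.normal(0.0, np.sqrt(1.0 / n), trials)
mu_m1 = rng.normal(0.0, np.sqrt(tau2), trials)
xbar_m1 = rng.normal(mu_m1, np.sqrt(1.0 / n))

lbf0, lbf1 = log_bf10(xbar_m0), log_bf10(xbar_m1)

# Even with the true model known, the log Bayes factor has a broad
# distribution, so a fixed 'decisive' threshold (here ln 100) has a
# power and false-positive rate that depend on n, tau2, etc.
thresh = np.log(100.0)
print("median log BF10 | M0:", np.median(lbf0))
print("median log BF10 | M1:", np.median(lbf1))
print("P(log BF10 > ln 100 | M1):", np.mean(lbf1 > thresh))
print("P(log BF10 > ln 100 | M0):", np.mean(lbf0 > thresh))
```

Varying `n` and `tau2` in this sketch shows the abstract's point that the test's operating characteristics depend on the signal-to-noise ratio and the assumed prior, not on the threshold alone.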

    Plausibility functions and exact frequentist inference

    In the frequentist program, inferential methods with exact control of error rates are a primary focus. The standard approach, however, is to rely on asymptotic approximations, which may not be suitable. This paper presents a general framework for the construction of exact frequentist procedures based on plausibility functions. It is shown that the plausibility function-based tests and confidence regions have the desired frequentist properties in finite samples, with no large-sample justification needed. An extension of the proposed method is also given for problems involving nuisance parameters. Examples demonstrate that the plausibility function-based method is both exact and efficient in a wide variety of problems. Comment: 21 pages, 5 figures, 3 tables
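To give a flavour of this kind of construction, here is a hedged sketch of ours (not the paper's own development) for a binomial proportion: a plausibility function built from the relative likelihood and calibrated exactly by enumerating the binomial sample space, so the resulting region needs no asymptotics.

```python
# Sketch (our construction): exact plausibility for theta in Bin(n, theta).
# T(x, theta) = L(theta; x) / L(theta_hat; x) is the relative likelihood;
#   pl_x(theta) = P_theta( T(Y, theta) <= T(x, theta) ),  Y ~ Bin(n, theta).
# The region {theta : pl_x(theta) > alpha} has coverage at least 1 - alpha
# in finite samples, because pl_X(theta) is a valid p-value under theta.
import math

def loglik(y, n, p):
    """Binomial log likelihood, using the 0 * log(0) = 0 convention."""
    ll = 0.0
    if y > 0:
        ll += y * math.log(p) if p > 0 else -math.inf
    if y < n:
        ll += (n - y) * math.log(1 - p) if p < 1 else -math.inf
    return ll

def rel_lik(y, n, theta):
    """Relative likelihood: equals 1 at the MLE y/n, smaller elsewhere."""
    return math.exp(loglik(y, n, theta) - loglik(y, n, y / n))

def plausibility(x, n, theta):
    """Exact plausibility of theta given x successes out of n trials."""
    t_obs = rel_lik(x, n, theta)
    return sum(math.comb(n, y) * theta**y * (1 - theta)**(n - y)
               for y in range(n + 1)
               if rel_lik(y, n, theta) <= t_obs + 1e-12)

def plausibility_region(x, n, alpha=0.05, grid=400):
    """Grid approximation to the exact 100(1 - alpha)% region."""
    thetas = [(i + 0.5) / grid for i in range(grid)]
    return [th for th in thetas if plausibility(x, n, th) > alpha]

region = plausibility_region(x=7, n=20)
print(f"95% plausibility region for 7/20: "
      f"[{min(region):.3f}, {max(region):.3f}]")
```

The enumeration over the sample space is what makes the calibration exact rather than asymptotic; the paper's general framework and its nuisance-parameter extension go well beyond this one-parameter toy.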

    6-PACK programme to decrease fall injuries in acute hospitals: Cluster randomised controlled trial

    Objective: To evaluate the effect of the 6-PACK programme on falls and fall injuries in acute wards. Design: Cluster randomised controlled trial. Setting: Six Australian hospitals. Participants: All patients admitted to 24 acute wards during the trial period. Interventions: Participating wards were randomly assigned to receive either the nurse led 6-PACK programme or usual care over 12 months. The 6-PACK programme included a fall risk tool and individualised use of one or more of six interventions: “falls alert” sign, supervision of patients in the bathroom, ensuring patients’ walking aids are within reach, a toileting regimen, use of a low-low bed, and use of a bed/chair alarm. Main outcome measures: The co-primary outcomes were falls and fall injuries per 1000 occupied bed days. Results: During the trial, 46 245 admissions to 16 medical and eight surgical wards occurred. As many people were admitted more than once, this represented 31 411 individual patients. Patients’ characteristics and length of stay were similar for intervention and control wards. Use of 6-PACK programme components was higher on intervention wards than on control wards (incidence rate ratio 3.05, 95% confidence interval 2.14 to 4.34; P<0.001). In all, 1831 falls and 613 fall injuries occurred, and the rates of falls (incidence rate ratio 1.04, 0.78 to 1.37; P=0.796) and fall injuries (0.96, 0.72 to 1.27; P=0.766) were similar in intervention and control wards. Conclusions: Positive changes in falls prevention practice occurred following the introduction of the 6-PACK programme. However, no difference was seen in falls or fall injuries between groups. High quality evidence showing the effectiveness of falls prevention interventions in acute wards remains absent. Novel solutions to the problem of in-hospital falls are urgently needed

    Minimum Decision Cost for Quantum Ensembles

    For a given ensemble of N independent and identically prepared particles, we calculate the binary decision costs of different strategies for measurement of polarised spin 1/2 particles. The result proves that, for any given values of the prior probabilities and any number of constituent particles, the cost for a combined measurement is always less than or equal to that for any combination of separate measurements upon sub-ensembles. The Bayes cost, which is that associated with the optimal strategy (i.e., a combined measurement), is obtained in a simple closed form. Comment: 11 pages, uses RevTeX
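The combined-versus-separate comparison can be illustrated numerically with Helstrom's standard minimum-error formula (a sketch of ours, with arbitrary priors and overlaps; the paper's own closed-form Bayes cost is not reproduced here).

```python
# Illustrative numerics (our sketch, assumed parameter values).  Helstrom's
# minimum error probability for discriminating two pure states with prior p
# and overlap c is (1 - sqrt(1 - 4 p (1-p) c^2)) / 2.  A combined
# measurement on N identically prepared copies discriminates at effective
# overlap c**N; the 'separate' strategy measures each copy with the
# symmetric single-shot measurement, then fuses the N binary outcomes
# with a Bayes decision rule.
import math

def helstrom_error(p, overlap):
    """Minimum single-shot error probability at the given overlap."""
    return 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * p * (1.0 - p) * overlap**2))

def combined_cost(p, c, n):
    """Collective (joint) measurement on the whole n-particle ensemble."""
    return helstrom_error(p, c**n)

def separate_cost(p, c, n):
    """Per-copy measurements, then Bayes fusion of the n binary outcomes."""
    e1 = helstrom_error(0.5, c)  # symmetric single-copy outcome error
    cost = 0.0
    for k in range(n + 1):       # k outcomes voting for state A
        p_k_a = math.comb(n, k) * (1 - e1)**k * e1**(n - k)  # P(k | A)
        p_k_b = math.comb(n, k) * e1**k * (1 - e1)**(n - k)  # P(k | B)
        cost += min(p * p_k_a, (1 - p) * p_k_b)
    return cost

for c, n in [(0.8, 1), (0.8, 3), (0.9, 5)]:
    print(f"c={c}, N={n}: combined {combined_cost(0.5, c, n):.4f}"
          f" vs separate {separate_cost(0.5, c, n):.4f}")
```

For N = 1 the two strategies coincide; for N > 1 in these examples the combined measurement is strictly cheaper, consistent with the inequality the paper proves for any priors and any number of particles.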