
    Should One Use the Ray-by-Ray Approximation in Core-Collapse Supernova Simulations?

    We perform the first self-consistent, time-dependent, multi-group calculations in two dimensions (2D) to address the consequences of using the ray-by-ray+ transport simplification in core-collapse supernova simulations. Such a dimensional reduction is employed by many researchers to facilitate their resource-intensive calculations. Our new code (Fornax) implements multi-D transport, and can, by zeroing out transverse flux terms, emulate the ray-by-ray+ scheme. Using the same microphysics, initial models, resolution, and code, we compare the results of simulating 12-, 15-, 20-, and 25-M_{\odot} progenitor models using these two transport methods. Our findings call into question the wisdom of the pervasive use of the ray-by-ray+ approach. Employing it leads to maximum post-bounce/pre-explosion shock radii that are almost universally larger by tens of kilometers than those derived using the more accurate scheme, typically leaving the post-bounce matter less bound and artificially more "explodable." In fact, for our 25-M_{\odot} progenitor, the ray-by-ray+ model explodes, while the corresponding multi-D transport model does not. Therefore, in two dimensions the combination of ray-by-ray+ with the axial sloshing hydrodynamics that is a feature of 2D supernova dynamics can result in quantitatively, and perhaps qualitatively, incorrect results.
    Comment: Updated and revised text; 13 pages; 13 figures; Accepted to Ap.
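The dimensional reduction described above can be illustrated in a few lines: in a 2D finite-difference transport update, dropping the transverse (angular) flux divergence decouples the angular bins into independent 1D radial problems, which is the essence of ray-by-ray. This is a toy sketch, not the actual Fornax discretization; the grid, velocities, and upwinding are illustrative assumptions:

```python
import numpy as np

def transport_step(E, dt, dr, dtheta, v_r, v_t, ray_by_ray=False):
    """One toy upwind step for a radiation energy density E[r, theta]
    advected with radial speed v_r and transverse speed v_t (both > 0).
    Setting ray_by_ray=True zeroes the transverse flux term, so each
    angular bin evolves as an independent 1D radial problem -- the
    essence of the ray-by-ray+ simplification. Illustrative only."""
    dE_r = -v_r * (E - np.roll(E, 1, axis=0)) / dr          # radial flux divergence
    if ray_by_ray:
        dE_t = 0.0                                          # drop angular coupling
    else:
        dE_t = -v_t * (E - np.roll(E, 1, axis=1)) / dtheta  # transverse flux divergence
    return E + dt * (dE_r + dE_t)
```

With the transverse term zeroed, the update is identical to running the 1D radial scheme separately in every angular bin.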

    Local Volume Effects in the Generalized Pseudopotential Theory

    The generalized pseudopotential theory (GPT) is a powerful method for deriving real-space transferable interatomic potentials. Using a coarse-grained electronic structure, one can explicitly calculate the pair ion-ion and multi-ion interactions in simple and transition metals. Whilst successful in determining bulk properties, in central force metals the GPT fails to describe crystal defects for which there is a significant local volume change. A previous paper [PhysRevLett.66.3036 (1991)] found that by allowing the GPT total energy to depend upon some spatially-averaged local electron density, the energetics of vacancies and surfaces could be calculated within experimental ranges. In this paper, we develop the formalism further by explicitly calculating the forces and stress tensor associated with this total energy. We call this scheme the adaptive GPT (aGPT) and it is capable of both molecular dynamics and molecular statics. We apply the aGPT to vacancy formation and divacancy binding in hcp Mg and also calculate the local electron density corrections to the bulk elastic constants and phonon dispersion for which there is refinement over the baseline GPT treatment.
    Comment: 11 pages, 6 figures
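The central idea, a total energy that depends on a spatially averaged local electron density, can be sketched as a per-site neighbour average. The cosine weight function and the parameters n0 and rc below are illustrative placeholders, not the published aGPT averaging kernel:

```python
import numpy as np

def local_density(positions, i, n0, rc):
    """Spatially averaged local electron density at atom i, sketched as a
    neighbour sum with a smooth cutoff weight (1 at zero separation, 0 at
    rc). The weight form, n0, and rc are illustrative placeholders, not
    the published aGPT averaging kernel."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    d = d[(d > 0) & (d < rc)]                 # neighbours inside the cutoff
    w = 0.5 * (1.0 + np.cos(np.pi * d / rc))  # smooth weight in [0, 1]
    return n0 * w.sum()
```

Because the per-site density varies from atom to atom, the forces and stress tensor pick up an extra term from the gradient of this average, which is the ingredient the aGPT formalism works out explicitly.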

    Where do uncertainties reside within environmental risk assessments? Expert opinion on uncertainty distributions for pesticide risks to surface water organisms

    A reliable characterisation of uncertainties can aid uncertainty identification during environmental risk assessments (ERAs). However, typologies can be implemented inconsistently, causing uncertainties to go unidentified. We present an approach based on nine structured elicitations, in which subject-matter experts on pesticide risks to surface water organisms validate and assess three dimensions of uncertainty: its level (the severity of uncertainty, ranging from determinism to ignorance); nature (whether the uncertainty is epistemic or aleatory); and location (the data source or area in which the uncertainty arises). Risk characterisation contains the highest median levels of uncertainty, associated with estimating, aggregating and evaluating the magnitude of risks. Regarding the locations in which uncertainty is manifest, data uncertainty is dominant in problem formulation, exposure assessment and effects assessment. The comprehensive description of uncertainty presented here will enable risk analysts to prioritise the required phases, groups of tasks, or individual tasks within a risk analysis according to the highest levels of uncertainty, the potential for uncertainty to be reduced or quantified, or the types of location-based uncertainty, thus aiding uncertainty prioritisation during environmental risk assessments. In turn, it is expected to inform investment in uncertainty reduction or targeted risk management action.
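The three elicited dimensions map naturally onto a small record type, and ranking assessment phases by their median elicited level is one way to sketch the prioritisation described above. The field names and the 0-4 level scale are illustrative assumptions, not the paper's coding scheme:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Uncertainty:
    task: str       # e.g. "estimate the magnitude of risks"
    phase: str      # problem formulation / exposure / effects / risk characterisation
    level: int      # severity, 0 (determinism) .. 4 (ignorance) -- illustrative scale
    nature: str     # "epistemic" or "aleatory"
    location: str   # where it arises, e.g. "data", "model"

def prioritise(entries):
    """Rank assessment phases by their median elicited uncertainty level,
    highest first -- one sketch of the prioritisation described above."""
    by_phase = {}
    for e in entries:
        by_phase.setdefault(e.phase, []).append(e.level)
    return sorted(by_phase, key=lambda p: median(by_phase[p]), reverse=True)
```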

    Theory of hopping conduction in arrays of doped semiconductor nanocrystals

    The resistivity of a dense crystalline array of semiconductor nanocrystals (NCs) depends in a sensitive way on the level of doping as well as on the NC size and spacing. The choice of these parameters determines whether electron conduction through the array will be characterized by activated nearest-neighbor hopping or variable-range hopping (VRH). Thus far, no general theory exists to explain how these different behaviors arise at different doping levels and for different types of NCs. In this paper we examine a simple theoretical model of an array of doped semiconductor NCs that can explain the transition from activated transport to VRH. We show that in sufficiently small NCs, the fluctuations in donor number from one NC to another provide sufficient disorder to produce charging of some NCs, as electrons are driven to vacate higher shells of the quantum confinement energy spectrum. This confinement-driven charging produces a disordered Coulomb landscape throughout the array and leads to VRH at low temperature. We use a simple computer simulation to identify different regimes of conduction in the space of temperature, doping level, and NC diameter. We also discuss the implications of our results for large NCs with external impurity charges and for NCs that are gated electrochemically.
    Comment: 14 pages, 10 figures; extra schematic figures added; revised introduction
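The donor-number-fluctuation mechanism can be caricatured in a few lines: donor counts vary from NC to NC with Poisson statistics, and any electrons in excess of an assumed low-shell capacity are driven off, leaving that NC charged. The shell-capacity parameter is a placeholder, not a value from the paper:

```python
import numpy as np

def charged_fraction(mean_donors, shell_capacity, n_samples=100_000, seed=0):
    """Toy estimate of the fraction of NCs left charged when electrons in
    excess of the low-lying confinement shells are driven off. Donor
    numbers fluctuate NC-to-NC with Poisson statistics; shell_capacity is
    an assumed cumulative shell occupancy, not a value from the paper."""
    donors = np.random.default_rng(seed).poisson(mean_donors, size=n_samples)
    return float(np.mean(donors > shell_capacity))
```

Smaller NCs hold fewer donors on average but have larger *relative* fluctuations, which is one way to see why the charging disorder, and with it VRH, emerges as NC size shrinks.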

    The problem of shot selection in basketball

    In basketball, every time the offense produces a shot opportunity the player with the ball must decide whether the shot is worth taking. In this paper, I explore the question of when a team should shoot and when it should pass up the shot by considering a simple theoretical model of the shot selection process, in which the quality of shot opportunities generated by the offense is assumed to fall randomly within a uniform distribution. I derive an answer to the question "how likely must the shot be to go in before the player should take it?", and show that this "lower cutoff" for shot quality f depends crucially on the number n of shot opportunities remaining (say, before the shot clock expires), with larger n demanding that only higher-quality shots be taken. The function f(n) is also derived in the presence of a finite turnover rate and used to predict the shooting rate of an optimal-shooting team as a function of time. This prediction is compared to observed shooting rates from the National Basketball Association (NBA), and the comparison suggests that NBA players tend to wait too long before shooting and undervalue the probability of committing a turnover.
    Comment: 7 pages, 2 figures; comparison to NBA data added
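The cutoff f(n) can be reconstructed as a standard optimal-stopping recursion, assuming shot quality uniform on [0, 1] as in the model; the turnover term here is a plausible reading of the model, not a transcription of the paper's equations:

```python
def shot_cutoffs(n_max, p_turnover=0.0):
    """Lower cutoff f(n) on shot quality with n opportunities left,
    assuming quality ~ Uniform(0, 1) and probability p_turnover of
    losing the ball while waiting for the next opportunity. Standard
    optimal stopping: shoot iff quality >= the continuation value."""
    V = 0.0          # value of the possession with 0 opportunities left
    cutoffs = []
    for n in range(1, n_max + 1):
        c = (1.0 - p_turnover) * V       # f(n): shoot iff quality >= c
        cutoffs.append(c)
        V = c * c + (1.0 - c * c) / 2.0  # E[max(q, c)] for q ~ U(0, 1)
    return cutoffs                       # cutoffs[k] is f(k + 1)
```

Without turnovers this gives f(1) = 0, f(2) = 0.5, f(3) = 0.625, rising toward 1 as more opportunities remain: the choosier-early behavior described above. A nonzero turnover rate lowers every cutoff, since waiting risks losing the possession outright.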

    Can we predict the duration of an interglacial?

    Differences in the duration of interglacials have long been apparent in palaeoclimate records of the Late and Middle Pleistocene. However, a systematic evaluation of such differences has been hampered by the lack of a metric that can be applied consistently through time and by difficulties in separating the local from the global component in various proxies. This, in turn, means that a theoretical framework with predictive power for interglacial duration has remained elusive. Here we propose that the interval between the terminal oscillation of the bipolar seesaw and three thousand years (kyr) before its first major reactivation provides an estimate that approximates the length of the sea-level highstand, a measure of interglacial duration. We apply this concept to interglacials of the last 800 kyr by using a recently-constructed record of interhemispheric variability. The onset of interglacials occurs within 2 kyr of the boreal summer insolation maximum/precession minimum and is consistent with the canonical view of Milankovitch forcing pacing the broad timing of interglacials. Glacial inception always takes place when obliquity is decreasing and never after the obliquity minimum. The phasing of precession and obliquity appears to influence the persistence of interglacial conditions over one or two insolation peaks, leading to shorter (~ 13 kyr) and longer (~ 28 kyr) interglacials. Glacial inception occurs approximately 10 kyr after peak interglacial conditions in temperature and CO2, representing a characteristic timescale of interglacial decline. Second-order differences in duration may be a function of stochasticity in the climate system, or of small variations in background climate state and in the magnitude of feedbacks and mechanisms contributing to glacial inception, and, as such, may be difficult to predict. On the other hand, the broad duration of an interglacial may be determined by the phasing of astronomical parameters and the history of insolation, rather than the instantaneous forcing strength at inception.
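The proposed duration metric is simple arithmetic on two event ages: the interval from the terminal oscillation of the bipolar seesaw to 3 kyr before its first major reactivation. With ages in ka BP (larger = older), a minimal sketch follows; the example ages in the usage note are illustrative, not values from the paper:

```python
def interglacial_duration_kyr(terminal_ka, reactivation_ka):
    """Interglacial duration per the proposed metric: from the terminal
    oscillation of the bipolar seesaw to 3 kyr *before* (i.e. 3 kyr
    further back in time than) its first major reactivation. Ages are
    in ka BP, so older events have larger values."""
    return terminal_ka - (reactivation_ka + 3.0)
```

For instance, a terminal oscillation at 130 ka BP and a first major reactivation at 114 ka BP would give a 13 kyr interglacial on this measure.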

    SUPPLY RESPONSE UNDER THE 1996 FARM ACT AND IMPLICATIONS FOR THE U.S. FIELD CROPS SECTOR

    The 1996 Farm Act gives farmers almost complete planting flexibility, allowing producers to respond to price changes to a greater extent than they had under previous legislation. This study measures supply responsiveness for major field crops to changes in their own prices and in prices for competing crops and indicates significant increases in responsiveness. Relative to 1986-90, the percentage increases in the responsiveness of U.S. plantings of major field crops to a 1-percent change in their own prices are wheat (1.2 percent), corn (41.6 percent), soybeans (13.5 percent), and cotton (7.9 percent). In percentage terms, the increases in the responsiveness generally become greater with respect to competing crops' price changes. The 1996 legislation has the least effect on U.S. wheat acreage, whereas the law may lead to an average increase of 2 million acres during 1996-2005 in soybean acreage, a decline of 1-2 million acres in corn acreage, and an increase of 0.7 million acres in cotton acreage. Overall, the effect of the farm legislation on regional production patterns of major field crops appears to be modest. Corn acreage expansion in the Central and Northern Plains, a long-term trend in this important wheat production region, will slow under the 1996 legislation, while soybean acreage expansion in this region will accelerate. The authors used the Policy Analysis System-Economic Research Service (POLYSYS-ERS) model that was jointly developed by USDA's Economic Research Service and the University of Tennessee's Agricultural Policy Analysis Center to estimate the effects of the 1996 legislation.
    Keywords: Supply response, major field crops, acreage price elasticities, normal flex acreage (NFA), 1996 farm legislation, Agricultural and Food Policy, Crop Production/Industries