    The sensitivity of landscape evolution models to spatial and temporal rainfall resolution

    Climate is one of the main drivers for landscape evolution models (LEMs), yet its representation is often basic, with values averaged over long time periods and frequently lumped to the same value for the whole basin. Clearly, this hides the heterogeneity of precipitation - but what impact does this averaging have on erosion and deposition, topography, and the final shape of LEM landscapes? This paper presents results from the first systematic investigation into how the spatial and temporal resolution of precipitation affects LEM simulations of sediment yields and patterns of erosion and deposition. This is carried out by assessing the sensitivity of the CAESAR-Lisflood LEM to different spatial and temporal precipitation resolutions - as well as how this interacts with different-sized drainage basins over short and long timescales. A range of simulations were carried out, varying rainfall from 0.25 h × 5 km to 24 h × Lump resolution over three different-sized basins for 30-year durations. Results showed that there was a sensitivity to temporal and spatial resolution, with the finest leading to > 100 % increases in basin sediment yields. To look at how these interactions manifested over longer timescales, several simulations were carried out to model a 1000-year period. These showed a systematic bias towards greater erosion in uplands and deposition in valley floors with the finest spatial- and temporal-resolution data. Further tests showed that this effect was due solely to the data resolution, not orographic factors. Additional research indicated that these differences in sediment yield could be accounted for by adding a compensation factor to the model sediment transport law. However, this resulted in notable differences in the topographies generated, especially in third-order and higher streams. The implications of these findings are that uncalibrated past and present LEMs using lumped and time-averaged climate inputs may be under-predicting basin sediment yields as well as introducing spatial biases by under-predicting erosion in first-order streams but over-predicting erosion in second- and third-order streams and valley floor areas. Calibrated LEMs may give correct sediment yields, but the patterns of erosion and deposition will differ, and the calibration may not hold for changing climates. This may have significant impacts on the modelled basin profile and shape in long-timescale simulations.
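
    The compensation idea can be illustrated with a generic stream-power transport law. This is a minimal sketch, assuming a law of the form Qs = K * A^m * S^n; CAESAR-Lisflood's actual transport formulations differ, and the yield numbers below are illustrative stand-ins, not the paper's results.

        # Toy stream-power sediment transport law: Qs = alpha * K * A**m * S**n,
        # where alpha is a multiplicative compensation factor of the kind the
        # abstract describes (generic form assumed for illustration only).
        K, m, n = 1e-5, 0.5, 1.0

        def sediment_flux(area, slope, alpha=1.0):
            """Sediment flux with a multiplicative compensation factor."""
            return alpha * K * area**m * slope**n

        # Hypothetical basin-average yields under coarse vs. fine rainfall
        # forcing (arbitrary units; the abstract reports > 100 % increases at
        # the finest resolution, which these stand-in numbers mimic).
        yield_coarse = 1.0   # lumped, 24 h rainfall
        yield_fine = 2.1     # 0.25 h x 5 km rainfall

        # Compensation factor that lets the coarse-forced run match the
        # fine-forced total yield -- though, as the abstract notes, the
        # spatial pattern of erosion and deposition still differs.
        alpha = yield_fine / yield_coarse
        print(f"compensation factor alpha = {alpha:.2f}")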

    X-ray interferometry with transmissive beam combiners for ultra-high angular resolution astronomy

    Interferometry provides one of the possible routes to ultra-high angular resolution for X-ray and gamma-ray astronomy. Sub-micro-arc-second angular resolution, necessary to achieve objectives such as imaging the regions around the event horizon of a super-massive black hole at the center of an active galaxy, can be achieved if beams from parts of the incoming wavefront separated by hundreds of meters can be stably and accurately brought together at small angles. One way of achieving this is by using grazing-incidence mirrors. Here we investigate an alternative approach in which the beams are recombined by optical elements working in transmission. It is shown that the use of diffractive elements is a particularly attractive option. We report experimental results from a simple two-beam interferometer using a low-cost, commercially available profiled film as the diffractive elements. A rotationally symmetric, filled (or mostly filled) aperture variant of such an interferometer, equivalent to an X-ray axicon, is shown to offer a much wider bandpass than either a Phase Fresnel Lens (PFL) or a PFL combined with a refractive lens in an achromatic pair. Simulations of an example system are presented. Comment: To be published in "Experimental Astronomy".
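
    The bandpass comparison rests on the strong chromaticity of diffractive optics. The relations below are standard zone-plate results (not figures from the paper): the focal length of a PFL scales linearly with photon energy, and a lens of N Fresnel zones focuses efficiently only over a fractional bandwidth of roughly 1/N.

        % Focal length of a diffractive (zone-plate-like) PFL with
        % first-zone radius r_1, at wavelength lambda and photon energy E:
        \[
          r_n^{2} = n \lambda f
          \quad\Longrightarrow\quad
          f(E) = \frac{r_1^{2}}{\lambda} = \frac{r_1^{2} E}{h c},
          \qquad
          \frac{\Delta\lambda}{\lambda} \approx \frac{1}{N}.
        \]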

    A Chain-Boson Model for the Decoherence and Relaxation of a Few Coupled SQUIDs in a Phonon Bath

    We develop a "chain-boson model" master equation, within the Born-Markov approximation, for a few superconducting quantum interference devices (SQUIDs) coupled into a chain and exchanging their angular momenta with a low-temperature phonon bath. Our master equation has four generators; we concentrate on the damping and diffusion generators and use them to study the relaxation and decoherence of a Heisenberg SQUID chain whose spectrum exhibits critical-point energy-level crossings, entangled states, and pairs of resonant transitions. We note that at an energy-level crossing the relevant bath wavelengths are so long that even well-spaced large SQUIDs can partially exhibit collective coupling to the bath, dramatically reducing certain relaxation and decoherence rates. Also, transitions into entangled states can occur even in the case of an independent coupling of each SQUID to the bath. Finally, the pairs of resonant transitions can cause decaying oscillations to emerge in a lower-energy subspace. Comment: 13 pages, 8 figures.
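
    For orientation, a Born-Markov master equation of this type takes the Lindblad form below. This is the generic structure only, a sketch; the paper's specific four-generator equation for the SQUID chain is not reproduced here.

        % Generic Born-Markov (Lindblad-form) master equation for the system
        % density matrix rho, with jump operators L_k and rates gamma_k; the
        % damping and diffusion generators discussed above are of this type.
        \[
          \dot{\rho} = -\frac{i}{\hbar}[H_S, \rho]
          + \sum_k \gamma_k \left( L_k \rho L_k^{\dagger}
          - \tfrac{1}{2}\{ L_k^{\dagger} L_k, \rho \} \right).
        \]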

    Non-response biases in surveys of schoolchildren: the case of the English Programme for International Student Assessment (PISA) samples

    We analyse response patterns to an important survey of schoolchildren, exploiting rich auxiliary information on respondents' and non-respondents' cognitive ability that is correlated both with response and with the learning achievement that the survey aims to measure. The survey is the Programme for International Student Assessment (PISA), which sets response thresholds in an attempt to control data quality. We analyse the case of England for 2000, when response rates were deemed sufficiently high by the organizers of the survey to publish the results, and 2003, when response rates were a little lower and deemed of sufficient concern for the results not to be published. We construct weights that account for the pattern of non-response using two methods: propensity scores and the generalized regression estimator. There is clear evidence of biases, but there is no indication that the slightly higher response rates in 2000 were associated with higher-quality data. This underlines the danger of using response-rate thresholds as a guide to data quality.
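
    The propensity-score branch of the weighting can be sketched in a few lines. This is a minimal illustration with simulated data, assuming a single auxiliary ability score; it is not the authors' specification (and the generalized regression estimator is omitted).

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Simulated auxiliary data: an ability score known for every sampled
        # pupil, with response more likely among higher-ability pupils -- the
        # pattern that biases naive achievement estimates.
        ability = rng.normal(size=5000)
        p_respond = 1 / (1 + np.exp(-(0.2 + 0.8 * ability)))
        responded = rng.random(5000) < p_respond
        achievement = 50 + 10 * ability + rng.normal(scale=5, size=5000)

        # Fit response propensities on the auxiliary variable.
        X = ability.reshape(-1, 1)
        p_hat = LogisticRegression().fit(X, responded).predict_proba(X)[:, 1]

        # Inverse-propensity weights for respondents.
        w = 1 / p_hat[responded]
        print("naive respondent mean:", achievement[responded].mean())
        print("weighted mean:        ", np.average(achievement[responded], weights=w))
        print("full-sample mean:     ", achievement.mean())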

    The Efficiency Gains from Dynamic Tax Reform

    This paper presents a new simulation methodology for determining the pure efficiency gains from tax reform along the general equilibrium rational expectations growth path of life cycle economies. The principal findings concern the effects of switching from a proportional income tax with rates similar to those in the U.S. to either a proportional tax on consumption or a proportional tax on labor income. A switch to consumption taxation generates a sustainable welfare gain of almost 2 percent of lifetime resources. In contrast, a transition to wage taxation generates a loss of greater than ? percent of lifetime resources. A second general result is that even a mild degree of progressivity in the income tax system imposes a very large efficiency cost.
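
    One common way to express such gains (an assumption here; the paper's exact construction may differ) is as the uniform proportional increase in the baseline consumption path that matches reform-path welfare:

        % Welfare gain lambda, as a fraction of lifetime resources: scale the
        % baseline consumption path {c_t} until lifetime utility equals that
        % of the reform path {c_t'}.
        \[
          \sum_{t} \beta^{t} u\bigl((1+\lambda) c_t\bigr)
          = \sum_{t} \beta^{t} u\bigl(c_t'\bigr).
        \]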

    Should One Use the Ray-by-Ray Approximation in Core-Collapse Supernova Simulations?

    We perform the first self-consistent, time-dependent, multi-group calculations in two dimensions (2D) to address the consequences of using the ray-by-ray+ transport simplification in core-collapse supernova simulations. Such a dimensional reduction is employed by many researchers to facilitate their resource-intensive calculations. Our new code (Fornax) implements multi-D transport and can, by zeroing out transverse flux terms, emulate the ray-by-ray+ scheme. Using the same microphysics, initial models, resolution, and code, we compare the results of simulating 12, 15, 20, and 25 M⊙ progenitor models using these two transport methods. Our findings call into question the wisdom of the pervasive use of the ray-by-ray+ approach. Employing it leads to maximum post-bounce/pre-explosion shock radii that are almost universally larger by tens of kilometers than those derived using the more accurate scheme, typically leaving the post-bounce matter less bound and artificially more "explodable." In fact, for our 25 M⊙ progenitor, the ray-by-ray+ model explodes, while the corresponding multi-D transport model does not. Therefore, in two dimensions the combination of ray-by-ray+ with the axial sloshing hydrodynamics that is a feature of 2D supernova dynamics can result in quantitatively, and perhaps qualitatively, incorrect results. Comment: Updated and revised text; 13 pages; 13 figures; Accepted to ApJ.
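
    The essential difference between the two transport modes can be caricatured in a toy 2D update: ray-by-ray+ amounts to dropping the transverse flux so each radial ray evolves independently. The grid, velocities, and upwind scheme below are illustrative assumptions, not Fornax's actual discretization.

        import numpy as np

        def transport_step(E, v_r, v_t, dr, dtheta, dt, ray_by_ray=False):
            """One explicit upwind update of a 2D (radius x angle) field E.

            ray_by_ray=True zeroes the transverse (angular) flux, so each
            radial column evolves on its own -- a cartoon of the ray-by-ray+
            simplification; False keeps the full multi-D coupling.
            Assumes positive velocities for brevity.
            """
            F_r = v_r * E
            F_t = np.zeros_like(E) if ray_by_ray else v_t * E
            dE = np.zeros_like(E)
            dE[1:, :] -= (F_r[1:, :] - F_r[:-1, :]) / dr      # radial part
            dE[:, 1:] -= (F_t[:, 1:] - F_t[:, :-1]) / dtheta  # transverse part
            return E + dt * dE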

    The Fantastic Four: A plug 'n' play set of optimal control pulses for enhancing NMR spectroscopy

    We present highly robust, optimal control-based shaped pulses designed to replace all 90° and 180° hard pulses in a given pulse sequence for improved performance. Special attention was devoted to ensuring that the pulses can be simply substituted in a one-to-one fashion for the original hard pulses without any additional modification of the existing sequence. The set of four pulses for each nucleus therefore consists of 90° and 180° point-to-point (PP) and universal rotation (UR) pulses of identical duration. These 1 ms pulses provide uniform performance over resonance offsets of 20 kHz (¹H) and 35 kHz (¹³C) and tolerate reasonably large radio frequency (RF) inhomogeneity/miscalibration of ±15% (¹H) and ±10% (¹³C), making them especially suitable for NMR of small-to-medium-sized molecules (for which relaxation effects during the pulse are negligible) at an accessible and widely utilized spectrometer field strength of 600 MHz. The experimental performance of conventional hard-pulse sequences is shown to be greatly improved by incorporating the new pulses, each set referred to as the Fantastic Four (Fanta4). Comment: 28 pages, 19 figures.
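
    The robustness claims can be checked numerically with a Bloch-rotation sweep over offset and RF miscalibration. The sketch below propagates magnetization through an arbitrary shaped pulse; the hard-pulse example at the end is our own illustration (the Fanta4 waveforms themselves are not reproduced here).

        import numpy as np

        def propagate(amps_hz, phases_rad, dt, offset_hz, rf_scale=1.0):
            """Rotate M = (0, 0, 1) through a shaped pulse, step by step.

            amps_hz / phases_rad: per-step RF amplitude (nu_1, Hz) and phase;
            dt: step length (s); offset_hz: resonance offset; rf_scale: RF
            miscalibration factor (e.g. 0.85..1.15). Relaxation during the
            pulse is neglected, as the abstract assumes for small molecules.
            """
            M = np.array([0.0, 0.0, 1.0])
            for a, ph in zip(amps_hz, phases_rad):
                # effective-field axis and flip angle for this step
                w = 2 * np.pi * np.array([rf_scale * a * np.cos(ph),
                                          rf_scale * a * np.sin(ph),
                                          float(offset_hz)])
                th = np.linalg.norm(w) * dt
                if th == 0.0:
                    continue
                n = w / np.linalg.norm(w)
                # Rodrigues rotation of M about n by th (the overall sign
                # convention does not affect the |Mxy| profile probed below)
                M = (M * np.cos(th) + np.cross(n, M) * np.sin(th)
                     + n * np.dot(n, M) * (1 - np.cos(th)))
            return M

        # A 20 kHz hard 90-degree pulse (12.5 us) loses transverse
        # magnetization off resonance -- the deficiency shaped pulses address.
        amps, phases = np.full(50, 20e3), np.zeros(50)
        for off in (0.0, 5e3, 10e3, 20e3):
            Mx, My, _ = propagate(amps, phases, 12.5e-6 / 50, off)
            print(f"offset {off/1e3:5.1f} kHz -> |Mxy| = {np.hypot(Mx, My):.3f}")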

    Activation of Long Descending Propriospinal Neurons in Cat Spinal Cord

    Isolated mammalian spinal cord has been shown capable of generating locomotor activity. The propriospinal systems assumed to coordinate fore- and hindlimb activity are poorly understood. This study uses direct neuronal recording to characterize long descending propriospinal (LDP) neurons in terms of soma location and peripheral inputs. These neurons were first described in anatomical studies using retrograde axonal transport of horseradish peroxidase from the lumbar to the cervical spinal cord. Two hundred and thirty-one LDP neurons were identified in electrophysiological experiments. Of these, 123 responded to natural stimulation, and about 50% of the others were activated only by electrical stimulation. The majority of cells were located in laminae VII and VIII, in agreement with anatomical data. The most effective stimuli were mechanical stimulation of the skin, deep pressure to subcutaneous tissues, and paw joint movement. Both excitatory and inhibitory responses were observed.

    Sisyphus effects in a microwave-excited flux-qubit resonator system

    Sisyphus amplification, familiar from quantum optics, has recently been reported as a mechanism to explain the enhanced quality factor of a classical resonant (tank) circuit coupled to a superconducting flux qubit. Here we present data from a coupled system comprising a quantum-mechanical rf SQUID (flux qubit) reactively monitored by an ultrahigh-quality-factor, noise-driven rf resonator and excited by microwaves. The system exhibits enhancement of the tank-circuit resonance, bringing it significantly closer (within 1%) to the lasing limit than previously reported results.

    The problem of shot selection in basketball

    In basketball, every time the offense produces a shot opportunity the player with the ball must decide whether the shot is worth taking. In this paper, I explore the question of when a team should shoot and when they should pass up the shot by considering a simple theoretical model of the shot selection process, in which the quality of shot opportunities generated by the offense is assumed to fall randomly within a uniform distribution. I derive an answer to the question "how likely must the shot be to go in before the player should take it?", and show that this "lower cutoff" for shot quality f depends crucially on the number n of shot opportunities remaining (say, before the shot clock expires), with larger n demanding that only higher-quality shots be taken. The function f(n) is also derived in the presence of a finite turnover rate and used to predict the shooting rate of an optimal-shooting team as a function of time. This prediction is compared to observed shooting rates from the National Basketball Association (NBA), and the comparison suggests that NBA players tend to wait too long before shooting and undervalue the probability of committing a turnover. Comment: 7 pages, 2 figures; comparison to NBA data added.
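
    The zero-turnover version of the cutoff recursion is compact enough to compute directly. This is a minimal sketch assuming the abstract's uniform-quality model (shot quality q ~ U(0,1); shoot iff q exceeds the value of continuing); the turnover handling is a simplified guess at the paper's extension, not its exact form.

        def shot_cutoffs(n_max, turnover_prob=0.0):
            """Quality cutoffs f(n) for n remaining shot opportunities.

            With n chances left, shoot iff q >= f(n), where f(n) is the
            expected value of passing up the shot (continuation value),
            discounted by the risk of a turnover before the next chance.
            """
            V = 0.0            # V(0): shot clock expires, possession worth 0
            cutoffs = []
            for n in range(1, n_max + 1):
                c = (1 - turnover_prob) * V        # value of not shooting now
                cutoffs.append(c)
                # shoot if q >= c, else collect the continuation value c:
                V = (1 - c) * (1 + c) / 2 + c * c  # = (1 + c**2) / 2
            return cutoffs

        print(shot_cutoffs(5))                       # f(1)..f(5), no turnovers
        print(shot_cutoffs(5, turnover_prob=0.05))   # turnover risk lowers cutoffs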