
    Eliminating the Hadronic Uncertainty

    The Standard Model Lagrangian requires the values of the fermion masses, the Higgs mass and three other experimentally well-measured quantities as input in order to become predictive. These are typically taken to be $\alpha$, $G_\mu$ and $M_Z$. Using the first of these, however, introduces a hadronic contribution that leads to a significant error. If a quantity could be found that was measured at high energy with sufficient precision, it could replace $\alpha$ as input. The level of precision required for this is given for a number of precisely-measured observables: the $W$ boson mass would need to be measured with an error of $\pm 13$ MeV, $\Gamma_Z$ to $\pm 0.7$ MeV, and the polarization asymmetry $A_{LR}$ to $\pm 0.002$; the last of these would seem to be the most promising candidate. The rôle of renormalized parameters in perturbative calculations is reviewed, and the value of the electromagnetic coupling constant in the $\overline{\rm MS}$ renormalization scheme consistent with all experimental data is obtained to be $\alpha^{-1}_{\overline{\rm MS}}(M_Z^2) = 128.17$.
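The hadronic difficulty described above can be made concrete with the one-loop leptonic contribution to the running of the QED coupling, which, unlike the hadronic piece, is cleanly calculable in perturbation theory. A minimal sketch, with lepton and Z masses taken from standard tables as assumptions for illustration (not values from the paper):

```python
import math

# One-loop leptonic vacuum-polarisation shift of the QED coupling,
#   Delta-alpha_lep = (alpha/3pi) * sum_l [ln(M_Z^2/m_l^2) - 5/3],
# using standard lepton masses (assumed for illustration).
ALPHA0 = 1 / 137.036                          # fine-structure constant at q^2 = 0
M_Z = 91.1876                                 # Z boson mass in GeV
LEPTON_MASSES = [0.000511, 0.10566, 1.77686]  # e, mu, tau masses in GeV

delta_alpha_lep = (ALPHA0 / (3 * math.pi)) * sum(
    math.log(M_Z**2 / m**2) - 5.0 / 3.0 for m in LEPTON_MASSES
)

# The leptonic piece is known to high precision; the hadronic piece must
# be extracted from e+e- -> hadrons data and dominates the uncertainty in
# alpha(M_Z^2) -- the motivation for replacing alpha as an input.
print(f"Delta-alpha_lep ~ {delta_alpha_lep:.5f}")
```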

    Additional support for the TDK/MABL computer program

    An advanced version of the Two-Dimensional Kinetics (TDK) computer program was developed under contract and released to the propulsion community in early 1989. Exposure of the code to this community indicated a need for improvements in certain areas. In particular, the TDK code needed to be adapted to the special requirements imposed by the Space Transportation Main Engine (STME) development program. This engine utilizes injection of the gas generator exhaust into the primary nozzle by means of a set of slots. The subsequent mixing of this secondary stream with the primary stream, with finite rate chemical reaction, can have a major impact on the engine performance and the thermal protection of the nozzle wall. In attempting to calculate this reacting boundary layer problem, the Mass Addition Boundary Layer (MABL) module of TDK was found to be deficient in several respects. For example, when finite rate chemistry was used to determine gas properties (the MABL-K option), the program run times became excessive because extremely small step sizes were required to maintain numerical stability. A robust solution algorithm was required so that the MABL-K option could be viable as a rocket propulsion industry design tool. Solving this problem was a primary goal of the Phase 1 work effort.
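The step-size problem described here is the classic stiffness issue in chemical kinetics. As a caricature (this is not the algorithm adopted in TDK/MABL-K, just the standard argument for implicit methods), compare explicit and implicit Euler on a stiff linear test equation:

```python
# Toy illustration of stiffness: for y' = -k*y with large k (a stand-in
# for fast finite-rate chemistry), explicit Euler is stable only for
# h < 2/k, while implicit (backward) Euler is stable for any step size.
k, h, steps = 1000.0, 0.01, 50   # h is 5x the explicit stability limit
y_explicit = y_implicit = 1.0
for _ in range(steps):
    y_explicit = y_explicit + h * (-k * y_explicit)  # forward Euler: diverges
    y_implicit = y_implicit / (1.0 + k * h)          # backward Euler: decays

# forward Euler oscillates with exploding amplitude; backward Euler -> 0
print(abs(y_explicit), y_implicit)
```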

    Ignorance is bliss: General and robust cancellation of decoherence via no-knowledge quantum feedback

    A "no-knowledge" measurement of an open quantum system yields no information about any system observable; it only returns noise input from the environment. Surprisingly, performing such a no-knowledge measurement can be advantageous. We prove that a system undergoing no-knowledge monitoring has reversible noise, which can be cancelled by directly feeding back the measurement signal. We show how no-knowledge feedback control can be used to cancel decoherence in an arbitrary quantum system coupled to a Markovian reservoir that is being monitored. Since no-knowledge feedback does not depend on the system state or Hamiltonian, such decoherence cancellation is guaranteed to be general, robust and can operate in conjunction with any other quantum control protocol. As an application, we show that no-knowledge feedback could be used to improve the performance of dissipative quantum computers subjected to local loss.
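A purely classical caricature of the idea (an assumption-laden toy, not the quantum protocol of the paper): if the measurement record is exactly the environmental noise and carries no information about the system, feeding the record straight back cancels the disturbance.

```python
import random

# Classical toy of "no-knowledge" feedback: the system is driven purely
# by environmental noise xi, and the measurement record y returns exactly
# that noise (and no information about x). Direct feedback of -y then
# cancels the disturbance identically.
random.seed(0)
dt, steps = 0.01, 1000
x_open, x_fb = 0.0, 0.0
for _ in range(steps):
    xi = random.gauss(0.0, 1.0) * dt**0.5  # environmental noise increment
    y = xi                                 # "no-knowledge" record: pure noise
    x_open += xi                           # no feedback: random walk
    x_fb += xi - y                         # feed the record back: noise cancelled

print(x_open, x_fb)  # x_fb stays exactly 0.0
```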

    Controlling chaos in the quantum regime using adaptive measurements

    The continuous monitoring of a quantum system strongly influences the emergence of chaotic dynamics near the transition from the quantum regime to the classical regime. Here we present a feedback control scheme that uses adaptive measurement techniques to control the degree of chaos in the driven-damped quantum Duffing oscillator. This control relies purely on the measurement backaction on the system, making it a uniquely quantum control, and is only possible due to the sensitivity of chaos to measurement. We quantify the effectiveness of our control by numerically computing the quantum Lyapunov exponent over a wide range of parameters. We demonstrate that adaptive measurement techniques can control the onset of chaos in the system, pushing the quantum-classical boundary further into the quantum regime.

    Robustness of System-Filter Separation for the Feedback Control of a Quantum Harmonic Oscillator Undergoing Continuous Position Measurement

    We consider the effects of experimental imperfections on the problem of estimation-based feedback control of a trapped particle under continuous position measurement. These limitations violate the assumption that the estimator (i.e. filter) accurately models the underlying system, thus requiring a separate analysis of the system and filter dynamics. We quantify the parameter regimes for stable cooling, and show that the control scheme is robust to detector inefficiency, time delay, technical noise, and miscalibrated parameters. We apply these results to the specific context of a weakly interacting Bose-Einstein condensate (BEC). Given that this system has previously been shown to be less stable than a feedback-cooled BEC with strong interatomic interactions, this result shows that reasonable experimental imperfections do not limit the feasibility of cooling a BEC by continuous measurement and feedback.
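A classical toy version of the system-filter separation (assumed parameters, not the paper's quantum model): a noisy oscillator is cooled by damping the velocity *estimate* from a deliberately miscalibrated observer, illustrating that estimation-based feedback can tolerate modelling error.

```python
import random

# Toy classical analogue of estimation-based feedback cooling: a noisy
# harmonic oscillator is cooled by damping applied to a *filter* estimate
# built from noisy position measurements. The filter deliberately uses a
# miscalibrated frequency (10% off) to mimic modelling error.
random.seed(1)
dt, steps = 0.001, 50_000
omega_true, omega_filter = 1.0, 1.1   # filter frequency miscalibrated (assumed)
g, L1, L2 = 2.0, 30.0, 200.0          # feedback gain and observer gains (assumed)
meas_sigma, force_sigma = 0.05, 0.05  # measurement and technical force noise

x, v = 1.0, 0.0                       # true state: hot initial motion
xe, ve = 0.0, 0.0                     # filter estimate
e0 = 0.5 * (v**2 + omega_true**2 * x**2)  # initial energy = 0.5

for _ in range(steps):
    y = x + random.gauss(0.0, meas_sigma)  # noisy position record
    u = -g * ve                            # damp the *estimated* velocity
    # true dynamics (semi-implicit Euler, with technical force noise)
    x += v * dt
    v += (-omega_true**2 * x + u) * dt + random.gauss(0.0, force_sigma) * dt**0.5
    # Luenberger-style observer using the miscalibrated frequency
    err = y - xe
    xe += (ve + L1 * err) * dt
    ve += (-omega_filter**2 * xe + u + L2 * err) * dt

e_final = 0.5 * (v**2 + omega_true**2 * x**2)
print(e0, e_final)  # feedback cools despite the model mismatch
```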

    Natural age dispersion arising from the analysis of broken crystals, part I. Theoretical basis and implications for the apatite (U-Th)/He thermochronometer

    Over the last decade major progress has been made in developing both the theoretical and practical aspects of apatite (U-Th)/He thermochronometry, and it is now standard practice, and generally seen as best practice, to analyse single grain aliquots. These individual prismatic crystals are often fragments of larger crystals that broke during mineral separation along the weak basal cleavage in apatite. This is clearly indicated by the common occurrence of one or no clear crystal terminations on separated apatite grains, and by evidence of freshly broken ends when grains are viewed using a scanning electron microscope. This matters because if the 4He distribution within the whole grain is not homogeneous, because of partial loss due to thermal diffusion for example, then the fragments will all yield ages different from each other and from the whole grain age. Here we use a numerical model with a finite cylinder geometry to approximate 4He ingrowth and thermal diffusion within hexagonal prismatic apatite crystals. This is used to quantify the amount and patterns of inherent, natural age dispersion that arises from analysing broken crystals. A series of systematic numerical experiments were conducted to explore and quantify the pattern and behaviour of this source of dispersion using a set of 5 simple thermal histories that represent a range of plausible geological scenarios. In addition, some more complex numerical experiments were run to investigate the pattern and behaviour of grain dispersion seen in several real data sets. The results indicate that natural dispersion of a set of single fragment ages (defined as the range divided by the mean) arising from fragmentation alone varies from c. 7% even for rapid (c. 10 °C/Ma), monotonic cooling to over 50% for protracted, complex histories that cause significant diffusional loss of 4He. 
The magnitude of dispersion arising from fragmentation scales with the grain cylindrical radius, and is of a similar magnitude to dispersion expected from differences in absolute grain size alone (spherical equivalent radii of 40 to 150 μm). This source of dispersion is significant compared with typical analytical uncertainties on individual grain analyses (c. 6%) and standard deviations on multiple grain analyses from a single sample (c. 10-20%). Where there is a significant difference in the U and Th concentration of individual grains (eU), the effect of radiation damage accumulation on 4He diffusivity (assessed using the RDAAM model of Flowers et al. (2009)) is the primary cause of dispersion for samples that have experienced a protracted thermal history, and can cause dispersion in excess of 100% for realistic ranges of eU concentration (i.e. 5-100 ppm). Expected natural dispersion arising from the combined effects of reasonable variations in grain size (radii 40-125 μm), eU concentration (5-150 ppm) and fragmentation would typically exceed 100% for complex thermal histories. In addition to adding a significant component of natural dispersion to analyses, the effect of fragmentation also acts to decouple and corrupt expected correlations between grain ages and absolute grain size and, to a lesser extent, between grain age and effective uranium concentration (eU). Considering fragmentation explicitly as a source of dispersion and analysing how the different sources of natural dispersion all interact with each other provides a quantitative framework for understanding patterns of dispersion that otherwise appear chaotic. An important outcome of these numerical experiments is that they demonstrate that the pattern of age dispersion arising from fragmentation mimics the pattern of 4He distribution within the whole grains, thus providing an important source of information about the thermal history of the sample. 
We suggest that if the primary focus of a study is to extract the thermal history information from (U-Th)/He analyses then sampling and analytical strategies should aim to maximise the natural dispersion of grain ages, not minimise it, and should aim to analyse circa 20-30 grains from each sample. The key observations and conclusions drawn here are directly applicable to other thermochronometers, such as the apatite, rutile and titanite U-Pb systems, where the diffusion domain is approximated by the physical grain size.
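The mechanism can be shown with a 1-D toy (the paper uses a finite-cylinder numerical model; this sketch is only illustrative). After partial diffusive loss, take the 4He profile along the grain axis to be the slowest diffusion mode, c(z) = sin(πz/L); a fragment's apparent age is then proportional to its mean 4He content, and tip fragments date younger than interior fragments:

```python
import math

# Toy 1-D fragmentation model: apparent age of a fragment [a, b] of a
# grain of length L is proportional to the mean of c(z) = sin(pi*z/L)
# over the fragment (analytic integral, no production/decay constants).
L = 1.0

def mean_c(a, b):
    """Mean of sin(pi*z/L) over [a, b], i.e. the fragment's apparent age."""
    return (math.cos(math.pi * a / L) - math.cos(math.pi * b / L)) * L / (math.pi * (b - a))

whole = mean_c(0.0, L)                      # whole-grain apparent age = 2/pi
breaks = [0.1, 0.2, 0.3, 0.4, 0.5]          # break positions (fractions of L)
ages = []
for b in breaks:
    ages += [mean_c(0.0, b), mean_c(b, L)]  # the two fragments' apparent ages

# dispersion defined as in the abstract: range divided by mean
dispersion = (max(ages) - min(ages)) / (sum(ages) / len(ages))
print(f"whole-grain ~ {whole:.3f}, fragment dispersion ~ {dispersion:.0%}")
```

A uniform profile (no diffusive loss) would give every fragment the whole-grain age and zero dispersion, matching the abstract's point that dispersion mimics the 4He distribution.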

    A study of the relationship between macroscopic measures and physical processes occurring during crack closure

    Issued as Fiscal year report, Annual reports [nos. 1-3], and Final report, Project E-18-665 (subproject: E-18-666)

    Crowding Out Voluntary Contributions to Public Goods

    We test the null hypothesis that involuntary transfers for the provision of a public good will completely crowd out voluntary transfers against the warm-glow hypothesis that crowding-out will be incomplete because individuals care about giving. Our design differs from the related design used by Andreoni in considering two levels of the involuntary transfer and a wider range of contribution possibilities, and in mixing groups every period instead of every four periods. We analyse the data with careful attention to boundary effects. We retain the null hypothesis of complete crowding-out in two of three pairwise comparisons, but reject it in favour of incomplete crowding-out in the comparison most closely akin to Andreoni's design. Thus we confirm the existence of incomplete crowding-out in some environments, but suggest that the warm-glow hypothesis is inadequate in explaining it.
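The two hypotheses reduce to a single statistic, the crowd-out rate r = -(change in voluntary giving)/(change in the involuntary transfer): r = 1 is complete crowding-out, r < 1 is incomplete (warm-glow) crowding-out. The contribution figures below are hypothetical, purely to show the arithmetic:

```python
# Crowd-out rate: how much an increase in the involuntary transfer
# displaces voluntary giving. r = 1 -> complete crowding-out (the null);
# r < 1 -> incomplete crowding-out (warm-glow). Numbers are hypothetical.
def crowd_out_rate(g_low, g_high, tau_low, tau_high):
    """Rate at which a raised involuntary transfer displaces voluntary giving."""
    return -(g_high - g_low) / (tau_high - tau_low)

# hypothetical treatments: involuntary transfer raised from 2 to 5 tokens
r_complete = crowd_out_rate(g_low=6.0, g_high=3.0, tau_low=2.0, tau_high=5.0)
r_warmglow = crowd_out_rate(g_low=6.0, g_high=4.5, tau_low=2.0, tau_high=5.0)
print(r_complete, r_warmglow)  # 1.0 (complete) vs 0.5 (incomplete)
```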