
    Geology of the Snap Lake kimberlite intrusion, Northwest Territories, Canada: field observations and their interpretation

    The Cambrian (523 Ma) Snap Lake hypabyssal kimberlite intrusion, Northwest Territories, Canada, is a complex segmented diamond-bearing ore-body. Detailed geological investigations suggest that the kimberlite is a multi-phase intrusion with at least four magmatic lithofacies. In particular, olivine-rich (ORK) and olivine-poor (OPK) varieties of hypabyssal kimberlite have been identified. Key observations are that the olivine-rich lithofacies has a strong tendency to be located where the intrusion is thickest and that there is a good correlation between intrusion thickness, olivine crystal size and crystal content. Heterogeneities in the lithofacies are attributed to variations in intrusion thickness and structural complexities. The geometry and distribution of the lithofacies point to magmatic co-intrusion and flow segregation driven by fundamental rheological differences between the two phases. We envisage that the low-viscosity OPK magma acted as a lubricant for the highly viscous ORK magma. The presence of such low-viscosity, crystal-poor magmas may explain how crystal-laden kimberlite magmas (>60 vol.%) are able to reach the surface during kimberlite eruptions. We also document the absence of crystal settling and the development of an unusual subvertical fabric of elongate olivine crystals, which are explained by rapid degassing-induced quench crystallization of the magmas during and after intrusion.
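The rheological contrast between crystal-poor and crystal-laden magma invoked above can be illustrated with the standard Einstein-Roscoe relation, eta = eta_0 (1 - phi/phi_max)^(-2.5). This is a generic textbook relation, not a model from the paper; the maximum packing fraction phi_max and the example crystal fractions are assumed values chosen only for illustration.

```python
def relative_viscosity(phi, phi_max=0.65):
    """Einstein-Roscoe estimate of effective viscosity / melt viscosity
    for a suspension with crystal volume fraction phi (illustrative)."""
    if not 0.0 <= phi < phi_max:
        raise ValueError("phi must lie in [0, phi_max)")
    return (1.0 - phi / phi_max) ** -2.5

# Crystal-poor (OPK-like) vs crystal-rich (ORK-like) magma:
print(relative_viscosity(0.10))  # modest increase over the melt viscosity
print(relative_viscosity(0.60))  # orders of magnitude stiffer
```

Even this crude estimate shows why a >60 vol.% crystal cargo makes a magma hundreds of times more viscous than a nearly crystal-free one, consistent with the lubrication picture sketched in the abstract.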

    Persistent junk solutions in time-domain modeling of extreme mass ratio binaries

    In the context of metric perturbation theory for non-spinning black holes, extreme mass ratio binary (EMRB) systems are described by distributionally forced master wave equations. Numerical solution of a master wave equation as an initial boundary value problem requires initial data. However, because the correct initial data for generic-orbit systems is unknown, specification of trivial initial data is a common choice, despite being inconsistent and resulting in a solution which is initially discontinuous in time. As is well known, this choice leads to a "burst" of junk radiation which eventually propagates off the computational domain. We observe another unintended consequence of trivial initial data: development of a persistent spurious solution, here referred to as the Jost junk solution, which contaminates the physical solution for long times. This work studies the influence of both types of junk on metric perturbations, waveforms, and self-force measurements, and it demonstrates that smooth modified source terms mollify the Jost solution and reduce junk radiation. Our concluding section discusses the applicability of these observations to other numerical schemes and techniques used to solve distributionally forced master wave equations.
    Comment: Uses revtex4, 16 pages, 9 figures, 3 tables. Document reformatted and modified based on referee's report. Commentary added which addresses the possible presence of persistent junk solutions in other approaches for solving master wave equations.
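The inconsistency described above can be reproduced in a toy setting: a flat-space 1+1D wave equation (not the actual Regge-Wheeler/Zerilli master equations) with a narrow-Gaussian approximation to a distributional source that is already "on" at t = 0, evolved from trivial data. The grid sizes, source profile, and source location are illustrative choices.

```python
import numpy as np

# Toy analogue: u_tt = u_xx + F(t) * delta_approx(x - xs), trivial data
# u = u_t = 0. Because the source is active at t = 0, the trivial data
# are inconsistent and a burst of junk radiation leaves the source region.
nx, nt = 401, 800
dx = 0.05
dt = 0.5 * dx                      # CFL-stable time step
x = np.linspace(-10.0, 10.0, nx)
xs = 0.0                           # source location (illustrative)
delta = np.exp(-((x - xs) / (2 * dx)) ** 2)
delta /= delta.sum() * dx          # narrow Gaussian ~ Dirac delta

u_prev = np.zeros(nx)              # trivial initial data
u = np.zeros(nx)
for n in range(nt):                # standard leapfrog update
    t = n * dt
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u_next = 2 * u - u_prev + dt**2 * (lap + np.sin(t) * delta)
    u_next[0] = u_next[-1] = 0.0   # crude reflecting walls
    u_prev, u = u, u_next

print(np.max(np.abs(u)))           # nonzero field: driven solution + junk
```

This sketch only shows that trivial data plus an active distributional source excite spurious transients; diagnosing a persistent Jost-type contamination, as the paper does, requires the actual black hole potentials and long evolutions.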

    Determination of polarized parton distribution functions with recent data on polarization asymmetries

    Global analysis has been performed within the next-to-leading order in Quantum Chromodynamics (QCD) to determine polarized parton distributions with new experimental data on spin asymmetries. The new data set includes JLab, HERMES, and COMPASS measurements on the spin asymmetry A_1 for the neutron and deuteron in lepton scattering. Our new analysis also utilizes the double-spin asymmetry for pi^0 production in polarized pp collisions, A_{LL}^{pi^0}, measured by the PHENIX collaboration. Because of these new data, uncertainties of the polarized PDFs are reduced. In particular, the JLab, HERMES, and COMPASS measurements are valuable for determining Delta d_v(x) at large x and Delta qbar(x) at x~0.1. The PHENIX pi^0 data significantly reduce the uncertainty of Delta g(x). Furthermore, we discuss a possible constraint on Delta g(x) at large x by using the HERMES data on g_1^d in comparison with the COMPASS ones at x~0.05.
    Comment: 11 pages, REVTeX, 13 eps files, Phys. Rev. D in press
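The generic mechanism behind the abstract's central claim, that adding independent data points shrinks the fitted parameter uncertainties, can be sketched with a toy weighted least-squares fit. The linear "asymmetry shape", point errors, and sample sizes below are invented for illustration; they stand in for, and are vastly simpler than, an NLO QCD global fit.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_uncertainty(n_points):
    """1-sigma errors on the two parameters of a toy linear fit,
    from n_points equally weighted pseudo-measurements."""
    x = np.linspace(0.05, 0.8, n_points)      # toy kinematic range
    y_true = 0.5 - 0.3 * x                    # toy asymmetry shape (assumed)
    sigma = 0.05                              # per-point error (assumed)
    y = y_true + rng.normal(0.0, sigma, n_points)
    coeffs, cov = np.polyfit(x, y, 1, w=np.full(n_points, 1 / sigma),
                             cov=True)        # weighted LSQ + covariance
    return np.sqrt(np.diag(cov))

print(fit_uncertainty(10))    # fewer data: larger parameter errors
print(fit_uncertainty(100))   # more data: errors shrink roughly as 1/sqrt(N)
```

The same statistics drive the reduction of the Delta d_v, Delta qbar, and Delta g uncertainties quoted above, with the extra subtlety that different observables constrain different PDF combinations and x ranges.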

    Fast prediction and evaluation of gravitational waveforms using surrogate models

    [Abridged] We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and in more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced-order model that can be used as a surrogate for the true/fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order m L + m c_f online operations, where c_f denotes the fitting function operation count and, typically, m << L. We generate accurate surrogate models for Effective One Body (EOB) waveforms of non-spinning binary black hole coalescences with durations as long as 10^5 M, mass ratios from 1 to 10, and for multiple harmonic modes. We find that these surrogates are three orders of magnitude faster to evaluate as compared to the cost of generating EOB waveforms in standard ways. Surrogate model building for other waveform models follows the same steps and has the same low online scaling cost. For expensive numerical simulations of binary black hole coalescences we thus anticipate large speedups in generating new waveforms with a surrogate. As waveform generation is one of the dominant costs in parameter estimation algorithms and parameter space exploration, surrogate models offer a new and practical way to dramatically accelerate such studies without impacting accuracy.
    Comment: 20 pages, 17 figures, uses revtex 4.1. Version 2 includes new numerical experiments for longer waveform durations, larger regions of parameter space and multi-mode models.
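The three offline steps described above can be sketched end to end on a toy waveform family h(t; q) = sin(qt) exp(-t/20), standing in for EOB waveforms. The greedy tolerance, training set, and polynomial fit degree are illustrative choices, not the paper's settings.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 500)               # L time samples
qs_train = np.linspace(1.0, 2.0, 50)          # toy parameter training set

def h(q):
    return np.sin(q * t) * np.exp(-t / 20.0)  # toy waveform family

train = np.array([h(q) for q in qs_train])

# Step 1: greedy selection of an orthonormal reduced basis.
basis, residual = [], train.copy()
while np.max(np.linalg.norm(residual, axis=1)) > 1e-6:
    worst = np.argmax(np.linalg.norm(residual, axis=1))
    e = residual[worst] / np.linalg.norm(residual[worst])
    basis.append(e)
    residual -= np.outer(residual @ e, e)     # project out the new vector
B = np.array(basis)                           # m x L
m = len(B)

# Step 2: empirical interpolation selects m time samples (nodes).
nodes = [int(np.argmax(np.abs(B[0])))]
for i in range(1, m):
    V = B[:i, nodes].T                        # interpolation matrix so far
    c = np.linalg.solve(V, B[i, nodes])
    err = B[i] - c @ B[:i]                    # interpolation error of e_i
    nodes.append(int(np.argmax(np.abs(err))))
V = B[:, nodes].T                             # final m x m matrix

# Step 3: fit the waveform value at each node across the parameter range.
fits = [np.polyfit(qs_train, train[:, nodes][:, j], 10) for j in range(m)]

def surrogate(q):
    """Online evaluation: m fits, one m x m solve, one basis combination."""
    vals = np.array([np.polyval(f, q) for f in fits])
    return np.linalg.solve(V, vals) @ B

q_test = 1.37                                 # a parameter not in training
print(np.max(np.abs(surrogate(q_test) - h(q_test))))  # small residual
```

The online cost is visible in `surrogate`: m fit evaluations plus an m x L basis combination, i.e. the m L + m c_f scaling quoted in the abstract, with m much smaller than the 500 time samples here.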

    The Effects of Dark Matter Decay and Annihilation on the High-Redshift 21 cm Background

    The radiation background produced by the 21 cm spin-flip transition of neutral hydrogen at high redshifts can be a pristine probe of fundamental physics and cosmology. At z~30-300, the intergalactic medium (IGM) is visible in 21 cm absorption against the cosmic microwave background (CMB), with a strength that depends on the thermal (and ionization) history of the IGM. Here we examine the constraints this background can place on dark matter decay and annihilation, which could heat and ionize the IGM through the production of high-energy particles. Using a simple model for dark matter decay, we show that, if the decay energy is immediately injected into the IGM, the 21 cm background can detect energy injection rates >10^{-24} eV cm^{-3} sec^{-1}. If all the dark matter is subject to decay, this allows us to constrain dark matter lifetimes <10^{27} sec. Such energy injection rates are much smaller than those typically probed by the CMB power spectra. The expected brightness temperature fluctuations at z~50 are a fraction of a mK and can vary from the standard calculation by up to an order of magnitude, although the difference can be significantly smaller if some of the decay products free-stream to lower redshifts. For self-annihilating dark matter, the fluctuation amplitude can differ by a factor <2 from the standard calculation at z~50. Note also that, in contrast to the CMB, the 21 cm probe is sensitive to both the ionization fraction and the IGM temperature, in principle allowing better constraints on the decay process and heating history. We also show that strong IGM heating and ionization can lead to an enhanced H_2 abundance, which may affect the earliest generations of stars and galaxies.
    Comment: submitted to Phys Rev D, 14 pages, 8 figures
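The correspondence between the quoted injection-rate sensitivity and the lifetime bound can be checked on the back of an envelope: a decaying component of energy density rho_dm and lifetime tau injects energy at a rate ~ rho_dm / tau. The cosmological numbers below are standard assumed values, not taken from the paper.

```python
# Back-of-the-envelope check (assumed standard cosmology, h ~ 0.7):
rho_crit = 5.2e3        # critical density today, eV / cm^3
omega_dm = 0.26         # dark matter density parameter (assumed)
tau = 1e27              # decay lifetime in seconds

rho_dm = omega_dm * rho_crit     # ~1.4e3 eV / cm^3 (mean comoving density)
injection_rate = rho_dm / tau    # eV cm^-3 s^-1
print(injection_rate)            # ~1e-24, matching the quoted threshold
```

So a detectable rate of ~10^{-24} eV cm^{-3} s^{-1} indeed translates into sensitivity to lifetimes of order 10^{27} s, consistent with the abstract's statement.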

    Strategy towards Mirror-fermion Signatures

    The existence of mirror fermions interacting strongly under a new gauge group and having masses near the electroweak scale has been recently proposed as a viable alternative to the standard-model Higgs mechanism. The main purpose of this work is to investigate which specific experimental signals are needed to clearly differentiate the mirror-fermion model from other new-physics models. In particular, the case is made for a future large lepton collider with c.o.m. energies of roughly 4 TeV or higher.
    Comment: 30 Latex pages, 2 postscript figures

    Technical Note: A numerical test-bed for detailed ice nucleation studies in the AIDA cloud simulation chamber

    The AIDA (Aerosol Interactions and Dynamics in the Atmosphere) aerosol and cloud chamber of Forschungszentrum Karlsruhe can be used to test the ice forming ability of aerosols. The AIDA chamber is extensively instrumented, including pressure, temperature and humidity sensors, and optical particle counters. Expansion cooling using mechanical pumps leads to ice supersaturation conditions and possible ice formation. In order to describe the evolving chamber conditions during an expansion, a parcel model was modified to account for diabatic heat and moisture interactions with the chamber walls. Model results are shown for a series of expansions where the initial chamber temperature ranged from −20°C to −60°C and which used desert dust as ice forming nuclei. During each expansion, the initial formation of ice particles was clearly observed. For the colder expansions there were two clear ice nucleation episodes.

    In order to test the ability of the model to represent the changing chamber conditions and to give confidence in the observations of chamber temperature and humidity, and ice particle concentration and mean size, ice particles were simply added as a function of time so as to reproduce the observations of ice crystal concentration. The time interval and chamber conditions over which ice nucleation occurs are therefore accurately known, enabling the model to be used as a test bed for different representations of ice formation.
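The modified-parcel-model idea above, adiabatic expansion cooling plus a diabatic relaxation of the gas temperature toward the chamber walls, can be sketched with a two-term temperature equation. All parameters (pumping rate, wall coupling time, initial state) are invented for illustration and are not the AIDA values.

```python
# Minimal parcel-model sketch: dT/dt = adiabatic term + wall relaxation.
cp = 1005.0            # J kg^-1 K^-1, specific heat of dry air
R = 287.0              # J kg^-1 K^-1, gas constant for dry air
T_wall = 253.15        # K, wall temperature (-20 C, assumed)
tau_wall = 60.0        # s, wall heat-exchange timescale (assumed)
pump = -500.0          # Pa s^-1, pressure drop from pumping (assumed)

T, p = T_wall, 1.0e5   # start in equilibrium with the walls
dt = 0.1
for _ in range(int(120 / dt)):              # 120 s expansion
    p += pump * dt
    # adiabatic cooling + diabatic relaxation toward the walls
    dT = (R * T / (cp * p)) * pump + (T_wall - T) / tau_wall
    T += dT * dt

print(T)   # cooler than the walls, but warmer than a pure adiabat
print(p)
```

The wall term is what distinguishes a chamber expansion from a free atmospheric parcel: without it the gas would follow the dry adiabat, while the real chamber gas is continually reheated by the walls, exactly the diabatic effect the paper's modified model accounts for.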