
    Radiation can never again dominate Matter in a Vacuum Dominated Universe

    We demonstrate that in a vacuum-energy-dominated expansion phase, perhaps surprisingly, neither the decay of matter nor matter-antimatter annihilation into relativistic particles can ever cause radiation to dominate over matter again in the future history of the universe. Comment: updated version, as it will appear in Phys. Rev. D; title changed, with some other minor alterations.

    Predicting Big Bang Deuterium

    We present new upper and lower bounds on the primordial abundances of deuterium and helium-3 based on observational data from the solar system and the interstellar medium. Independent of any model for the primordial production of the elements, we find (at the 95% C.L.): $1.5 \times 10^{-5} \le (D/H)_P \le 10.0 \times 10^{-5}$ and $(^3He/H)_P \le 2.6 \times 10^{-5}$. When combined with the predictions of standard big bang nucleosynthesis, these constraints lead to a 95% C.L. bound on the primordial abundance of deuterium: $(D/H)_{best} = (3.5^{+2.7}_{-1.8}) \times 10^{-5}$. Measurements of deuterium absorption in the spectra of high-redshift QSOs will directly test this prediction. The implications of this prediction for the primordial abundances of helium-4 and lithium-7 are discussed, as well as those for the universal density of baryons. Comment: Revised version of the paper to reflect comments of the referee and reply to suggestions of Copi, Schramm, and Turner regarding the overall analysis and treatment of the chemical evolution of D and He-3. The best-fit D/H abundance changes from $(2.3^{+3.0}_{-1.0}) \times 10^{-5}$ to $(3.5^{+2.7}_{-1.8}) \times 10^{-5}$. See also hep-ph/950531

    Comprehensive maximum likelihood estimation of diffusion compartment models towards reliable mapping of brain microstructure

    Diffusion MRI is a key in-vivo, non-invasive imaging capability that can probe the microstructure of the brain. However, its limited resolution requires complex voxelwise generative models of the diffusion. Diffusion compartment (DC) models divide the voxel into smaller compartments in which diffusion is homogeneous. We present a comprehensive framework for maximum likelihood estimation (MLE) of such models that jointly features ML estimators of (i) the baseline MR signal, (ii) the noise variance, (iii) compartment proportions, and (iv) diffusion-related parameters. ML estimators are key to providing reliable mapping of brain microstructure, as they are asymptotically unbiased and of minimal variance. We compare our algorithm (which efficiently exploits analytical properties of MLE) to alternative implementations and a state-of-the-art strategy. Simulation results show that our approach offers the best reduction in computational burden while guaranteeing convergence of numerical estimators to the MLE. In-vivo results also reveal remarkably reliable microstructure mapping in areas as complex as the centrum semiovale. Our ML framework accommodates any DC model and is freely available for multi-tensor models as part of the ANIMA software (https://github.com/Inria-Visages/Anima-Public/wiki). Stamm, Aymeric; Commowick, Olivier; Warfield, Simon K.; Vantini, Simone
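The joint-estimation idea can be sketched on a toy problem. The two-compartment bi-exponential signal below, its parameter values, and the use of scipy's Nelder-Mead optimizer are illustrative assumptions, not the paper's actual DC model or algorithm; under Gaussian noise of known variance, the MLE of the signal parameters reduces to least squares, and the noise variance then has a closed-form ML estimate from the residuals.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-compartment signal model (not the paper's exact DC model):
# S(b) = S0 * (f * exp(-b * d1) + (1 - f) * exp(-b * d2))
def signal(params, b):
    s0, f, d1, d2 = params
    return s0 * (f * np.exp(-b * d1) + (1 - f) * np.exp(-b * d2))

# Under Gaussian noise of known variance sigma2, the MLE of the signal
# parameters minimizes the sum of squared residuals.
def neg_log_likelihood(params, b, y, sigma2):
    r = y - signal(params, b)
    return 0.5 * np.sum(r**2) / sigma2

b = np.linspace(0.0, 3000.0, 30)        # b-values in s/mm^2 (assumed)
true = (1.0, 0.6, 1.0e-3, 0.2e-3)       # S0, fraction, two diffusivities
rng = np.random.default_rng(0)
y = signal(true, b) + rng.normal(0.0, 0.01, b.size)

res = minimize(neg_log_likelihood, x0=(0.9, 0.5, 0.8e-3, 0.3e-3),
               args=(b, y, 0.01**2), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-10, "fatol": 1e-10})
s0_hat, f_hat, d1_hat, d2_hat = res.x

# Closed-form ML estimate of the noise variance from the fitted residuals,
# mirroring the joint estimation of item (ii) above:
sigma2_hat = np.mean((y - signal(res.x, b))**2)
```

A production framework would exploit analytical gradients and the structure of the likelihood rather than a derivative-free simplex search; this sketch only shows the joint estimation of signal parameters and noise variance.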

    Dirac Fields in Loop Quantum Gravity and Big Bang Nucleosynthesis

    Big Bang nucleosynthesis requires a fine balance between the equations of state for photons and relativistic fermions. Several corrections to equation of state parameters arise from classical and quantum physics, which are derived here from a canonical perspective. In particular, loop quantum gravity allows one to compute quantum gravity corrections for Maxwell and Dirac fields. Although the classical actions are very different, quantum corrections to the equation of state are remarkably similar. To lowest order, these corrections take the form of an overall expansion-dependent multiplicative factor in the total density. We use these results, along with the predictions of Big Bang nucleosynthesis, to place bounds on these corrections. Comment: 15 pages, 2 figures; v2: new discussion of the relevance of quantum gravity corrections (Sec. II) and new estimates (Sec. V).

    Comparison of the mean photospheric magnetic field and the interplanetary magnetic field

    Polarity comparison of the solar magnetic field and the interplanetary magnetic field.

    Post-hoc derivation of SOHO Michelson Doppler Imager flat fields

    Context: The SOHO satellite now offers a unique perspective on the Sun, as it is the only space-based instrument that can provide large, high-resolution data sets over an entire 11-year solar cycle. This unique property enables detailed studies of long-term variations in the Sun. One significant problem when looking for such changes is determining what component of any variation is due to deterioration of the instrument and what is due to the Sun itself. One of the key parameters that changes over time is the apparent sensitivity of individual pixels in the CCD array. This can change considerably as a result of optics damage, radiation damage, and aging of the sensor itself. In addition to reducing the sensitivity of the telescope over time, this damage significantly changes the uniformity of the flat field of the instrument, a property that is very hard to recalibrate in space. For procedures such as feature tracking and intensity analysis, this can cause significant errors. Aims: We present a method for deriving high-precision flat fields for high-resolution MDI continuum data, using analysis of existing continuum and magnetogram data sets. Methods: A flat field is constructed using a large set (1000-4000 frames) of cospatial magnetogram and continuum data. The magnetogram data are used to identify and mask out magnetically active regions in the continuum data, allowing systematic biases to be avoided. This flat field can then be used to correct individual continuum images from a similar time. Results: This method allows us to reduce the residual flat-field error by a factor of about 6-30, depending on the area considered, enough to significantly change the results of correlation-tracking analysis. One significant advantage of this method is that it can be applied retrospectively using archived data, without requiring any special satellite operations.
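The masked-average idea from the Methods section can be sketched in a few lines of numpy. The function below, its array shapes, and the threshold value are illustrative assumptions, not the MDI pipeline: active pixels are masked per frame using the magnetogram, and the remaining quiet-Sun samples are averaged per pixel to estimate the flat field.

```python
import numpy as np

def derive_flat(continuum, magnetograms, b_thresh=50.0):
    """Estimate a pixel-wise flat field from cospatial continuum/magnetogram
    stacks of shape (n_frames, ny, nx). Pixels whose field strength exceeds
    b_thresh (an assumed threshold) are masked as magnetically active, so
    spots and faculae do not bias the per-pixel average."""
    quiet = np.abs(magnetograms) < b_thresh        # quiet-Sun mask, per frame
    masked = np.where(quiet, continuum, np.nan)    # drop active pixels
    per_pixel = np.nanmean(masked, axis=0)         # average over many frames
    return per_pixel / np.nanmedian(per_pixel)     # normalize to unit median

# A corrected image is then simply: corrected = image / flat
```

With a large frame set, each pixel keeps enough quiet-Sun samples that solar granulation and noise average out, leaving the instrumental response.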

    How much should one sample to accurately predict the distribution of species assemblages? A virtual community approach

    Correlative species distribution models (SDMs) are widely used to predict species distributions and assemblages, with many fundamental and applied uses. Different factors have been shown to affect SDM prediction accuracy. However, real data cannot give unambiguous answers on these issues, and for this reason artificial data have been increasingly used in recent years. Here, we move one step further by assessing how different factors affect the prediction accuracy of virtual assemblages obtained by stacking individual SDM predictions (stacked SDMs, S-SDM). We modelled 100 virtual species in a real study area, testing five different factors: sample size (200, 800 or 3200 sites), sampling method (nested, non-nested), sampling prevalence (25%, 50%, 75% or the species' true prevalence), modelling technique (GAM, GLM, BRT and RF) and thresholding method (ROC, MaxTSS and MaxKappa). We show that the accuracy of S-SDM predictions is mostly affected by modelling technique, followed by sample size. Models fitted by GAM/GLM had higher accuracy and lower variance than BRT/RF. Model accuracy increased with sample size, and a sampling strategy reflecting the true prevalence of the species was most successful. However, even with sample sizes as high as >3000 sites, residual uncertainty remained in the predictions, potentially reflecting a bias introduced by creating and/or resampling the virtual species. Therefore, when evaluating the accuracy of predictions from S-SDMs fitted with real field data, one can hardly expect perfect accuracy, and reasonably high values of similarity or predictive success can already be seen as valuable predictions. We recommend the use of a 'plot-like' sampling method (the best approximation of the species' true prevalence) rather than simply increasing the number of presences and absences of species.
    As presented here, virtual simulations might be used more systematically in future studies to inform about the best accuracy level that one could expect given the characteristics of the data and the methods used to fit and stack SDMs.
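The stacking workflow can be sketched end to end: simulate virtual species, fit one model per species, binarize each prediction, and sum the binary maps into predicted richness. Everything below is an illustrative stand-in for the study's setup: logistic regression replaces the GAM/GLM/BRT/RF techniques, and a simple prevalence cut-off replaces the ROC/MaxTSS/MaxKappa thresholding rules.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_sites, n_env, n_species = 500, 3, 20
env = rng.normal(size=(n_sites, n_env))              # environmental predictors

# Virtual species: random linear responses to the environment
coefs = rng.normal(size=(n_species, n_env))
probs = 1.0 / (1.0 + np.exp(-(env @ coefs.T)))
occ = rng.random((n_sites, n_species)) < probs       # true presence/absence

# Fit one SDM per species, threshold it, and stack into predicted richness
richness_pred = np.zeros(n_sites)
for s in range(n_species):
    model = LogisticRegression().fit(env, occ[:, s])
    p = model.predict_proba(env)[:, 1]
    thr = occ[:, s].mean()      # prevalence cut-off, a stand-in for MaxTSS
    richness_pred += (p >= thr)

richness_true = occ.sum(axis=1)
```

Comparing `richness_pred` against `richness_true` site by site is exactly the kind of accuracy assessment the virtual-community approach enables, since with real field data the true assemblage is never fully known.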