64 research outputs found

    Brane World Cosmologies and Statistical Properties of Gravitational Lenses

    Brane world cosmologies seem to provide an alternative explanation for the present accelerated stage of the Universe with no need to invoke either a cosmological constant or an exotic \emph{quintessence} component. In this paper we investigate statistical properties of gravitational lenses for some particular scenarios based on this large-scale modification of gravity. We show that a large class of such models is compatible with the current lensing data for values of the matter density parameter $\Omega_{\rm m} \leq 0.94$ ($1\sigma$). If one fixes $\Omega_{\rm m} \simeq 0.3$, as suggested by most dynamical estimates of the quantity of matter in the Universe, the predicted number of lensed quasars requires a slightly open universe with a crossover distance between the four- and five-dimensional gravities of the order of $1.76 H_0^{-1}$. Comment: 6 pages, 3 figures, revtex
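    The crossover scale $r_c$ enters this class of models through a DGP-type modified Friedmann equation. A standard form is sketched below; it is not quoted from the paper, and normalization conventions vary between authors:

    ```latex
    H^2 + \frac{k}{a^2}
      = \left( \sqrt{\frac{8\pi G \rho}{3} + \frac{1}{4 r_c^2}} + \frac{1}{2 r_c} \right)^2 ,
    \qquad
    \Omega_{r_c} \equiv \frac{1}{4 r_c^2 H_0^2} .
    ```

    With the quoted $r_c \simeq 1.76\,H_0^{-1}$, this convention would give $\Omega_{r_c} \simeq 1/(4 \times 1.76^2) \approx 0.08$.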

    Computational advances in gravitational microlensing: a comparison of CPU, GPU, and parallel, large data codes

    To assess how future progress in gravitational microlensing computation at high optical depth will rely on both hardware and software solutions, we compare a direct inverse ray-shooting code implemented on a graphics processing unit (GPU) with both a widely used hierarchical tree code on a single-core CPU and a recent implementation of a parallel tree code suitable for a CPU-based cluster supercomputer. We examine the accuracy of the tree codes through comparison with a direct code over a much wider range of parameter space than has been feasible before. We demonstrate that all three codes achieve comparable accuracy, and that the choice of approach depends on the scale and nature of the microlensing problem under investigation. On current hardware there is little difference in processing speed between the single-core CPU tree code and the GPU direct code; however, the recent plateau in single-core CPU speeds means the existing tree code can no longer ride Moore's-law-like increases in processing speed. Instead, we anticipate a rapid increase in GPU capabilities over the next few years, which favours the direct code. We suggest that progress in other areas of astrophysical computation may benefit from a transition to GPUs through the use of "brute force" algorithms, rather than attempting to port the current best solution directly to a GPU language -- for certain classes of problems, a simple GPU implementation may already be no worse than an optimised single-core CPU version. Comment: 11 pages, 4 figures, accepted for publication in New Astronomy
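    The "brute force" inverse ray-shooting approach referred to above is simple enough to sketch. Below is a minimal serial NumPy version with a hypothetical random star field; all names and parameters are illustrative and not taken from the codes being compared, and real production codes shoot billions of rays per map, which is where GPUs pay off:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical random field of point-mass microlenses, positions in
    # units of the Einstein radius (theta_E = 1 for every lens).
    N_LENSES = 100
    lens_pos = rng.uniform(-10.0, 10.0, size=(N_LENSES, 2))

    def shoot(theta):
        """Map image-plane rays theta (n, 2) to source-plane positions via the
        point-mass lens equation
            beta = theta - sum_i (theta - x_i) / |theta - x_i|^2 .
        A tiny softening avoids division by zero for rays hitting a lens."""
        d = theta[:, None, :] - lens_pos[None, :, :]      # (n_rays, N_LENSES, 2)
        r2 = np.sum(d * d, axis=-1) + 1e-12               # squared separations
        return theta - np.sum(d / r2[..., None], axis=1)  # subtract total deflection

    # Shoot a regular grid of rays and histogram where they land: the counts
    # per source-plane pixel trace the microlensing magnification pattern.
    g = np.linspace(-5.0, 5.0, 200)
    rays = np.array(np.meshgrid(g, g)).reshape(2, -1).T   # (40000, 2)
    beta = shoot(rays)
    mag_map, _, _ = np.histogram2d(beta[:, 0], beta[:, 1],
                                   bins=100, range=[[-2.0, 2.0], [-2.0, 2.0]])
    ```

    The per-ray work is embarrassingly parallel (each ray only reads the shared lens list), which is why the direct method maps cleanly onto a GPU, while tree codes instead reduce the cost of the deflection sum per ray at the price of an approximation.
    
    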

    Calculating exclusion limits for Weakly Interacting Massive Particle direct detection experiments without background subtraction

    Competitive limits on the weakly interacting massive particle (WIMP) spin-independent scattering cross section are currently being produced by 76Ge detectors originally designed to search for neutrinoless double beta decay, such as the Heidelberg-Moscow and IGEX experiments. In the absence of background subtraction, limits on the WIMP interaction cross section are set by calculating the upper confidence limit on the theoretical event rate, given the observed event rate. The standard analysis technique calculates the 90% upper confidence limit on the number of events in each energy bin and excludes any set of parameters (WIMP mass and cross section) whose theoretical event rate in any bin exceeds that bin's 90% upper confidence limit. We show that, if there is more than one energy bin, this produces exclusion limits that are actually at a lower degree of confidence than 90%, and that are hence erroneously tight. We formulate criteria which produce true 90% confidence exclusion limits in these circumstances: calculating the individual bin confidence level for which the overall probability that no bin exceeds its limit is 90%, and calculating the 90% minimum confidence limit on the number of bins which exceed their individual bin 90% confidence limits. We then compare the limits on the WIMP cross section produced by these criteria with those found using the standard technique, using data from the Heidelberg-Moscow and IGEX experiments. Comment: 6 pages, 3 figures, 3 tables, shortened version to appear in Phys. Rev. D, contents otherwise unchanged
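    The first combination rule described above can be made concrete with a small sketch (illustrative only; the paper's actual statistical machinery is more involved). For N independent bins, demanding a naive 90% upper limit in every bin gives overall confidence below 90%, so the per-bin confidence level must be raised until the probability that no bin fluctuates above its limit is itself 90%:

    ```python
    import math

    def poisson_cdf(n, mu):
        # P(X <= n) for a Poisson variable with mean mu
        return math.exp(-mu) * sum(mu**k / math.factorial(k) for k in range(n + 1))

    def poisson_upper_limit(n_obs, cl, tol=1e-9):
        """Classical one-sided upper limit: the smallest mean mu such that
        observing n_obs or fewer events has probability <= 1 - cl."""
        lo, hi = 0.0, 10.0 + 10.0 * n_obs
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if poisson_cdf(n_obs, mid) > 1.0 - cl:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    n_bins = 10
    # Per-bin confidence level c such that c**n_bins = 0.90, i.e. the chance
    # that *no* bin exceeds its individual limit is the desired overall 90%.
    per_bin_cl = 0.90 ** (1.0 / n_bins)                   # ~0.9895 for 10 bins
    limit_90 = poisson_upper_limit(5, 0.90)               # ~9.27 events for n_obs = 5
    limit_corrected = poisson_upper_limit(5, per_bin_cl)  # higher CL -> larger limit
    ```

    Because the corrected per-bin limit is larger, fewer parameter sets are excluded, which is the sense in which the standard multi-bin limits are "erroneously tight".
    
    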

    Toward an internally consistent astronomical distance scale

    Accurate astronomical distance determination is crucial for all fields in astrophysics, from Galactic to cosmological scales. Despite, or perhaps because of, significant efforts to determine accurate distances using a wide range of methods, tracers, and techniques, an internally consistent astronomical distance framework has not yet been established. We review current efforts to homogenize the Local Group's distance framework, with particular emphasis on the potential of RR Lyrae stars as distance indicators, and attempt to extend this in an internally consistent manner to cosmological distances. Calibration based on Type Ia supernovae and distance determinations based on gravitational lensing represent particularly promising approaches. We provide a positive outlook on the improvements to the status quo expected from future surveys, missions, and facilities. Astronomical distance determination has clearly reached maturity and near-consistency. Comment: Review article, 59 pages (4 figures); Space Science Reviews, in press (chapter 8 of a special collection resulting from the May 2016 ISSI-BJ workshop on Astronomical Distance Determination in the Space Age)

    Measuring the metric: a parametrized post-Friedmanian approach to the cosmic dark energy problem

    We argue for a "parametrized post-Friedmanian" approach to linear cosmology, where the history of expansion and perturbation growth is measured without assuming that the Einstein field equations hold. As an illustration, a model-independent analysis of 92 Type Ia supernovae demonstrates that the curve giving the expansion history has the wrong shape to be explained without some form of dark energy or modified gravity. We discuss how upcoming lensing, galaxy clustering, cosmic microwave background, and Lyman-alpha forest observations can be combined to pursue this program, which generalizes the quest for a dark energy equation of state, and forecast the accuracy that the proposed SNAP satellite can attain. Comment: Replaced to match accepted PRD version. References and another example added; section III omitted since superseded by astro-ph/0207047. 11 PRD pages, 7 figures. Color figures and links at http://www.hep.upenn.edu/~max/gravity.html or from [email protected]

    Space Telescope and Optical Reverberation Mapping Project. VII. Understanding the Ultraviolet Anomaly in NGC 5548 with X-Ray Spectroscopy

    During the Space Telescope and Optical Reverberation Mapping Project observations of NGC 5548, the continuum and emission-line variability became decorrelated during the second half of the six-month-long observing campaign. Here we present Swift and Chandra X-ray spectra of NGC 5548 obtained as part of the campaign. The Swift spectra show that excess flux (relative to a power-law continuum) in the soft X-ray band appears before the start of the anomalous emission-line behavior, peaks during the period of the anomaly, and then declines. This model-independent result suggests that the soft excess is related to the anomaly. We divide the Swift data into on- and off-anomaly spectra to characterize the soft excess via spectral fitting. The spectral differences are likely due to a change in the intrinsic spectrum rather than to variable obscuration or partial covering. The Chandra spectra have lower signal-to-noise ratios but are consistent with the Swift data. Our preferred model of the soft excess is emission from an optically thick, warm Comptonizing corona whose effective optical depth increases during the anomaly. This model simultaneously explains all three observations: the decrease in UV emission-line flux, the increase in the soft excess, and the emission-line anomaly.

    UBVRI Light curves of 44 Type Ia supernovae

    We present UBVRI photometry of 44 Type Ia supernovae (SNe Ia) observed from 1997 to 2001 as part of a continuing monitoring campaign at the Fred Lawrence Whipple Observatory of the Harvard-Smithsonian Center for Astrophysics. The data set comprises 2190 observations and is the largest homogeneously observed and reduced sample of SNe Ia to date, nearly doubling the number of well-observed, nearby SNe Ia with published multicolor CCD light curves. The large sample of U-band photometry is a unique addition, with important connections to SNe Ia observed at high redshift. The decline rate of SN Ia U-band light curves correlates well with the decline rate in other bands, as does the U-B color at maximum light. However, the U-band peak magnitudes show an increased dispersion relative to other bands even after accounting for extinction and decline rate, amounting to an additional ∼40% intrinsic scatter compared to the B band.

    Constraints on dark matter-nucleon effective couplings in the presence of kinematically distinct halo substructures using the DEAP-3600 detector

    DEAP-3600 is a single-phase liquid argon detector located at SNOLAB (Sudbury, Canada) that aims to directly detect weakly interacting massive particles (WIMPs). After analyzing data taken during the first year of operation, a null result was used to place an upper bound on the spin-independent, isoscalar WIMP-nucleon cross section. This study reinterprets that result within a nonrelativistic effective field theory framework and further examines how various possible substructures in the local dark matter halo may affect the constraints. Such substructures are hinted at by kinematic structures in the local stellar distribution observed by the Gaia satellite and other recent astronomical surveys; these include the Gaia Sausage (or Enceladus), as well as a number of distinct streams identified in recent studies. Limits are presented for the coupling strength of the effective contact interaction operators O1, O3, O5, O8, and O11, considering isoscalar, isovector, and xenonphobic scenarios, as well as the specific operators corresponding to millicharge, magnetic dipole, electric dipole, and anapole interactions. The effects of halo substructures on each of these operators are explored as well, showing that the O5 and O8 operators are particularly sensitive to the velocity distribution, even at dark matter masses above 100 GeV/c².
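    For reference, the operators named here, written in the commonly used nonrelativistic EFT basis of Fitzpatrick et al. (reproduced from memory; normalizations of the momentum transfer q vary between papers):

    ```latex
    \mathcal{O}_1 = \mathbb{1}_\chi \mathbb{1}_N, \qquad
    \mathcal{O}_3 = i\,\vec{S}_N \cdot \Big(\frac{\vec{q}}{m_N} \times \vec{v}^{\perp}\Big), \qquad
    \mathcal{O}_5 = i\,\vec{S}_\chi \cdot \Big(\frac{\vec{q}}{m_N} \times \vec{v}^{\perp}\Big), \qquad
    \mathcal{O}_8 = \vec{S}_\chi \cdot \vec{v}^{\perp}, \qquad
    \mathcal{O}_{11} = i\,\vec{S}_\chi \cdot \frac{\vec{q}}{m_N} .
    ```

    The explicit factor of the transverse relative velocity $\vec{v}^{\perp}$ in $\mathcal{O}_5$ and $\mathcal{O}_8$ is consistent with those two operators being singled out as especially sensitive to the assumed halo velocity distribution.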

    Latest results of dark matter detection with the DarkSide experiment

    In this contribution the latest results on dark matter direct detection obtained by the DarkSide Collaboration are discussed. New limits on the scattering cross section between dark matter particles and baryonic matter have been set. The results were obtained with the DarkSide-50 detector, a double-phase time projection chamber (TPC) filled with 40Ar and installed at the Laboratori Nazionali del Gran Sasso (LNGS). In 2018 the DarkSide Collaboration performed three different types of analysis. The so-called high-mass analysis, in the range between ∼10 GeV and ∼1000 GeV, is discussed under the hypothesis of scattering between dark matter and Ar nuclei. The low-mass analysis, performed under the same hypothesis, extends the limit down to ∼1.8 GeV. Through a different hypothesis, in which dark matter scatters off the electrons of the Ar atom, it has been possible to set limits for sub-GeV dark matter masses.

    Dimensions of Early Identification

    Several dimensions of early identification are discussed, including the relationship between early identification and prevention. A preventive component is described for the various forms of early identification: child find, screening, assessment, and program planning. Also discussed are recently published guidelines for screening and assessment and the assumptions on which these guidelines are based. Chief among these assumptions is the notion that risk and disability are multidetermined; hence, systems of early identification must similarly be founded on a multiple-risk model. The implications of this model for selecting assessment instruments and for determining eligibility are described, as are future directions that should be explored in early identification. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/67883/2/10.1177_105381519101500105.pdf