
    The cosmological constant and the relaxed universe

    Full text link
    We study the role of the cosmological constant (CC) as a component of dark energy (DE). It is argued that the cosmological term is in general unavoidable and should not be ignored even when dynamical DE sources are considered. From the theoretical point of view, quantum zero-point energy and phase transitions suggest a CC of large magnitude, in contrast to its tiny observed value. Simply cancelling this discrepancy with a counterterm requires extreme fine-tuning, which is referred to as the old CC problem. To avoid it, we discuss some recent approaches for neutralising a large CC dynamically without adding a fine-tuned counterterm. This can be realised by an effective DE component which relaxes the cosmic expansion by counteracting the effect of the large CC. Alternatively, a CC filter is constructed by modifying gravity to make it insensitive to vacuum energy. Comment: 6 pages, no figures, based on a talk presented at PASCOS 201

    The extranasal glioma - a cause of neonatal respiratory distress

    Get PDF
    Normal brain tissue in an abnormal configuration, occurring at a site away from the cranial cavity, is termed an extranasal glioma. Since its first description in 1852 by Reid [2], only 140 patients with this entity have been reported. Heterotopic brain tissue, when present in a confined area, may cause obstruction, pressure and pain. Its occurrence in the nasopharyngeal region is relatively rare, and only 19 cases have been reported so far [2], the majority presenting with respiratory distress in the early neonatal period. The objective of this report is to create awareness among physicians of the presentation and management of this condition.

    Constraints on the origin of the first light from SN2014J

    Get PDF
    We study the very early lightcurve of supernova 2014J (SN 2014J) using the high-cadence broad-band imaging data obtained by the Kilodegree Extremely Little Telescope (KELT), which fortuitously observed M 82 around the time of the explosion, starting more than two months prior to detection, with up to 20 observations per night. These observations are complemented by observations in two narrow-band filters used in an Hα survey of nearby galaxies by the intermediate Palomar Transient Factory (iPTF) that also captured the first days of the brightening of the supernova. The evolution of the lightcurves is consistent with the expected signal from the cooling of shock-heated material of large scale dimensions, ≳ 1 R⊙. This could be due to heated material of the progenitor, a companion star, or a pre-existing circumstellar environment, e.g., in the form of an accretion disk. Structure seen in the lightcurves during the first days after explosion could also originate from radioactive material in the outer parts of an exploding white dwarf, as suggested from the early detection of gamma-rays. The model degeneracy translates into a systematic uncertainty of ±0.3 days on the estimate of the first light from SN 2014J. Comment: Accepted by ApJ. Companion paper by Siverd et al., arXiv:1411.415

    The peculiar Type Ia supernova iPTF14atg: Chandrasekhar-mass explosion or violent merger?

    Get PDF
    iPTF14atg, a subluminous peculiar Type Ia supernova (SN Ia) similar to SN 2002es, is the first SN Ia for which a strong UV flash was observed in the early-time light curves. This has been interpreted as evidence for a single-degenerate (SD) progenitor system, where such a signal is expected from interactions between the SN ejecta and the non-degenerate companion star. Here, we compare synthetic observables of multi-dimensional state-of-the-art explosion models for different progenitor scenarios to the light curves and spectra of iPTF14atg. From our models, we have difficulties explaining the spectral evolution of iPTF14atg within the SD progenitor channel. In contrast, we find that a violent merger of two carbon-oxygen white dwarfs with 0.9 and 0.76 solar masses, respectively, provides an excellent match to the spectral evolution of iPTF14atg from 10 d before to several weeks after maximum light. Our merger model does not naturally explain the initial UV flash of iPTF14atg. We discuss several possibilities, such as interactions of the SN ejecta with the circumstellar medium and surface radioactivity from a He-ignited merger, that may be able to account for the early UV emission in violent merger models. Comment: 12 pages, 7 figures, accepted for publication in MNRA

    Mass-varying neutrino in light of cosmic microwave background and weak lensing

    Full text link
    We aim to constrain mass-varying neutrino models using large-scale structure observations and to produce forecasts for the Euclid survey. We investigate two models with different scalar field potentials and both positive and negative coupling parameters β, which correspond to growing or decreasing neutrino mass, respectively. We explore couplings up to |β| < 5. In the case of the exponential potential, we find an upper limit of Ω_ν h² < 0.004 at the 2σ level. In the case of the inverse power-law potential, the null coupling can be excluded with more than 2σ significance; the limits on the coupling are β > 3 for the growing neutrino mass and β < -1.5 for the decreasing-mass case. This is a clear sign of a preference for higher couplings. When including a prior on the present neutrino mass, the upper limit on the coupling becomes |β| < 3 at the 2σ level for the exponential potential. Finally, we present a Fisher forecast using the tomographic weak lensing from a Euclid-like experiment, and we also consider the combination with the cosmic microwave background (CMB) temperature and polarisation spectra from a Planck-like mission. If considered alone, lensing data are more efficient than CMB data alone in constraining Ω_ν. There is, however, a strong degeneracy in the β-Ω_ν h² plane. When the two data sets are combined, the latter degeneracy remains, but the errors are reduced by a factor of ~2 for both parameters. Comment: 5 pages, 6 figures. Now published in A&A 500, 657-665 (2009
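    The Fisher-matrix combination described in this forecast can be illustrated with a toy example. The matrices below are illustrative numbers of our own, not the paper's actual Fisher matrices; they are chosen only to mimic one strongly degenerate probe plus a weaker, differently oriented one:

```python
import numpy as np

# Toy 2x2 Fisher matrices for (beta, Omega_nu h^2). Values are
# illustrative only, mimicking a strong lensing degeneracy.
F_lensing = np.array([[40.0, 39.0],
                      [39.0, 40.0]])  # nearly degenerate direction
F_cmb = np.array([[10.0, -2.0],
                  [-2.0, 15.0]])      # weaker, differently oriented

def marginalised_errors(F):
    """1-sigma marginalised errors: sqrt of the diagonal of F^-1."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

# Independent probes combine by simply adding their Fisher matrices.
err_lensing = marginalised_errors(F_lensing)
err_cmb = marginalised_errors(F_cmb)
err_combined = marginalised_errors(F_lensing + F_cmb)

print("lensing :", err_lensing)
print("CMB     :", err_cmb)
print("combined:", err_combined)
```

    Adding the matrices before inverting is what captures the degeneracy breaking: the combined marginalised errors are smaller than either probe's alone, even when a residual degeneracy direction survives.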

    Cost-effectiveness of household contact investigation for detection of tuberculosis in Pakistan

    Get PDF
    Objectives Despite WHO guidelines recommending household contact investigation, and studies showing the impact of active screening, most tuberculosis (TB) programmes in resource-limited settings carry out only passive contact investigation. The cost of such strategies is often cited as a barrier to their implementation; however, little data are available on the additional costs required to implement them. We aimed to estimate the cost and cost-effectiveness of active contact investigation as compared with passive contact investigation in urban Pakistan. Methods We estimated the cost-effectiveness of 'enhanced' (passive with follow-up) and 'active' (household visit) contact investigation compared with standard 'passive' contact investigation from the provider's and programme's perspective using a simple decision tree. Costs were collected in Pakistan from a TB clinic performing passive contact investigation and from studies of active contact-tracing interventions. Effectiveness was based on the number of patients with TB identified among household contacts screened. Results The addition of enhanced contact investigation to the existing passive mode detected 3.8 times more cases of TB per index patient compared with passive contact investigation alone. The incremental cost was US$30 per index patient, which yielded an incremental cost of US$120 per incremental patient identified with TB. The active contact investigation was 1.5 times more effective than enhanced contact investigation, with an incremental cost of US$238 per incremental patient with TB identified. Conclusion Our results show that enhanced and active approaches to contact investigation effectively identify additional patients with TB among household contacts at a relatively modest cost. These strategies can be added to passive contact investigation in high-burden settings to find the people with TB who are missed and to meet the End TB strategy goals.
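    The incremental cost-effectiveness arithmetic behind these figures is simple to reproduce. A minimal sketch, assuming a hypothetical passive yield of 0.089 TB cases per index patient (a value we chose only so that the abstract's reported figures are mutually consistent; it is not stated above):

```python
# Incremental cost-effectiveness ratio (ICER) = delta cost / delta effect.
# passive_yield is our assumption; the 3.8x multiplier and the US$30
# incremental cost per index patient come from the abstract.
passive_yield = 0.089               # TB cases per index patient (assumed)
enhanced_yield = 3.8 * passive_yield
incremental_cost_per_index = 30.0   # US$ per index patient

extra_cases = enhanced_yield - passive_yield       # = 2.8 * passive_yield
icer = incremental_cost_per_index / extra_cases    # US$ per extra case found
print(f"ICER: US${icer:.0f} per incremental TB patient identified")
```

    With that assumed baseline yield, the sketch recovers the quoted US$120 per incremental patient; the same division with the active strategy's extra cost and extra yield gives its US$238 figure.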

    Carnegie Hubble Program: A Mid-Infrared Calibration of the Hubble Constant

    Get PDF
    Using a mid-infrared calibration of the Cepheid distance scale based on recent observations at 3.6 um with the Spitzer Space Telescope, we have obtained a new, high-accuracy calibration of the Hubble constant. We have established the mid-IR zero point of the Leavitt Law (the Cepheid Period-Luminosity relation) using time-averaged 3.6 um data for ten high-metallicity Milky Way Cepheids with independently measured trigonometric parallaxes. We have adopted the slope of the PL relation using time-averaged 3.6 um data for 80 long-period Large Magellanic Cloud (LMC) Cepheids falling in the period range 0.8 < log(P) < 1.8. We find a new reddening-corrected distance modulus to the LMC of 18.477 +/- 0.033 (systematic) mag. We re-examine the systematic uncertainties in H0, also taking into account new data over the past decade. In combination with the new Spitzer calibration, the systematic uncertainty in H0 has decreased by over a factor of three relative to that obtained by the Hubble Space Telescope (HST) Key Project. Applying the Spitzer calibration to the Key Project sample, we find a value of H0 = 74.3 +/- 2.1 (systematic) km/s/Mpc, corresponding to a 2.8% systematic uncertainty in the Hubble constant. This result, in combination with WMAP7 measurements of the cosmic microwave background anisotropies and assuming a flat universe, yields a value of the equation of state for dark energy, w0 = -1.09 +/- 0.10. Alternatively, relaxing the constraints on flatness and the number of relativistic species, and combining our results with those of WMAP7, Type Ia supernovae and baryon acoustic oscillations, yields w0 = -1.08 +/- 0.10 and a value of N_eff = 4.13 +/- 0.67, mildly consistent with the existence of a fourth neutrino species. Comment: 27 pages, 8 figures, Accepted for publication in Ap
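    As a quick sanity check on the quoted LMC modulus, a distance modulus μ converts to a physical distance via μ = 5 log10(d_pc) - 5. A short sketch using only the numbers from the abstract:

```python
import math

mu = 18.477        # LMC distance modulus from the abstract, in mag
sigma_mu = 0.033   # its quoted (systematic) uncertainty, in mag

# Invert mu = 5*log10(d_pc) - 5  =>  d_pc = 10**(mu/5 + 1)
d_pc = 10 ** (mu / 5 + 1)

# First-order error propagation: sigma_d / d = ln(10)/5 * sigma_mu
sigma_d = d_pc * math.log(10) / 5 * sigma_mu

print(f"LMC distance: {d_pc / 1e3:.1f} +/- {sigma_d / 1e3:.1f} kpc")
```

    This returns roughly 49.6 kpc, the conventional ~50 kpc distance to the LMC, showing the modulus and the physical scale are consistent.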

    Measuring the cosmological bulk flow using the peculiar velocities of supernovae

    Full text link
    We study large-scale coherent motion in our universe using existing Type Ia supernova data. If the recently observed bulk flow is real, then some imprint must be left on supernova motion. We run a series of Markov Chain Monte Carlo analyses in various redshift bins and find a sharp contrast between the z < 0.05 and z > 0.05 data. The z < 0.05 data are consistent with a bulk flow in the direction (l, b) = (290^{+39}_{-31} deg, 20^{+32}_{-32} deg) with a magnitude of v_bulk = 188^{+119}_{-103} km/s at 68% confidence. The significance of detection (compared to the null hypothesis) is 95%. In contrast, the z > 0.05 data (which contain 425 of the 557 supernovae in the Union2 data set) show no evidence for a bulk flow. While the direction of the bulk flow agrees very well with previous studies, the magnitude is significantly smaller. For example, Kashlinsky et al.'s original bulk flow result of v_bulk > 600 km/s is inconsistent with our analysis at greater than the 99.7% confidence level. Furthermore, our best-fit bulk flow velocity is consistent with the expectation for the ΛCDM model, lying inside the 68% confidence limit. Comment: Version published in JCA
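    At its core, a bulk-flow estimate fits a single dipole to line-of-sight peculiar velocities, v_i ≈ V · n̂_i. A minimal mock sketch of that fit (synthetic data of our own, not the Union2 set, and a plain least-squares fit rather than the paper's MCMC over lightcurve parameters):

```python
import numpy as np

rng = np.random.default_rng(42)

# Mock sky positions: N unit vectors drawn isotropically.
N = 500
xyz = rng.normal(size=(N, 3))
nhat = xyz / np.linalg.norm(xyz, axis=1, keepdims=True)

# Inject a known bulk flow (km/s) plus per-SN velocity noise.
V_true = np.array([150.0, -60.0, 90.0])
sigma_v = 100.0
v_los = nhat @ V_true + rng.normal(scale=sigma_v, size=N)

# Least-squares fit of the three bulk-flow components:
# the design matrix is simply the stack of unit vectors.
V_fit, *_ = np.linalg.lstsq(nhat, v_los, rcond=None)
print("recovered bulk flow:", np.round(V_fit, 1), "km/s")
```

    With isotropic sky coverage the three components decouple and the per-component error scales as sigma_v * sqrt(3/N), which is why a few hundred low-z supernovae can constrain a flow of a couple of hundred km/s.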

    Unification of Dark Matter and Dark Energy in a Modified Entropic Force Model

    Full text link
    In Verlinde's entropic force scenario of gravity, Newton's laws and the Einstein equations can be obtained from first principles and general assumptions. However, the equipartition law of energy is invalid at very low temperatures. We show that the threshold of the equipartition law of energy is related to the horizon of the universe. Thus, a one-dimensional Debye (ODD) model in the radial direction of the modified entropic force (MEF) may be suitable for describing the accelerated expanding universe. We present a Friedmann cosmic dynamical model in the ODD-MEF framework. We carefully examine constraints on the ODD-MEF model from the Union2 compilation of the Supernova Cosmology Project (SCP) collaboration and from observations of the large-scale structure (LSS) and the cosmic microwave background (CMB), i.e. SNe Ia+LSS+CMB. The combined numerical analysis gives best-fit values of the model parameters ζ ≃ 10^{-9} and Ω_{m0} = 0.224, with χ²_min = 591.156. The corresponding age of the universe agrees with the result of D. Spergel et al. (2003) at the 95% confidence level. The numerical result also yields an accelerated expanding universe without invoking any kind of dark energy. Taking ζ (≡ 2π ω_D/H_0) as a running parameter associated with the structure scale r, we obtain a possible unified scenario of the asymptotic flatness of the radial velocity dispersion of spiral galaxies, the accelerated expanding universe, and the Pioneer 10/11 anomaly in the entropic force framework of Verlinde. Comment: 23 pages, 6 figure
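    The low-temperature failure of equipartition invoked above is conventionally encoded in a one-dimensional Debye function, D1(x) = (1/x) ∫₀ˣ t/(eᵗ - 1) dt, which tends to 1 at high temperature (x → 0) and suppresses the entropic force at low temperature (large x). A numerical sketch of this function (our own illustration of the standard definition, not code or notation from the paper):

```python
import numpy as np

def debye_1d(x, n=200_000):
    """One-dimensional Debye function D1(x) = (1/x) * int_0^x t/(e^t - 1) dt."""
    t = np.linspace(1e-9, x, n)   # skip the removable 0/0 at t = 0
    f = t / np.expm1(t)           # integrand; expm1 keeps small t accurate
    # Trapezoidal rule over the sampled grid.
    return float(np.sum((f[:-1] + f[1:]) * np.diff(t)) / 2.0 / x)

# High temperature (x -> 0): equipartition recovered, D1 -> 1.
d_hot = debye_1d(0.01)
# Low temperature (large x): the integral saturates at pi^2/6,
# so D1 -> (pi^2/6)/x and equipartition fails.
d_cold = debye_1d(100.0)
print(d_hot, d_cold)
```

    The crossover between these two regimes is what lets a Debye-type modification leave solar-system gravity untouched while altering the dynamics on horizon scales.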