
    A loophole to the universal photon spectrum in electromagnetic cascades: application to the "cosmological lithium problem"

    The standard theory of electromagnetic cascades onto a photon background predicts a quasi-universal shape for the resulting non-thermal photon spectrum. This has been applied to very disparate fields, including non-thermal big bang nucleosynthesis (BBN). However, once the energy of the injected photons falls below the pair-production threshold, the spectral shape is very different, a fact that has been overlooked in past literature. This loophole may have important phenomenological consequences, since it generically alters the BBN bounds on non-thermal relics: for instance, it allows one to re-open the possibility of purely electromagnetic solutions to the so-called "cosmological lithium problem", which were thought to be excluded by other cosmological constraints. We show this with a proof-of-principle example and a simple particle physics model, compared with previous literature. Comment: 5 pages, 2 figures, typos corrected; matches version published in PRL. (Version 1 of this article was submitted to arXiv on Jan. 8th, kept on hold by arXiv moderators due to unspecified classification doubts for almost one month.)
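
    A rough numerical illustration of the kinematics behind this loophole (a minimal sketch; the commonly quoted effective pair-production cutoff E_C ~ m_e^2/(22 T_gamma) and the deuterium and 7Be photodisintegration thresholds are standard BBN-literature inputs assumed here, not numbers taken from the abstract):

```python
# Minimal sketch (approximate numbers): check when an injected ~2 MeV photon
# falls below the effective pair-production cutoff E_C ~ m_e^2/(22 T_gamma)
# commonly quoted in the electromagnetic-cascade / BBN literature, so that the
# "universal" cascade spectrum no longer applies.
M_E_KEV = 511.0          # electron mass [keV]
E_D = 2.22               # D photodisintegration threshold [MeV]
E_BE7 = 1.59             # 7Be photodisintegration threshold [MeV]

def cascade_cutoff_mev(T_keV):
    """Effective pair-production cutoff E_C = m_e^2 / (22 T), returned in MeV."""
    return M_E_KEV**2 / (22.0 * T_keV) / 1000.0

for T_keV in (10.0, 6.0, 5.0, 1.0):
    E_C = cascade_cutoff_mev(T_keV)
    below = E_D < E_C    # a photon with E_BE7 < E < E_D is then below the cutoff
    print(f"T = {T_keV:4.1f} keV -> E_C = {E_C:5.1f} MeV; "
          f"injection below the cutoff (non-universal regime): {below}")
```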

    Dark Matter annihilations in halos and high-redshift sources of reionization of the universe

    It is well known that annihilations in the homogeneous fluid of dark matter (DM) can leave imprints in the cosmic microwave background (CMB) anisotropy power spectrum. However, the relevance of DM annihilations in halos for cosmological observables is still subject to debate, with previous works reaching different conclusions on this point. Also, all previous studies used a single type of parameterization for the astrophysical reionization, and included no astrophysical source for the heating of the intergalactic medium. In this work, we revisit these problems. When standard approaches are adopted, we find that the ionization fraction does exhibit a very particular (and potentially constraining) pattern, but the currently measurable optical depth to reionization is left almost unchanged. In agreement with most of the previous literature, for plausible halo models we find that the modification of the signal with respect to the one coming from annihilations in the smooth background is tiny, below cosmic variance within the currently allowed parameter space. However, if different and probably more realistic treatments of the astrophysical sources of reionization and heating are adopted, a more pronounced effect of the DM annihilation in halos is possible. We thus conclude that within currently adopted baseline models the impact of the virialised DM structures cannot be uncovered by CMB power spectra measurements, but a larger impact is possible if peculiar models are invoked for the redshift evolution of the DM annihilation signal or different assumptions are made for the astrophysical contributions. A better understanding (both theoretical and observational) of the reionization and temperature history of the universe, notably via the 21 cm signal, seems the most promising way to use halo formation as a tool in DM searches, improving over the sensitivity of current cosmological probes. Comment: 30 pages, 11 figures. v2: extended version (notably astrophysical source effects significantly expanded), references added, main conclusions unchanged. Matches version accepted by JCAP.
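
    For orientation, the optical depth to reionization mentioned above follows from any assumed free-electron history via a single line-of-sight integral. The sketch below is a minimal illustration with a simplified tanh-like ionization history and rough Planck-like parameters; all numerical values are assumptions for illustration, not inputs or results of the paper.

```python
import numpy as np

# Minimal sketch: Thomson optical depth
#   tau = c * sigma_T * integral of n_e(z) / [(1+z) H(z)] dz
# for an assumed tanh-like hydrogen ionization history (helium ignored).
SIGMA_T = 6.652e-25            # Thomson cross section [cm^2]
C_CM = 2.998e10                # speed of light [cm/s]
H0 = 67.0 * 1.0e5 / 3.086e24   # Hubble constant [1/s]
OMEGA_M, OMEGA_L = 0.31, 0.69
N_H0 = 1.9e-7                  # hydrogen number density today [cm^-3]

def x_e(z, z_re=7.7, dz=0.5):
    """Simplified tanh ionization history."""
    return 0.5 * (1.0 + np.tanh((z_re - z) / dz))

def tau(z_max=30.0, n=3000):
    z = np.linspace(0.0, z_max, n)
    H = H0 * np.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)
    n_e = x_e(z) * N_H0 * (1.0 + z) ** 3
    integrand = C_CM * SIGMA_T * n_e / ((1.0 + z) * H)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)))

print(f"tau ~ {tau():.3f}")    # roughly 0.05 for z_re ~ 7.7, the right ballpark
```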

    A fresh look at linear cosmological constraints on a decaying dark matter component

    We consider a cosmological model in which a fraction $f$ of the Dark Matter (DM) is allowed to decay into an invisible relativistic component, and compute the resulting constraints on both the decay width (or inverse lifetime) $\Gamma$ and $f$ from purely gravitational arguments. We report a full derivation of the Boltzmann hierarchy, correcting a mistake in previous literature, and compute the impact of the decay, as a function of the lifetime, on the CMB and matter power spectra. From CMB only, we obtain that no more than 3.8 % of the DM could have decayed in the time between recombination and today (all bounds quoted at 95 % CL). We also comment on the important application of this bound to the case where primordial black holes constitute the DM, a scenario notoriously difficult to constrain. For lifetimes longer than the age of the Universe, the bounds can be cast as $f\Gamma < 6.3\times10^{-3}$ Gyr$^{-1}$. For the first time, we also checked that degeneracies with massive neutrinos are broken when information from the large scale structure is used. Even secondary effects like CMB lensing suffice for this purpose. Decaying DM models have been invoked to solve a possible tension between low-redshift astronomical measurements of $\sigma_8$ and $\Omega_{\rm m}$ and the ones inferred by Planck. We reassess this claim, finding that with the most recent BAO, HST and $\sigma_8$ data extracted from the CFHT survey, the tension is only slightly reduced despite the two additional free parameters, loosening the bound to $f\Gamma < 15.9\times10^{-3}$ Gyr$^{-1}$. The bound however improves to $f\Gamma < 5.9\times10^{-3}$ Gyr$^{-1}$ if only data consistent with the CMB are included. This highlights the importance of establishing whether the tension is due to real physical effects or unaccounted systematics, in order to settle the reach of achievable constraints on decaying DM. Comment: 30 pages, 11 figures, comments welcome.
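
    To see what a bound of this form means in terms of the fraction of DM that has actually decayed, one can apply the elementary decay law (a minimal sketch; the 13.8 Gyr age of the Universe and the exponential decay law are standard inputs assumed here, not numbers quoted in the abstract):

```python
import numpy as np

def decayed_fraction(f, gamma_per_gyr, t_gyr=13.8):
    """Fraction of the total DM that has decayed by time t, when a fraction f
    of the DM decays with width Gamma: f * (1 - exp(-Gamma * t)).
    For Gamma * t << 1 this reduces to f * Gamma * t."""
    return f * (1.0 - np.exp(-gamma_per_gyr * t_gyr))

# The long-lifetime CMB-only bound quoted above is f*Gamma < 6.3e-3 / Gyr.
# Taking f = 1 as an example, this corresponds to roughly an 8% decayed
# fraction over the age of the Universe; it is complementary to (not the same
# as) the 3.8% short-lifetime bound, which applies in a different regime.
print(f"{decayed_fraction(1.0, 6.3e-3):.3f}")
```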

    Comparing French and US hospital technologies: a directional input distance function approach

    French and US hospital technologies are compared using directional input distance functions. The aggregation properties of the directional distance function allow comparison of hospital industry-level performance as well as standard firm-level performance with regard to productive efficiency. In addition, the underlying constituents of efficiency - in the short run, congestion and technical inefficiency, and in the long run, scale inefficiency - are analysed by decomposing the overall measure. By virtue of using the directional distance function, it is also possible to obtain an estimate of a lower bound on allocative inefficiency. It is found that French and US hospitals use quite different technologies. Long-run scale inefficiencies cause most of the French hospitals' inefficiency, while short-run technical inefficiency is the main source of overall productive inefficiency in the US hospitals.
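
    As a concrete illustration of the tool, the directional input distance function for a unit can be computed with a small linear program: find the largest contraction beta of the inputs along a direction g that keeps the unit inside the technology spanned by the observed units. The sketch below is a generic, textbook-style DEA formulation, not the authors' estimation code; the toy data, the direction vector, and the returns-to-scale assumption are placeholders.

```python
import numpy as np
from scipy.optimize import linprog

def directional_input_distance(X, Y, o, g=None, vrs=True):
    """Directional input distance function for observation `o`.

    X: (J, N) inputs and Y: (J, M) outputs for J units. Returns the largest
    beta such that (x_o - beta * g, y_o) remains feasible in the DEA technology.
    """
    J, N = X.shape
    M = Y.shape[1]
    if g is None:
        g = X[o].copy()                      # contract along the unit's own input mix
    c = np.zeros(1 + J); c[0] = -1.0         # variables [beta, lambda]; maximise beta
    A_out = np.hstack([np.zeros((M, 1)), -Y.T])   # outputs: sum_j lam_j * y_j >= y_o
    b_out = -Y[o]
    A_in = np.hstack([g.reshape(-1, 1), X.T])     # inputs: sum_j lam_j * x_j <= x_o - beta*g
    b_in = X[o]
    A_eq = b_eq = None
    if vrs:                                        # variable returns to scale
        A_eq = np.concatenate([[0.0], np.ones(J)]).reshape(1, -1)
        b_eq = np.array([1.0])
    res = linprog(c, A_ub=np.vstack([A_out, A_in]), b_ub=np.concatenate([b_out, b_in]),
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (1 + J))
    return res.x[0] if res.success else np.nan

# Toy data: 5 hospitals, 2 inputs (beds, staff), 1 output (inpatient days).
X = np.array([[100., 300.], [120., 280.], [90., 400.], [200., 500.], [150., 350.]])
Y = np.array([[1000.], [1100.], [900.], [1800.], [1200.]])
print([round(directional_input_distance(X, Y, o), 3) for o in range(len(X))])
```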

    The Size and Service Offering Efficiencies of U.S. Hospitals.

    Hospital productivity has been a research topic for over two decades. We expand on this research to include measures of dis/economies of scope. By using the Free Coordination Hull (FCH) we are able to determine whether hospitals in our sample could become more efficient by providing more services (diseconomies of scope) or whether two smaller hospitals, with a reallocation of resources, could become more efficient (economies of scope). Using data from the American Hospital Association for the years 2004-2007, we found variations among hospital markets (measured by the Core Based Statistical Area). We can determine whether dis/economies of scope exist by comparing the results from two linear programming problems. Focusing on four markets (Los Angeles, Philadelphia, Madison, WI, and New Orleans), we found variations in how the hospitals operating in these markets could best change in order to increase both scale and scope efficiencies. This approach could be used by policy makers and managers to reduce costs by sharing, reducing, or expanding services in hospitals. Findings from a study such as this should aid reform programs by providing more information on the sources of hospital inefficiency. Keywords: Hospital, Efficiency, Economies of Scope, Hospital Markets
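
    For readers unfamiliar with the terminology, the underlying idea of a scope test can be stated in one line: economies of scope exist when joint production is cheaper than producing the same output bundles separately. The snippet below illustrates only that textbook definition; it is not the Free Coordination Hull procedure used in the paper, and the cost figures are invented.

```python
def scope_economies(cost_joint, cost_separate_1, cost_separate_2):
    """Textbook scope measure: SC = [C(y1,0) + C(0,y2) - C(y1,y2)] / C(y1,y2).
    SC > 0 -> economies of scope (sharing services pays); SC < 0 -> diseconomies."""
    return (cost_separate_1 + cost_separate_2 - cost_joint) / cost_joint

# Invented numbers: one full-service hospital vs. two specialised ones.
print(scope_economies(cost_joint=95.0, cost_separate_1=60.0, cost_separate_2=50.0))  # ~0.16 > 0
```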

    An Investigation of the Construct Validity of the Big Five Construct of Emotional Stability in Relation to Job Performance, Job Satisfaction, and Career Satisfaction

    The present study examined the Big Five dimension of Emotional Stability and explored its relationship to work outcomes. Six archival data sets were used. Pearson correlation coefficients were calculated between the Big Five dimensions of personality and job performance, job satisfaction, and career satisfaction. Results demonstrated that all Big Five personality dimensions were significantly, positively related to job performance, job satisfaction, and career satisfaction. Additionally, part correlations between Emotional Stability and job performance, job satisfaction, and career satisfaction were calculated controlling for the other Big Five dimensions of Extraversion, Openness, Conscientiousness, and Agreeableness. Emotional Stability demonstrated unique variance, continuing to have a significant, positive correlation with all criteria. In order to examine how Emotional Stability is related to job performance, job satisfaction, and career satisfaction in jobs with varying stress levels, data sets were sorted by job categories and Spearman Rank Order Correlations were calculated between job stress measures and Emotional Stability-Criteria correlations. No significant results were found. Emotional Stability mean scores were also compared across job categories using one-way ANOVA and independent groups t-tests. Individuals in jobs that were considered "high stress" had higher mean scores on Emotional Stability. In addition to supporting previous research findings, this study contributed unique information by demonstrating that Emotional Stability adds unique variance to the prediction of job outcomes.
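
    The part (semi-partial) correlations mentioned above have a simple operational definition: residualize Emotional Stability on the other four dimensions, then correlate the residual with the criterion. A minimal sketch with simulated data (variable names and values are illustrative, not the study's):

```python
import numpy as np
from scipy import stats

def part_correlation(x, y, controls):
    """Semi-partial (part) correlation: correlate y with the part of x that is
    not explained by the control variables (controls are partialled out of x only)."""
    Z = np.column_stack([np.ones(len(x)), controls])
    beta, *_ = np.linalg.lstsq(Z, x, rcond=None)
    x_resid = x - Z @ beta
    return stats.pearsonr(x_resid, y)

# Simulated example: Emotional Stability vs. job satisfaction, controlling for
# Extraversion, Openness, Conscientiousness, and Agreeableness.
rng = np.random.default_rng(0)
n = 500
big5 = rng.normal(size=(n, 5))                  # columns: ES, E, O, C, A
job_sat = 0.4 * big5[:, 0] + 0.2 * big5[:, 3] + rng.normal(size=n)
r, p = part_correlation(big5[:, 0], job_sat, big5[:, 1:])
print(f"part r = {r:.2f}, p = {p:.1e}")
```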

    AMS-02 antiprotons, at last! Secondary astrophysical component and immediate implications for Dark Matter

    Using the updated proton and helium fluxes just released by the AMS-02 experiment, we reevaluate the secondary astrophysical antiproton to proton ratio and its uncertainties, and compare it with the ratio preliminarily reported by AMS-02. We find no unambiguous evidence for a significant excess with respect to expectations. Yet, some preference for a flatter energy dependence of the diffusion coefficient starts to emerge. Also, we provide a first assessment of the room left for exotic components such as Galactic Dark Matter annihilation or decay, deriving new stringent constraints. Comment: 12 pages, 5 figures; comments and clarifications added (including an appendix), matches version published in JCAP.

    Testing a double AGN hypothesis for Mrk 273

    The ULIRG Mrk 273 contains two infrared nuclei, N and SW, separated by 1 arcsec. A Chandra observation has identified the SW nucleus as an absorbed X-ray source with nH ~4e23 cm-2, but also hinted at the possible presence of a Compton-thick AGN in the N nucleus, where a black hole of 10^9 Msun is inferred from the ionized gas kinematics. The intrinsic X-ray spectral slope recently measured by NuSTAR is unusually hard (photon index of ~1.3) for a Seyfert nucleus, for which we seek an alternative explanation. We hypothesise a strongly absorbed X-ray source in N, whose X-ray emission rises steeply above 10 keV, in addition to the known X-ray source in SW, and test it against the NuSTAR data, assuming the standard spectral slope (photon index of 1.9). This double X-ray source model gives a good explanation of the hard continuum spectrum, the deep Fe K absorption edge, and the strong Fe K line observed in this ULIRG, without invoking the unusual spectral slope required by a single-source interpretation. The putative X-ray source in N is found to be absorbed by nH = 1.4(+0.7/-0.4)e24 cm-2. The estimated 2-10 keV luminosity of the N source is 1.3e43 erg/s, about a factor of 2 larger than that of SW during the NuSTAR observation. Uncorrelated variability above and below 10 keV between the Suzaku and NuSTAR observations appears to support the double-source interpretation. Variability in spectral hardness and Fe K line flux between the previous X-ray observations is also consistent with this picture. Comment: 6 pages, 5 figures, accepted for publication in A&A.
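
    The spectral logic of the double-source hypothesis, a heavily absorbed component that only emerges above ~10 keV, can be illustrated with a simple transmission toy model. The sketch below uses a crude power-law approximation to the photoelectric cross-section and ignores Compton scattering (which is not negligible at Compton-thick columns), so it is qualitative only; apart from the two column densities quoted above, none of the numbers come from the paper's spectral fits.

```python
import numpy as np

# Toy illustration of the double-source picture: photoelectric transmission
# exp(-nH * sigma(E)) for the two column densities quoted in the abstract,
# using a crude cross-section sigma(E) ~ 2e-22 * (E/keV)^-2.6 cm^2 per H atom.
NH_SW, NH_N = 4.0e23, 1.4e24      # [cm^-2]

def sigma_photo(E_keV):
    """Very rough photoelectric cross-section per hydrogen atom [cm^2]."""
    return 2.0e-22 * E_keV ** -2.6

for E in (3.0, 5.0, 10.0, 20.0):
    t_N = np.exp(-NH_N * sigma_photo(E))
    t_SW = np.exp(-NH_SW * sigma_photo(E))
    print(f"E = {E:4.1f} keV: transmission N = {t_N:.3f}, SW = {t_SW:.3f}")
```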

    Genomic and Transcriptomic Alterations Associated with STAT3 Activation in Head and Neck Cancer.

    Background: Hyperactivation of STAT3 via constitutive phosphorylation of tyrosine 705 (Y705) is common in most human cancers, including head and neck squamous cell carcinoma (HNSCC). STAT3 is rarely mutated in cancer, and the (epi)genetic alterations that lead to STAT3 activation are incompletely understood. Here we used an unbiased approach to identify genomic and epigenomic changes associated with pSTAT3(Y705) expression using data generated by The Cancer Genome Atlas (TCGA). Methods and findings: Mutation, mRNA expression, promoter methylation, and copy number alteration data were extracted from TCGA and examined in the context of pSTAT3(Y705) protein expression. mRNA expression levels of 1279 genes were found to be associated with pSTAT3(Y705) expression. The association of pSTAT3(Y705) expression with caspase-8 mRNA expression was validated by immunoblot analysis in HNSCC cells. Mutation, promoter hypermethylation, and copy number alteration of any gene were not significantly associated with increased pSTAT3(Y705) protein expression. Conclusions: These cumulative results suggest that unbiased approaches may be useful in identifying the molecular underpinnings of oncogenic signaling, including STAT3 activation, in HNSCC. Larger datasets will likely be necessary to elucidate signaling consequences of infrequent alterations.
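
    An association screen of the kind described in the Methods can be sketched in a few lines: correlate the pSTAT3(Y705) protein measurements with every gene's mRNA level and control the false discovery rate. The code below is a generic illustration with placeholder data, not the authors' pipeline; column names, the synthetic data, and the FDR threshold are assumptions.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def pstat3_expression_screen(pstat3, expr, alpha=0.05):
    """Spearman-correlate a pSTAT3(Y705) protein vector with each gene's mRNA
    level (samples aligned by index) and apply Benjamini-Hochberg FDR control.

    pstat3: pd.Series indexed by sample; expr: pd.DataFrame (samples x genes).
    Returns the genes significant at the given FDR."""
    expr = expr.loc[pstat3.index]                  # align samples
    rows = []
    for gene in expr.columns:
        rho, p = stats.spearmanr(pstat3.values, expr[gene].values)
        rows.append((gene, rho, p))
    out = pd.DataFrame(rows, columns=["gene", "rho", "p"])
    out["q"] = multipletests(out["p"], method="fdr_bh")[1]
    return out[out["q"] < alpha].sort_values("q")

# Tiny synthetic example (3 genes, 60 samples); CASP8 is given a real association.
rng = np.random.default_rng(1)
samples = [f"s{i}" for i in range(60)]
pstat3 = pd.Series(rng.normal(size=60), index=samples)
expr = pd.DataFrame({"CASP8": 0.6 * pstat3 + rng.normal(size=60),
                     "GENE2": rng.normal(size=60),
                     "GENE3": rng.normal(size=60)}, index=samples)
print(pstat3_expression_screen(pstat3, expr))
```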