356 research outputs found

    Merger Histories in Warm Dark Matter Structure Formation Scenario

    Observations on galactic scales appear to contradict recent high-resolution N-body simulations. This so-called cold dark matter (CDM) crisis has been addressed in several ways, ranging from a change in fundamental physics, by introducing self-interacting cold dark matter particles, to a tuning of complex astrophysical processes such as global and/or local feedback. All these efforts attempt to soften density profiles and reduce the abundance of satellites in simulated galaxy halos. In this paper, we explore a somewhat different approach, which consists of filtering the dark matter power spectrum on small scales, thereby altering the formation history of low-mass objects. The physical motivation for damping these fluctuations lies in the possibility that the dark matter particles have a different nature, i.e., are warm (WDM) rather than cold. We show that this leads to some interesting new results in terms of the merger history and large-scale distribution of low-mass halos, as compared to the standard CDM scenario. However, WDM does not appear to be the ultimate solution, in the sense that it is not able to fully solve the CDM crisis, even though one of the main drawbacks, namely the abundance of satellites, can be remedied. Indeed, the cuspiness of the halo profiles persists, at all redshifts, and for all halos and sub-halos that we investigated. Despite the persistence of the cuspiness problem of DM halos, WDM still seems worth taking seriously, as it alleviates the problems of overabundant sub-structures in galactic halos and possibly the lack of angular momentum of simulated disk galaxies.
WDM also lessens the need to invoke strong feedback to solve these problems, and may provide a natural explanation of the clustering properties and ages of dwarfs. Comment: 11 pages, 17 figures, MNRAS submitted; high-res figures can be found at http://www-thphys.physics.ox.ac.uk/users/AlexanderKnebe/publications.html; replaced with accepted version (warmon masses corrected!)
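
For context, the small-scale filtering invoked above is commonly parametrized by applying a transfer function to the CDM power spectrum. The abstract quotes no formula; a widely used fit (the symbols below are standard in the WDM literature, not taken from the paper) takes the form:

```latex
P_{\mathrm{WDM}}(k) = T^{2}(k)\, P_{\mathrm{CDM}}(k),
\qquad
T(k) = \left[\, 1 + (\alpha k)^{2\nu} \,\right]^{-5/\nu},
\quad \nu \simeq 1.2 ,
```

where the filtering scale \(\alpha\) decreases with the assumed particle mass, so lighter (warmer) particles suppress fluctuations out to larger scales.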

    Early and efficient detection of Mycobacterium tuberculosis in sputum by microscopic observation of broth cultures.

    Early, efficient and inexpensive methods for the detection of pulmonary tuberculosis are urgently needed for effective patient management as well as to interrupt transmission. Methods to detect M. tuberculosis in a timely and affordable way are not yet widely available in resource-limited settings. In a developing-country setting, we prospectively evaluated two methods for culturing and detecting M. tuberculosis in sputum. Sputum samples were cultured in liquid assay (micro broth culture) in microplate wells, with growth detected by microscopic observation, or on Löwenstein-Jensen (LJ) solid media, with growth detected by visual inspection for colonies. Sputum samples were collected from 321 tuberculosis (TB) suspects attending Bugando Medical Centre, in Mwanza, Tanzania, and were cultured in parallel. Pulmonary tuberculosis cases were diagnosed using the American Thoracic Society diagnostic standards. There were a total of 200 (62.3%) pulmonary tuberculosis cases. Liquid assay with microscopic detection detected a significantly higher proportion of cases than LJ solid culture: 89.0% (95% confidence interval [CI], 84.7% to 93.3%) versus 77.0% (95% CI, 71.2% to 82.8%) (p = 0.0007). The median turnaround time to diagnose tuberculosis was significantly shorter for micro broth culture than for LJ solid culture: 9 days (interquartile range [IQR] 7-13) versus 21 days (IQR 14-28) (p < 0.0001). The cost of micro broth culture (labor inclusive) in our study was US$4.56 per sample, versus US$11.35 per sample for LJ solid culture. The liquid assay (micro broth culture) is an early, feasible, and inexpensive method for the detection of pulmonary tuberculosis in resource-limited settings.
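
The confidence intervals quoted above are consistent with a normal-approximation (Wald) interval on the 200 confirmed cases. A minimal sketch reproducing them (the function name `wald_ci` is ours, not from the paper; the paper does not state which interval method it used):

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion p_hat from n trials."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Sensitivity among the 200 pulmonary TB cases, per detection method.
lo, hi = wald_ci(0.89, 200)   # micro broth culture: 89.0%
print(f"broth: {lo:.1%} to {hi:.1%}")   # prints "broth: 84.7% to 93.3%"
lo, hi = wald_ci(0.77, 200)   # LJ solid culture: 77.0%
print(f"LJ: {lo:.1%} to {hi:.1%}")      # prints "LJ: 71.2% to 82.8%"
```

Both intervals match the abstract to the quoted precision, which supports the reading that n = 200 (the confirmed cases, not all 321 suspects) is the denominator for the sensitivity estimates.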

    GECO: Galaxy Evolution COde - A new semi-analytical model of galaxy formation

    We present a new semi-analytical model of galaxy formation, GECO (Galaxy Evolution COde), aimed at a better understanding of when and how the two processes of star formation and galaxy assembly have taken place. Our model is structured into a Monte Carlo algorithm based on the Extended Press-Schechter theory, for the representation of the merging hierarchy of dark matter halos, and a set of analytic algorithms for the treatment of the baryonic physics, including classical recipes for gas cooling, star formation time-scales, galaxy mergers and SN feedback. Together with the galaxies, the parallel growth of BHs is followed in time and their feedback on the hosting galaxies is modelled. We set the model's free parameters by matching data on local stellar mass functions and the BH-bulge relation at z=0. Based on such local boundary conditions, we investigate how data on the high-redshift universe constrain our understanding of the physical processes driving the evolution, focusing in particular on the assembly of stellar mass and on the star formation history. Since both processes are currently strongly constrained by cosmological near- and far-IR surveys, the basic physics of the Lambda CDM hierarchical clustering concept of galaxy formation can be effectively tested by comparison with the most reliable set of observables. Our investigation shows that when the time-scales of star formation and mass assembly are studied as a function of dark matter halo mass and single-galaxy stellar mass, the 'downsizing' fashion of star formation appears to be a natural outcome of the model, reproduced even in the absence of AGN feedback. On the contrary, the stellar mass assembly history turns out to follow a more standard hierarchical pattern, progressing in cosmic time, with the more massive systems assembled at late times mainly through dissipationless mergers. Comment: Accepted for publication in A&A, 24 pages, 15 figures
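
The Extended Press-Schechter machinery behind the Monte Carlo merger trees is not spelled out in the abstract; for reference, the standard conditional mass function from which progenitor masses are drawn (the Lacey-Cole form, given here for context rather than taken from the paper) is:

```latex
f(S_1,\delta_1 \mid S_2,\delta_2)\,\mathrm{d}S_1
= \frac{1}{\sqrt{2\pi}}\,
  \frac{\delta_1 - \delta_2}{(S_1 - S_2)^{3/2}}\,
  \exp\!\left[-\,\frac{(\delta_1 - \delta_2)^2}{2\,(S_1 - S_2)}\right]
  \mathrm{d}S_1 ,
```

where \(S \equiv \sigma^2(M)\) is the mass variance and \(\delta\) the linearly extrapolated collapse threshold at each epoch; drawing progenitor steps from this distribution is what builds the merging hierarchy the model hangs its baryonic recipes on.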

    The SWIRE-VVDS-CFHTLS surveys: stellar mass assembly over the last 10 Gyears. Evidence for a major build up of the red sequence between z=2 and z=1

    (abridged abstract) We present an analysis of the stellar mass growth over the last 10 Gyrs using a large 3.6 Όm-selected sample. We split our sample into active (blue) and quiescent (red) galaxies. Our measurements of the K-LFs and LD evolution support the idea that a large fraction of galaxies is already assembled at z ~ 1.2. Based on the analysis of the evolution of the stellar mass-to-light ratio (in K-band) for the spectroscopic sub-sample, we derive the stellar mass density for the entire sample. We find that the global evolution of the stellar mass density is well reproduced by the star formation rate derived from UV dust-corrected measurements. Over the last 8 Gyrs, we observe that the stellar mass density of the active population remains approximately constant, while it gradually increases for the quiescent population over the same timescale. As a consequence, the growth of the stellar mass in the quiescent population must be due to the shutoff of star formation in active galaxies that migrate into the quiescent population. From z=2 to z=1.2, we observe a major build-up of the quiescent population, with an increase by a factor of 10 in stellar mass, suggesting that we are observing the epoch when an increasing fraction of galaxies are ending their star formation activity and starting to build up the red sequence. Comment: Accepted to A&A with major changes. 1 table and 13 figures

    Error bounds for monomial convexification in polynomial optimization

    Convex hulls of monomials have been widely studied in the literature, and monomial convexifications are implemented in global optimization software for relaxing polynomials. However, there has been no study of the error in the global optimum from such approaches. We give bounds on the worst-case error for convexifying a monomial over subsets of [0,1]^n. This implies additive error bounds for relaxing a polynomial optimization problem by convexifying each monomial separately. Our main error bounds depend primarily on the degree of the monomial, making them easy to compute. Since monomial convexification studies depend on the bounds on the associated variables, in the second part, we conduct an error analysis for a multilinear monomial over two different types of box constraints. As part of this analysis, we also derive the convex hull of a multilinear monomial over [-1,1]^n. Comment: 33 pages, 2 figures, to appear in journal
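
To make the object of study concrete: over the unit box [0,1]^n, the convex hull of a multilinear monomial prod(x_i) has a well-known closed form (a standard result the paper builds on, not the paper's new [-1,1]^n hull; the helper name below is ours). A minimal sketch:

```python
import itertools
import math

def multilinear_envelopes(x):
    """Convex and concave envelopes of prod(x_i) over the unit box [0,1]^n.

    Standard result: the convex (lower) envelope is max(0, sum(x) - (n-1))
    and the concave (upper) envelope is min(x); for n = 2 these are exactly
    the McCormick inequalities for a bilinear term.
    """
    n = len(x)
    lower = max(0.0, sum(x) - (n - 1))
    upper = min(x)
    return lower, upper

# Sanity check: the true product always lies between the two envelopes.
for pt in itertools.product([0.0, 0.3, 0.8, 1.0], repeat=3):
    lo, hi = multilinear_envelopes(pt)
    assert lo <= math.prod(pt) + 1e-12 <= hi + 1e-12
```

The gap between the product and its envelopes is what the paper's worst-case error bounds quantify; note the gap can be substantial, e.g. at x_i = (n-1)/n the lower envelope is 0 while the product is ((n-1)/n)^n.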
    • 
