
    The galaxy halo formation rate

    The rate at which galaxy halos form is thought to play a key role in explaining many observable cosmological phenomena, such as the initial epoch at which luminous matter forms and the distribution of active galaxies. Here we show how Press-Schechter theory can be used to provide a simple, completely analytic model of the halo formation rate. This model shows good agreement with both Monte Carlo and N-body simulation results. Comment: 2 pages, 1 figure, to appear in proceedings of the Xth Rencontres de Blois, "The Birth of Galaxies"; LaTeX style file included
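
    For reference, the halo abundance underlying such analytic formation-rate models is usually written in the standard Press-Schechter form (textbook notation, not necessarily that of the paper):

        n(M)\,dM = \sqrt{\frac{2}{\pi}}\,\frac{\bar{\rho}}{M^2}\,\frac{\delta_c}{\sigma(M)}\,\left|\frac{\mathrm{d}\ln\sigma}{\mathrm{d}\ln M}\right|\,\exp\!\left[-\frac{\delta_c^2}{2\sigma^2(M)}\right]\,dM

    where \bar{\rho} is the mean matter density, \sigma(M) is the rms linear density fluctuation on mass scale M, and \delta_c \approx 1.686 is the linear collapse threshold; the formation rate is then obtained from the time evolution of this abundance.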

    Cosmological evolution and hierarchical galaxy formation

    We provide a new multi-waveband compilation of the data describing the cosmological evolution of quasars, and discuss a model that attributes this evolution to variation in the rate of merging between dark halos in a hierarchical universe. We present a new Press-Schechter calculation of the expected merger rate and show that it can reproduce the principal features of the evolution. We also show that the evolution in the star-formation history of the universe is well described by this model. Comment: 4 pages, 1 figure. Presented at the Xth Rencontres de Blois, "The Birth of Galaxies", June 1998
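
    For context, the merger rate in such calculations is usually derived from the extended Press-Schechter conditional probability (standard excursion-set notation, assumed here rather than quoted from the paper):

        f(S_1,\delta_1 \mid S_2,\delta_2)\,dS_1 = \frac{\delta_1-\delta_2}{\sqrt{2\pi}\,(S_1-S_2)^{3/2}}\,\exp\!\left[-\frac{(\delta_1-\delta_2)^2}{2(S_1-S_2)}\right]dS_1,

    where S = \sigma^2(M) and \delta is the linear collapse threshold at the epoch of interest; differentiating with respect to time gives the rate at which halos of one mass are incorporated into larger systems.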

    The Halo Formation Rate and its link to the Global Star Formation Rate

    The star formation history of the universe shows strong evolution with cosmological epoch. Although we know that mergers between galaxies can cause luminous bursts of star formation, the relative importance of such mergers to the global star formation rate (SFR) is unknown. We present a simple analytic formula for the rate at which halos merge to form higher-mass systems, derived from Press-Schechter theory and confirmed by numerical simulations (for high halo masses). A comparison of the evolution in the halo formation rate with the observed evolution in the global SFR indicates that the latter is largely driven by halo mergers at z > 1. Recent numerical simulations by Kolatt et al. (1999) and Knebe & Muller (1999) show how merging systems are strongly biased tracers of mass fluctuations, thereby explaining the strong clustering observed for Lyman-break galaxies without any need to assume that Lyman-break galaxies are associated only with the most massive systems at z ~ 3. Comment: 4 pages, 2 figures. To appear in "The Hy-Redshift Universe: Galaxy Formation and Evolution at High Redshift", eds. A.J. Bunker and W.J.M. van Breugel

    An analytic model for the epoch of halo creation

    In this paper we describe the Bayesian link between the cosmological mass function and the distribution of times at which isolated halos of a given mass exist. By assuming that clumps of dark matter undergo monotonic growth on the time-scales of interest, this distribution of times is also the distribution of "creation" times of the halos. Such monotonic growth is an inevitable aspect of gravitational instability. The spherical top-hat collapse model is used to estimate the rate at which clumps of dark matter collapse; this gives the prior for the creation time given no information about halo mass. Applying Bayes' theorem then allows any mass function to be converted into a distribution of times at which halos of a given mass are created. This general result covers both Gaussian and non-Gaussian models. We also demonstrate how the mass function and the creation time distribution can be combined to give a joint density function, and discuss the relation between the resulting formula and the time distribution of major merger events. Finally, we determine the creation times of halos within three N-body simulations, and compare the link between the mass function and the creation rate with the analytic theory. Comment: 7 pages, 2 figures, submitted to MNRAS
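
    Schematically, the Bayesian step described here can be written as follows (our notation, a sketch of the argument rather than the paper's exact expressions): if spherical collapse supplies a prior p(t_c) for the time of a collapse event, and the mass function supplies p(M \mid t_c), then the distribution of creation times for halos of fixed mass M is

        p(t_c \mid M) = \frac{p(M \mid t_c)\,p(t_c)}{\int p(M \mid t_c')\,p(t_c')\,\mathrm{d}t_c'}.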

    Age constraints on the evolution of the Quetico belt, Superior Province, Ontario

    Much attention has been focused on the nature of Archean tectonic processes and the extent to which they differed from modern rigid-plate tectonics. The Archean Superior Province has linear metavolcanic and metasediment-dominated subprovinces of similar scale to the Cenozoic island arc-trench systems of the western Pacific, suggesting an origin by accreting arcs. Models of the evolution of metavolcanic belts in parts of the Superior Province suggest an arc setting, but the tectonic environment and evolution of the intervening metasedimentary belts are poorly understood. In addition to explaining the setting that gave rise to a linear sedimentary basin, models must account for subsequent shortening and high-temperature, low-pressure metamorphism. Correlation of rock units and events in adjacent metavolcanic and metasedimentary belts is a first step toward understanding large-scale crustal interactions. To this end, zircon geochronology has been applied to metavolcanic belts of the western Superior Province; new age data for the Quetico metasedimentary belt are reported here, permitting correlation with the adjacent Wabigoon and Wawa metavolcanic subprovinces.

    Damped Lyman alpha systems and disk galaxies: number density, column density distribution and gas density

    We present a comparison between the observed properties of damped Lyman alpha systems (DLAs) and the predictions of simple models for the evolution of present-day disk galaxies, including both low and high surface brightness galaxies. We focus in particular on the number density, column density distribution and gas density of DLAs, which have now been measured in relatively large samples of absorbers. From the comparison we estimate the contribution of present-day disk galaxies to the population of DLAs, and how it varies with redshift. Based on the differences between the models and the observations, we also speculate on the nature of the fraction of DLAs that apparently do not arise in disk galaxies. Comment: 11 pages, 10 figures, accepted in MNRAS

    Using galaxy pairs as cosmological tracers

    The Alcock-Paczynski (AP) effect uses the fact that, when analyzed with the correct geometry, structure in the Universe should appear statistically isotropic. For structure undergoing cosmological expansion with the background, this constrains the product of the Hubble parameter and the angular diameter distance. However, the expansion of the Universe is inhomogeneous, and the local curvature depends on density. We argue that this distorts the AP effect on small scales. After analyzing the dynamics of galaxy pairs in the Millennium simulation, we find an interplay between peculiar velocities, galaxy properties and local density that affects how pairs trace the cosmological expansion. We find that only low-mass, isolated galaxy pairs trace the average expansion with a minimal "correction" for peculiar velocities. Other pairs require larger, more cosmology- and redshift-dependent peculiar velocity corrections and, in the small-separation limit of being bound in a collapsed system, do not carry cosmological information. Comment: 15 pages, 14 figures, 1 table
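
    As background on what the AP test measures (standard material, not specific to this paper): for a pair subtending an angle \Delta\theta on the sky and separated by \Delta z in redshift, statistical isotropy of comoving separations requires

        \frac{c\,\Delta z}{H(z)} = (1+z)\,D_A(z)\,\Delta\theta \quad\Longrightarrow\quad \frac{\Delta z}{\Delta\theta} = \frac{(1+z)\,D_A(z)\,H(z)}{c},

    which is why the effect constrains the product D_A(z) H(z), as stated above.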

    Formation of Dark Matter Haloes in a Homogeneous Dark Energy Universe

    Several independent cosmological tests have shown evidence that the energy density of the Universe is dominated by a dark energy component, which causes the present accelerated expansion. Large-scale structure formation can be used to probe dark energy models, and the mass function of dark matter haloes is one of the best statistical tools for this study. We present here a statistical analysis of the mass function of galaxies under a homogeneous dark energy model, proposed in the work of Percival (2005), using an observational flux-limited X-ray cluster survey and CMB data from WMAP. We compare, in our analysis, the standard Press-Schechter (PS) approach (where a Gaussian distribution is used to describe the primordial density fluctuation field of the mass function) and the PL (power-law) mass function (where we apply a nonextensive q-statistical distribution to the primordial density field). We conclude that the PS mass function cannot explain the X-ray and the CMB data at the same time (even at the 99% confidence level), and the PS best-fitting dark energy equation-of-state parameter is ω = -0.58, which is far from the cosmological constant case. The PL mass function provides better fits to the HIFLUGCS X-ray galaxy cluster data and the CMB data; we also note that the ω parameter is very sensitive to modifications in the PL free parameter q, suggesting that the PL mass function could be a powerful tool to constrain dark energy models. Comment: 4 pages, 2 figures, LaTeX. Accepted for publication in the International Journal of Modern Physics D (IJMPD)
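
    The nonextensive distribution behind such a PL mass function is, in its commonly used q-Gaussian form (our assumed notation and normalization),

        p_q(\delta) \propto \left[1 - (1-q)\,\frac{\delta^2}{2\sigma^2}\right]^{\frac{1}{1-q}},

    which reduces to the ordinary Gaussian of the PS approach as q \to 1 and develops power-law tails for q > 1, so the single parameter q controls the departure from the standard mass function.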

    Cosmological parameter inference from galaxy clustering: the effect of the posterior distribution of the power spectrum

    Citation: Kalus, B., Percival, W. J., & Samushia, L. (2016). Cosmological parameter inference from galaxy clustering: the effect of the posterior distribution of the power spectrum. Monthly Notices of the Royal Astronomical Society, 455(3), 2573-2581. doi:10.1093/mnras/stv2307
    We consider the shape of the posterior distribution to be used when fitting cosmological models to power spectra measured from galaxy surveys. At very large scales, Gaussian posterior distributions in the power do not approximate the posterior distribution P_R we expect for a Gaussian density field δ(k), even if we vary the covariance matrix according to the model to be tested. We compare alternative posterior distributions with P_R, both mode-by-mode and in terms of expected measurements of primordial non-Gaussianity parametrized by f_NL. Marginalising over a Gaussian posterior distribution P_f with a fixed covariance matrix yields a posterior mean value of f_NL which, for a data set with the characteristics of Euclid, will be underestimated by Δf_NL = 0.4, while for data release 9 of the Sloan Digital Sky Survey-III Baryon Oscillation Spectroscopic Survey (BOSS DR9; Ahn et al.) it will be underestimated by Δf_NL = 19.1. Adopting a different form of the posterior function means that we do not necessarily require a different covariance matrix for each model to be tested: this dependence is absorbed into the functional form of the posterior. Thus, the computational burden of the analysis is significantly reduced.
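
    The mode-by-mode point can be illustrated with a minimal sketch, assuming a single band of a few independent complex Gaussian modes with true power P: the measured band power is then gamma-distributed rather than Gaussian, and the mismatch is largest when few modes are averaged, i.e. on the largest scales. The Python below is purely illustrative; the numbers and variable names are ours, not the paper's.

        import numpy as np

        rng = np.random.default_rng(42)

        P_true = 1.0   # true band power (arbitrary units)
        n_modes = 5    # few modes, as on very large scales
        # |delta(k)|^2 of a complex Gaussian mode is exponentially distributed with mean P_true
        P_hat = rng.exponential(P_true, size=n_modes).mean()

        def loglike_exact(P):
            # exact log-likelihood of the band-averaged power for a trial power P
            return -n_modes * (np.log(P) + P_hat / P)

        def loglike_gauss(P):
            # Gaussian approximation with a fixed variance, as with a fixed covariance matrix
            var = P_true**2 / n_modes
            return -0.5 * (P_hat - P)**2 / var

        P_grid = np.linspace(0.05, 5.0, 500)
        post_exact = np.exp(loglike_exact(P_grid) - loglike_exact(P_grid).max())
        post_gauss = np.exp(loglike_gauss(P_grid) - loglike_gauss(P_grid).max())

        # The exact posterior is skewed towards high power while the Gaussian one is symmetric;
        # marginalising over the wrong shape is the kind of mismatch that biases parameters such as f_NL.
        print("exact posterior mean:", np.trapz(P_grid * post_exact, P_grid) / np.trapz(post_exact, P_grid))
        print("gauss posterior mean:", np.trapz(P_grid * post_gauss, P_grid) / np.trapz(post_gauss, P_grid))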

    Efficient transfer of images over networks

    Effective remote observing requires sending large images over long distances. The usual approach to the transfer problem is to require high-bandwidth transmission links, which are expensive to install and operate. An alternative approach is to use existing low-bandwidth connections, such as phone lines or the Internet, in a highly efficient manner by compressing the images. The combined use of existing low-cost infrastructure and standard networking software means that remote observing can be made practical even for small observatories with limited network resources. The authors have implemented such a scheme based on the H-transform compression method developed for astronomical images, which are often resistant to compression because they are noisy. The H-transform can be used for either lossy or lossless compression, and compression factors of at least 10 can be achieved with no noticeable losses in the astrometric or photometric properties of the compressed images. The H-transform allows us to organize the information in an image so that the 'useful' information can be sent first, followed by the noise, which makes up the bulk of the transmission. The receiver can invert a partially received set of H-coefficients, creating an image that improves with time. The H-transform is particularly well suited to this style of incremental reconstruction, because the spatially localized nature of the basis functions of the H-transform prevents the appearance of artifacts such as ringing around point sources and edges. The authors' implementation uses the WIYN Telescope Control System's TCP-based communications protocol. An 800x800 16-bit astronomical image was sent over a 2400 baud connection, which would normally take about 71 minutes; after only 60 seconds, the partially received H-transform produced an image that did not differ appreciably from the original. This poster presents a quantification of the efficiencies, as well as examples of images reconstructed from partial data
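
    As a rough sketch of the underlying idea (not the authors' code): the H-transform is closely related to a two-dimensional Haar decomposition on 2x2 pixel blocks, and progressive transmission amounts to sending the coarse coefficients first and filling in detail coefficients as they arrive. A minimal single-level Python version, with quantization and entropy coding omitted:

        import numpy as np

        def h_forward(img):
            """One level of an H-transform-like 2D Haar decomposition of an array with even sides."""
            a = img[0::2, 0::2].astype(float)
            b = img[0::2, 1::2].astype(float)
            c = img[1::2, 0::2].astype(float)
            d = img[1::2, 1::2].astype(float)
            smooth = (a + b + c + d) / 2.0   # coarse 'useful' information, sent first
            h      = (a - b + c - d) / 2.0   # detail coefficients, dominated by noise
            v      = (a + b - c - d) / 2.0
            diag   = (a - b - c + d) / 2.0
            return smooth, (h, v, diag)

        def h_inverse(smooth, details):
            """Invert one level; details not yet received can be passed as zeros,
            giving the incrementally improving reconstruction described above."""
            h, v, diag = details
            a = (smooth + h + v + diag) / 2.0
            b = (smooth - h + v - diag) / 2.0
            c = (smooth + h - v - diag) / 2.0
            d = (smooth - h - v + diag) / 2.0
            out = np.empty((2 * smooth.shape[0], 2 * smooth.shape[1]))
            out[0::2, 0::2], out[0::2, 1::2] = a, b
            out[1::2, 0::2], out[1::2, 1::2] = c, d
            return out

        # Example: a 'partial' preview from the smooth part alone, then the exact reconstruction.
        img = np.random.default_rng(0).normal(100.0, 5.0, size=(8, 8))
        s, det = h_forward(img)
        partial = h_inverse(s, tuple(np.zeros_like(s) for _ in range(3)))  # blocky preview
        exact   = h_inverse(s, det)
        assert np.allclose(exact, img)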