    Scale dependence of galaxy biasing investigated by weak gravitational lensing: An assessment using semi-analytic galaxies and simulated lensing data

    Galaxies are biased tracers of the matter density on cosmological scales. For future tests of galaxy models, we refine and assess a method to measure galaxy biasing as a function of physical scale k with weak gravitational lensing. This method enables us to reconstruct the galaxy bias factor b(k) as well as the galaxy-matter correlation r(k) on spatial scales of 0.01 h Mpc^-1 ≲ k ≲ 10 h Mpc^-1 for redshift-binned lens galaxies below z ≲ 0.6. In the refinement, we account for an intrinsic alignment of source ellipticities, and we correct for the magnification bias of the lens galaxies, relevant for the galaxy-galaxy lensing signal, to improve the accuracy of the reconstructed r(k). For simulated data, the reconstructions achieve an accuracy of 3-7% (68% confidence level) over the above k-range for a survey area and typical depth of contemporary ground-based surveys. Realistically, however, the accuracy is probably reduced to about 10-15%, mainly by systematic uncertainties in the assumed intrinsic source alignment, the fiducial cosmology, and the redshift distributions of lens and source galaxies (in that order). Furthermore, our reconstruction technique employs physical templates for b(k) and r(k) that elucidate the impact of central galaxies and the halo-occupation statistics of satellite galaxies on the scale dependence of galaxy bias, which we discuss in the paper. In a first demonstration, we apply this method to previous measurements in the Garching-Bonn Deep Survey and give a physical interpretation of the lens population. Comment: 31 pages, 16 figures; corrected typos in Eqs. (31), (34), and (36)
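
    The abstract does not write out how b(k) and r(k) are defined, so as a point of reference here are the standard conventions in terms of the galaxy power spectrum P_g(k), the matter power spectrum P_m(k), and their cross-spectrum P_gm(k) (the usual convention, not a quotation from the paper):

```latex
% Standard definitions (usual convention, not quoted from the paper):
% galaxy bias factor and galaxy-matter cross-correlation coefficient
% in terms of the galaxy, matter, and galaxy-matter power spectra.
b(k) = \sqrt{\frac{P_{\rm g}(k)}{P_{\rm m}(k)}} ,
\qquad
r(k) = \frac{P_{\rm gm}(k)}{\sqrt{P_{\rm g}(k)\,P_{\rm m}(k)}} .
```

    With these conventions, an unbiased and perfectly correlated tracer has b(k) = r(k) = 1 on all scales.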

    Meaning of temperature in different thermostatistical ensembles

    Depending on the exact experimental conditions, the thermodynamic properties of physical systems can be related to one or more thermostatistical ensembles. Here, we survey the notion of thermodynamic temperature in different statistical ensembles, focusing in particular on subtleties that arise when ensembles become non-equivalent. The 'mother' of all ensembles, the microcanonical ensemble, uses entropy and internal energy (the most fundamental, dynamically conserved quantity) to derive temperature as a secondary thermodynamic variable. Over the past century, some confusion has been caused by the fact that several competing microcanonical entropy definitions are used in the literature, most commonly the volume and surface entropies introduced by Gibbs. It can be proved, however, that only the volume entropy satisfies exactly the traditional form of the laws of thermodynamics for a broad class of physical systems, including all standard classical Hamiltonian systems, regardless of their size. This mathematically rigorous fact implies that negative 'absolute' temperatures and Carnot efficiencies >1 are not achievable within a standard thermodynamical framework. As an important offspring of microcanonical thermostatistics, we shall briefly consider the canonical ensemble and comment on the validity of the Boltzmann weight factor. We conclude by addressing open mathematical problems that arise for systems with a discrete energy spectrum. Comment: 11 pages, 1 figure
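
    The competing entropy definitions referred to are, in the notation commonly used in this literature (the abstract itself does not spell them out), the Gibbs (volume) and Boltzmann (surface) entropies, with the temperature obtained as the inverse energy derivative of the entropy:

```latex
% Gibbs (volume) vs. Boltzmann (surface) microcanonical entropy and the
% resulting Gibbs temperature; Theta is the Heaviside function and
% epsilon an arbitrary constant carrying the dimension of energy.
\Omega(E) = \operatorname{Tr}\,\Theta(E - H) , \qquad
\omega(E) = \frac{\partial \Omega}{\partial E} ,
\\[4pt]
S_{\rm G}(E) = k_{\rm B} \ln \Omega(E) , \qquad
S_{\rm B}(E) = k_{\rm B} \ln\!\bigl[\epsilon\,\omega(E)\bigr] ,
\\[4pt]
T_{\rm G}(E) = \left(\frac{\partial S_{\rm G}}{\partial E}\right)^{-1}
             = \frac{\Omega(E)}{k_{\rm B}\,\omega(E)} \;\geq\; 0 .
```

    Since Omega(E) never decreases with E, the Gibbs temperature T_G cannot become negative, which is the content of the statement above about negative 'absolute' temperatures.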

    Thermodynamic laws in isolated systems

    The recent experimental realization of exotic matter states in isolated quantum systems and the ensuing controversy about the existence of negative absolute temperatures demand a careful analysis of the conceptual foundations underlying microcanonical thermostatistics. Here, we provide a detailed comparison of the most commonly considered microcanonical entropy definitions, focusing specifically on whether they satisfy or violate the zeroth, first and second laws of thermodynamics. Our analysis shows that, for a broad class of systems that includes all standard classical Hamiltonian systems, only the Gibbs volume entropy fulfills all three laws simultaneously. To avoid ambiguities, the discussion is restricted to exact results and analytically tractable examples. Comment: footnotes 19, 26 and outlook section added
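
    As a toy numerical illustration of the difference (not an example taken from the paper, which restricts itself to exact results): for a system with a bounded spectrum, such as N independent two-level systems, the Boltzmann (surface) temperature turns negative above half filling, while the Gibbs (volume) temperature stays positive. The sketch below assumes k_B = 1 and unit level spacing.

```python
# Toy comparison of Boltzmann vs. Gibbs temperature for N two-level
# systems (energy E = n for n excited units); illustrative only.
from math import comb, log

N = 20                                       # number of two-level systems
g = [comb(N, n) for n in range(N + 1)]       # density of states g(E = n)
G = [sum(g[:n + 1]) for n in range(N + 1)]   # integrated density of states

S_B = [log(x) for x in g]                    # Boltzmann (surface) entropy
S_G = [log(x) for x in G]                    # Gibbs (volume) entropy

def temperature(S, n):
    """Finite-difference estimate of T = (dS/dE)^(-1) at E = n."""
    return 1.0 / ((S[n + 1] - S[n - 1]) / 2.0)

for n in (5, 9, 15):                         # below, near, and above half filling
    print(f"E={n:2d}  T_Boltzmann={temperature(S_B, n):+7.2f}  "
          f"T_Gibbs={temperature(S_G, n):+7.2f}")
```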

    Confronting semi-analytic galaxy models with galaxy-matter correlations observed by CFHTLenS

    Testing predictions of semi-analytic models of galaxy evolution against observations helps to understand the complex processes that shape galaxies. We compare predictions from the Garching and Durham models implemented on the Millennium Run with observations of galaxy-galaxy lensing (GGL) and galaxy-galaxy-galaxy lensing (G3L) for various galaxy samples with stellar masses in the range 0.5 < (M_* / 10^10 M_Sun) < 32 and photometric redshift range 0.2 < z < 0.6 in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We find that the predicted GGL and G3L signals are in qualitative agreement with CFHTLenS data. Quantitatively, the models succeed in reproducing the observed signals in the highest stellar mass bin (16 < (M_* / 10^10 M_Sun) < 32) but show different degrees of tension for the other stellar mass samples. The Durham models are strongly excluded at the 95% confidence level by the observations, as they largely over-predict the amplitudes of the GGL and G3L signals, probably because they predict too many satellite galaxies in massive halos. Comment: 9 pages, 8 figures, submitted to A&A. Comments welcome
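
    For orientation, the GGL signal compared here is, in essence, the mean tangential ellipticity of background sources in angular bins around stacked lens galaxies. The sketch below is a minimal, unweighted flat-sky estimator with illustrative names only; the actual CFHTLenS analysis additionally applies shape-measurement weights and subtracts the signal measured around random points.

```python
# Minimal galaxy-galaxy lensing estimator: mean tangential ellipticity
# of sources around a stack of lenses (flat-sky, unweighted; a sketch,
# not the pipeline used in the paper).
import numpy as np

def tangential_shear_profile(lens_pos, src_pos, src_e1, src_e2, theta_bins):
    """Return <gamma_t> in bins of angular separation theta."""
    gt_sum = np.zeros(len(theta_bins) - 1)
    npairs = np.zeros(len(theta_bins) - 1)
    for lx, ly in lens_pos:
        dx, dy = src_pos[:, 0] - lx, src_pos[:, 1] - ly
        theta = np.hypot(dx, dy)                  # lens-source separation
        phi = np.arctan2(dy, dx)                  # position angle of each pair
        # tangential ellipticity component relative to the lens
        e_t = -(src_e1 * np.cos(2 * phi) + src_e2 * np.sin(2 * phi))
        idx = np.digitize(theta, theta_bins) - 1
        for b in range(len(theta_bins) - 1):
            in_bin = idx == b
            gt_sum[b] += e_t[in_bin].sum()
            npairs[b] += in_bin.sum()
    return gt_sum / np.maximum(npairs, 1)         # avoid division by zero
```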

    Bayesian weak lensing tomography: Reconstructing the 3D large-scale distribution of matter with a lognormal prior

    We present a Bayesian reconstruction algorithm that infers the three-dimensional large-scale matter distribution from the weak gravitational lensing effects measured in the image shapes of galaxies. The algorithm is designed to also work with non-Gaussian posterior distributions which arise, for example, from a non-Gaussian prior distribution. In this work, we use a lognormal prior and compare the reconstruction results to a Gaussian prior in a suite of increasingly realistic tests on mock data. We find that in cases of high noise levels (i.e. for low source galaxy densities and/or high shape measurement uncertainties), both normal and lognormal priors lead to reconstructions of comparable quality, but with the lognormal reconstruction being prone to mass-sheet degeneracy. In the low-noise regime and on small scales, the lognormal model produces better reconstructions than the normal model: The lognormal model 1) enforces non-negative densities, while negative densities are present when a normal prior is employed, 2) better traces the extremal values and the skewness of the true underlying distribution, and 3) yields a higher pixel-wise correlation between the reconstruction and the true density. Comment: 23 pages, 12 figures; updated to match version accepted for publication in PR
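
    A minimal sketch of what the lognormal prior buys (toy one-dimensional setting with an uncorrelated prior and an identity response; all names and numbers are illustrative, not the algorithm of the paper): writing the density contrast as delta = exp(s) - 1 with a Gaussian prior on s keeps delta > -1, so non-negative densities are enforced by construction in the maximum-a-posteriori estimate.

```python
# Toy MAP reconstruction with a lognormal prior: delta = exp(s) - 1,
# Gaussian prior on s, Gaussian noise on the data d = R delta + n.
# Illustrative sketch only, not the algorithm of the paper.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
npix, sigma_s, sigma_n = 64, 0.5, 0.2

s_true = sigma_s * rng.standard_normal(npix)      # Gaussian "log-density" field
delta_true = np.exp(s_true) - 1.0                 # lognormal density contrast
R = np.eye(npix)                                  # trivial (identity) response
d = R @ delta_true + sigma_n * rng.standard_normal(npix)

def neg_log_posterior(s):
    delta = np.exp(s) - 1.0
    chi2 = np.sum((d - R @ delta) ** 2) / sigma_n ** 2   # Gaussian likelihood
    prior = np.sum(s ** 2) / sigma_s ** 2                # Gaussian prior on s
    return 0.5 * (chi2 + prior)

s_map = minimize(neg_log_posterior, np.zeros(npix), method="L-BFGS-B").x
delta_map = np.exp(s_map) - 1.0                   # > -1 everywhere by construction
print("minimum of reconstructed delta:", delta_map.min())
```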