
    Self Calibration of Tomographic Weak Lensing for the Physics of Baryons to Constrain Dark Energy

    Numerical studies indicate that uncertainties in the treatment of baryonic physics can affect predictions for shear power spectra at a level that is significant for forthcoming surveys such as DES, SNAP, and LSST. Correspondingly, we show that baryonic effects can significantly bias dark energy parameter measurements. Eliminating such biases by neglecting information in multipoles beyond several hundred weakens parameter constraints by a factor of approximately 2 to 3 compared with using information out to multipoles of several thousand. Fortunately, the same numerical studies that explore the influence of baryons indicate that baryons primarily affect power spectra by altering halo structure through the relation between halo mass and mean effective halo concentration. We explore the ability of future weak lensing surveys to constrain both the internal structures of halos and the properties of the dark energy simultaneously as a first step toward self-calibrating for the physics of baryons. This greatly reduces parameter biases, and no parameter constraint is degraded by more than 40% in the case of LSST or 30% in the cases of SNAP or DES. Modest prior knowledge of the halo concentration-mass relation greatly improves even these forecasts. Additionally, we find that these surveys can constrain effective halo concentrations near m ~ 10^14 Msun/h and z ~ 0.2 to better than 10% with shear power spectra alone. These results suggest that inferring dark energy parameters from measurements of shear power spectra can be made robust to baryonic effects and may simultaneously be competitive with other methods to inform models of galaxy formation. (Abridged)
    Comment: 18 pages, 11 figures. Minor changes reflecting referee's comments. Results and conclusions unchanged. Accepted for publication in Physical Review D
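
    As a rough illustration of the forecasting logic behind this self-calibration argument, the toy Fisher-matrix sketch below (Python, with entirely made-up numbers) shows how fixing a concentration nuisance parameter at the wrong value biases a dark energy parameter, while marginalizing over it removes the bias at the cost of a weaker constraint. It is a sketch of the generic technique, not the paper's calculation.

    # Toy Fisher-matrix forecast: marginalizing over a halo-concentration
    # nuisance parameter inflates the dark-energy error but removes the
    # bias incurred by fixing it. All numbers are invented.
    import numpy as np

    # Fisher matrix for (w, c0), where c0 is a concentration amplitude.
    F = np.array([[4000.0,  900.0],
                  [ 900.0,  400.0]])

    # Error on w with c0 held fixed vs. marginalized over (self-calibrated).
    sigma_w_fixed = 1.0 / np.sqrt(F[0, 0])
    sigma_w_marg  = np.sqrt(np.linalg.inv(F)[0, 0])

    # Linear bias on w if the true concentration is offset by dc0 but the
    # model wrongly holds c0 fixed: dw = -(F_wc / F_ww) * dc0.
    dc0 = 0.1
    bias_w = -(F[0, 1] / F[0, 0]) * dc0

    print(f"sigma(w), c0 fixed:        {sigma_w_fixed:.4f}")
    print(f"sigma(w), c0 marginalized: {sigma_w_marg:.4f}")
    print(f"bias on w if c0 ignored:   {bias_w:.4f}")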

    Groups of two galaxies in SDSS: implications of colours on star formation quenching time-scales

    We have devised a method to select galaxies that are isolated in their dark matter halo (N=1 systems) and galaxies that reside in a group of exactly two (N=2 systems). Our N=2 systems are widely separated (up to ~200 h^{-1} kpc), so close galaxy-galaxy interactions are not dominant. We apply our selection criteria to two volume-limited samples of galaxies from SDSS DR6 with M_r - 5 log_{10} h ≤ -19 and -20 to study the effects of the environment of very sparse groups on galaxy colour. For satellite galaxies in a group of two, we find a red excess attributed to star formation quenching of 0.15 ± 0.01 and 0.14 ± 0.01 for the -19 and -20 samples, respectively, relative to isolated galaxies of the same stellar mass. Assuming N=1 systems are the progenitors of N=2 systems, an immediate-rapid star formation quenching scenario is inconsistent with these observations. A delayed-then-rapid star formation quenching scenario with a delay time of 3.3 and 3.7 Gyr for the -19 and -20 samples, respectively, yields a red excess prediction in agreement with the observations. The observations also reveal that central galaxies in a group of two have a slight blue excess of 0.06 ± 0.02 and 0.02 ± 0.01 for the -19 and -20 samples, respectively, relative to N=1 populations of the same stellar mass. Our results demonstrate that even the environment of very sparse groups of luminous galaxies influences galaxy evolution, and in-depth studies of these simple systems are an essential step towards understanding galaxy evolution in general.
    Comment: 17 pages, 11 figures, accepted to MNRAS
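
    A minimal toy version of the delayed-then-rapid comparison is sketched below, assuming an invented satellite infall-time distribution and treating the red excess as a red-fraction difference for simplicity (the paper works with colours); it only illustrates why a longer delay time lowers the predicted excess relative to immediate quenching.

    # Toy delayed-then-rapid quenching: satellites become red only once
    # their time since infall exceeds a delay time (rapid phase folded in).
    # The infall-time distribution and baseline red fraction are invented.
    import numpy as np

    rng = np.random.default_rng(1)
    t_infall = rng.uniform(0.0, 8.0, 100_000)   # Gyr since becoming a satellite
    f_red_isolated = 0.40                        # red fraction of N=1 systems

    def red_excess(t_delay):
        """Red excess of satellites over isolated galaxies if quenching
        completes t_delay Gyr after infall."""
        quenched = np.mean(t_infall > t_delay)
        f_red_sat = f_red_isolated + (1 - f_red_isolated) * quenched
        return f_red_sat - f_red_isolated

    print(f"immediate quenching (0 Gyr):   {red_excess(0.0):.2f}")
    print(f"delayed quenching (3.3 Gyr):   {red_excess(3.3):.2f}")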

    Sufficient principal component regression for pattern discovery in transcriptomic data

    Methods for global measurement of transcript abundance such as microarrays and RNA-seq generate datasets in which the number of measured features far exceeds the number of observations. Extracting biologically meaningful and experimentally tractable insights from such data therefore requires high-dimensional prediction. Existing sparse linear approaches to this challenge have been stunningly successful, but some important issues remain. These methods can fail to select the correct features, predict poorly relative to non-sparse alternatives, or ignore any unknown grouping structures for the features. We propose a method called SuffPCR that yields improved predictions in high-dimensional tasks including regression and classification, especially in the typical context of omics with correlated features. SuffPCR first estimates sparse principal components and then estimates a linear model on the recovered subspace. Because the estimated subspace is sparse in the features, the resulting predictions will depend on only a small subset of genes. SuffPCR works well on a variety of simulated and experimental transcriptomic data, performing nearly optimally when the model assumptions are satisfied. We also demonstrate near-optimal theoretical guarantees.
    Comment: 26 pages, 9 figures, 9 tables
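
    A minimal sketch of the two-stage idea, sparse principal components followed by a linear model on the recovered subspace, is given below using scikit-learn's SparsePCA as a stand-in; this is not the authors' implementation, and the penalty, component count, and simulated data are purely illustrative.

    # Sketch of the SuffPCR-style pipeline: sparse PCA, then a linear
    # model on the recovered subspace, so predictions depend on only the
    # features with nonzero loadings. Data and settings are illustrative.
    import numpy as np
    from sklearn.decomposition import SparsePCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n, p, k = 100, 500, 5                  # far more features than samples
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:20] = 1.0                        # signal confined to 20 "genes"
    y = X @ beta + rng.normal(size=n)

    spca = SparsePCA(n_components=k, alpha=2.0, random_state=0)
    Z = spca.fit_transform(X)              # n x k scores, sparse loadings
    model = LinearRegression().fit(Z, y)

    # Only features with nonzero loadings influence predictions.
    active = np.flatnonzero(np.abs(spca.components_).sum(axis=0))
    print(f"features used: {active.size} of {p}")
    print(f"training R^2: {model.score(Z, y):.3f}")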

    The Coarse Geometry of Merger Trees in ΛCDM

    We introduce the contour process to describe the geometrical properties of merger trees. The contour process produces a one-dimensional object, the contour walk, which is a translation of the merger tree. We characterize the contour walk through its length and action. The length is proportional to the number of progenitors in the tree, and the action can be interpreted as a proxy for the mean length of a branch in the merger tree. We obtain the contour walk for merger trees extracted from the public database of the Millennium Run and also for merger trees constructed with a public Monte Carlo code that implements a Markovian algorithm. The trees correspond to halos of final masses between 10^{11} h^{-1} M_sol and 10^{14} h^{-1} M_sol. We study how the length and action of the walks evolve with the mass of the final halo. In all cases, except for the action measured from Markovian trees, we find a transitional scale around 3 × 10^{12} h^{-1} M_sol. As a general trend, the length and action measured from the Markovian trees show a larger scatter than those from the Millennium Run trees.
    Comment: 7 pages, 5 figures, submitted to MNRAS
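
    The sketch below gives one hedged reading of a contour walk: a depth-first traversal of a merger tree that records the current depth at each step, as in the Harris/contour process for random trees. The "action" is taken here as the area under the walk, a stand-in for the paper's definition, and the tree is a toy example.

    # Hedged sketch of a contour walk for a merger tree. The walk's length
    # grows with the number of progenitors; the area under the walk serves
    # as a crude proxy for mean branch length.
    def contour_walk(tree, root):
        """tree: dict node -> list of progenitor nodes; returns depths."""
        walk = []
        def visit(node, depth):
            walk.append(depth)
            for child in tree.get(node, []):
                visit(child, depth + 1)
                walk.append(depth)      # step back up after each branch
        visit(root, 0)
        return walk

    # A final halo 'a' with progenitors b, c; b itself has progenitors d, e.
    tree = {"a": ["b", "c"], "b": ["d", "e"]}
    walk = contour_walk(tree, "a")
    length = len(walk)                   # scales with number of progenitors
    action = sum(walk)                   # area under the walk
    print(walk, length, action)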

    Bailing Out the Milky Way: Variation in the Properties of Massive Dwarfs Among Galaxy-Sized Systems

    Recent kinematical constraints on the internal densities of the Milky Way's dwarf satellites have revealed a discrepancy with the subhalo populations of simulated Galaxy-scale halos in the standard CDM model of hierarchical structure formation. This has been dubbed the "too big to fail" problem, with reference to the improbability of large and invisible companions existing in the Galactic environment. In this paper, we argue that both the Milky Way observations and simulated subhalos are consistent with the predictions of the standard model for structure formation. Specifically, we show that there is significant variation in the properties of subhalos among distinct host halos of fixed mass and suggest that this can reasonably account for the deficit of dense satellites in the Milky Way. We exploit well-tested analytic techniques to predict subhalo properties in a large sample of distinct host halos with a variety of masses spanning the range expected of the Galactic halo. The analytic model produces subhalo populations consistent with both Via Lactea II and Aquarius, and our results suggest that natural variation in subhalo properties suffices to explain the discrepancy between Milky Way satellite kinematics and these numerical simulations. At least ~10% of Milky Way-sized halos host subhalo populations for which there is no "too big to fail" problem, even when the host halo mass is as large as M_host = 10^12.2 h^-1 M_sun. Follow-up studies consisting of high-resolution simulations of a large number of Milky Way-sized hosts are necessary to confirm our predictions. In the absence of such efforts, the "too big to fail" problem does not appear to be a significant challenge to the standard model of hierarchical formation. [abridged]
    Comment: 12 pages, 3 figures; accepted by JCAP. Replaced with published version
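
    A toy Monte Carlo, with invented subhalo Vmax distributions and thresholds, illustrates the host-to-host scatter argument: at fixed host mass, a nontrivial fraction of realizations contains no subhalo above a "too dense" threshold.

    # Toy Monte Carlo of host-to-host scatter: even at fixed host mass,
    # some realizations of the subhalo population contain no subhalos
    # denser than the observed satellites. All numbers are invented.
    import numpy as np

    rng = np.random.default_rng(2)
    n_hosts, n_subs = 10_000, 10       # 10 massive subhalos per host
    v_threshold = 30.0                 # km/s; "too dense" above this (made up)

    # Log-normal scatter in subhalo Vmax about a fixed-host-mass mean.
    vmax = rng.lognormal(mean=np.log(25.0), sigma=0.25,
                         size=(n_hosts, n_subs))

    no_problem = np.mean((vmax > v_threshold).sum(axis=1) == 0)
    print(f"fraction of hosts with no 'too big to fail' subhalo: "
          f"{no_problem:.2f}")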

    Economics of tillage management systems in northeastern Alberta

    Non-Peer Reviewed
    The economic returns and riskiness of continuous barley production using four tillage management systems were compared at five sites in three soil zones in northeastern Alberta. The study used five years of data from a tillage experiment in northeastern Alberta. The four tillage systems were conventional one (C1), which leaves 5% standing stubble; conventional two (C2), which leaves 50% standing stubble; minimum tillage (Min); and zero tillage (ZT). Economic calculations were based on 1992 input costs and product prices. The systems were evaluated at barley prices of $46, $69, and $92 t^-1, calculated with and without all-risk crop insurance. Over the five sites, expected net returns were generally higher for ZT at all barley prices. Income variability was usually lower for ZT and C2, depending on the site. The study concluded that use of reduced tillage management systems by producers in northeastern Alberta could increase farm-level returns and reduce the risk of financial loss, while potentially decreasing the amount of soil erosion.
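
    The comparison amounts to computing mean net returns and their year-to-year variability for each system; the sketch below shows that arithmetic with placeholder yields and costs, not the study's 1992 data.

    # Expected net return and risk (year-to-year variability) per tillage
    # system at one barley price. Yields and costs are placeholders.
    import numpy as np

    price = 69.0                         # $ per tonne of barley
    yields = {                           # t/ha over five hypothetical years
        "C1":  [2.9, 2.4, 3.1, 2.2, 2.8],
        "C2":  [3.0, 2.6, 3.1, 2.4, 2.9],
        "Min": [3.1, 2.7, 3.2, 2.5, 3.0],
        "ZT":  [3.2, 2.9, 3.3, 2.7, 3.1],
    }
    costs = {"C1": 155.0, "C2": 150.0, "Min": 145.0, "ZT": 140.0}  # $/ha

    for system, y in yields.items():
        net = price * np.array(y) - costs[system]
        print(f"{system:>3}: mean net ${net.mean():6.2f}/ha, "
              f"std ${net.std(ddof=1):5.2f}/ha")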