
    A quantification of hydrodynamical effects on protoplanetary dust growth

    Context. The growth process of dust particles in protoplanetary disks can be modeled via numerical dust coagulation codes. In this approach, physical effects that dominate the dust growth process often must be implemented in a parameterized form. Due to a lack of these parameterizations, existing studies of dust coagulation have ignored the effects a hydrodynamical gas flow can have on grain growth, even though it is often argued that the flow could significantly contribute either positively or negatively to the growth process. Aims. We intend to provide a quantification of hydrodynamical effects on the growth of dust particles, such that these effects can be parameterized and implemented in a dust coagulation code. Methods. We numerically integrate the trajectories of small dust particles in the flow of disk gas around a proto-planetesimal, sampling a large parameter space in proto-planetesimal radii, headwind velocities, and dust stopping times. Results. The gas flow deflects most particles away from the proto-planetesimal, such that its effective collisional cross section, and therefore the mass accretion rate, is reduced. The gas flow however also reduces the impact velocity of small dust particles onto a proto-planetesimal. This can be beneficial for its growth, since large impact velocities are known to lead to erosion. We also demonstrate why such a gas flow does not return collisional debris to the surface of a proto-planetesimal. Conclusions. We predict that a laminar hydrodynamical flow around a proto-planetesimal will have a significant effect on its growth. However, we cannot easily predict which result, the reduction of the impact velocity or the reduction of the sweep-up cross section, will be more important. Therefore, we provide parameterizations ready for implementation into a dust coagulation code.
    Comment: 9 pages, 6 figures; accepted for publication in A&A; v2 matches the manuscript sent to the publisher (very minor changes).
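
    The trajectory integration described above can be illustrated with a toy set-up: a single grain subject to the proto-planetesimal's gravity and to linear drag, with stopping time t_s, towards a uniform headwind. This is only a minimal sketch; the study resolves the actual hydrodynamical flow around the body, and all parameter values below are assumed purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 6.674e-11                       # gravitational constant [m^3 kg^-1 s^-2]
R_p = 1.0e3                         # proto-planetesimal radius [m] (assumed)
rho_p = 2.0e3                       # bulk density [kg m^-3] (assumed)
M_p = 4.0 / 3.0 * np.pi * R_p**3 * rho_p
v_gas = np.array([-30.0, 0.0])      # uniform headwind velocity [m/s] (assumed)
t_s = 50.0                          # dust stopping time [s] (assumed)

def rhs(t, y):
    """Time derivative of (x, y, vx, vy): gravity plus linear gas drag."""
    pos, vel = y[:2], y[2:]
    r = np.linalg.norm(pos)
    grav = -G * M_p * pos / r**3
    drag = -(vel - v_gas) / t_s
    return np.concatenate([vel, grav + drag])

def hits_surface(t, y):
    return np.linalg.norm(y[:2]) - R_p
hits_surface.terminal = True        # stop the integration on impact

# Launch the grain far upstream with an impact parameter b, moving with the gas.
b = 0.5 * R_p
y0 = [50.0 * R_p, b, v_gas[0], v_gas[1]]
sol = solve_ivp(rhs, [0.0, 1.0e6], y0, events=hits_surface, rtol=1e-8, atol=1e-6)

if sol.t_events[0].size:
    v_imp = np.linalg.norm(sol.y_events[0][0][2:])
    print(f"impact after {sol.t_events[0][0]:.0f} s at {v_imp:.2f} m/s")
else:
    print("grain misses the proto-planetesimal")
```

    Scanning the impact parameter and recording the impact velocity over such trajectories, for grids of proto-planetesimal radii, headwind velocities, and stopping times, is what yields the effective collisional cross sections and impact-velocity reductions that the paper parameterizes, there with the full flow solution rather than a uniform wind.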

    The skewed weak lensing likelihood: why biases arise, despite data and theory being sound

    We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already within the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry because galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, depending on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
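
    The mechanism can be seen in a minimal numerical sketch (set-up and numbers assumed for illustration, not taken from the paper): a band power estimated from a few independent Gaussian modes follows a scaled chi-squared distribution, which is skewed, so most realizations fall below the true value even though the estimator is unbiased on average.

```python
import numpy as np

rng = np.random.default_rng(1)
true_power = 1.0
n_modes = 5              # few modes per band, as for large-scale two-point data
n_trials = 200_000

modes = rng.normal(0.0, np.sqrt(true_power), size=(n_trials, n_modes))
estimates = np.mean(modes**2, axis=1)      # unbiased band-power estimator

print("mean   :", estimates.mean())        # ~1.0: the estimator is unbiased
print("median :", np.median(estimates))    # < 1.0: most realizations scatter low
print("skew   :", ((estimates - estimates.mean())**3).mean() / estimates.std()**3)
```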

    Massive data compression for parameter-dependent covariance matrices

    We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets required to estimate the covariance matrix needed for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10^4, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov chain Monte Carlo analysis, this may require an unfeasible 10^9 simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10^6 if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10^3, making an otherwise intractable analysis feasible.
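
    For reference, the core MOPED compression step (the standard algorithm of Heavens, Jimenez & Lahav 2000, on which this work builds) reduces the data vector to one number per model parameter. The sketch below uses placeholder derivatives and a placeholder covariance rather than any survey quantities.

```python
import numpy as np

def moped_vectors(dmu, C):
    """MOPED compression vectors, one per model parameter.

    dmu : (n_params, n_data) derivatives of the mean data vector w.r.t. each parameter
    C   : (n_data, n_data) fiducial data covariance (assumed parameter-independent here)
    """
    Cinv = np.linalg.inv(C)
    B = []
    for d in dmu:
        num = Cinv @ d - sum((d @ b) * b for b in B)        # Gram-Schmidt step
        den = d @ Cinv @ d - sum((d @ b) ** 2 for b in B)
        B.append(num / np.sqrt(den))
    return np.array(B)

# Toy example: compress 1000 correlated data points to 3 numbers (3 parameters).
rng = np.random.default_rng(0)
n_data, n_par = 1000, 3
C = np.diag(rng.uniform(0.5, 2.0, n_data))        # placeholder covariance
dmu = rng.normal(size=(n_par, n_data))            # placeholder mean derivatives
B = moped_vectors(dmu, C)

x = rng.multivariate_normal(np.zeros(n_data), C)  # one simulated data vector
y = B @ x                                         # compressed summaries
print(y.shape)                                    # (3,)
```

    Because only the compressed summaries need a covariance estimate, the number of simulations required scales with the handful of parameters rather than with the ∼10^4 original data points, which is the source of the savings quoted above.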

    Quantifying lost information due to covariance matrix estimation in parameter inference

    Parameter inference with an estimated covariance matrix systematically loses information due to the remaining uncertainty of the covariance matrix. Here, we quantify this loss of precision and develop a framework to hypothetically restore it, which allows one to judge how far a given analysis is from the ideal case of a known covariance matrix. We point out that it is insufficient to estimate this loss by debiasing the Fisher matrix, as previously done, due to a fundamental inequality that describes how biases arise in non-linear functions. We therefore develop direct estimators for parameter credibility contours and the figure of merit, finding that significantly fewer simulations than previously thought are sufficient to reach satisfactory precision. We apply our results to DES Science Verification weak lensing data, detecting a 10 per cent loss of information that increases their credibility contours. No significant loss of information is found for KiDS. For a Euclid-like survey with about 10 nuisance parameters, we find that 2900 simulations are sufficient to limit the systematically lost information to 1 per cent, with an additional uncertainty of about 2 per cent. Without any nuisance parameters, 1900 simulations are sufficient to lose only 1 per cent of information. We further derive estimators for all quantities needed for forecasting with estimated covariance matrices. Our formalism allows one to determine the sweet spot between running sophisticated simulations to reduce the number of nuisance parameters and running as many fast simulations as possible.
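
    The inequality alluded to above can be demonstrated with a simple Monte Carlo (a toy set-up, not the paper's estimators): even when the inverse covariance, and hence the Fisher matrix, is debiased with the usual Hartlap-type factor, the implied parameter variance remains biased because matrix inversion is a non-linear operation.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n_s = 30, 40                           # data dimension, simulations per covariance estimate
C = np.eye(p)                             # true data covariance (toy)
grad = rng.normal(size=p)                 # derivative of the mean w.r.t. one parameter (toy)
F_true = grad @ np.linalg.inv(C) @ grad   # Fisher information with the true covariance
hartlap = (n_s - p - 2) / (n_s - 1)       # standard debiasing factor for the inverse covariance

var_est = []
for _ in range(5000):
    sims = rng.multivariate_normal(np.zeros(p), C, size=n_s)
    S = np.cov(sims, rowvar=False)                       # estimated covariance
    F_hat = hartlap * (grad @ np.linalg.inv(S) @ grad)   # debiased Fisher information
    var_est.append(1.0 / F_hat)                          # implied parameter variance

print("true parameter variance :", 1.0 / F_true)
print("mean estimated variance :", np.mean(var_est))     # biased high despite the debiasing
```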

    Identifying the most constraining ice observations to infer molecular binding energies

    Computational astrophysics

    Matching Bayesian and frequentist coverage probabilities when using an approximate data covariance matrix

    Observational astrophysics consists of making inferences about the Universe by comparing data and models. The credible intervals placed on model parameters are often as important as the maximum a posteriori probability values, as the intervals indicate concordance or discordance between models and with measurements from other data. Intermediate statistics (e.g. the power spectrum) are usually measured and inferences are made by fitting models to these rather than the raw data, assuming that the likelihood for these statistics has multivariate Gaussian form. The covariance matrix used to calculate the likelihood is often estimated from simulations, such that it is itself a random variable. This is a standard problem in Bayesian statistics, which requires a prior to be placed on the true model parameters and covariance matrix, influencing the joint posterior distribution. As an alternative to the commonly used independence Jeffreys prior, we introduce a prior that leads to a posterior that has approximately frequentist matching coverage. This is achieved by matching the covariance of the posterior to that of the distribution of true values of the parameters around the maximum likelihood values in repeated trials, under certain assumptions. Using this prior, credible intervals derived from a Bayesian analysis can be interpreted approximately as confidence intervals, containing the truth a certain proportion of the time in repeated trials. Linking frequentist and Bayesian approaches that have previously appeared in the astronomical literature, this offers a consistent and conservative approach for credible intervals quoted on model parameters for problems where the covariance matrix is itself an estimate.
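
    The idea of frequentist matching coverage can be checked empirically with a toy experiment (illustrative only; the paper's prior construction is more general): when the error bar is itself estimated from a small number of simulations, intervals that ignore its uncertainty contain the truth less often than nominal, while intervals that account for it recover the nominal rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
truth, sigma = 0.0, 1.0
n_sims, n_trials = 10, 100_000     # simulations per trial, number of repeated trials
nominal = 0.6827                   # target "1-sigma" coverage

x = rng.normal(truth, sigma, n_trials)                # one measurement per trial
sims = rng.normal(truth, sigma, (n_trials, n_sims))   # simulations used for the error bar
s = sims.std(axis=1, ddof=1)                          # estimated error bar per trial

z = stats.norm.ppf(0.5 + nominal / 2)                 # Gaussian half-width, ~1.00
t = stats.t.ppf(0.5 + nominal / 2, df=n_sims - 1)     # Student-t half-width, wider

print("Gaussian-interval coverage :", np.mean(np.abs(x - truth) < z * s))  # below 0.68
print("Student-t interval coverage:", np.mean(np.abs(x - truth) < t * s))  # close to 0.68
```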

    Almanac: MCMC-based signal extraction of power spectra and maps on the sphere

    Inference in cosmology often starts with noisy observations of random fields on the celestial sphere, such as maps of the microwave background radiation, continuous maps of cosmic structure in different wavelengths, or maps of point tracers of the cosmological fields. Almanac uses Hamiltonian Monte Carlo sampling to infer the underlying all-sky noiseless maps of cosmic structures, in multiple redshift bins, together with their auto- and cross-power spectra. It can sample many millions of parameters, handling the highly variable signal-to-noise of typical cosmological signals, and it provides science-ready posterior data products. In the case of spin-weight 2 fields, Almanac infers EE- and BB-mode power spectra and parity-violating EB power, and, by sampling the full posteriors rather than point estimates, it avoids the problem of EB-leakage. For theories with no BB-mode signal, inferred non-zero BB-mode power may be a useful diagnostic of systematic errors or an indication of new physics. Almanac's aim is to characterise the statistical properties of the maps, with outputs that are completely independent of the cosmological model, beyond an assumption of statistical isotropy. Inference of parameters of any particular cosmological model follows in a separate analysis stage. We demonstrate our signal extraction on a CMB-like experiment.
    Comment: 27 pages, 18 figures. v2 accepted for publication by The Open Journal of Astrophysics with minor changes. v3 no changes, missing acknowledgement added.
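
    As a rough illustration of the sampling idea (a toy model, not the Almanac code, which works with spherical harmonics and also samples the power spectra themselves), the sketch below uses Hamiltonian Monte Carlo to draw posterior samples of noiseless "map" coefficients given noisy data, with the signal and noise variances assumed known.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
C, N = 1.0, 0.25                                # signal power and noise variance (assumed)
a_true = rng.normal(0.0, np.sqrt(C), n)         # true noiseless coefficients
d = a_true + rng.normal(0.0, np.sqrt(N), n)     # noisy data

def neg_log_post(a):
    # -log posterior up to a constant: Gaussian likelihood times Gaussian prior
    return 0.5 * np.sum((d - a) ** 2) / N + 0.5 * np.sum(a ** 2) / C

def grad(a):
    return (a - d) / N + a / C

def hmc_step(a, eps=0.1, n_leap=20):
    p = rng.normal(size=n)                      # fresh momenta
    a_new = a.copy()
    p_new = p - 0.5 * eps * grad(a_new)         # leapfrog: half step in momentum
    for _ in range(n_leap):
        a_new = a_new + eps * p_new             # full step in position
        p_new = p_new - eps * grad(a_new)       # full step in momentum
    p_new = p_new + 0.5 * eps * grad(a_new)     # undo the surplus half step
    dH = (neg_log_post(a_new) + 0.5 * p_new @ p_new) \
       - (neg_log_post(a) + 0.5 * p @ p)
    return a_new if np.log(rng.uniform()) < -dH else a   # Metropolis accept/reject

a, samples = np.zeros(n), []
for i in range(3000):
    a = hmc_step(a)
    if i >= 500:                                # discard burn-in
        samples.append(a)

wiener = d * C / (C + N)                        # analytic posterior mean for this toy model
print("max |sample mean - Wiener filter| =",
      np.max(np.abs(np.mean(samples, axis=0) - wiener)))   # small if the chain has converged
```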