
    The promising future of a robust cosmological neutrino mass measurement

    We forecast the sensitivity of thirty-five different combinations of future Cosmic Microwave Background and Large Scale Structure data sets to cosmological parameters and to the total neutrino mass. We work under conservative assumptions accounting for uncertainties in the modelling of systematics. In particular, for galaxy redshift surveys, we remove the information coming from non-linear scales. We use Bayesian parameter extraction from mock likelihoods to avoid Fisher matrix uncertainties. Our grid of results allows for a direct comparison between the sensitivity of different data sets. We find that future surveys will measure the neutrino mass with high significance and will not be substantially affected by potential parameter degeneracies between neutrino masses, the density of relativistic relics, and a possible time-varying equation of state of Dark Energy.
    Comment: 27 pages, 4 figures, 8 tables. v2: updated Euclid sensitivity settings, matches published version
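    A minimal sketch of what Bayesian parameter extraction from a mock likelihood can look like, as opposed to a Fisher matrix forecast: sample a toy Gaussian likelihood with a Metropolis-Hastings chain and read off the posterior width on the neutrino mass. The two-parameter model, fiducial values, covariance, and step sizes below are illustrative assumptions, not the paper's survey configurations.

```python
import numpy as np

# Toy mock likelihood in two parameters: total neutrino mass M_nu (eV) and a
# nuisance amplitude A, with a mild degeneracy. All numbers are assumptions.
fiducial = np.array([0.06, 1.0])          # [M_nu in eV, amplitude A]
cov = np.array([[0.02**2, 0.5 * 0.02 * 0.05],
                [0.5 * 0.02 * 0.05, 0.05**2]])
inv_cov = np.linalg.inv(cov)

def log_like(theta):
    """Mock likelihood centred on the fiducial model (mock data = fiducial)."""
    d = theta - fiducial
    return -0.5 * d @ inv_cov @ d

def metropolis(n_steps=50_000, step=np.array([0.01, 0.02]), seed=0):
    rng = np.random.default_rng(seed)
    theta = fiducial.copy()
    lp = log_like(theta)
    chain = np.empty((n_steps, 2))
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal(2)
        if prop[0] < 0:                   # physical prior: M_nu >= 0
            chain[i] = theta
            continue
        lp_prop = log_like(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis()
print("forecast sigma(M_nu) ~", chain[5000:, 0].std())  # discard burn-in
```

    Unlike a Fisher forecast, the chain exposes non-Gaussian posterior shapes such as the boundary effect of the M_nu >= 0 prior.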

    Sensitivity and Out-of-Sample Error in Continuous Time Data Assimilation

    Data assimilation refers to the problem of finding trajectories of a prescribed dynamical model in such a way that the output of the model (usually some function of the model states) follows a given time series of observations. Typically, though, these two requirements cannot both be met at the same time: tracking the observations is not possible without the trajectory deviating from the proposed model equations, while adherence to the model requires deviations from the observations. Thus, data assimilation faces a trade-off. In this contribution, the sensitivity of the data assimilation with respect to perturbations in the observations is identified as the parameter which controls the trade-off. A relation between the sensitivity and the out-of-sample error is established which allows the latter to be calculated under operational conditions. A minimum out-of-sample error is proposed as a criterion for setting an appropriate sensitivity and settling the discussed trade-off. Two approaches to data assimilation are considered, namely variational data assimilation and Newtonian nudging, also known as synchronisation. Numerical examples demonstrate the feasibility of the approach.
    Comment: submitted to Quarterly Journal of the Royal Meteorological Society
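    A minimal sketch of Newtonian nudging (synchronisation) and its trade-off, not the paper's exact experiments: a Lorenz-63 "truth" run generates noisy observations of the x component, and an assimilating copy is relaxed toward them with a gain k, which here plays the role of the sensitivity-controlling parameter. The model, noise level, and error metric are illustrative assumptions.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz-63 vector field."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run(k, dt=0.01, n=20_000, obs_noise=1.0, seed=1):
    rng = np.random.default_rng(seed)
    truth = np.array([1.0, 1.0, 1.0])
    model = np.array([5.0, -5.0, 20.0])    # deliberately wrong initial state
    err = 0.0
    for _ in range(n):
        truth = truth + dt * lorenz(truth)             # Euler step (truth)
        obs = truth[0] + obs_noise * rng.standard_normal()
        drift = lorenz(model)
        drift[0] += k * (obs - model[0])               # nudging term, gain k
        model = model + dt * drift
        err += (model[2] - truth[2]) ** 2              # error in unobserved z
    return np.sqrt(err / n)

for k in (0.0, 1.0, 10.0, 100.0):
    print(f"gain k={k:6.1f}  RMS error in z: {run(k):.3f}")
```

    Small k leaves the copy unsynchronised with the truth; very large k makes the trajectory chase observation noise at the expense of the model dynamics, so an intermediate gain minimises the error in the unobserved component, mirroring the out-of-sample criterion discussed in the abstract.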

    Nonparametric Covariate Adjustment for Receiver Operating Characteristic Curves

    The accuracy of a diagnostic test is typically characterised using the receiver operating characteristic (ROC) curve. Summarising indexes such as the area under the ROC curve (AUC) are used to compare different tests as well as to measure the difference between two populations. Often, additional information is available on covariates known to influence the accuracy of such measures. We propose nonparametric methods for covariate adjustment of the AUC. Models with normal errors and non-normal errors are discussed and analysed separately. Nonparametric regression is used for estimating mean and variance functions in both scenarios. In the general noise case we propose a covariate-adjusted Mann-Whitney estimator for AUC estimation which effectively uses available data to construct working samples at any covariate value of interest and is computationally efficient to implement. This provides a generalisation of the Mann-Whitney approach for comparing two populations by taking covariate effects into account. We derive asymptotic properties for the AUC estimators in both settings, including asymptotic normality, optimal strong uniform convergence rates and MSE consistency. The usefulness of the proposed methods is demonstrated through simulated and real data examples.
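    To illustrate the idea of a covariate-adjusted Mann-Whitney estimator, here is a kernel-weighted version in the spirit of (but not identical to) the paper's estimator: pairs of diseased and healthy scores are weighted by how close their covariate values are to the point of interest x0, and the weighted fraction of concordant pairs estimates AUC(x0). The Gaussian kernel, bandwidth, and simulated data are illustrative choices.

```python
import numpy as np

def gauss_kernel(u):
    return np.exp(-0.5 * u**2)

def covariate_adjusted_auc(y_d, x_d, y_h, x_h, x0, h=0.3):
    """Kernel-weighted fraction of concordant (diseased > healthy) pairs near x0."""
    w_d = gauss_kernel((x_d - x0) / h)
    w_h = gauss_kernel((x_h - x0) / h)
    conc = (y_d[:, None] > y_h[None, :]).astype(float)
    conc += 0.5 * (y_d[:, None] == y_h[None, :])      # ties count half
    weights = w_d[:, None] * w_h[None, :]
    return (weights * conc).sum() / weights.sum()

rng = np.random.default_rng(0)
n = 500
x_h, x_d = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
y_h = 0.5 * x_h + rng.standard_normal(n)              # healthy scores
y_d = 0.5 * x_d + 1.0 + x_d + rng.standard_normal(n)  # separation grows with x
for x0 in (0.1, 0.5, 0.9):
    print(f"AUC(x={x0}): {covariate_adjusted_auc(y_d, x_d, y_h, x_h, x0):.3f}")
```

    Setting all weights to one recovers the classical Mann-Whitney statistic, which is the sense in which the covariate-adjusted version generalises the two-population comparison.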

    Differentially Private Model Selection with Penalized and Constrained Likelihood

    In statistical disclosure control, the goal of data analysis is twofold: the released information must provide accurate and useful statistics about the underlying population of interest, while minimizing the potential for an individual record to be identified. In recent years, the notion of differential privacy has received much attention in theoretical computer science, machine learning, and statistics. It provides a rigorous and strong notion of protection for individuals' sensitive information. A fundamental question is how to incorporate differential privacy into traditional statistical inference procedures. In this paper we study model selection in multivariate linear regression under the constraint of differential privacy. We show that model selection procedures based on penalized least squares or likelihood can be made differentially private by a combination of regularization and randomization, and we propose two algorithms to do so. We show that our private procedures are consistent under essentially the same conditions as the corresponding non-private procedures. We also find that under differential privacy, the procedure becomes more sensitive to the tuning parameters. We illustrate and evaluate our method using simulation studies and two real data examples.
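    As a generic illustration of combining penalization with randomization for private model selection, the sketch below uses the exponential mechanism to sample a predictor subset according to a penalized least-squares score; it is not the paper's two algorithms. The per-record score sensitivity delta_score is simply posited here (in practice it must be bounded, e.g. by clipping the data), which the paper handles rigorously.

```python
import numpy as np
from itertools import combinations

def penalized_score(X, y, subset, lam=1.0):
    """Negative residual sum of squares minus an L0 penalty (higher = better)."""
    if subset:
        Xs = X[:, list(subset)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
    else:
        rss = np.sum(y**2)
    return -rss - lam * len(subset)

def private_select(X, y, epsilon=1.0, delta_score=1.0, seed=0):
    """Exponential mechanism: sample a model with prob ~ exp(eps * score / (2*Delta)).

    delta_score is an ASSUMED bound on how much one record can change the score.
    """
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    models = [s for r in range(p + 1) for s in combinations(range(p), r)]
    scores = np.array([penalized_score(X, y, m) for m in models])
    logits = epsilon * scores / (2 * delta_score)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return models[rng.choice(len(models), p=probs)]

rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.standard_normal(n)
print("selected predictors:", private_select(X, y, epsilon=2.0))
```

    The randomization is what buys privacy: smaller epsilon flattens the sampling distribution over models, which is also why the choice of penalty and tuning parameters matters more than in the non-private setting.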