
    Constraints on SN Ia progenitor time delays from high-z SNe and the star formation history

    We re-assess the question of a systematic time delay between the formation of the progenitor and its explosion in a type Ia supernova (SN Ia) using the Hubble Higher-z Supernova Search sample (Strolger et al. 2004). While the previous analysis indicated a significant time delay, with a most likely value of 3.4 Gyr, effectively ruling out all previously proposed progenitor models, our analysis shows that the time-delay estimate is dominated by systematic errors, in particular by uncertainties in the star-formation history. We find that none of the popular progenitor models under consideration can be ruled out with any significant degree of confidence. The inferred time delay is mainly determined by the peak in the assumed star-formation history. We show that, even with a much larger supernova sample, the time-delay distribution cannot be reliably reconstructed without better constraints on the star-formation history. Comment: accepted for publication in MNRAS.
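
    The delay argument rests on a convolution: the SN Ia rate at cosmic time t is the star-formation history (SFH) weighted by a delay-time distribution (DTD), i.e. rate(t) = integral over tau of SFR(t - tau) * DTD(tau) d tau. A minimal numerical sketch of that relation follows; the toy SFH, the exponential DTD, and all parameter values are illustrative assumptions, not the paper's fits.

        import numpy as np

        # Sketch: SN Ia rate(t) = integral of SFR(t - tau) * DTD(tau) d tau.
        # Both functional forms below are toy assumptions for illustration.

        def sfh(t):
            """Toy star-formation rate vs. cosmic time t in Gyr (peaks at t = 3)."""
            return t * np.exp(-t / 3.0)

        def dtd(tau, tau_mean=3.4):
            """Toy exponential delay-time distribution with mean tau_mean Gyr."""
            return np.exp(-tau / tau_mean) / tau_mean

        t = np.linspace(0.0, 13.7, 1371)                  # cosmic time grid, Gyr
        dt = t[1] - t[0]
        rate = np.convolve(sfh(t), dtd(t))[:t.size] * dt  # causal convolution

        print(f"SFH peak:        t = {t[np.argmax(sfh(t))]:.2f} Gyr")
        print(f"SN Ia rate peak: t = {t[np.argmax(rate)]:.2f} Gyr")

    Shifting the assumed SFH peak shifts the modelled rate peak, and with it the best-fitting delay, which is the degeneracy the abstract describes.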

    Coherent frequentism

    By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space called frequentist posteriors secure the coherence of subjective inference without any prior distribution. The closure of the set of expected losses corresponding to the dual frequentist posteriors constrains decisions without arbitrarily forcing optimization under all circumstances. This decision theory reduces to those that maximize expected utility when the pair of frequentist posteriors is induced by an exact or approximate confidence set estimator or when an automatic reduction rule is applied to the pair. In such cases, the resulting frequentist posterior is coherent in the sense that, as a probability distribution of the parameter of interest, it satisfies the axioms of the decision-theoretic and logic-theoretic systems typically cited in support of the Bayesian posterior. Unlike the p-value, the confidence level of an interval hypothesis derived from such a measure is suitable as an estimator of the indicator of hypothesis truth, since it converges in sample-space probability to 1 if the hypothesis is true and to 0 otherwise under general conditions. Comment: The confidence-measure theory of inference and decision is explicitly extended to vector parameters of interest. The derivation of upper and lower confidence levels from valid and nonconservative set estimators is formalized.
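
    As a concrete instance of the final claim, here is a small simulation sketch: take the normal-mean confidence distribution as the frequentist posterior (an illustrative assumption; the paper's construction is more general) and watch the confidence level of an interval hypothesis approach the indicator of its truth as the sample size grows.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        sigma, theta_true = 1.0, 0.3   # known sd and true mean (illustrative values)
        a, b = 0.0, 1.0                # interval hypothesis: theta in [a, b] (true here)

        for n in (10, 100, 10_000):
            x = rng.normal(theta_true, sigma, size=n)
            xbar, se = x.mean(), sigma / np.sqrt(n)
            # Confidence level of the hypothesis under the N(xbar, se^2)
            # confidence distribution, playing the role of a frequentist posterior.
            level = norm.cdf((b - xbar) / se) - norm.cdf((a - xbar) / se)
            print(f"n = {n:6d}: confidence level of [{a}, {b}] = {level:.3f}")

    With theta_true inside [a, b] the level tends to 1; placing it outside the interval drives the level to 0 instead.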

    Iterated smoothed bootstrap confidence intervals for population quantiles

    This paper investigates the effects of smoothed bootstrap iterations on the coverage probabilities of smoothed bootstrap and bootstrap-t confidence intervals for population quantiles, and establishes the optimal kernel bandwidths at various stages of the smoothing procedures. The conventional smoothed bootstrap and bootstrap-t methods have been known to yield one-sided coverage errors of orders O(n^{-1/2}) and o(n^{-2/3}), respectively, for intervals based on the sample quantile of a random sample of size n. We sharpen the latter result to O(n^{-5/6}) with proper choices of bandwidths at the bootstrapping and Studentization steps. We show further that calibration of the nominal coverage level by means of the iterated bootstrap succeeds in reducing the coverage error of the smoothed bootstrap percentile interval to the order O(n^{-2/3}) and that of the smoothed bootstrap-t interval to O(n^{-58/57}), provided that bandwidths of appropriate orders are selected. Simulation results confirm our asymptotic findings, suggesting that the iterated smoothed bootstrap-t method yields the most accurate coverage. On the other hand, the iterated smoothed bootstrap percentile interval has the advantage of being shorter and more stable than the bootstrap-t intervals. Comment: Published at http://dx.doi.org/10.1214/009053604000000878 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
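
    For orientation, a minimal sketch of the non-iterated smoothed bootstrap percentile interval that the paper builds on: resamples are drawn from a Gaussian-kernel-smoothed empirical distribution, i.e. ordinary bootstrap draws plus kernel noise. The bandwidth rule and constants below are generic illustrative choices, not the optimal orders derived in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def smoothed_bootstrap_quantile_ci(x, p=0.5, level=0.95, B=2000, h=None):
            """Smoothed bootstrap percentile CI for the p-th population quantile."""
            n = x.size
            if h is None:
                # Assumption: a Silverman-type rule of thumb, not the paper's bandwidth.
                h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)
            # Draw from the kernel-smoothed empirical distribution:
            # bootstrap resamples plus N(0, h^2) perturbations.
            boot = rng.choice(x, size=(B, n), replace=True) + rng.normal(0.0, h, size=(B, n))
            stats = np.quantile(boot, p, axis=1)   # sample quantile of each resample
            lo, hi = np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])
            return lo, hi

        x = rng.exponential(size=200)
        print(smoothed_bootstrap_quantile_ci(x, p=0.5))   # 95% CI for the median

    The iterated versions then calibrate the nominal level by bootstrapping this interval's own coverage, with bandwidths chosen at the orders the paper establishes.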

    Bayesian learning of models for estimating uncertainty in alert systems: application to air traffic conflict avoidance

    Alert systems detect critical events which can happen in the short term. Uncertainties in the data and in the models used for detection cause alert errors. In air traffic control systems such as Short-Term Conflict Alert (STCA), this uncertainty increases errors in alerts of separation loss. Statistical methods that rest on analytical assumptions can provide biased estimates of uncertainty; more accurate analysis can be achieved with Bayesian Model Averaging, which provides estimates of the posterior probability distribution of a prediction. We propose a new approach to estimating prediction uncertainty, based on the observation that uncertainty can be quantified by the variance of predicted outcomes: predictions for which the variance of the posterior probability exceeds a given threshold are labelled uncertain. To verify the approach, we calculate a probability of alert based on extrapolation of the closest point of approach. Using Heathrow airport flight data, we found that alerts are often generated under differing conditions, variations in which lead to alert detection errors. Achieving 82.1% accuracy in modelling the STCA system, a necessary condition for evaluating the uncertainty in prediction, we found that the proposed method is capable of reducing the uncertain component. Comparison with a bootstrap aggregation method demonstrated a significant reduction of uncertainty in predictions. Realistic estimates of uncertainty will open up new approaches to improving the performance of alert systems.
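
    A minimal sketch of the variance-based rule described above: given samples of the predicted alert probability (e.g. posterior draws from Bayesian Model Averaging), predictions whose across-sample variance exceeds a threshold are labelled uncertain. The data, shapes, and threshold here are illustrative assumptions, not values from the study.

        import numpy as np

        def flag_uncertain(prob_samples, threshold=0.05):
            """Label predictions with high posterior predictive variance as uncertain.

            prob_samples: shape (n_draws, n_cases), each row one model draw's
            predicted alert probability per case. Threshold is illustrative.
            """
            mean_prob = prob_samples.mean(axis=0)   # averaged (BMA-style) prediction
            var_prob = prob_samples.var(axis=0)     # spread across draws
            return mean_prob, var_prob, var_prob > threshold

        # Toy example: 500 draws of the alert probability for 4 cases; the last
        # case is deliberately diffuse, so it is the one flagged as uncertain.
        rng = np.random.default_rng(2)
        samples = rng.beta(a=[8, 2, 5, 1], b=[2, 8, 5, 1], size=(500, 4))
        for i, (m, v, u) in enumerate(zip(*flag_uncertain(samples))):
            print(f"case {i}: p(alert) = {m:.2f}, var = {v:.3f}, uncertain = {u}")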