    Higher-order Improvements of the Parametric Bootstrap for Markov Processes

    This paper provides bounds on the errors in coverage probabilities of maximum likelihood-based, percentile-t, parametric bootstrap confidence intervals for Markov time series processes. These bounds show that the parametric bootstrap for Markov time series provides higher-order improvements (over confidence intervals based on first-order asymptotics) that are comparable to those obtained by the parametric and nonparametric bootstrap for iid data and better than those obtained by the block bootstrap for time series. Additional results are given for Wald-based confidence regions. The paper also shows that k-step parametric bootstrap confidence intervals achieve the same higher-order improvements as the standard parametric bootstrap for Markov processes. The k-step bootstrap confidence intervals are computationally attractive: they circumvent the need to solve a nonlinear optimization problem for each simulated bootstrap sample, which is otherwise necessary to implement the standard parametric bootstrap when the maximum likelihood estimator is defined by a nonlinear optimization problem.
    Keywords: Asymptotics, Edgeworth expansion, Gauss-Newton, k-step bootstrap, maximum likelihood estimator, Newton-Raphson, parametric bootstrap, t statistic
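
    To make the percentile-t and k-step ideas concrete, below is a minimal sketch that applies them to a Gaussian AR(1) process; the AR(1) model is my choice of Markov example, not anything taken from the paper, and the k_step_rho routine replaces a full re-maximization on each simulated bootstrap sample with k Newton-Raphson steps started at the original-sample estimate.

        # Hedged sketch: percentile-t parametric bootstrap confidence interval for the
        # AR(1) coefficient, with a k-step Newton-Raphson refinement in place of a full
        # re-optimization on each bootstrap sample.  The AR(1) model and all function
        # names are illustrative assumptions, not taken from the paper.
        import numpy as np

        def fit_ar1(y):
            """Conditional (on y[0]) Gaussian MLE of rho and sigma^2, plus se(rho)."""
            y0, y1 = y[:-1], y[1:]
            rho = y0 @ y1 / (y0 @ y0)
            resid = y1 - rho * y0
            sigma2 = resid @ resid / len(resid)
            return rho, sigma2, np.sqrt(sigma2 / (y0 @ y0))

        def k_step_rho(y, rho_start, k=1):
            """k Newton-Raphson steps on the conditional log-likelihood in rho, started
            at the original-sample estimate (the k-step bootstrap idea).  Here the
            objective is quadratic, so one step already reaches the exact bootstrap MLE;
            in genuinely nonlinear models k steps give an approximation instead."""
            y0, y1 = y[:-1], y[1:]
            rho = rho_start
            for _ in range(k):
                score = y0 @ (y1 - rho * y0)   # d logL / d rho, up to a 1/sigma^2 factor
                hess = -(y0 @ y0)              # d^2 logL / d rho^2, same scaling
                rho = rho - score / hess
            return rho

        def se_rho(y, rho):
            y0, y1 = y[:-1], y[1:]
            resid = y1 - rho * y0
            return np.sqrt((resid @ resid / len(resid)) / (y0 @ y0))

        def simulate_ar1(rho, sigma2, n, y_init, rng):
            y = np.empty(n)
            y[0] = y_init
            eps = rng.normal(0.0, np.sqrt(sigma2), size=n - 1)
            for t in range(1, n):
                y[t] = rho * y[t - 1] + eps[t - 1]
            return y

        def percentile_t_ci(y, B=999, k=1, alpha=0.05, seed=0):
            rng = np.random.default_rng(seed)
            rho_hat, sigma2_hat, se_hat = fit_ar1(y)
            t_stars = np.empty(B)
            for b in range(B):
                y_star = simulate_ar1(rho_hat, sigma2_hat, len(y), y[0], rng)
                rho_star = k_step_rho(y_star, rho_hat, k)   # no full refit per sample
                t_stars[b] = (rho_star - rho_hat) / se_rho(y_star, rho_star)
            q_lo, q_hi = np.quantile(t_stars, [alpha / 2, 1 - alpha / 2])
            return rho_hat - q_hi * se_hat, rho_hat - q_lo * se_hat

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            y = simulate_ar1(0.6, 1.0, 200, 0.0, rng)
            print("95% percentile-t CI for rho:", percentile_t_ci(y))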

    Construction of Automatic Confidence Intervals in Nonparametric Heteroscedastic Regression by a Moment-Oriented Bootstrap

    We construct pointwise confidence intervals for regression functions. The method uses nonparametric kernel estimates and the “moment-oriented” bootstrap method of Bunke, which is a wild bootstrap based on smoothed local estimators of higher-order error moments. We show that our bootstrap consistently estimates the distribution of m_h(x_0) - m(x_0). In the present paper we focus on fully data-driven procedures and prove that the confidence intervals give asymptotically correct coverage probabilities.
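
    As a rough illustration of the wild-bootstrap construction, the sketch below builds a pointwise interval from a Nadaraya-Watson estimate with Mammen's two-point multipliers (which match the first three residual moments); it is a generic stand-in, not Bunke's moment-oriented variant, and it ignores smoothing bias rather than handling it by undersmoothing.

        # Hedged sketch: pointwise wild-bootstrap confidence interval for m(x0) from a
        # Nadaraya-Watson estimate.  This is a generic wild bootstrap (Mammen two-point
        # multipliers), not Bunke's moment-oriented variant; names and bandwidths are
        # illustrative assumptions.
        import numpy as np

        def nw_estimate(x_eval, x, y, h):
            """Nadaraya-Watson kernel regression estimate at x_eval (Gaussian kernel)."""
            w = np.exp(-0.5 * ((x_eval - x) / h) ** 2)
            return w @ y / w.sum()

        def wild_bootstrap_ci(x, y, x0, h, B=999, alpha=0.05, seed=0):
            rng = np.random.default_rng(seed)
            m_hat = np.array([nw_estimate(xi, x, y, h) for xi in x])
            resid = y - m_hat                    # residuals keep the heteroscedasticity
            m0 = nw_estimate(x0, x, y, h)
            # Mammen two-point multipliers: mean 0, variance 1, third moment 1.
            a, b = -(np.sqrt(5) - 1) / 2, (np.sqrt(5) + 1) / 2
            p = (np.sqrt(5) + 1) / (2 * np.sqrt(5))
            diffs = np.empty(B)
            for i in range(B):
                v = np.where(rng.random(len(y)) < p, a, b)
                y_star = m_hat + resid * v       # wild-bootstrap responses
                diffs[i] = nw_estimate(x0, x, y_star, h) - m0
            q_lo, q_hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
            return m0 - q_hi, m0 - q_lo          # pointwise CI for m(x0), bias ignored

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            x = np.sort(rng.uniform(0.0, 1.0, 300))
            y = np.sin(2 * np.pi * x) + (0.2 + 0.3 * x) * rng.normal(size=300)
            print("95% CI for m(0.5):", wild_bootstrap_ci(x, y, 0.5, h=0.08))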

    Constraints on SN Ia progenitor time delays from high-z SNe and the star formation history

    We re-assess the question of a systematic time delay between the formation of the progenitor and its explosion in a type Ia supernova (SN Ia) using the Hubble Higher-z Supernova Search sample (Strolger et al. 2004). While the previous analysis indicated a significant time delay, with a most likely value of 3.4 Gyr, effectively ruling out all previously proposed progenitor models, our analysis shows that the time-delay estimate is dominated by systematic errors, in particular due to uncertainties in the star-formation history. We find that none of the popular progenitor models under consideration can be ruled out with any significant degree of confidence. The inferred time delay is mainly determined by the peak in the assumed star-formation history. We show that, even with a much larger supernova sample, the time-delay distribution cannot be reliably reconstructed without better constraints on the star-formation history. (Comment: accepted for publication in MNRAS)
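
    The relation the analysis rests on is that the SN Ia rate at a given cosmic time is the star-formation rate convolved with a progenitor delay-time distribution; a schematic numerical version is sketched below, with an illustrative parametric star-formation history and a Gaussian delay model that are assumptions of the sketch, not the forms used in the paper.

        # Hedged sketch: SN Ia rate as the convolution of a star-formation history with a
        # progenitor delay-time distribution.  The parametric SFH and the Gaussian delay
        # model (centred on the 3.4 Gyr value quoted above) are illustrative assumptions,
        # not the forms used in the paper.
        import numpy as np

        def sfr(t_gyr):
            """Illustrative star-formation rate vs. cosmic time (arbitrary units)."""
            return np.clip(t_gyr, 0.0, None) ** 2 * np.exp(-t_gyr / 2.5)

        def dtd_gaussian(tau_gyr, mean=3.4, width=0.5):
            """Delay-time distribution: a single, roughly fixed progenitor time delay."""
            return np.exp(-0.5 * ((tau_gyr - mean) / width) ** 2)

        def snia_rate(t_grid, dtd, eta=1.0):
            """Discretised rate(t) proportional to integral of SFR(t - tau) * DTD(tau) dtau."""
            dt = t_grid[1] - t_grid[0]
            rate = np.zeros_like(t_grid)
            for i, t in enumerate(t_grid):
                tau = t_grid[: i + 1]            # delays up to the current cosmic age
                rate[i] = eta * np.sum(sfr(t - tau) * dtd(tau)) * dt
            return rate

        if __name__ == "__main__":
            t = np.linspace(0.0, 13.5, 271)      # cosmic time in Gyr
            rate = snia_rate(t, dtd_gaussian)
            print("peak SN Ia rate at t = %.1f Gyr" % t[np.argmax(rate)])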

    Partition-dependent framing effects in lab and field prediction markets

    Many psychology experiments show that individually judged probabilities of the same event can vary depending on the partition of the state space (a framing effect called "partition-dependence"). We show that these biases transfer to competitive prediction markets in which multiple informed traders are provided economic incentives to bet on their beliefs about events. We report results of a short controlled lab study, a longer field experiment (betting on the NBA playoffs and the FIFA World Cup), and naturally occurring trading in macro-economic derivatives. The combined evidence suggests that partition-dependence can exist and persist in lab and field prediction markets.

    Imprecise Probability and Chance

    Understanding probabilities as something other than point values (e.g., as intervals) has often been motivated by the need to find more realistic models for degree of belief, and in particular by the idea that degree of belief should have an objective basis in “statistical knowledge of the world.” I offer here another motivation, growing out of efforts to understand how chance evolves as a function of time. If the world is “chancy” in that there are non-trivial, objective, physical probabilities at the macro-level, then the chance of an event e that happens at a given time evolves as a function of time and reaches one by the time e occurs; whether the chance of e goes to one continuously or not is left open. Discontinuities in such chance trajectories can have surprising and troubling consequences for probabilistic analyses of causation and for accounts of how events occur in time. This, coupled with the compelling evidence for quantum discontinuities in chance’s evolution, gives rise to a “(dis)continuity bind” with respect to chance probability trajectories. I argue that a viable option for circumventing the (dis)continuity bind is to understand the probabilities “imprecisely,” that is, as intervals rather than point values. I then develop and motivate an alternative kind of continuity appropriate for interval-valued chance probability trajectories.
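
    One way to formalise the objects under discussion is sketched below in LaTeX; the Hausdorff-metric notion of continuity used there is offered only as a natural candidate for interval-valued trajectories, not as the specific notion the paper develops.

        % A possible formalisation of interval-valued chance trajectories.  The
        % Hausdorff-metric continuity below is offered as one natural candidate only,
        % not as the notion actually developed in the paper.
        \documentclass{article}
        \usepackage{amsmath,amssymb}
        \begin{document}
        A (precise) chance trajectory for an event $e$ is a map
        $\mathrm{ch}_e : T \to [0,1]$, $t \mapsto \mathrm{ch}_t(e)$, which reaches $1$
        by the time $e$ occurs.  An imprecise chance trajectory instead assigns
        intervals, $\mathrm{Ch}_e : T \to \{[a,b] : 0 \le a \le b \le 1\}$.  One
        candidate notion of continuity for such trajectories is continuity with respect
        to the Hausdorff distance on closed subintervals of $[0,1]$,
        \[
          d_H\bigl([a_1,b_1],[a_2,b_2]\bigr) = \max\bigl\{\,|a_1-a_2|,\;|b_1-b_2|\,\bigr\},
        \]
        so that $\mathrm{Ch}_e$ is continuous at $t_0$ iff
        $d_H\bigl(\mathrm{Ch}_e(t),\mathrm{Ch}_e(t_0)\bigr)\to 0$ as $t \to t_0$.
        A point-valued trajectory that jumps discontinuously can then lie inside an
        interval-valued trajectory that varies continuously in this sense.
        \end{document}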