
    Optimal stopping times for estimating Bernoulli parameters with applications to active imaging

    We address the problem of estimating the parameter of a Bernoulli process. This arises in many applications, including photon-efficient active imaging where each illumination period is regarded as a single Bernoulli trial. We introduce a framework within which to minimize the mean-squared error (MSE) subject to an upper bound on the mean number of trials. This optimization has several simple and intuitive properties when the Bernoulli parameter has a beta prior. In addition, by exploiting typical spatial correlation using total variation regularization, we extend the developed framework to a rectangular array of Bernoulli processes representing the pixels in a natural scene. In simulations inspired by realistic active imaging scenarios, we demonstrate a 4.26 dB reduction in MSE due to the adaptive acquisition, as an average over many independent experiments and invariant to a factor of 3.4 variation in trial budget. Accepted manuscript.
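    To make the Beta-prior setup above concrete, the sketch below simulates Bayesian estimation of a Bernoulli parameter with a simple posterior-variance stopping rule; the prior hyperparameters, variance threshold, and trial cap are illustrative assumptions, not the optimal stopping times derived in the paper.

```python
# Illustrative sketch: Bayesian estimation of a Bernoulli parameter under a Beta(a, b) prior,
# stopping adaptively once the posterior variance is small (assumed rule, not the paper's).
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 5.0          # Beta prior hyperparameters (assumed)
var_stop = 1e-3          # stop once the posterior variance drops below this (assumed)
n_max = 500              # hard cap on the number of trials per process (assumed)

def adaptive_estimate(p_true):
    """Run Bernoulli trials until the Beta posterior variance falls below var_stop."""
    k = n = 0
    while n < n_max:
        pa, pb = a + k, b + n - k                      # posterior is Beta(a + k, b + n - k)
        if pa * pb / ((pa + pb) ** 2 * (pa + pb + 1)) < var_stop:
            break
        k += rng.random() < p_true                     # one more Bernoulli trial
        n += 1
    return (a + k) / (a + b + n), n                    # posterior mean (MMSE estimate), trials used

# Average the squared error and the trial count over parameters drawn from the prior.
ps = rng.beta(a, b, size=2000)
ests, trials = zip(*(adaptive_estimate(p) for p in ps))
mse = np.mean((np.array(ests) - ps) ** 2)
print(f"MSE: {mse:.2e} ({10 * np.log10(mse):.1f} dB), mean trials: {np.mean(trials):.1f}")
```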

    Beyond Binomial and Negative Binomial: Adaptation in Bernoulli Parameter Estimation

    Estimating the parameter of a Bernoulli process arises in many applications, including photon-efficient active imaging where each illumination period is regarded as a single Bernoulli trial. Motivated by acquisition efficiency when multiple Bernoulli processes are of interest, we formulate the allocation of trials under a constraint on the mean as an optimal resource allocation problem. An oracle-aided trial allocation demonstrates that there can be a significant advantage from varying the allocation for different processes and inspires a simple trial allocation gain quantity. Motivated by realizing this gain without an oracle, we present a trellis-based framework for representing and optimizing stopping rules. Considering the convenient case of Beta priors, three implementable stopping rules with similar performances are explored, and the simplest of these is shown to asymptotically achieve the oracle-aided trial allocation. These approaches are further extended to estimating functions of a Bernoulli parameter. In simulations inspired by realistic active imaging scenarios, we demonstrate significant mean-squared error improvements: up to 4.36 dB for the estimation of p and up to 1.80 dB for the estimation of log p. Comment: 13 pages, 16 figures.
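    As a rough illustration of the oracle-aided allocation idea (a sketch under simplifying assumptions: maximum-likelihood estimation with a continuous trial allocation, rather than the paper's trellis-optimized stopping rules), knowing the true parameters lets one allocate trials in proportion to sqrt(p_i(1 - p_i)), which minimizes the total variance sum_i p_i(1 - p_i)/n_i under a fixed budget and yields a simple trial allocation gain in dB.

```python
# Illustrative sketch: oracle-aided trial allocation across many Bernoulli processes.
# Minimizing sum_i p_i(1 - p_i)/n_i subject to sum_i n_i = budget gives
# n_i proportional to sqrt(p_i(1 - p_i)) (Lagrange multipliers).
import numpy as np

rng = np.random.default_rng(0)
p = rng.beta(2.0, 5.0, size=10_000)                       # per-process parameters (assumed prior)
budget = 100 * p.size                                     # total trial budget (assumed)

var = p * (1 - p)                                         # per-trial variance of each process
n_uniform = np.full(p.size, budget / p.size)              # fixed (non-adaptive) allocation
n_oracle = budget * np.sqrt(var) / np.sum(np.sqrt(var))   # oracle allocation (continuous relaxation)

# MSE of the ML estimate k/n is p(1 - p)/n for each process.
mse_uniform = np.mean(var / n_uniform)
mse_oracle = np.mean(var / np.maximum(n_oracle, 1.0))     # guard against near-zero allocations
print(f"trial allocation gain: {10 * np.log10(mse_uniform / mse_oracle):.2f} dB")
```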

    Beyond binomial and negative binomial: adaptation in Bernoulli parameter estimation

    Estimating the parameter of a Bernoulli process arises in many applications, including photon-efficient active imaging where each illumination period is regarded as a single Bernoulli trial. Motivated by acquisition efficiency when multiple Bernoulli processes (e.g., multiple pixels) are of interest, we formulate the allocation of trials under a constraint on the mean as an optimal resource allocation problem. An oracle-aided trial allocation demonstrates that there can be a significant advantage from varying the allocation for different processes and inspires the introduction of a simple trial allocation gain quantity. Motivated by achieving this gain without an oracle, we present a trellis-based framework for representing and optimizing stopping rules. Considering the convenient case of Beta priors, three implementable stopping rules with similar performances are explored, and the simplest of these is shown to asymptotically achieve the oracle-aided trial allocation. These approaches are further extended to estimating functions of a Bernoulli parameter. In simulations inspired by realistic active imaging scenarios, we demonstrate significant mean-squared error improvements up to 4.36 dB for the estimation of p and up to 1.86 dB for the estimation of log p. https://arxiv.org/abs/1809.08801 First author draft.
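    For the extension to functions of a Bernoulli parameter mentioned above, a small self-contained sketch (fixed trial counts and assumed hyperparameters, not the adaptive rules of the paper): under a Beta(a, b) posterior, the minimum-MSE estimate of log p is the posterior mean E[log p] = psi(a) - psi(a + b), with psi the digamma function.

```python
# Illustrative sketch: MMSE estimation of log p under a Beta posterior.
# For p ~ Beta(a, b), E[log p] = digamma(a) - digamma(a + b).
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
a0, b0, n = 2.0, 5.0, 200                    # prior hyperparameters and fixed trial count (assumed)

p_true = rng.beta(a0, b0, size=5000)         # draw parameters from the prior
k = rng.binomial(n, p_true)                  # successes observed in n trials for each process

a_post, b_post = a0 + k, b0 + n - k          # Beta posterior parameters
logp_hat = digamma(a_post) - digamma(a_post + b_post)   # posterior mean of log p
mse = np.mean((logp_hat - np.log(p_true)) ** 2)
print(f"MSE for log p: {mse:.3e} ({10 * np.log10(mse):.2f} dB)")
```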

    A note on error estimation for hypothesis testing problems for some linear SPDEs

    The aim of the present paper is to estimate and control the Type I and Type II errors of a simple hypothesis testing problem for the drift/viscosity coefficient of a stochastic fractional heat equation driven by additive noise. Assuming that one path of the first N Fourier modes of the solution is observed continuously over a finite time interval [0, T], we propose a new class of rejection regions and provide computable thresholds for T and N that guarantee that the statistical errors are smaller than a given upper bound. The considered tests are of likelihood ratio type. The main ideas, and the proofs, are based on sharp large deviation bounds. Finally, we illustrate the theoretical results by numerical simulations. Comment: Forthcoming in Stochastic Partial Differential Equations: Analysis and Computations.
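    As a rough numerical companion (an illustrative sketch only: each Fourier mode is modeled as an Ornstein-Uhlenbeck process with eigenvalues lambda_k = k^2, the rejection threshold is a simple midpoint rather than the sharp large deviation thresholds of the paper, and all constants are assumed), the Type I and Type II errors of a drift test can be estimated by Monte Carlo as follows.

```python
# Illustrative sketch: Monte Carlo Type I / Type II errors for testing the drift theta
# from the first N Fourier modes, each modeled as an OU process
#   du_k = -theta * lam_k * u_k dt + dW_k,  lam_k = k^2   (assumed simplification).
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 10, 2.0, 1e-3                 # number of modes, horizon, Euler step (assumed)
lam = np.arange(1, N + 1) ** 2.0
theta0, theta1 = 1.0, 1.1                # null and alternative drift values (assumed)

def drift_mle(theta, n_rep=500):
    """Simulate n_rep paths under `theta` and return the continuous-observation drift MLE."""
    est = np.empty(n_rep)
    for r in range(n_rep):
        u = np.zeros(N)
        num = den = 0.0
        for _ in range(int(T / dt)):
            dW = rng.normal(0.0, np.sqrt(dt), N)
            du = -theta * lam * u * dt + dW
            num += -np.sum(lam * u * du)            # -sum_k lam_k * int u_k du_k
            den += np.sum(lam ** 2 * u ** 2) * dt   #  sum_k lam_k^2 * int u_k^2 dt
            u += du
        est[r] = num / den
    return est

thr = 0.5 * (theta0 + theta1)            # midpoint rejection threshold (illustrative)
type_I = np.mean(drift_mle(theta0) > thr)
type_II = np.mean(drift_mle(theta1) <= thr)
print(f"estimated Type I error: {type_I:.3f}, Type II error: {type_II:.3f}")
```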

    Too good to be true: when overwhelming evidence fails to convince

    Is it possible for a large sequence of measurements or observations that support a hypothesis to counterintuitively decrease our confidence? Can unanimous support be too good to be true? The assumption of independence is often made in good faith; however, consideration is rarely given to whether a systemic failure has occurred. Taking this into account can cause certainty in a hypothesis to decrease as the evidence for it becomes apparently stronger. We perform a probabilistic Bayesian analysis of this effect with examples based on (i) archaeological evidence, (ii) weighing of legal evidence, and (iii) cryptographic primality testing. We find that even with surprisingly low systemic failure rates, high confidence is very difficult to achieve; in particular, we find that certain analyses of cryptographically important numerical tests are highly optimistic, underestimating their false-negative rate by as much as a factor of 2^{80}.
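    The saturation effect described above can be reproduced with a toy Bayesian model (an assumed simplification, not the paper's full analysis): with probability f the whole measurement system is in a failed state and reports a confirmation regardless of the truth; otherwise each test is independent, with false-positive rate eps. The posterior belief then plateaus as the number of unanimous confirmations grows instead of approaching 1.

```python
# Toy model (assumed, for illustration): k unanimous confirmations of hypothesis H,
# where with probability f the test system has failed and always confirms, and an
# intact system wrongly confirms a false hypothesis with probability eps per test.
prior = 0.5        # prior belief in H (assumed)
f = 1e-2           # systemic failure rate (assumed)
eps = 0.1          # per-test false-positive rate when the system works (assumed)

for k in (1, 5, 10, 50, 100):
    like_H = 1.0                        # P(k confirmations | H): always confirmed here
    like_notH = f + (1 - f) * eps ** k  # P(k confirmations | not H)
    post = prior * like_H / (prior * like_H + (1 - prior) * like_notH)
    print(f"k = {k:3d}: P(H | k unanimous confirmations) = {post:.4f}")

# As k grows the posterior saturates near prior / (prior + (1 - prior) * f)
# rather than tending to 1: additional unanimous evidence stops being more convincing.
print("limit:", prior / (prior + (1 - prior) * f))
```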

    Methods for Population Adjustment with Limited Access to Individual Patient Data: A Review and Simulation Study

    Population-adjusted indirect comparisons estimate treatment effects when access to individual patient data is limited and there are cross-trial differences in effect modifiers. Popular methods include matching-adjusted indirect comparison (MAIC) and simulated treatment comparison (STC). There is limited formal evaluation of these methods and whether they can be used to accurately compare treatments. Thus, we undertake a comprehensive simulation study to compare standard unadjusted indirect comparisons, MAIC and STC across 162 scenarios. This simulation study assumes that the trials are investigating survival outcomes and measure continuous covariates, with the log hazard ratio as the measure of effect. MAIC yields unbiased treatment effect estimates under no failures of assumptions. The typical usage of STC produces bias because it targets a conditional treatment effect where the target estimand should be a marginal treatment effect. The incompatibility of estimates in the indirect comparison leads to bias as the measure of effect is non-collapsible. Standard indirect comparisons are systematically biased, particularly under stronger covariate imbalance and interaction effects. Standard errors and coverage rates are often valid in MAIC, but the robust sandwich variance estimator underestimates variability where effective sample sizes are small. Interval estimates for the standard indirect comparison are too narrow, and STC suffers from bias-induced undercoverage. MAIC provides the most accurate estimates and, with lower degrees of covariate overlap, its bias reduction outweighs the loss in effective sample size and precision under no failures of assumptions. An important future objective is the development of an alternative formulation to STC that targets a marginal treatment effect. Comment: 73 pages (34 are supplementary appendices and references), 8 figures, 2 tables. Full article (following Round 4 of minor revisions). arXiv admin note: text overlap with arXiv:2008.0595
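    For readers unfamiliar with the weighting step, the sketch below shows the covariate balancing at the core of MAIC (simulated data and all parameter values are assumptions; the survival-outcome setup of the simulation study is not reproduced): weights of the form w_i = exp(x_i' alpha) are fitted so that the reweighted covariate means of the trial with individual patient data match the published aggregate means of the comparator trial, and the effective sample size indicates how much precision the reweighting costs.

```python
# Illustrative sketch of the MAIC weighting step (method-of-moments weight estimation).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X_ipd = rng.normal(size=(500, 2))            # IPD covariates (simulated, assumed)
agg_means = np.array([0.5, -0.3])            # published covariate means of the comparator trial (assumed)

Z = X_ipd - agg_means                        # centre the IPD covariates at the target means

def objective(alpha):
    # Convex objective whose stationarity condition is the moment match sum_i w_i * z_i = 0.
    return np.sum(np.exp(Z @ alpha))

alpha_hat = minimize(objective, x0=np.zeros(Z.shape[1]), method="BFGS").x
w = np.exp(Z @ alpha_hat)                    # estimated MAIC weights

# Check balance and report the effective sample size (ESS) lost to reweighting.
weighted_means = (w[:, None] * X_ipd).sum(axis=0) / w.sum()
ess = w.sum() ** 2 / np.sum(w ** 2)
print("weighted IPD means:", np.round(weighted_means, 3), "target:", agg_means, "ESS:", round(ess, 1))
```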