Exact boundaries in sequential testing for phase-type distributions
Consider Wald's sequential probability ratio test for deciding whether a sequence of independent and identically distributed observations comes from a specified phase-type distribution or from an exponentially tilted alternative distribution. Exact decision boundaries for given type-I and type-II errors are derived by establishing a link with ruin theory. Information on the mean sample size of the test can be retrieved as well. The approach relies on the use of matrix-valued scale functions associated with a certain one-sided Markov additive process. By suitable transformations, the results also apply to other types of distributions, including some distributions with regularly varying tails.
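As a concrete anchor, here is a minimal sketch of Wald's classical SPRT for the simplest phase-type case, i.i.d. exponential observations. The rates lam0 and lam1 and the error targets alpha and beta are illustrative choices, and the boundaries are Wald's standard approximations, not the exact ruin-theoretic ones derived in the paper.

```python
import math
import random

def wald_sprt(samples, lam0, lam1, alpha=0.05, beta=0.05):
    """Classical Wald SPRT for H0: rate lam0 vs H1: rate lam1 on
    i.i.d. exponential observations. Uses Wald's approximate log
    boundaries rather than exact ones."""
    upper = math.log((1 - beta) / alpha)   # cross it -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross it -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log f1(x) - log f0(x) for exponential densities
        llr += (math.log(lam1) - lam1 * x) - (math.log(lam0) - lam0 * x)
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", len(samples)

random.seed(0)
data = [random.expovariate(1.0) for _ in range(1000)]  # true rate 1.0
decision, n = wald_sprt(data, lam0=1.0, lam1=2.0)
```

Because the log-likelihood ratio drifts downward when H0 is true, the test typically stops after only a handful of observations, illustrating the mean-sample-size savings the abstract alludes to.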
Comparison of Bayesian and frequentist group-sequential clinical trial designs
Background: There is a growing interest in the use of Bayesian adaptive designs in late-phase clinical trials. This
includes the use of stopping rules based on Bayesian analyses in which the frequentist type I error rate is controlled as
in frequentist group-sequential designs.
Methods: This paper presents a practical comparison of Bayesian and frequentist group-sequential tests. Focussing
on the setting in which data can be summarised by normally distributed test statistics, we evaluate and compare
boundary values and operating characteristics.
Results: Although Bayesian and frequentist group-sequential approaches are based on fundamentally different
paradigms, in a single arm trial or two-arm comparative trial with a prior distribution specified for the treatment
difference, Bayesian and frequentist group-sequential tests can have identical stopping rules if particular critical values
with which the posterior probability is compared or particular spending function values are chosen. If the Bayesian
critical values at different looks are restricted to be equal, O’Brien and Fleming’s design corresponds to a Bayesian
design with an exceptionally informative negative prior, Pocock’s design to a Bayesian design with a non-informative
prior and frequentist designs with a linear alpha spending function are very similar to Bayesian designs with slightly
informative priors.
This contrasts with the setting of a comparative trial with independent prior distributions specified for treatment
effects in different groups. In this case Bayesian and frequentist group-sequential tests cannot have the same
stopping rule as the Bayesian stopping rule depends on the observed means in the two groups and not just on their
difference. In this setting the Bayesian test can only be guaranteed to control the type I error for a specified range of
values of the control group treatment effect.
Conclusions: Comparison of frequentist and Bayesian designs can encourage careful thought about design
parameters and help to ensure appropriate design choices are made.
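The single-prior correspondence described in the results can be checked numerically. In the sketch below (made-up values for sigma, n and xbar; a hypothetical single-arm normal setting), a very flat normal prior makes the posterior probability of a positive treatment effect equal to Phi(z), so comparing it with a constant critical value is the same as comparing the z-statistic with a constant boundary, as in Pocock's design.

```python
from math import sqrt
from statistics import NormalDist

Phi = NormalDist().cdf

def posterior_prob_positive(xbar, n, sigma, tau):
    """P(delta > 0 | data) when delta ~ N(0, tau^2) a priori and
    xbar ~ N(delta, sigma^2 / n).  Standard conjugate-normal update."""
    post_var = 1.0 / (1.0 / tau**2 + n / sigma**2)
    post_mean = post_var * (n / sigma**2) * xbar
    return 1.0 - NormalDist(post_mean, sqrt(post_var)).cdf(0.0)

# Illustrative data: known sigma, 25 observations, observed mean 0.4.
sigma, n, xbar = 1.0, 25, 0.4
z = xbar * sqrt(n) / sigma              # frequentist test statistic
flat = posterior_prob_positive(xbar, n, sigma, tau=1e6)  # near-flat prior
```

With tau large, flat agrees with Phi(z) to numerical precision, so a Bayesian rule "stop if the posterior probability exceeds c" reproduces the frequentist rule "stop if z exceeds Phi^{-1}(c)".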
CompILE: Compositional Imitation Learning and Execution
We introduce Compositional Imitation Learning and Execution (CompILE): a
framework for learning reusable, variable-length segments of
hierarchically-structured behavior from demonstration data. CompILE uses a
novel unsupervised, fully-differentiable sequence segmentation module to learn
latent encodings of sequential data that can be re-composed and executed to
perform new tasks. Once trained, our model generalizes to sequences of longer
length and from environment instances not seen during training. We evaluate
CompILE in a challenging 2D multi-task environment and a continuous control
task, and show that it can find correct task boundaries and event encodings in
an unsupervised manner. Latent codes and associated behavior policies
discovered by CompILE can be used by a hierarchical agent, where the high-level
policy selects actions in the latent code space, and the low-level,
task-specific policies are simply the learned decoders. We found that our
CompILE-based agent could learn given only sparse rewards, where agents without
task-specific policies struggle.
Comment: ICML (2019)
Unified Approaches for Frequentist and Bayesian Methods in Two-Sample Clinical Trials with Binary Endpoints
Two opposing paradigms, analyses via frequentist or Bayesian methods, dominate the statistical literature. Most commonly, frequentist approaches have been used to design and analyze clinical trials, though Bayesian techniques are becoming increasingly popular. However, these two paradigms can generate divergent results even in analyses of the same trial data, which may harm the scientific interpretability of the trial. Therefore, it is crucial to harmonize analyses under each approach. In this dissertation, novel unified approaches for one-sided frequentist and Bayesian hypothesis testing problems comparing two proportions in fixed-sample and group-sequential clinical trials are proposed. When a frequentist design with desired type I and II error rates is given, the unification is achieved by deriving specific Bayesian decision thresholds and sample sizes. Similarly, when a Bayesian design is given, the unification is achieved by deriving corresponding frequentist characteristics. In addition, theoretical methods to determine the Bayesian decision threshold, sample size and power are provided. A numerical study shows that the unified approach can yield the same type I and II error rates for frequentist and Bayesian hypothesis tests. Further, detailed evaluations suggest that Bayesian prior specifications, allocation ratios and the number of analyses can affect the resulting Bayesian sample sizes and decision thresholds. Overall, the unified approach can be adopted in the current clinical trial setting and helps to make trial results translatable between frequentist and Bayesian methods.
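The Bayesian side of such a unified design reduces to comparing a posterior probability with a calibrated threshold. A minimal sketch of that posterior quantity, assuming independent Beta(1, 1) priors and made-up trial counts (the calibration of the threshold itself is the dissertation's contribution and is not reproduced here):

```python
import random

random.seed(1)

def prob_p1_gt_p2(x1, n1, x2, n2, a=1.0, b=1.0, draws=100_000):
    """Monte Carlo estimate of P(p1 > p2 | data) under independent
    Beta(a, b) priors on the two response probabilities; this is the
    posterior quantity a unified design compares with its threshold."""
    beta = random.betavariate
    hits = sum(
        beta(a + x1, b + n1 - x1) > beta(a + x2, b + n2 - x2)
        for _ in range(draws)
    )
    return hits / draws

# Made-up counts: 30/50 responses in one arm vs 20/50 in the other.
prob = prob_p1_gt_p2(x1=30, n1=50, x2=20, n2=50)
```

Declaring success when prob exceeds a threshold chosen to match a frequentist design's operating characteristics is what makes the two analyses agree.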
Adaptive Sensing for Estimation of Structured Sparse Signals
In many practical settings one can sequentially and adaptively guide the
collection of future data, based on information extracted from data collected
previously. These sequential data collection procedures are known by different
names, such as sequential experimental design, active learning or adaptive
sensing/sampling. The intricate relation between data analysis and acquisition
in adaptive sensing paradigms can be extremely powerful, and often allows for
reliable signal estimation and detection in situations where non-adaptive
sensing would fail dramatically.
In this work we investigate the problem of estimating the support of a
structured sparse signal from coordinate-wise observations under the adaptive
sensing paradigm. We present a general procedure for support set estimation
that is optimal in a variety of cases, and show that through the use of
adaptive sensing one can: (i) mitigate the effect of observation noise when
compared to non-adaptive sensing and, (ii) capitalize on structural information
to a much larger extent than is possible with non-adaptive sensing. In addition
to a general procedure for performing adaptive sensing in structured settings,
we present both performance upper bounds and corresponding lower bounds for the
two sensing paradigms.
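A toy two-stage version of this idea conveys the flavor: spend a cheap first pass on every coordinate, then refocus the measurement budget on the survivors. The dimensions, amplitude, budget split and thresholds below are all made up for illustration; they are not the procedure from the paper.

```python
import math
import random

random.seed(2)

n, k, mu = 1000, 10, 3.0          # dimension, sparsity, amplitude (illustrative)
support = set(range(k))           # hypothetical signal support
x = [mu if i in support else 0.0 for i in range(n)]

def measure(i, m):
    """Average of m unit-variance noisy looks at coordinate i."""
    return x[i] + random.gauss(0.0, 1.0) / math.sqrt(m)

# Stage 1: one cheap look everywhere; discard clearly-null coordinates.
survivors = [i for i in range(n) if measure(i, 1) > 0.0]

# Stage 2: reinvest the freed-up budget in repeated looks at survivors,
# thresholding halfway up the (assumed known) amplitude.
estimate = {i for i in survivors if measure(i, 4) > mu / 2}
```

Because roughly half the null coordinates are eliminated for free in stage 1, the second-stage looks at the true support are effectively four times more precise, which is the noise-mitigation effect the abstract describes.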
Substructure and Boundary Modeling for Continuous Action Recognition
This paper introduces a probabilistic graphical model for continuous action
recognition with two novel components: substructure transition model and
discriminative boundary model. The first component encodes the sparse and
global temporal transition prior between action primitives in state-space model
to handle the large spatial-temporal variations within an action class. The
second component enforces the action duration constraint in a discriminative
way to locate the transition boundaries between actions more accurately. The
two components are integrated into a unified graphical structure to enable
effective training and inference. Our comprehensive experimental results on
both public and in-house datasets show that, with the capability to incorporate
additional information that had not been explicitly or efficiently modeled by
previous methods, our proposed algorithm achieved significantly improved
performance for continuous action recognition.
Comment: Detailed version of the CVPR 2012 paper. 15 pages, 6 figures
Importance sampling large deviations in nonequilibrium steady states. I
Large deviation functions contain information on the stability and response
of systems driven into nonequilibrium steady states, and in such a way are
similar to free energies for systems at equilibrium. As with equilibrium free
energies, evaluating large deviation functions numerically for all but the
simplest systems is difficult, because by construction they depend on
exponentially rare events. In this first paper of a series, we evaluate
different trajectory-based sampling methods capable of computing large
deviation functions of time integrated observables within nonequilibrium steady
states. We illustrate some convergence criteria and best practices using a
number of different models, including a biased Brownian walker, a driven
lattice gas, and a model of self-assembly. We show how two popular methods for
sampling trajectory ensembles, transition path sampling and diffusion Monte
Carlo, suffer from exponentially diverging correlations in trajectory space as
a function of the bias parameter when estimating large deviation functions.
Improving the efficiencies of these algorithms requires introducing guiding
functions for the trajectories.
Comment: Published in JC
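The rare-event difficulty is easy to reproduce in a toy setting. The naive estimator below, for an unbiased +/-1 walker with illustrative parameters, works at small bias s, but its variance grows exponentially in s and the trajectory length; that divergence is what motivates the guided trajectory-sampling methods evaluated in the paper.

```python
import math
import random

random.seed(3)

def scgf_direct(s, n_steps, n_traj):
    """Naive Monte Carlo estimate of the scaled cumulant generating
    function lambda(s) = (1/N) log E[exp(s * X_N)] for an unbiased
    +/-1 random walker, with X_N the endpoint after N steps."""
    total = 0.0
    for _ in range(n_traj):
        x = sum(random.choice((-1, 1)) for _ in range(n_steps))
        total += math.exp(s * x)
    return math.log(total / n_traj) / n_steps

# For i.i.d. steps the exact answer factorizes: lambda(s) = log cosh(s).
s = 0.2
est = scgf_direct(s, n_steps=50, n_traj=20_000)
exact = math.log(math.cosh(s))
```

Already at moderate s the average is dominated by exponentially rare trajectories with large X_N, and the same direct estimator fails badly, mirroring the diverging trajectory-space correlations reported for transition path sampling and diffusion Monte Carlo.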