Testing based on the RELAY model of error detection
RELAY, a model for error detection, defines revealing conditions that guarantee that a fault originates an error during execution and that the error transfers through computations and data flow until it is revealed. This model of error detection provides a fault-based criterion for test data selection. The model is applied by choosing a fault classification, instantiating the conditions for the classes of faults, and applying them to the program being tested. Such an application guarantees the detection of errors caused by any fault of the chosen classes. As a formal model of error detection, RELAY provides the basis for an automated testing tool. This paper presents the concepts behind RELAY, describes why it is better than other fault-based testing criteria, and discusses how RELAY could be used as the foundation for a testing system.
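The origination and transfer conditions described above can be sketched for one concrete fault class. This is a minimal illustration, assuming a single relational-operator fault ("<" mutated to "<="); the function names are illustrative, not part of the RELAY paper.

```python
# RELAY-style revealing conditions for a hypothesized relational-operator
# fault: the correct expression "x < y" mutated to "x <= y".

def correct(x, y):
    return 1 if x < y else 0      # correct program

def faulty(x, y):
    return 1 if x <= y else 0     # hypothesized fault: "<" replaced by "<="

def originates_error(x, y):
    # Origination condition: the faulty subexpression must evaluate
    # differently from the correct one -- here, only when x == y.
    return (x < y) != (x <= y)

def reveals_fault(x, y):
    # Transfer/revelation: the originated error must survive through the
    # rest of the computation and appear in the program's output.
    return correct(x, y) != faulty(x, y)

# Test data selected to satisfy the revealing condition (x == y):
assert originates_error(3, 3) and reveals_fault(3, 3)
# Test data that cannot reveal this fault class:
assert not reveals_fault(2, 5)
```

Instantiating such conditions for every fault in a chosen class is what lets a RELAY-based tool guarantee detection of that class.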
A Meta-Learning Approach to One-Step Active Learning
We consider the problem of learning when obtaining the training labels is
costly, which is usually tackled in the literature using active-learning
techniques. These approaches provide strategies to choose the examples to label
before or during training. These strategies are usually based on heuristics or
even theoretical measures, but are not learned as they are directly used during
training. We design a model which aims at \textit{learning active-learning
strategies} using a meta-learning setting. More specifically, we consider a
pool-based setting, where the system observes all the examples of the dataset
of a problem and has to choose the subset of examples to label in a single
shot. Experiments show encouraging results.
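The pool-based, single-shot setting can be made concrete with a conventional uncertainty heuristic, the kind of hand-crafted strategy the abstract contrasts with a learned one. This is a sketch, not the paper's meta-learned selector; the function name and scoring rule are assumptions.

```python
# Single-shot, pool-based selection: the system observes the whole
# unlabeled pool and must choose a budget-sized subset to label at once.
# Here the score is a standard uncertainty baseline for a binary
# classifier: distance of the predicted probability from 0.5.

def one_shot_selection(pool_probs, budget):
    """Pick `budget` pool indices whose predicted class probability is
    closest to 0.5 (most uncertain)."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: abs(pool_probs[i] - 0.5))
    return sorted(ranked[:budget])

probs = [0.95, 0.48, 0.10, 0.55, 0.80]
print(one_shot_selection(probs, 2))  # -> [1, 3]
```

The meta-learning approach replaces this fixed scoring rule with a strategy trained across many related labeling problems.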
Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem
This paper studies the multiplicity-correction effect of standard Bayesian
variable-selection priors in linear regression. Our first goal is to clarify
when, and how, multiplicity correction happens automatically in Bayesian
analysis, and to distinguish this correction from the Bayesian Ockham's-razor
effect. Our second goal is to contrast empirical-Bayes and fully Bayesian
approaches to variable selection through examples, theoretical results and
simulations. Considerable differences between the two approaches are found. In
particular, we prove a theorem that characterizes a surprising asymptotic
discrepancy between fully Bayes and empirical Bayes. This discrepancy arises
from a different source than the failure to account for hyperparameter
uncertainty in the empirical-Bayes estimate. Indeed, even at the extreme, when
the empirical-Bayes estimate converges asymptotically to the true
variable-inclusion probability, the potential for a serious difference remains.

Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics at http://dx.doi.org/10.1214/10-AOS792
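The automatic multiplicity correction the abstract refers to can be shown in a few lines. Under a standard fully Bayesian variable-selection prior, each of p variables is included independently with probability w, and w itself gets a uniform Beta(1, 1) prior; integrating w out gives each specific model of size k prior mass 1 / ((p + 1) * C(p, k)), so the prior odds against any particular model grow with p. This sketch assumes that standard prior; the function name is illustrative.

```python
from math import comb

def model_prior(k, p):
    """Prior probability of one specific model including k of p variables,
    after integrating out the inclusion probability w ~ Beta(1, 1)."""
    return 1.0 / ((p + 1) * comb(p, k))

# Prior odds of a given one-variable model versus the null model shrink
# as the number of candidate variables p grows: 1/p.
for p in (5, 50, 500):
    print(p, model_prior(1, p) / model_prior(0, p))  # 0.2, 0.02, 0.002
```

This shrinking prior odds ratio is the multiplicity penalty: adding more candidate variables makes every individual model less probable a priori, without any explicit correction term.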