A Semantic Framework for Test Coverage
Since testing is inherently incomplete, test selection is of vital importance. Coverage measures evaluate the quality of a test suite and help the tester select test cases with maximal impact at minimum cost. Existing coverage criteria for test suites are usually defined in terms of syntactic characteristics of the implementation under test or its specification. Typical black-box coverage metrics are state and transition coverage of the specification. White-box testing often considers statement, condition and path coverage. A disadvantage of this syntactic approach is that different coverage figures are assigned to systems that are behaviorally equivalent, but syntactically different. Moreover, those coverage metrics do not take into account that certain failures are more severe than others, and that more testing effort should be devoted to uncovering the most important bugs, while less critical system parts can be tested less thoroughly. This paper introduces a semantic approach to test coverage. Our starting point is a weighted fault model, which assigns a weight to each potential error in an implementation. We define a framework of coverage measures that express how well a test suite covers such a specification, taking the error weights into account. Since our notions are semantic, they are insensitive to replacing a specification by one with equivalent behaviour. We present several algorithms that, given a certain minimality criterion, compute a minimal test suite with maximal coverage. These algorithms work on a syntactic representation of weighted fault models as fault automata. They are based on existing and novel optimization
problems. Finally, we illustrate our approach by analyzing and comparing a number of test suites for a chat protocol.
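The weighted-coverage idea can be illustrated with a small sketch. This is not the paper's formal framework or its fault automata; the fault names, weights, and test-to-fault mapping below are all invented for illustration, loosely themed on the chat-protocol example.

```python
# Toy weighted fault model: each potential error carries a weight
# reflecting its severity (values are invented).
FAULT_WEIGHTS = {
    "lost_message": 10.0,   # severe: protocol violation
    "wrong_nickname": 2.0,  # cosmetic
    "late_ack": 1.0,        # minor timing issue
}

# Hypothetical mapping from each test to the faults it can uncover.
TESTS = {
    "t_send_receive": {"lost_message"},
    "t_join_chat": {"wrong_nickname", "late_ack"},
}

def coverage(test_suite):
    """Weighted coverage: weight of detectable faults / total weight."""
    detected = set().union(*(TESTS[t] for t in test_suite))
    total = sum(FAULT_WEIGHTS.values())
    return sum(FAULT_WEIGHTS[f] for f in detected) / total

print(coverage({"t_send_receive"}))                 # 10/13, ~0.769
print(coverage({"t_send_receive", "t_join_chat"}))  # 1.0
```

Under such a measure, two behaviourally equivalent specifications induce the same fault weights and hence the same coverage figure, which is the property the syntactic metrics lack.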
A Semantic Framework for Test Coverage (Extended Version)
Coverage statistics for sequence census methods
Background: We study the statistical properties of fragment coverage in
genome sequencing experiments. In an extension of the classic Lander-Waterman
model, we consider the effect of the length distribution of fragments. We also
introduce the notion of the shape of a coverage function, which can be used to
detect aberrations in coverage. The probability theory underlying these
problems is essential for constructing models of current high-throughput
sequencing experiments, where both sample preparation protocols and sequencing
technology particulars can affect fragment length distributions.
Results: We show that regardless of fragment length distribution and under
the mild assumption that fragment start sites are Poisson distributed, the
fragments produced in a sequencing experiment can be viewed as resulting from a
two-dimensional spatial Poisson process. We then study the jump skeleton of
the coverage function, and show that the induced trees are Galton-Watson trees
whose parameters can be computed.
Conclusions: Our results extend standard analyses of shotgun sequencing that
focus on coverage statistics at individual sites, and provide a null model for
detecting deviations from random coverage in high-throughput sequence census
based experiments. By focusing on fragments, we are also led to a new approach
for visualizing sequencing data that should be of independent interest.
Comment: 10 pages, 4 figures
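The Poisson model of fragment starts can be simulated in a few lines. This is a minimal sketch, not the paper's full model: the genome length, start rate, and length distribution below are illustrative, and Poisson starts are approximated by an independent Bernoulli trial per base.

```python
import random

random.seed(0)
G = 10_000                  # genome length (illustrative)
RATE = 0.01                 # expected fragment starts per base
LENGTHS = (200, 300, 400)   # toy fragment-length distribution (uniform)

# Coverage at a base = number of fragments overlapping it.
coverage = [0] * G
for pos in range(G):
    if random.random() < RATE:            # ~Poisson start at this base
        length = random.choice(LENGTHS)
        for i in range(pos, min(pos + length, G)):
            coverage[i] += 1

# Lander-Waterman-style check: mean coverage is close to
# (start rate) * (mean fragment length).
expected = RATE * sum(LENGTHS) / len(LENGTHS)
print(sum(coverage) / G, expected)
```

Varying the length distribution while keeping its mean fixed leaves the mean coverage unchanged but alters the shape of the coverage function, which is the effect the abstract's jump-skeleton analysis captures.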
Fast, exact CMB power spectrum estimation for a certain class of observational strategies
We describe a class of observational strategies for probing the anisotropies
in the cosmic microwave background (CMB) where the instrument scans on rings
which can be combined into an n-torus, the "ring torus". This class has the
remarkable property that it allows exact maximum likelihood power spectrum
estimation in of order N^2 operations (if the size of the data set is N)
under circumstances which would previously have made this analysis intractable:
correlated receiver noise, arbitrary asymmetric beam shapes and far side lobes,
non-uniform distribution of integration time on the sky and partial sky
coverage. This ease of computation gives us an important theoretical tool for
understanding the impact of instrumental effects on CMB observables and hence
for the design and analysis of the CMB observations of the future. There are
members of this class which closely approximate the MAP and Planck satellite
missions. We present a numerical example where we apply our ring torus methods
to a simulated data set from a CMB mission covering a 20 degree patch on the
sky to compute the maximum likelihood estimate of the power spectrum
with unprecedented efficiency.
Comment: RevTeX, 14 pages, 5 figures. A full resolution version of Figure 1
and additional materials are at http://feynman.princeton.edu/~bwandelt/RT
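A toy linear-algebra fact behind this kind of speed-up can be shown directly (our illustration, not the paper's algorithm): stationary noise on a single ring gives a circulant covariance matrix, and every circulant matrix is diagonalised by the discrete Fourier transform, so likelihood computations decouple mode by mode instead of requiring a dense matrix inversion. The covariance row below is invented.

```python
import cmath

def dft(row):
    """Unnormalised discrete Fourier transform of a real sequence."""
    n = len(row)
    return [sum(row[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                for j in range(n)) for k in range(n)]

def circulant_times(row, v):
    """Multiply the circulant matrix with first row `row` by vector v."""
    n = len(row)
    return [sum(row[(j - i) % n] * v[j] for j in range(n)) for i in range(n)]

# Stationary ring covariance: symmetric first row, correlation decaying
# with ring distance (values illustrative).
n = 8
row = [4.0, 1.5, 0.5, 0.1, 0.0, 0.1, 0.5, 1.5]

# Eigenvalues of a symmetric circulant matrix are the (real) DFT of its
# first row; each Fourier mode v_k is an eigenvector.
eigs = dft(row)
for k in range(n):
    vk = [cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)]
    cv = circulant_times(row, vk)
    assert all(abs(cv[j] - eigs[k] * vk[j]) < 1e-9 for j in range(n))

print([round(e.real, 3) for e in eigs])
```

The ring-torus strategy exploits analogous symmetry structure in the full sky-map covariance, which is what turns an otherwise intractable maximum likelihood analysis into an N^2-scale computation.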