Improved techniques for software testing based on Markov chain usage models
In statistical testing of software, all possible uses of the software, at some level of abstraction, are represented by a statistical model in which each possible use has an associated probability of occurrence [16]. Test cases are drawn from the population of possible uses according to the usage distribution and run against the software under test. Various statistics of interest, such as the estimated failure rate and the mean time to failure of the software, are computed. The testing performed is evaluated relative to the population of uses to determine whether or not to stop testing.
The model used to represent use of the software in this work is a finite-state, time-homogeneous, discrete-parameter, irreducible Markov chain [10]. In a Markov chain usage model the states of use of the software are represented as states in the Markov chain [1, 18, 21, 22]. User actions are represented as state transitions in the Markov chain. The probability of a user performing a certain action, given that the software is in a particular state of use, is represented by the associated transition probability in the Markov chain. The usage model always contains two special states, the (Invoke) state and the (Terminate) state. The (Invoke) state represents the software prior to invocation, and the (Terminate) state represents the software after execution has ceased. All test cases start from the (Invoke) state and end in the (Terminate) state.
Given a Markov chain based usage model, it is possible to analytically compute a number of statistics useful for validation of the model, test planning, test monitoring, and evaluation of the software under test. Example statistics include the expected test case length and its variance, the probability of a state or arc appearing in a test case, and the long-run probability of the software being in a certain state of use [22]. Test cases are randomly generated from the usage model, i.e., randomly sampled according to the usage distribution. These test cases are run against the software, and estimates of reliability are computed.
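As a concrete illustration of these ideas, the sketch below builds a small hypothetical four-state usage model (the states, probabilities, and variable names are illustrative, not taken from the cited work), generates a random test case by walking from (Invoke) to (Terminate), and computes the expected test case length analytically via the absorbing-chain fundamental matrix:

```python
# A minimal sketch of a Markov chain usage model (hypothetical 4-state model).
import random
import numpy as np

# States: 0=Invoke, 1=Menu, 2=Edit, 3=Terminate (absorbing).
P = np.array([
    [0.0, 1.0, 0.0, 0.0],   # Invoke always enters Menu
    [0.0, 0.1, 0.6, 0.3],   # from Menu: stay, go to Edit, or quit
    [0.0, 0.7, 0.1, 0.2],   # from Edit: back to Menu, stay, or quit
    [0.0, 0.0, 0.0, 1.0],   # Terminate is absorbing
])

def generate_test_case(rng=random):
    """Randomly walk from Invoke to Terminate, recording the visited states."""
    state, path = 0, [0]
    while state != 3:
        state = rng.choices(range(4), weights=P[state])[0]
        path.append(state)
    return path

# Expected test case length: with Q the transient-to-transient block,
# t = (I - Q)^{-1} 1 gives the expected remaining transitions per state.
Q = P[:3, :3]
t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print("expected transitions from Invoke:", t[0])
```

Running many such walks and checking each against the software's specification yields the failure-rate estimates the text describes.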
A Simpler and More Direct Derivation of System Reliability Using Markov Chain Usage Models
Markov chain usage-based statistical testing has been in use for more than two decades and has proved sound and effective in providing audit trails of evidence in certifying software-intensive systems. The system end-to-end reliability is derived analytically in closed form, following an arc-based Bayesian model. System reliability is represented by an important statistic called single use reliability, defined as the probability of a randomly selected use being successful. This paper reviews the analytical derivation of the single use reliability mean, and proposes a simpler, faster, and more direct way to compute the expected value that also yields an intuitive explanation. The new derivation is illustrated with two examples.
On A Simpler and Faster Derivation of Single Use Reliability Mean and Variance for Model-Based Statistical Testing
Markov chain usage-based statistical testing has proved sound and effective in providing audit trails of evidence in certifying software-intensive systems. The system end-to-end reliability is derived analytically in closed form, following an arc-based Bayesian model. System reliability is represented by an important statistic called single use reliability, defined as the probability of a randomly selected use being successful. This paper continues our earlier work on a simpler and faster derivation of the single use reliability mean, and proposes a new derivation of the single use reliability variance by applying a well-known theorem and eliminating the need to compute the second moments of arc failure probabilities. Our new results complete an analysis that is simpler, faster, and more direct, while also rendering a more intuitive explanation. The theory is illustrated with three simple Markov chain usage models with manual derivations and experimental results.
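One way to see the flavour of such a closed-form mean (a sketch under our own assumptions, not the papers' exact derivation): if each arc carries a fixed success probability, the single use reliability R(s) from each state satisfies a linear system, with R = 1 at (Terminate). The model and numbers below are hypothetical:

```python
# Illustrative single-use reliability mean: the probability that a randomly
# generated use (walk from Invoke to Terminate) encounters no arc failure.
# R[s] = sum over arcs a out of s of  p(a) * r(a) * R[dest(a)].
import numpy as np

# Hypothetical 3-state model: 0=Invoke, 1=Use, 2=Terminate.
P = np.array([[0.0, 1.0, 0.0],    # transition probabilities
              [0.0, 0.4, 0.6],
              [0.0, 0.0, 1.0]])
r = np.array([[1.0, 0.99, 1.0],   # per-arc success probabilities
              [1.0, 0.95, 0.98],  # (1.0 where no arc exists)
              [1.0, 1.0, 1.0]])

W = P * r                      # elementwise: transition prob * arc reliability
A = np.eye(2) - W[:2, :2]      # transient-to-transient part
b = W[:2, 2]                   # mass flowing directly to Terminate
R = np.linalg.solve(A, b)      # R[0] is the single-use reliability mean
print("single-use reliability mean:", R[0])
```

Solving one small linear system replaces summing over the infinitely many possible walks, which is what makes a direct closed-form derivation attractive.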
A Markov Chain state transition approach to establishing critical phases for AUV reliability
The deployment of complex autonomous underwater platforms for marine science comprises a series of sequential steps, each critical to the success of the mission. In this paper we present a state transition approach, in the form of a Markov chain, which models the sequence of steps from pre-launch to operation to recovery. The aim is to identify the states and state transitions that present higher risk to the vehicle, and hence to the mission, based on evidence and judgment. Developing a Markov chain consists of two separate tasks: the first defines the structure that encodes the sequence of events; the second assigns probabilities to each possible transition. Our model comprises eleven discrete states and includes distance-dependent underway survival statistics. The integration of the Markov model with underway survival statistics allows us to quantify the likelihood of success during each state and transition, and consequently the likelihood of achieving the desired mission goals. To illustrate this generic process, the fault history of the Autosub3 autonomous underwater vehicle provides the information for the different phases of operation. The method proposed here adds more detail to previous analyses; faults are discriminated according to the phase of the mission in which they took place.
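The two-task construction described above (define the phase structure, then assign transition probabilities) can be sketched as a small absorbing Markov chain. The phases and probabilities here are hypothetical placeholders, not Autosub3's eleven states or its actual fault statistics:

```python
# Toy phase-based mission model with two absorbing states:
# mission complete vs. vehicle lost. All numbers are made up for illustration.
import numpy as np

# 0=pre-launch, 1=descent, 2=survey, 3=recovery, 4=complete, 5=lost
P = np.zeros((6, 6))
P[0, 1], P[0, 5] = 0.98, 0.02   # launch succeeds or vehicle is lost
P[1, 2], P[1, 5] = 0.97, 0.03
P[2, 3], P[2, 5] = 0.95, 0.05   # survey phase carries the highest risk here
P[3, 4], P[3, 5] = 0.99, 0.01
P[4, 4] = P[5, 5] = 1.0          # absorbing outcomes

Q, Rmat = P[:4, :4], P[:4, 4:]            # transient / absorption blocks
B = np.linalg.solve(np.eye(4) - Q, Rmat)  # B[i, j] = P(absorbed in j | start i)
print("P(mission complete):", B[0, 0])
```

Inspecting which phase's probability most degrades B[0, 0] is one simple way to flag the "critical phases" the paper's approach aims to identify.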
Computational statistics using the Bayesian Inference Engine
This paper introduces the Bayesian Inference Engine (BIE), a general
parallel, optimised software package for parameter inference and model
selection. This package is motivated by the analysis needs of modern
astronomical surveys and the need to organise and reuse expensive derived data.
The BIE is the first platform for computational statistics designed explicitly
to enable Bayesian update and model comparison for astronomical problems.
Bayesian update is based on the representation of high-dimensional posterior
distributions using metric-ball-tree based kernel density estimation. Among its
algorithmic offerings, the BIE emphasises hybrid tempered MCMC schemes that
robustly sample multimodal posterior distributions in high-dimensional
parameter spaces. Moreover, the BIE implements a full persistence or
serialisation system that stores the full byte-level image of the running
inference and previously characterised posterior distributions for later use.
Two new algorithms to compute the marginal likelihood from the posterior
distribution, developed for and implemented in the BIE, enable model comparison
for complex models and data sets. Finally, the BIE was designed to be a
collaborative platform for applying Bayesian methodology to astronomy. It
includes an extensible, object-oriented framework that implements every
aspect of Bayesian inference. By providing a variety of
statistical algorithms for all phases of the inference problem, a scientist may
explore a variety of approaches with a single model and data implementation.
Additional technical details and download details are available from
http://www.astro.umass.edu/bie. The BIE is distributed under the GNU GPL.
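The tempered MCMC schemes the abstract emphasises can be illustrated with a toy parallel-tempering sampler. This is a rough Python sketch of the general idea on a synthetic bimodal target, with made-up temperatures; it is not BIE code or its API:

```python
# Toy parallel tempering: hot chains cross between modes easily, and swap
# moves let the cold (beta=1) chain inherit those mode jumps.
import math
import random

def log_post(x):
    """Synthetic bimodal target: mixture of two well-separated Gaussians."""
    return math.log(math.exp(-0.5 * (x + 4) ** 2) +
                    math.exp(-0.5 * (x - 4) ** 2))

betas = [1.0, 0.5, 0.25, 0.1]      # inverse temperatures; beta=1 is the target
chains = [0.0] * len(betas)
rng = random.Random(1)
samples = []

for step in range(20000):
    # Within-chain Metropolis update at each temperature.
    for i, beta in enumerate(betas):
        prop = chains[i] + rng.gauss(0, 1.5)
        if math.log(rng.random()) < beta * (log_post(prop) - log_post(chains[i])):
            chains[i] = prop
    # Propose swapping states between neighbouring temperatures.
    i = rng.randrange(len(betas) - 1)
    d = (betas[i] - betas[i + 1]) * (log_post(chains[i + 1]) - log_post(chains[i]))
    if math.log(rng.random()) < d:
        chains[i], chains[i + 1] = chains[i + 1], chains[i]
    samples.append(chains[0])      # record only the cold chain

print("visited left mode:", any(s < -2 for s in samples))
print("visited right mode:", any(s > 2 for s in samples))
```

A single Metropolis chain at beta=1 would rarely cross the low-probability barrier between the modes; the temperature ladder is what makes the multimodal posterior tractable.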
Exact goodness-of-fit testing for the Ising model
The Ising model is one of the simplest and most famous models of interacting
systems. It was originally proposed to model ferromagnetic interactions in
statistical physics and is now widely used to model spatial processes in many
areas such as ecology, sociology, and genetics, usually without testing its
goodness of fit. Here, we propose various test statistics and an exact
goodness-of-fit test for the finite-lattice Ising model. The theory of Markov
bases has been developed in algebraic statistics for exact goodness-of-fit
testing using a Monte Carlo approach. However, finding a Markov basis is often
computationally intractable. Thus, we develop a Monte Carlo method for exact
goodness-of-fit testing for the Ising model which avoids computing a Markov
basis and also leads to a better connectivity of the Markov chain and hence to
a faster convergence. We show how this method can be applied to analyze the
spatial organization of receptors on the cell membrane.
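To make the exact-testing idea concrete: conditionally on the Ising model's sufficient statistics (magnetization and neighbour agreement), all lattice configurations in the same fiber are equally likely, so an exact p-value is a simple count. On a lattice tiny enough, the fiber can be enumerated by brute force, sidestepping both the Markov basis and the Monte Carlo machinery; the statistic and observed configuration below are our own illustrative choices:

```python
# Toy exact conditional test on a 3x3 Ising lattice by full enumeration of
# the fiber (feasible only at this size; the paper's Monte Carlo method is
# what makes realistic lattices tractable without a Markov basis).
import itertools

N = 3
EDGES = [((i, j), (i, j + 1)) for i in range(N) for j in range(N - 1)] + \
        [((i, j), (i + 1, j)) for i in range(N - 1) for j in range(N)]

def suff_stats(s):
    """Ising sufficient statistics: magnetization and edge agreement."""
    return sum(s.values()), sum(s[a] * s[b] for a, b in EDGES)

def diag_stat(s):
    """A test statistic NOT determined by the sufficient statistics:
    agreement along diagonal neighbours."""
    return sum(s[i, j] * s[i + 1, j + 1]
               for i in range(N - 1) for j in range(N - 1))

sites = [(i, j) for i in range(N) for j in range(N)]
observed = dict(zip(sites, [1, 1, -1, 1, -1, -1, -1, 1, 1]))  # example data

# Under the model, the conditional distribution given the sufficient
# statistics is uniform over the fiber, so the exact p-value is a count.
fiber = []
for spins in itertools.product([-1, 1], repeat=len(sites)):
    s = dict(zip(sites, spins))
    if suff_stats(s) == suff_stats(observed):
        fiber.append(diag_stat(s))
obs = diag_stat(observed)
p_value = sum(1 for t in fiber if t >= obs) / len(fiber)
print("fiber size:", len(fiber), "p-value:", p_value)
```

The Markov basis (or the paper's basis-free chain) is only needed because this enumeration explodes combinatorially: a Monte Carlo walk over the fiber replaces the exhaustive count with a sampled one.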