Robust Dynamic Selection of Tested Modules in Software Testing for Maximizing Delivered Reliability
Software testing aims to improve the reliability of the software delivered to users. Delivered reliability is the reliability of the software in use after it has been delivered. The software usually consists of many modules, so the delivered reliability depends on the operational profile, which specifies how users will exercise these modules, as well as on the number of defects remaining in each module. A good testing policy should therefore take the operational profile into account and dynamically select which modules to test according to the current state of the software during the testing process. This paper discusses how to dynamically select tested modules in order to maximize delivered reliability, formulating the selection problem as a dynamic programming problem. Because the testing process is performed only once, risk must be considered during testing; we describe it through the tester's utility function. Moreover, since the tester usually has no accurate estimate of the operational profile, we employ robust optimization techniques to analyze the selection problem in the worst case over a given uncertainty set of operational profiles. Numerical examples show the necessity of maximizing delivered reliability directly and of using robust optimization when the tester has no clear idea of the operational profile. They also show that the risk-averse behavior of the tester has a major influence on the delivered reliability.
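To make the dynamic-programming formulation concrete, here is a minimal sketch (not the paper's actual model): it assumes a toy system in which each test session on a module removes one defect with a fixed probability, per-module reliability decays exponentially in the remaining defect count, and the operational profile is known only to lie in a box uncertainty set; the tester's utility function (risk attitude) is omitted for brevity. All constants and bounds below are hypothetical.

```python
import math
from functools import lru_cache

THETA = 0.3        # assumed sensitivity of reliability to remaining defects
DETECT_P = 0.5     # assumed chance that one test session removes one defect
PROFILE_LO = (0.2, 0.1, 0.1)   # hypothetical lower bounds on usage frequencies
PROFILE_HI = (0.7, 0.6, 0.5)   # hypothetical upper bounds

def worst_case_reliability(defects):
    """Worst case of sum_i p_i * exp(-THETA * d_i) over the box uncertainty
    set with sum_i p_i = 1: the adversarial profile shifts usage toward the
    most defective (least reliable) modules."""
    rel = [math.exp(-THETA * d) for d in defects]
    p = list(PROFILE_LO)
    slack = 1.0 - sum(p)
    for i in sorted(range(len(rel)), key=lambda i: rel[i]):
        give = min(slack, PROFILE_HI[i] - p[i])
        p[i] += give
        slack -= give
    return sum(pi * ri for pi, ri in zip(p, rel))

@lru_cache(maxsize=None)
def value(defects, budget):
    """Best worst-case delivered reliability with `budget` sessions left."""
    if budget == 0:
        return worst_case_reliability(defects)
    skip = value(defects, budget - 1)      # outcome: no defect found
    best = -1.0
    for j in range(len(defects)):          # dynamic choice of tested module
        found = list(defects)
        if found[j] > 0:
            found[j] -= 1                  # outcome: one defect removed
        v = DETECT_P * value(tuple(found), budget - 1) + (1 - DETECT_P) * skip
        best = max(best, v)
    return best

print(value((3, 2, 4), 5))   # worst-case reliability after 5 test sessions
```

The worst-case step is a tiny linear program solved greedily: the adversarial profile pushes usage toward the most defective modules, which is exactly why a profile-agnostic policy can overstate delivered reliability.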
Model checking embedded system designs
We survey the basic principles behind the application of model checking to controller verification and synthesis. A promising development is the area of guided model checking, in which the state-space search strategy of the model checking algorithm can be influenced to visit more interesting sets of states first. In particular, we discuss how model checking can be combined with heuristic cost functions to guide search strategies. Finally, we list a number of current research developments, especially in the area of reachability analysis for optimal control and related issues.
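As a concrete illustration of guided search, the following sketch expands states best-first by a heuristic cost, so states judged closer to an error condition are visited early; the two-counter transition system and the Manhattan-distance heuristic are toy constructions invented for the example.

```python
import heapq

def successors(state):
    x, y = state                       # toy system: two bounded counters
    return [(min(x + 1, 9), y), (x, min(y + 1, 9)), (max(x - 1, 0), y)]

def is_error(state):
    return state == (7, 7)             # hypothetical unsafe state

def heuristic(state):
    return abs(7 - state[0]) + abs(7 - state[1])   # distance-to-error guide

def guided_reachability(init):
    frontier = [(heuristic(init), init, [init])]   # best-first by heuristic
    seen = {init}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if is_error(state):
            return path                # counterexample trace to the error
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
    return None                        # error state unreachable

print(guided_reachability((0, 0)))
```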
Bellman Error Based Feature Generation using Random Projections on Sparse Spaces
We address the problem of automatically generating features for value function approximation. Bellman Error Basis Functions (BEBFs) have been shown to reduce the error of policy evaluation with function approximation, with a convergence rate similar to that of value iteration. We propose a simple, fast, and robust algorithm based on random projections to generate BEBFs for sparse feature spaces. We provide a finite-sample analysis of the proposed method and prove that projections logarithmic in the dimension of the original space are enough to guarantee contraction in the error. Empirical results demonstrate the strength of this method.
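A rough picture of the iteration on synthetic data (not the authors' code or benchmarks): project the sparse features once through a Gaussian random matrix, fit the current temporal-difference (Bellman) error by ridge regression in the projected space, and add the fit to the value estimate in a boosting-style update with unit step size. The dimensions, discount factor, and regularizer below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, d, gamma = 2000, 5000, 60, 0.95   # samples, sparse dim, projected dim

# Synthetic sparse transition data: phi(s), reward, phi(s').
phi = (rng.random((n, D)) < 0.002).astype(float)
phi_next = (rng.random((n, D)) < 0.002).astype(float)
reward = rng.standard_normal(n)

P = rng.standard_normal((D, d)) / np.sqrt(d)   # one Gaussian random projection
X, X_next = phi @ P, phi_next @ P              # low-dimensional features

V = np.zeros(n)        # V(s) at the sampled states
V_next = np.zeros(n)   # V(s') at the sampled next states

for k in range(10):    # each pass generates one BEBF
    # Bellman (TD) error of the current value estimate.
    err = reward + gamma * V_next - V
    # Fit the error in the projected space (ridge regression for stability).
    w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(d), X.T @ err)
    # Boosting-style update: the fitted error becomes the next basis function.
    V += X @ w
    V_next += X_next @ w
    print(k, float(np.sqrt(np.mean(err ** 2))))   # Bellman-error magnitude
```

The projected dimension d needs to grow only logarithmically with the ambient dimension D, which is the source of the method's efficiency on sparse spaces.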
Efficient Parallel Statistical Model Checking of Biochemical Networks
We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand that rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the proposed methodology uses a stochastic simulation algorithm to generate execution samples; however, three key aspects improve its efficiency. First, sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval for the probability that P holds is estimated with an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed to run in parallel, and a prototype software tool has been implemented that performs the sampling/verification process in parallel on an HPC architecture.
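The sampling/estimation loop is easy to sketch. Below, a Bernoulli draw with a hidden parameter stands in for "simulate the network and check the LTL property on the resulting trace", batches are drawn in parallel worker processes, and sampling stops once the textbook Wilson score interval (the paper uses an efficient variant of it) is narrower than a target width; batch size, worker count, and threshold are arbitrary choices.

```python
import math, random
from concurrent.futures import ProcessPoolExecutor

def sample_batch(args):
    seed, batch = args
    rng = random.Random(seed)
    # Stand-in for: simulate the biochemical network, check property P.
    return sum(rng.random() < 0.3 for _ in range(batch))

def wilson(successes, n, z=1.96):
    """Textbook Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

if __name__ == "__main__":
    batch, eps, n, succ = 1000, 0.01, 0, 0
    with ProcessPoolExecutor() as pool:
        while True:
            # Four batches of simulations run in parallel per round.
            for s in pool.map(sample_batch, [(n + i, batch) for i in range(4)]):
                succ += s
                n += batch
            lo, hi = wilson(succ, n)
            if hi - lo < 2 * eps:     # stop once the interval is tight enough
                break
    print(f"P(property) in [{lo:.4f}, {hi:.4f}] after {n} samples")
```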
Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models
This document describes the new features in version 2.x of the tgp package for R, implementing treed Gaussian process (GP) models. The topics covered include methods for dealing with categorical inputs and excluding inputs from the tree or GP part of the model; fully Bayesian sensitivity analysis for inputs/covariates; sequential optimization of black-box functions; and a new Monte Carlo method for inference in multi-modal posterior distributions that combines simulated tempering and importance sampling. These additions extend the functionality of tgp across all models in the hierarchy: from Bayesian linear models, to classification and regression trees (CART), to treed Gaussian processes with jumps to the limiting linear model. It is assumed that the reader is familiar with the baseline functionality of the package, outlined in the first vignette (Gramacy 2007).
Multidimensional integration through Markovian sampling under steered function morphing: a physical guise from statistical mechanics
We present a computational strategy for evaluating multidimensional integrals on hyper-rectangles, based on Markovian stochastic exploration of the integration domain while the integrand is morphed from an appropriate initial profile. Thanks to an abstract reformulation of Jarzynski's equality, applied in stochastic thermodynamics to evaluate free-energy profiles along selected reaction coordinates via non-equilibrium transformations, the original integral can be cast as the exponential average of the distribution of the pseudo-work (which we may term "computational work") involved in the function morphing, and this average is straightforward to evaluate. Several tests illustrate the basic implementation of the idea and show its performance in terms of computational time, accuracy, and precision. A formulation for integrand functions with zeros and possible sign changes is also presented. We stress that our usage of Jarzynski's equality shares similarities with a practice already known in statistics as Annealed Importance Sampling (AIS), when applied to the computation of the normalizing constants of distributions. In a sense, here we dress AIS in its "physical" counterpart borrowed from statistical mechanics.
CHARDA: Causal Hybrid Automata Recovery via Dynamic Analysis
We propose and evaluate a new technique for learning hybrid automata
automatically by observing the runtime behavior of a dynamical system. Working
from a sequence of continuous state values and predicates about the
environment, CHARDA recovers the distinct dynamic modes, learns a model for
each mode from a given set of templates, and postulates causal guard conditions
which trigger transitions between modes. Our main contribution is the use of
information-theoretic measures (1) as a cost function for data segmentation and
model selection to penalize over-fitting and (2) to determine the likely causes
of each transition. CHARDA is easily extended with different classes of model
templates, fitting methods, or predicates. In our experiments on a complex
videogame character, CHARDA successfully discovers a reasonable
over-approximation of the character's true behaviors. Our results also compare
favorably against recent work in automatically learning probabilistic timed
automata in an aircraft domain: CHARDA exactly learns the modes of these
simpler automata.
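The segmentation-and-penalty idea can be illustrated in isolation. The sketch below is not CHARDA itself: it is a generic change-point segmentation of a one-dimensional trace in which each candidate segment is scored by the negative log-likelihood of a linear template plus BIC/MDL-style complexity terms, so that additional modes must pay for themselves in code length. The trace, template class, and penalty constants are all illustrative.

```python
import math
import numpy as np

def segment_cost(y, i, j):
    """Code length (up to constants) of fitting y[i:j] with one linear mode."""
    t = np.arange(i, j, dtype=float)
    A = np.stack([t, np.ones_like(t)], axis=1)
    coef, *_ = np.linalg.lstsq(A, y[i:j], rcond=None)
    resid = y[i:j] - A @ coef
    n = j - i
    var = max(float(resid @ resid) / n, 1e-9)
    return 0.5 * n * math.log(var) + 0.5 * 2 * math.log(n)  # fit + parameters

def segment(y, min_len=5):
    """Dynamic-programming change-point detection minimizing total cost."""
    n = len(y)
    best = [math.inf] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for j in range(min_len, n + 1):
        for i in range(0, j - min_len + 1):
            c = best[i] + segment_cost(y, i, j) + math.log(n)  # per-mode penalty
            if c < best[j]:
                best[j], back[j] = c, i
    cuts, j = [], n
    while j > 0:                     # recover segment boundaries
        cuts.append((back[j], j))
        j = back[j]
    return cuts[::-1]

# A trace with two hypothetical dynamic modes: constant velocity, then falling.
y = np.concatenate([np.linspace(0, 10, 40), 10 - 0.5 * np.arange(40) ** 1.2])
print(segment(y))
```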