Software verification plan for GCS
This verification plan is written as part of an experiment designed to study the fundamental characteristics of the software failure process. The experiment will be conducted using several implementations of software that were produced according to industry-standard guidelines, namely the Radio Technical Commission for Aeronautics RTCA/DO-178A guidelines, Software Considerations in Airborne Systems and Equipment Certification, for the development of flight software. This plan fulfills the DO-178A requirements for providing instructions on the testing of each implementation of the software. The plan details the verification activities to be performed at each phase of the development process, contains a step-by-step description of the testing procedures, and discusses all of the tools used throughout the verification process.
Development of a framework for automated systematic testing of safety-critical embedded systems
In this paper we introduce the development of a framework for testing safety-critical embedded systems based on the concepts of model-based testing. In model-based testing, the test cases are derived from a model of the system under test. In our approach the model is an automaton that is automatically extracted from the C source code of the system under test. Besides random test data generation, the test case generation uses formal methods, specifically model checking techniques. To find appropriate test cases we use the requirements defined in the system specification. To cover further execution paths we developed an additional and, to the best of our knowledge, novel method based on special structural coverage criteria. We present preliminary results on the model extraction using a concrete industrial case study from the automotive domain.
JWalk: a tool for lazy, systematic testing of java classes by design introspection and user interaction
Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet the manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code for Java classes and uses a new lazy method for inducing the changing design of a class on the fly. This is achieved partly through introspection, using Java's reflection capability, and partly through interaction with the user, constructing and saving test oracles on the fly. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class's method protocols and may be directed to explore the space of algebraic constructions, or the intended design state-space of the tested class. With some human interaction, JWalk performs up to the equivalent of fully automated state-based testing, from a specification that was acquired incrementally.
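The "bounded exhaustive exploration of method protocols" mentioned above can be pictured as enumerating every method-call sequence up to a depth bound. A minimal sketch (in Python, purely illustrative; JWalk itself works on compiled Java classes via reflection and confirms outcomes against saved oracles):

```python
import itertools

def method_sequences(methods, depth):
    """Bounded exhaustive exploration: yield every sequence of method
    names of length 1..depth. A JWalk-style tester would execute each
    sequence on a fresh instance and compare results with its oracle."""
    for d in range(1, depth + 1):
        for seq in itertools.product(methods, repeat=d):
            yield seq
```

For a class with methods `push` and `pop` and depth 2, this enumerates 2 + 4 = 6 protocols, from `('push',)` through `('pop', 'pop')`.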
Using formal methods to support testing
Formal methods and testing are two important approaches that assist in the development of high quality software. While traditionally these approaches have been seen as rivals, in recent
years a new consensus has developed in which they are seen as complementary. This article reviews the state of the art regarding ways in which the presence of a formal specification can be used to assist testing
Toward an accurate mass function for precision cosmology
Cosmological surveys aim to use the evolution of the abundance of galaxy
clusters to accurately constrain the cosmological model. In the context of
LCDM, we show that it is possible to achieve the required percent level
accuracy in the halo mass function with gravity-only cosmological simulations,
and we provide simulation start and run parameter guidelines for doing so. Some
previous works have had sufficient statistical precision, but lacked robust
verification of absolute accuracy. Convergence tests of the mass function with,
for example, simulation start redshift can exhibit false convergence of the
mass function due to counteracting errors, potentially misleading one to infer
overly optimistic estimations of simulation accuracy. Percent level accuracy is
possible if initial condition particle mapping uses second order Lagrangian
Perturbation Theory, and if the start epoch is between 10 and 50 expansion
factors before the epoch of halo formation of interest. The mass function for
halos with fewer than ~1000 particles is highly sensitive to simulation
parameters and start redshift, implying a practical minimum mass resolution
limit due to mass discreteness. The narrow range in converged start redshift
suggests that it is not presently possible for a single simulation to capture
accurately the cluster mass function while also starting early enough to model
accurately the numbers of reionisation era galaxies, whose baryon feedback
processes may affect later cluster properties. Ultimately, to fully exploit
current and future cosmological surveys will require accurate modeling of
baryon physics and observable properties, a formidable challenge for which
accurate gravity-only simulations are just an initial step. Comment: revised in response to referee suggestions; accepted to MNRAS
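The start-epoch guideline above can be turned into concrete redshifts under the standard convention a = 1/(1+z): starting n expansion factors before a formation epoch a_form means a_start = a_form / n, i.e. 1 + z_start = n (1 + z_form). A minimal sketch (the helper name and this reading of "expansion factors before" are our own interpretation of the abstract):

```python
def start_redshift_range(z_form, n_min=10, n_max=50):
    """Initial-redshift window so the simulation starts between n_min
    and n_max expansion factors before halo formation at z_form,
    using a = 1/(1+z), so 1 + z_start = n * (1 + z_form)."""
    z_lo = n_min * (1 + z_form) - 1   # latest allowed start
    z_hi = n_max * (1 + z_form) - 1   # earliest allowed start
    return z_lo, z_hi
```

For clusters forming around z_form = 0.5, this gives a start-redshift window of roughly z = 14 to z = 74, well after the reionisation-era epochs mentioned in the abstract.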
Statistical Phylogenetic Tree Analysis Using Differences of Means
We propose a statistical method to test whether two phylogenetic trees with
given alignments are significantly incongruent. Our method compares the two
distributions of phylogenetic trees given by the input alignments, instead of
comparing point estimates of trees. This statistical approach can be applied
to gene tree analysis, for example to detect unusual events in genome evolution
such as horizontal gene transfer and reshuffling. Our method uses the difference of
means to compare two distributions of trees, after embedding trees in a vector
space. Bootstrapping alignment columns can then be applied to obtain p-values.
To compute distances between means, we employ a "kernel trick" which speeds up
distance calculations when trees are embedded in a high-dimensional feature
space, e.g. splits or quartets feature space. In this pilot study, first we
test our statistical method's ability to distinguish between sets of gene trees
generated under coalescence models with species trees of varying dissimilarity.
We follow our simulation results with applications to various data sets of
gophers and lice, grasses and their endophytes, and different fungal genes from
the same genome. A companion toolkit, {\tt Phylotree}, is provided to
facilitate computational experiments. Comment: 17 pages, 6 figures
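The kernel trick mentioned above rests on a standard identity: the squared distance between two sample means in feature space can be written entirely in terms of pairwise kernel values, so the high-dimensional split or quartet embedding never has to be formed explicitly. A minimal sketch (in Python, with toy 0/1 split-indicator vectors; the function and kernel names are ours, not the {\tt Phylotree} API):

```python
def linear_kernel(x, y):
    """Inner product of two feature vectors, e.g. 0/1 split-indicator
    vectors of phylogenetic trees."""
    return sum(a * b for a, b in zip(x, y))

def mean_distance_sq(X, Y, kernel=linear_kernel):
    """Squared distance between the means of two samples in feature
    space via the kernel trick:
        ||mu_X - mu_Y||^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)],
    so only pairwise kernel evaluations are needed."""
    n, m = len(X), len(Y)
    kxx = sum(kernel(a, b) for a in X for b in X) / n ** 2
    kyy = sum(kernel(a, b) for a in Y for b in Y) / m ** 2
    kxy = sum(kernel(a, b) for a in X for b in Y) / (n * m)
    return kxx + kyy - 2 * kxy
```

A p-value then comes from the bootstrap described in the abstract: resample alignment columns, recompute the two tree distributions and this statistic, and compare the observed value against the resampled ones.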