340 research outputs found
Automatic Test Generation for Space
The European Space Agency (ESA) uses an engine to perform tests in the Ground
Segment infrastructure, especially the Operational Simulator. This engine uses
many different tools to support the development of a regression testing
infrastructure, and these tests perform black-box testing of the C++ simulator
implementation. VST (VisionSpace Technologies) is one of the companies that
provide these services to ESA, and it needs a tool that automatically infers
tests from the existing C++ code, instead of manually writing scripts to
perform tests. With this motivation in mind, this paper explores automatic
testing approaches and tools in order to propose a system that satisfies VST's
needs.
Automatic Functional Testing of GUIs
Functional testing of GUIs can be automated using a test oracle derived from the GUI's specification and from a restricted set of
randomised test data. As test data, a set of randomly distorted test objects seems to work well, especially when starting, as we do, with a
'perfect' object and then distorting it more and more as the test progresses. The number of test cases needed seems to be much
smaller than that reported in other random-testing papers. More work is needed to see if the approach is generally applicable: if
so, the test engineer can spend his time writing GUI specifications at a high level of abstraction, rather than hand-generating test
cases.
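The distortion-driven strategy this abstract describes can be sketched in a few lines. The widget model, window bounds, and oracle below are illustrative inventions, not details from the paper; the point is only the shape of the loop: begin with a 'perfect' object, distort it by a growing magnitude, and let a specification-derived oracle judge each case.

```python
import random

def spec_oracle(widget):
    """Oracle derived from a toy GUI spec: a widget (x, y, w, h) must
    stay inside an 800x600 window and keep a positive size."""
    x, y, w, h = widget
    return 0 <= x and 0 <= y and w > 0 and h > 0 and x + w <= 800 and y + h <= 600

def distort(widget, magnitude, rng):
    """Randomly perturb each field by up to +/- magnitude pixels."""
    return tuple(v + rng.randint(-magnitude, magnitude) for v in widget)

def run_tests(perfect, steps=20, rng=None):
    """Start from the perfect object; distortion grows with each step."""
    rng = rng or random.Random(42)
    results = []
    for step in range(steps):
        candidate = distort(perfect, magnitude=step * 10, rng=rng)
        # A real harness would render `candidate` in the GUI here; we
        # only record the oracle's verdict for it.
        results.append((candidate, spec_oracle(candidate)))
    return results

results = run_tests((100, 100, 200, 50))
print(sum(1 for _, ok in results if not ok), "distorted cases violate the spec")
```

Because the first step uses magnitude zero, the undistorted 'perfect' object always passes, and later steps drift further from validity, which matches the progression the abstract describes.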
Grey-box Testing and Verification of Java/JML
We present in this paper the application of constraint-solving techniques to validation and automated test-case generation for Java programs annotated with JML specifications. The Java/JML code is translated into a constraint representation based on a subset of set theory, which is well suited to modelling object-oriented programs. Symbolic code execution techniques can then be applied to produce test cases, using classical structural test selection criteria, or to detect possible runtime errors and non-conformances between the Java code and its embedded JML model.
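The idea of deriving test cases from branch constraints can be illustrated with a deliberately tiny stand-in. The method, its JML-style postcondition, and the finite-domain "solver" below are all inventions for illustration; real tools like the one described solve set-theoretic constraints symbolically rather than by enumeration.

```python
def abs_impl(x):
    # Implementation under test, informally carrying a JML-style
    # postcondition: \result >= 0 and (\result == x or \result == -x)
    return x if x >= 0 else -x

# Each structural branch of abs_impl contributes one path constraint.
BRANCH_CONSTRAINTS = [
    ("then-branch", lambda x: x >= 0),
    ("else-branch", lambda x: x < 0),
]

def solve(constraint, domain=range(-10, 11)):
    """Toy constraint solver: first value in a finite domain that
    satisfies the constraint, or None if the branch is infeasible."""
    return next((v for v in domain if constraint(v)), None)

def generate_tests():
    """One concrete input per satisfiable branch constraint."""
    cases = []
    for name, constraint in BRANCH_CONSTRAINTS:
        x = solve(constraint)
        if x is not None:
            cases.append((name, x))
    return cases

for name, x in generate_tests():
    result = abs_impl(x)
    # The embedded postcondition serves as the oracle, as in the
    # grey-box approach: non-conformance would fail this assertion.
    assert result >= 0 and (result == x or result == -x), name
```

This mirrors the paper's pipeline in miniature: structural criteria pick the constraints, a solver produces inputs, and the specification checks the outputs.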
The 14th Overture Workshop: Towards Analytical Tool Chains
This report contains the proceedings of the 14th Overture workshop, organized in connection with the Formal Methods 2016 symposium. It includes nine papers describing technological progress in relation to the Overture/VDM tool support and its connection with other tools such as Crescendo, Symphony, INTO-CPS, TASTE and ViennaTalk.
SPEST - A TOOL FOR SPECIFICATION-BASED TESTING
This thesis presents a tool for SPEcification based teSTing (SPEST). SPEST is designed to use well known practices for automated black-box testing to reduce the burden of testing on developers. The tool uses a simple formal specification language to generate highly-readable unit tests that embody best practices for thorough software testing. Because the specification language used to generate the assertions about the code can be compiled, it can also be used to ensure that documentation describing the code is maintained during development and refactoring.
The utility and effectiveness of SPEST were validated through several experiments conducted with students in undergraduate software engineering classes. The first experiment compared the understandability and efficiency of SPEST-generated tests against student-written tests based on Java Modeling Language (JML) [25] specifications. JML is a widely used language for behavioral program specification. A second experiment evaluated readability through a survey comparing SPEST-generated tests against tests written by well-established software developers. The results from the experiments showed that SPEST's specification language is at least as understandable as JML and more readable than JML, and strongly suggest that SPEST is capable of reducing the effort required to produce effective tests.
Using Metamorphic Testing at Runtime to Detect Defects in Applications without Test Oracles
First, we will present an approach called Automated Metamorphic System Testing. This will involve automating system-level metamorphic testing by treating the application as a black box and checking that the metamorphic properties of the entire application hold after execution. This will allow for metamorphic testing to be conducted in the production environment without affecting the user, and will not require the tester to have access to the source code. The tests do not require an oracle upon their creation; rather, the metamorphic properties act as built-in test oracles. We will also introduce an implementation framework called Amsterdam. Second, we will present a new type of testing called Metamorphic Runtime Checking. This involves the execution of metamorphic tests from within the application, i.e., the application launches its own tests, within its current context. The tests execute within the application's current state, and in particular check a function's metamorphic properties. We will also present a system called Columbus that supports the execution of the Metamorphic Runtime Checking from within the context of the running application. Like Amsterdam, it will conduct the tests with acceptable performance overhead, and will ensure that the execution of the tests does not affect the state of the original application process from the users' perspective; however, the implementation of Columbus will be more challenging in that it will require more sophisticated mechanisms for conducting the tests without pre-empting the rest of the application, and for comparing the results which may conceivably be in different processes or environments. Third, we will describe a set of metamorphic testing guidelines that can be followed to assist in the formulation and specification of metamorphic properties that can be used with the above approaches. 
These will categorize the different types of properties exhibited by many applications in the domain of machine learning and data mining in particular (as a result of the types of applications we will investigate), but we will demonstrate that they are also generalizable to other domains. This set of guidelines will also correlate to the different types of defects that we expect the approaches will be able to find.
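The core mechanism the abstract relies on, metamorphic relations standing in for a missing oracle, can be sketched briefly. The function under test and the three relations below are illustrative choices, not the paper's subjects: with no way to verify f's exact output, we instead check that transformed follow-up inputs produce outputs related to the original in the expected way.

```python
import math
import random

def f(xs):
    """System under test (no exact oracle assumed): the population
    standard deviation of xs."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# Each relation: (name, input transformation, expected output relation).
METAMORPHIC_RELATIONS = [
    # Permuting the input must not change the output.
    ("permutation", lambda xs: random.sample(xs, len(xs)),
     lambda y0, y1: math.isclose(y0, y1)),
    # Shifting every element by a constant must not change the output.
    ("shift", lambda xs: [x + 10 for x in xs],
     lambda y0, y1: math.isclose(y0, y1)),
    # Scaling every element by 3 must scale the output by 3.
    ("scale", lambda xs: [3 * x for x in xs],
     lambda y0, y1: math.isclose(3 * y0, y1)),
]

def metamorphic_check(xs):
    """Run f on the original and each follow-up input; report any
    relation that fails, with no reference output needed."""
    y0 = f(xs)
    failures = []
    for name, transform, holds in METAMORPHIC_RELATIONS:
        if not holds(y0, f(transform(xs))):
            failures.append(name)
    return failures

print(metamorphic_check([1.0, 2.0, 3.0, 4.0]))  # [] when all relations hold
```

Run at the system level in production this is the Amsterdam-style approach; invoked from inside a running application against a single function's state, it corresponds to Metamorphic Runtime Checking.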
Praspel: Contract-Driven Testing for PHP using Realistic Domains
We present an integrated contract-based testing framework for PHP. It relies on a behavioral interface specification language called Praspel, for "PHP Realistic Annotation and Specification Language". Using Praspel, developers can easily annotate their PHP scripts with formal contracts, namely class invariants and method pre- and postconditions. These contracts describe assertions either by predicates or by assigning realistic domains to data. Realistic domains introduce types into PHP and describe complex structures frequently encountered in applications, such as email addresses or SQL queries. Realistic domains display two properties: predicability, which makes it possible to check whether a datum belongs to a given realistic domain, and samplability, which makes it possible to generate valid data. This paper introduces coverage criteria dedicated to contracts, designed to exhibit relevant behaviors of the annotated methods. Test data are then computed to satisfy these coverage criteria, using dedicated data generators for complex realistic domains such as arrays or strings. This framework has been implemented and disseminated within the PHP community, which gave us feedback on their usage of the tool and the relevance of this integrated process with respect to their practice of manual testing.
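Praspel itself targets PHP, but the two properties the abstract attributes to realistic domains can be sketched in Python. The EmailAddress domain below, including its simplified pattern, is an invention for illustration: predicability is a membership predicate, and samplability is a generator whose output satisfies that predicate by construction.

```python
import random
import re
import string

class EmailAddress:
    """Illustrative realistic domain (simplified; real email syntax
    is far more permissive than this pattern)."""
    _pattern = re.compile(r"^[a-z0-9.]+@[a-z0-9-]+\.[a-z]{2,}$")

    def predicate(self, value):
        """Predicability: does `value` belong to this realistic domain?"""
        return isinstance(value, str) and bool(self._pattern.match(value))

    def sample(self, rng):
        """Samplability: generate a value valid by construction."""
        user = "".join(rng.choices(string.ascii_lowercase, k=8))
        host = "".join(rng.choices(string.ascii_lowercase, k=6))
        return f"{user}@{host}.org"

# A contract would assign domains like this one to parameters; the
# framework then samples test data from each domain and can use the
# predicate to check values observed at runtime.
rng = random.Random(0)
domain = EmailAddress()
sampled = domain.sample(rng)
assert domain.predicate(sampled)          # generated data is valid
assert not domain.predicate("not-an-email")
```

The same two-sided design (check or generate from one declaration) is what lets a single contract serve both as a runtime assertion and as a test-data generator.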
Putting the Semantics into Semantic Versioning
The long-standing aspiration for software reuse has made astonishing strides
in the past few years. Many modern software development ecosystems now come
with rich sets of publicly-available components contributed by the community.
Downstream developers can leverage these upstream components, boosting their
productivity.
However, components evolve at their own pace. This imposes obligations on and
yields benefits for downstream developers, especially since changes can be
breaking, requiring additional downstream work to adapt. Upgrading too late
leaves downstream vulnerable to security issues and missing out on useful
improvements; upgrading too early results in excess work. Semantic versioning
has been proposed as an elegant mechanism to communicate levels of
compatibility, enabling downstream developers to automate dependency upgrades.
While it is questionable whether a version number can adequately characterize
version compatibility in general, we argue that developers would greatly
benefit from tools such as semantic version calculators to help them upgrade
safely. The time is now for the research community to develop such tools: large
component ecosystems exist and are accessible, component interactions have
become observable through automated builds, and recent advances in program
analysis make the development of relevant tools feasible. In particular,
contracts (both traditional and lightweight) are a promising input to semantic
versioning calculators, which can suggest whether an upgrade is likely to be
safe.
Comment: to be published as Onward! Essays 202
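The kind of semantic version calculator the essay advocates can be given a minimal shape. Representing an API surface as a mapping from function name to parameter tuple is a simplifying assumption of this sketch (real calculators would analyze types, behavior, and contracts), but the decision logic is the essence: breaking changes demand a major bump, compatible additions a minor one.

```python
def suggest_bump(old_api, new_api):
    """Suggest the smallest semver bump that honestly describes the
    difference between two API surfaces (name -> parameter tuple)."""
    removed = old_api.keys() - new_api.keys()
    changed = {f for f in old_api.keys() & new_api.keys()
               if old_api[f] != new_api[f]}
    added = new_api.keys() - old_api.keys()
    if removed or changed:
        return "major"   # breaking: downstream calls may no longer work
    if added:
        return "minor"   # backwards-compatible additions
    return "patch"       # no observable interface change

v1 = {"connect": ("host", "port"), "send": ("data",)}
v2 = {"connect": ("host", "port", "timeout"), "send": ("data",), "close": ()}

print(suggest_bump(v1, v2))  # "major": connect's signature changed
```

Feeding such a tool with richer inputs, observed builds, program analysis, or the traditional and lightweight contracts the essay mentions, is exactly where it argues the research opportunity lies.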