Feedback-Based Random Test Generator for TSTL
Software testing is the process of evaluating the accuracy and performance of software, and automated software testing allows programmers to develop software more efficiently by decreasing testing costs. We compared two advanced random test generators, a Feedback-Directed Random Test Generator (FDR) and a Feedback-Controlled Random Test Generator (FCR), for TSTL (the Template Scripting Testing Language), an automated software testing tool written in Python 2.x.
An FDR generates test inputs incrementally: feedback from previous trials guides the generation of new inputs. As each test input is executed, the resulting software properties are assessed to determine whether the input adds value. Because tests are grown gradually in this way, an FDR avoids the redundant and illegal test inputs commonly produced by traditional random test generators. An FCR employs a different feedback technique: it controls the feedback to produce varied test inputs using multiple input containers. In our experiments, we compared the performance of our test generators with TSTL's built-in generator in terms of coverage, time efficiency, and error-detection capability.
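The feedback-directed idea described above can be sketched in a few lines of Python. This is an illustrative toy, not the thesis's actual FDR implementation or the TSTL API: the system under test (a plain Python list), the action vocabulary, and the state abstraction are all invented for the example. Sequences are grown one action at a time; executions that raise are discarded as illegal, and executions that reach an already-seen abstract state are discarded as redundant.

```python
import random

# Hypothetical system under test: Python's built-in list, with a tiny
# action vocabulary standing in for a real test harness.
ACTIONS = [
    ("append", lambda s: s.append(random.randint(0, 9))),
    ("pop",    lambda s: s.pop()),
    ("sort",   lambda s: s.sort()),
]

def feedback_directed_generate(budget=200, seed=0):
    """Grow test sequences incrementally, keeping only those whose
    execution succeeds and reaches a not-yet-seen abstract state."""
    random.seed(seed)
    pool = [[]]               # container of valid action sequences
    seen_states = set()       # feedback: abstract states already covered
    for _ in range(budget):
        base = random.choice(pool)
        name, op = random.choice(ACTIONS)
        candidate = base + [(name, op)]
        state = []
        try:
            for _, step in candidate:
                step(state)   # replay the extended sequence
        except IndexError:
            continue          # illegal input (e.g. pop on empty list): discard
        abstract = (len(state), tuple(sorted(set(state))))
        if abstract in seen_states:
            continue          # redundant input: discard
        seen_states.add(abstract)
        pool.append(candidate)
    return pool

tests = feedback_directed_generate()
```

Every sequence retained in `tests` is valid by construction, which is exactly the property that lets a feedback-directed generator avoid the wasted executions a purely random generator incurs.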
Model-Based Scenario Testing and Model Checking with Applications in the Railway Domain
This thesis introduces Timed Moore Automata, a specification formalism that extends classical Moore Automata with abstract timers. These timers have no concrete delay values; they can be started and reset, and they change their state from running to elapsed. The formalism is used in real-world railway-domain applications, and algorithms for automated test data generation and explicit model checking of Timed Moore Automata models are presented. In addition, this thesis deals with test data generation for larger-scale test models using standardized modeling formalisms. An existing framework for automated test data generation is presented, and its underlying workflow is extended and modified to allow user interaction and guidance within the generation process. Instead of specifying generation constraints for entire test scenarios up front, the modified workflow then allows an iterative approach to elaborating and formalizing test generation goals.
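To make the timer concept concrete, here is a minimal sketch, not the thesis's formalism or tooling: a Moore-style automaton with one abstract timer. The states, events, and outputs are invented for illustration. Because the timer has no concrete delay value, its elapsing is modeled as an event the environment may fire while the timer is running, and, in Moore fashion, the output depends only on the current state.

```python
class TimedMooreAutomaton:
    """Toy Moore automaton with one abstract timer (no concrete delay)."""

    def __init__(self):
        self.state = "IDLE"
        self.timer = "stopped"        # stopped | running | elapsed
        self.output = {"IDLE": "off", "WAIT": "pending", "DONE": "on"}

    def step(self, event):
        if self.state == "IDLE" and event == "request":
            self.state, self.timer = "WAIT", "running"    # start the timer
        elif self.state == "WAIT" and event == "elapse" and self.timer == "running":
            self.state, self.timer = "DONE", "elapsed"    # timer runs out
        elif event == "reset":
            self.state, self.timer = "IDLE", "stopped"    # reset the timer
        return self.output[self.state]    # Moore: output from state only
```

Keeping delays abstract like this means a model checker or test generator only has to explore the discrete orderings of start, elapse, and reset events, not concrete time values.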
Development and evaluation of a framework for semantic validation of performance metrics for the IBM InfoSphere Optim Performance Manager
Validation is an important part of the software development process. It helps to increase software quality but is also expensive and time consuming. To decrease these costs, approaches that automate the validation process are necessary. In this thesis a framework is developed that validates the IBM InfoSphere Optim Performance Manager semantically without requiring user interaction. It is able to validate values of different behavioral patterns, covering deterministic, semi-deterministic, and non-deterministic behavior. The thesis describes the development of the framework: it introduces available approaches and examines their suitability for the framework. The chosen solution is described in theory, and a prototype is implemented to apply it in practice. This prototype is evaluated on the latest version of the IBM InfoSphere Optim Performance Manager.
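The three behavioral patterns suggest a rule-per-pattern validation scheme, sketched below. This is a hypothetical illustration only: the metric names, baselines, and tolerances are invented and are not taken from Optim Performance Manager. A deterministic metric must match its expected value exactly, a semi-deterministic metric must match within a tolerance, and a non-deterministic metric can only be checked against sanity bounds.

```python
def validate(metric, value, baseline=None):
    """Validate a metric value according to its behavioral pattern
    (illustrative rules; names and bounds are invented)."""
    rules = {
        # deterministic: fully predictable from the workload
        "rows_inserted": lambda v: v == baseline,
        # semi-deterministic: predictable up to a 10% tolerance
        "avg_response_ms": lambda v: baseline is not None
                                     and abs(v - baseline) <= 0.1 * baseline,
        # non-deterministic: only sanity bounds can be checked
        "buffer_hit_ratio": lambda v: 0.0 <= v <= 1.0,
    }
    return rules[metric](value)
```

Dispatching on the behavioral class this way lets the framework run unattended: each metric carries its own oracle, so no human has to inspect the values.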