Software component testing : a standard and the effectiveness of techniques
This portfolio comprises two projects linked by the theme of software component testing, which is also
often referred to as module or unit testing. One project covers its standardisation, while the other
considers the analysis and evaluation of the application of selected testing techniques to an existing
avionics system. The evaluation is based on empirical data obtained from fault reports relating to the
avionics system.
The standardisation project is based on the development of the BCS/BSI Software Component Testing
Standard and the BCS/BSI Glossary of terms used in software testing, which are both included in the
portfolio. The papers included for this project consider both those issues concerned with the adopted
development process and the resolution of technical matters concerning the definition of the testing
techniques and their associated measures.
The test effectiveness project documents a retrospective analysis of an operational avionics system to
determine the relative effectiveness of several software component testing techniques. The methodology
differs from that used in other test effectiveness experiments in that it considers every possible set of
inputs that are required to satisfy a testing technique rather than arbitrarily chosen values from within
this set. The three papers present the experimental methodology used, intermediate results from a failure
analysis of the studied system, and the test effectiveness results for ten testing techniques, definitions for
which were taken from the BCS/BSI Software Component Testing Standard.
The creation of the two standards has filled a gap in both the national and international software testing
standards arenas. Their production required an in-depth knowledge of software component testing
techniques, the identification and use of a development process, and the negotiation of the
standardisation process at a national level. The knowledge gained during this process has been
disseminated by the author in the papers included as part of this portfolio. The investigation of test
effectiveness has introduced a new methodology for determining the test effectiveness of software
component testing techniques by means of a retrospective analysis and so provided a new set of data that
can be added to the body of empirical data on software component testing effectiveness.
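The idea of considering every input set that satisfies a technique, rather than one arbitrarily chosen representative, can be sketched in a small Python example. The function under test, its specification, the seeded fault, and the partitions below are all hypothetical and purely illustrative:

```python
from itertools import product

def classify(x):
    """Hypothetical implementation under test, with one seeded fault."""
    if x < 0:
        return "negative"
    if x <= 10:          # seeded fault: the specification says x < 10
        return "small"
    return "large"

def spec(x):
    """Hypothetical specification (the oracle)."""
    if x < 0:
        return "negative"
    if x < 10:
        return "small"
    return "large"

# Equivalence partitions over a small integer domain (illustrative values only).
partitions = [range(-5, 0), range(0, 10), range(10, 16)]

# Every possible input set that satisfies "one value per partition",
# not a single arbitrarily chosen set of representatives.
all_satisfying_sets = list(product(*partitions))

# A set "reveals" the fault if any of its inputs disagrees with the spec.
revealing = [s for s in all_satisfying_sets
             if any(classify(x) != spec(x) for x in s)]

# Effectiveness of the technique against this fault = fraction of
# satisfying input sets that expose it.
effectiveness = len(revealing) / len(all_satisfying_sets)
```

Here only the input x = 10 triggers the seeded fault, so effectiveness is the proportion of satisfying sets whose third element is 10.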
Testing Component-Based Systems Using FSMs
No matter which tools, techniques, and methodologies are used for software development, it remains an error-prone process. Nevertheless, changing such important constituents of the software process surely has an effect on the types of faults inherent in the developed software. For instance, some types of faults are typical for structured development, whereas others are typical for object-oriented development.
This chapter explores the question of whether component-based software requires new testing techniques, and proposes an integrated testing technique. This technique integrates various tasks during testing of component-based software: white- and black-box testing of the main component (i.e., the top-level component controlling the other components), black-box testing of components, black-box testing of the middleware, and integration testing of the main component with other components. Benefits of this technique are shown using a real-world example: the technique is automatable and applicable to existing component-based software.
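The FSM-driven black-box part of such a technique can be sketched as checking an implementation against a transition table. The connection component, its states, and its events below are entirely hypothetical, not taken from the chapter:

```python
# Hypothetical FSM specification of a simple connection component:
# (state, event) -> (next_state, expected_output)
FSM = {
    ("closed", "open"):  ("open",   "ok"),
    ("open",   "send"):  ("open",   "sent"),
    ("open",   "close"): ("closed", "ok"),
}

class Connection:
    """Toy implementation under test (black box from the tester's view)."""
    def __init__(self):
        self.state = "closed"

    def step(self, event):
        if self.state == "closed" and event == "open":
            self.state = "open"
            return "ok"
        if self.state == "open" and event == "send":
            return "sent"
        if self.state == "open" and event == "close":
            self.state = "closed"
            return "ok"
        raise ValueError("unexpected event")

def transition_tour():
    """One event sequence covering every FSM transition (hand-ordered here)."""
    return ["open", "send", "close"]

def run_conformance_test():
    """Drive the implementation and compare outputs/states with the FSM."""
    impl, state = Connection(), "closed"
    for event in transition_tour():
        expected_state, expected_out = FSM[(state, event)]
        out = impl.step(event)
        if out != expected_out or impl.state != expected_state:
            return False
        state = expected_state
    return True
```

In a real setting the tour would be generated from the FSM rather than written by hand, and the same harness would be reused for the main component, subcomponents, and middleware.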
A Fault Taxonomy for Component-Based Software
Component technology is increasingly used to develop modular, configurable, and reusable systems. The problem of designing and implementing component-based systems is addressed by many models, methodologies, tools, and frameworks. By contrast, analysis and testing are not yet adequately supported. In general, a coherent fault taxonomy is a key starting point for providing techniques and methods for assessing the quality of software, and in particular of component-based systems. This paper proposes a fault taxonomy to be used to develop and evaluate testing and analysis techniques for component-based software.
Quality Research by Using Performance Evaluation Metrics for Software Systems and Components
Software performance testing and evaluation have four basic needs: (1) a well-defined performance testing strategy, requirements, and focus; (2) correct and effective performance evaluation models; (3) well-defined performance metrics; and (4) cost-effective performance testing and evaluation tools and techniques. This chapter first introduces a performance test process and discusses the performance testing objectives and focus areas. It then summarises the basic challenges and issues in performance testing and evaluation of component-based programs and components. Next, it presents different types of performance metrics for software components and systems, including processing speed, utilization, throughput, reliability, availability, and scalability metrics. Most of the performance metrics covered here can be considered applications of existing metrics to software components. New performance metrics are needed to support the performance evaluation of component-based programs.
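A few of the metrics named above reduce to simple ratio computations over an observation window. The sketch below uses illustrative numbers from a hypothetical load run, not data from the chapter:

```python
def throughput(completed_requests, elapsed_seconds):
    """Requests processed per second over the measurement window."""
    return completed_requests / elapsed_seconds

def utilization(busy_seconds, observed_seconds):
    """Fraction of the observation window the component was busy."""
    return busy_seconds / observed_seconds

def availability(uptime_seconds, downtime_seconds):
    """Proportion of the window the component was operational."""
    return uptime_seconds / (uptime_seconds + downtime_seconds)

# Illustrative measurements from a hypothetical load run.
tput = throughput(completed_requests=1_200, elapsed_seconds=60)   # 20 req/s
util = utilization(busy_seconds=45, observed_seconds=60)          # 0.75
avail = availability(uptime_seconds=995, downtime_seconds=5)      # 0.995
```

Scalability metrics would build on these, e.g. by comparing throughput across increasing load levels or component replica counts.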
COLLABORATIVE TESTING ACROSS SHARED SOFTWARE COMPONENTS
Large component-based systems are often built from many of the same
components. As individual component-based software systems are
developed, tested and maintained, these shared components are
repeatedly manipulated. As a result there are often significant
overlaps and synergies across and among the different test efforts
of different component-based systems. However, in practice, testers of
different systems rarely collaborate, taking a test-all-by-yourself
approach. As a result, redundant effort is spent testing common
components, and important information that could be used to improve
testing quality is lost.
The goal of this research is to demonstrate that, if done properly,
testers of shared software components can save effort by avoiding
redundant work, and can improve the test effectiveness for each
component as well as for each component-based software system by using
information obtained when testing across multiple components. To
achieve this goal I have developed collaborative testing techniques
and tools for developers and testers of component-based systems with
shared components, applied the techniques to subject systems, and evaluated
the cost and effectiveness of applying the techniques.
The dissertation research is organized in three parts. First, I
investigated current testing practices for component-based software
systems to find the testing overlap and synergy I conjectured exists.
Second, I designed and implemented infrastructure and related tools to
facilitate communication and data sharing between testers. Third, I
designed two testing processes to implement different collaborative
testing algorithms and applied them to large actively developed
software systems.
This dissertation has shown the benefits of collaborative testing
across component developers who share their components. With
collaborative testing, researchers can design algorithms and tools to
support collaboration processes, achieve better efficiency in testing
configurations, and discover inter-component compatibility faults
within a minimal time window after they are introduced.
PRF: A Framework for Building Automatic Program Repair Prototypes for JVM-Based Languages
PRF is a Java-based framework that allows researchers to build prototypes of
test-based generate-and-validate automatic program repair techniques for JVM
languages by simply extending it with their patch generation plugins. The
framework also provides other useful components for constructing automatic
program repair tools, e.g., a fault localization component that provides
spectrum-based fault localization information at different levels of
granularity, a configurable and safe patch validation component that is 11+X
faster than vanilla testing, and a customizable post-processing component to
generate fix reports. A demo video of PRF is available at
https://bit.ly/3ehduSS.
Comment: Proceedings of the 28th ACM Joint European Software Engineering
Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE '20).
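Spectrum-based fault localization, which PRF exposes as a component, ranks program elements by how strongly their coverage correlates with failing tests. As one concrete instance (not necessarily the exact formula PRF uses), the widely used Ochiai score can be computed from per-element coverage spectra; the element names below are hypothetical:

```python
from math import sqrt

def ochiai(ef, ep, nf):
    """Ochiai suspiciousness score.

    ef: failing tests that cover the element
    ep: passing tests that cover the element
    nf: failing tests that do not cover the element
    """
    total_failed = ef + nf
    denom = sqrt(total_failed * (ef + ep))
    return ef / denom if denom else 0.0

# Coverage spectra per program element (hypothetical values):
# element -> (ef, ep, nf)
spectra = {
    "Foo.bar:12": (2, 0, 0),   # covered by all failing tests, no passing ones
    "Foo.bar:15": (1, 3, 1),
    "Util.log:7": (0, 4, 2),
}

# Rank elements from most to least suspicious.
ranking = sorted(spectra, key=lambda e: ochiai(*spectra[e]), reverse=True)
```

An element covered by every failing test and no passing test scores 1.0 and tops the ranking, which is where a patch generation plugin would focus its search.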
Towards Testing and Analysis of Systems that Use Serialization
Object serialization facilitates the flattening of structured objects into byte streams and is therefore important for all component-based applications that rely strongly on data exchange among components. Unfortunately, implementing and controlling the serialization mechanisms may expose the software to subtle faults. This paper paves the way towards testing and analysis techniques specifically tailored to the assessment of software that uses serialization. In particular, we introduce a taxonomy of abstractions and terms to semantically characterize and classify the main data-exchange cases in which serialization can be involved. The resulting conceptual framework provides a means to forecast what erroneous implementations of serialization would look like in different cases, thus enabling testing and analysis techniques to be focused on serialization-related faults.
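The most basic serialization assessment such techniques would cover is a round-trip property check. The sketch below uses JSON as a stand-in serialization mechanism (the paper is not limited to JSON), and also shows one classic fault class: a type that silently changes representation across the round trip:

```python
import json

# Stand-in serialization mechanism for a hypothetical component boundary.
def serialize(record):
    """Flatten a structured object into a byte stream."""
    return json.dumps(record, sort_keys=True).encode("utf-8")

def deserialize(payload):
    """Rebuild the structured object from the byte stream."""
    return json.loads(payload.decode("utf-8"))

def round_trip_ok(record):
    """Round-trip property: deserialize(serialize(x)) == x."""
    return deserialize(serialize(record)) == record

# A plain dict of strings, ints, and lists survives the round trip.
good = {"id": 1, "tags": ["a", "b"]}

# A tuple is serialized as a JSON array and comes back as a list,
# so the round-trip property fails: a subtle serialization fault.
lossy = {"point": (1, 2)}
```

Such property checks are a natural seed for the data-exchange cases the paper's taxonomy classifies, since each case predicts a different way the round trip can go wrong.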
Metacontent based techniques for the regression testing of component based software : a case study
Component-based software technologies are viewed as essential for creating the software systems of the future. However, the use of externally provided components has serious drawbacks for a wide range of software engineering activities, often because of a lack of information about the components. One such drawback involves the validation of components. To address this problem, previous researchers have proposed the notion of metacontent. Metacontent describes static and dynamic aspects of a component, and consists of information (metadata) about components, and utilities (metamethods) for computing and retrieving such information. In this project we implement three new metacontent-based techniques that address the problem of validating component-based applications after they have been modified (also known as "regression testing"): a code-based approach, a specification-based approach, and a hybrid approach that uses information at both the code and the specification level. We present a case study that applies all three techniques to a real component-based system.
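The code-based flavour of metacontent-driven regression testing can be sketched as shipping coverage metadata with a component and selecting tests from it. All names below are hypothetical, invented for illustration:

```python
# Hypothetical metadata shipped with a component: for each of the
# component's tests, the set of code entities that test covers.
coverage_metadata = {
    "test_login":    {"auth.check", "auth.hash"},
    "test_logout":   {"auth.session"},
    "test_password": {"auth.hash"},
}

def select_regression_tests(changed_entities, metadata):
    """Code-based selection: rerun exactly the tests whose covered
    entities intersect the set of entities changed in the new release."""
    return sorted(test for test, covered in metadata.items()
                  if covered & changed_entities)

# If the new component release changed only auth.hash, two tests qualify.
selected = select_regression_tests({"auth.hash"}, coverage_metadata)
```

A specification-based variant would key the metadata on specification elements instead of code entities, and a hybrid approach would intersect selections from both.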