Testability Assessment Model for Object Oriented Software based on Internal and External Quality Factors
Software testability is becoming one of the most frequently discussed quality factors, rather than the underrated and unpopular attribute it used to be. Correct and timely assessment of testability can lead to improvement of the software testing process. Although many researchers and quality controllers have demonstrated its importance, research has not yet gained much momentum in emphasizing the need for testability analysis throughout all software development phases. In this paper we review and analyse the factors affecting testability estimation of object-oriented software systems during the analysis and design phases of the development life cycle. These factors are then linked together into a new assessment model for object-oriented software testability. The proposed model will be evaluated using the analytical hierarchy process (AHP).
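The AHP evaluation mentioned in the abstract works by collecting pairwise comparisons among the factors into a matrix and taking the normalized principal eigenvector as the priority weights. A minimal sketch in Python, using power iteration and purely illustrative comparison values (the factor names and numbers below are assumptions, not taken from the paper):

```python
def ahp_weights(matrix, iterations=100):
    """Approximate AHP priority weights as the normalized principal
    eigenvector of a pairwise-comparison matrix, via power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        # Multiply the matrix by the current weight vector, then renormalize.
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

# Hypothetical pairwise comparisons among three design-phase factors,
# e.g. encapsulation vs. coupling vs. inheritance depth. A[i][j] = 3
# means factor i is judged three times as important as factor j.
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
weights = ahp_weights(A)
```

The resulting weights sum to 1 and rank the factors by judged importance; a full AHP evaluation would also compute a consistency ratio to check that the pairwise judgements do not contradict each other.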
Towards a 'safe' use of design patterns to improve OO software testability
Design-for-testability is a very important issue in software engineering. It becomes crucial in the case of OO designs, where control flows are generally not hierarchical but are diffuse and distributed over the whole architecture. We introduce the concept of a "testing conflict", which arises when potentially concurrent client/supplier relationships exist between the same classes along different paths in a system. Such conflicts may be hard to test, especially when dynamic binding and polymorphism are involved. We describe the conflicts using topological class configuration diagrams. An overall architecture is represented as a combination of the initial design and several patterns. We focus on design patterns as coherent subsets of the architecture, and we explain how their use can limit the complexity of testing for conflicts and confine their effects to the classes involved in the pattern.
A survey on software testability
Context: Software testability is the degree to which a software system or a
unit under test supports its own testing. To predict and improve software
testability, a large number of techniques and metrics have been proposed by
both practitioners and researchers in the last several decades. Reviewing and
getting an overview of the entire state-of-the-art and state-of-the-practice in
this area is often challenging for a practitioner or a new researcher.
Objective: Our objective is to summarize the body of knowledge in this area and
to benefit the readers (both practitioners and researchers) in preparing,
measuring and improving software testability. Method: To address the above
need, the authors conducted a survey in the form of a systematic literature
mapping (classification) to find out what we as a community know about this
topic. After compiling an initial pool of 303 papers, and applying a set of
inclusion/exclusion criteria, our final pool included 208 papers. Results: The
area of software testability has been comprehensively studied by researchers
and practitioners. Approaches for measuring and improving testability are the
topics most frequently addressed in the papers. The two most often mentioned
factors affecting testability are observability and controllability.
Common ways to improve testability are testability transformation, improving
observability, adding assertions, and improving controllability. Conclusion:
This paper serves both researchers and practitioners as an "index" to the
vast body of knowledge in the area of testability. The results could help
practitioners measure and improve software testability in their projects.
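The improvement techniques the survey lists can be made concrete with a small example (a hypothetical sketch, not taken from any of the surveyed papers): injecting a dependency improves controllability, while a simple predicate and an assertion improve observability. All names below are illustrative.

```python
import datetime

class SessionTimer:
    """Expiry logic with an injectable clock for controllability."""

    def __init__(self, ttl_seconds, clock=datetime.datetime.utcnow):
        # Injecting `clock` lets a test control "now" instead of
        # depending on the real system time (controllability).
        assert ttl_seconds > 0, "ttl must be positive"  # assertion aids observability
        self.ttl = datetime.timedelta(seconds=ttl_seconds)
        self.clock = clock
        self.started_at = clock()

    def is_expired(self):
        # Expose the decision as a simple, observable predicate
        # instead of burying it inside other behaviour.
        return self.clock() - self.started_at > self.ttl

# A test can now drive time deterministically:
fake_now = [datetime.datetime(2024, 1, 1, 12, 0, 0)]
timer = SessionTimer(ttl_seconds=60, clock=lambda: fake_now[0])
assert not timer.is_expired()
fake_now[0] += datetime.timedelta(seconds=61)
assert timer.is_expired()
```

Without the injected clock, a test of the expiry path would have to sleep for real time or patch the system clock; the injection point is exactly the kind of testability transformation the survey refers to.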