8 research outputs found

    Survey of source code metrics for evaluating testability of object oriented systems

    Software testing is costly in terms of time and funds. Testability is a software characteristic that aims at producing systems that are easy to test. Several metrics have been proposed to identify testability weaknesses, but it is sometimes difficult to be convinced that those metrics are actually related to testability. This article is a critical survey of the source-code-based metrics proposed in the literature for object-oriented software testability. It underlines the need for testability metrics that are demonstrably intuitive and adequate for predicting testing cost.

    An annotated and classified bibliography of software metrics publications: 1988 to 1994

    With the growth of the software industry, the measurement of software plays an ever-increasing role. To help software metrics researchers and practitioners quickly identify the references of particular interest to them, over 60 of the many publications on software metrics that have appeared since 1988 are classified into four tables: (1) Metrics through the Life Cycle, (2) Classic Metrics, (3) Programming Language Metrics, and (4) New Metrics. Table 1 serves as a complete list of all the classified publications, while Tables 2, 3, and 4 are subsets of Table 1 that present more detailed information. The bibliographic reference section contains brief summaries of the publications in the classified tables. As a continuation of the 1988 survey by V. Cote, P. Bourque, S. Oligny and N. Rivard in the paper "Software metrics: an overview of recent results", this project was conducted to identify current trends in software metrics practice and to report how those trends have shifted since the 1988 paper by comparing the results of the two surveys. All table comparisons between the two surveys are given as percentages. We are fully aware that a survey can cover only part of the wealth of publications in the software metrics field, but we are confident that ours is a good indicator of practice in the field. [Abstract shortened by UMI]

    SOFTWARE TESTABILITY MEASURE FOR SAE ARCHITECTURE ANALYSIS AND DESIGN LANGUAGE (AADL)

    Testability is an important quality attribute of software, especially for critical systems such as avionics, medical, and automotive systems. Improving testability early, at the level of the software architecture, the first artifact of the software system, helps reduce issues and costs later in the development process. AADL, an architecture analysis and design language suitable for critical embedded, real-time systems, can be used for design documentation, analysis, and code generation. Because AADL's core language can be extended, new analyses can be added to it. Tools such as the Open Source AADL Tool Environment (OSATE) provide plugins for processing AADL models. Although adding new plugins to OSATE extends AADL, there is currently no AADL extension for testability measurement. The purpose of this thesis is to propose such a method for measuring the testability of AADL models and to develop a testability plugin for OSATE. Much research has been conducted on the testability of hardware, software, and embedded systems, resulting in several approaches for measuring this quality attribute. Among them, the approach that measures testability as the product of controllability and observability over an information transfer graph (ITG) is the most applicable to AADL models. This thesis proposes a method that applies this approach to AADL models. A complete testability-measurement plugin for OSATE was developed based on this approach, and detailed examples are given in the thesis to demonstrate its applicability.
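
    The ITG-based measure summarized above admits a compact illustration. The following is a minimal sketch, not the thesis's actual OSATE plugin: it assumes each component of an information transfer graph has already been assigned controllability and observability scores in [0, 1], and computes testability as their product; the names Component and testability are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class Component:
            """A node in a (hypothetical) information transfer graph (ITG)."""
            name: str
            controllability: float  # ease of driving the component's inputs, in [0, 1]
            observability: float    # ease of observing the component's outputs, in [0, 1]

        def testability(c: Component) -> float:
            """Testability modeled as the product of controllability and observability."""
            return c.controllability * c.observability

        # Toy ITG for an AADL-like model: three connected components.
        model = [
            Component("sensor",     controllability=0.9, observability=0.6),
            Component("controller", controllability=0.5, observability=0.4),
            Component("actuator",   controllability=0.7, observability=0.8),
        ]

        # Rank components: low scores flag hard-to-test parts of the architecture.
        for c in sorted(model, key=testability):
            print(f"{c.name:10s} testability = {testability(c):.2f}")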

    A survey on software testability

    Context: Software testability is the degree to which a software system or a unit under test supports its own testing. To predict and improve software testability, a large number of techniques and metrics have been proposed by both practitioners and researchers over the last several decades. Reviewing and getting an overview of the entire state of the art and state of the practice in this area is often challenging for a practitioner or a new researcher. Objective: Our objective is to summarize the body of knowledge in this area and to help readers (both practitioners and researchers) prepare, measure, and improve software testability. Method: To address this need, the authors conducted a survey in the form of a systematic literature mapping (classification) to find out what we as a community know about this topic. After compiling an initial pool of 303 papers and applying a set of inclusion/exclusion criteria, our final pool included 208 papers. Results: The area of software testability has been comprehensively studied by researchers and practitioners. Approaches for measuring and improving testability are the most frequently addressed in the papers. The two most often mentioned factors affecting testability are observability and controllability. Common ways to improve testability are testability transformation, improving observability, adding assertions, and improving controllability, as sketched below. Conclusion: This paper serves both researchers and practitioners as an "index" to the vast body of knowledge in the area of testability. The results can help practitioners measure and improve software testability in their projects.
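
    As a concrete, entirely hypothetical illustration of the improvement techniques the survey lists (not an example drawn from the surveyed papers), the sketch below injects a random-number generator to improve controllability, returns a value instead of printing it to improve observability, and adds an assertion over an internal invariant:

        import random

        # Before: hard to test. The random source is hidden (poor controllability)
        # and the result is only printed (poor observability).
        def roll_hidden():
            print(random.randint(1, 6))

        # After: more testable.
        def roll(rng=None) -> int:
            """rng is injected, so tests can pass a seeded generator (controllability);
            the value is returned, so tests can inspect it (observability)."""
            if rng is None:
                rng = random.Random()
            value = rng.randint(1, 6)
            assert 1 <= value <= 6, "die roll out of range"  # checkable invariant
            return value

        # A deterministic test becomes trivial:
        assert roll(random.Random(42)) == roll(random.Random(42))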

    Évaluation de l'impact de l'introduction des aspects sur la testabilité des programmes [Evaluation of the impact of introducing aspects on program testability]


    Orientation de l'effort des tests unitaires dans les systèmes orientés objet : une approche basée sur les métriques logicielles [Directing unit testing effort in object-oriented systems: an approach based on software metrics]

    Current software systems are large, complex, and critical. The need for quality requires extensive testing, which consumes a large amount of resources during the development and maintenance of these systems. Various techniques exist to reduce the costs of testing activities. Our work falls within this context: it aims to direct the unit testing effort toward the riskiest software components using source code attributes. Through several empirical studies conducted on large object-oriented open source software systems, we identified and studied metrics that characterize the unit testing effort from different perspectives. We also studied the relationships between this testing effort and software class metrics, including quality indicators. Quality indicators are a synthetic metric, introduced in our previous work, that captures control flow as well as various characteristics of the software. We explored several techniques for directing the testing effort toward at-risk components based on these source code attributes, using machine learning algorithms. By grouping software metrics into families, we proposed an approach based on risk analysis of software classes. The results we obtained show the relationships between unit testing effort and source code attributes, including quality indicators, and suggest that the testing effort can be directed using these metrics.
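
    To illustrate the kind of metrics-based orientation the abstract describes, here is a minimal sketch under stated assumptions, not the thesis's actual models or data: toy values of the standard object-oriented metrics LOC, WMC, and CBO are used to train a scikit-learn classifier that ranks classes by predicted unit-testing effort; the class names and all metric values are fabricated for illustration.

        # Rank classes by predicted risk of high unit-testing effort
        # from class-level source code metrics (a hypothetical example).
        from sklearn.ensemble import RandomForestClassifier

        # Toy training data: one row per class = [LOC, WMC, CBO];
        # label 1 = class historically required high unit-testing effort.
        X = [[120, 15, 9], [40, 3, 2], [300, 40, 14], [80, 8, 5], [250, 30, 11], [30, 2, 1]]
        y = [1, 0, 1, 0, 1, 0]

        model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

        # Score unseen classes and sort by predicted risk, highest first,
        # to direct the unit testing effort toward the riskiest components.
        candidates = {"PaymentService": [280, 35, 12], "StringUtils": [60, 5, 2]}
        ranked = sorted(candidates,
                        key=lambda n: model.predict_proba([candidates[n]])[0][1],
                        reverse=True)
        print(ranked)  # classes to test first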