
    MISSED: an environment for mixed-signal microsystem testing and diagnosis

    A tight link between design and test data is proposed for speeding up test-pattern generation and diagnosis during mixed-signal prototype verification. Test requirements are incorporated as early as the behavioral level and are specified in increasing detail at lower hierarchical levels. A strict separation between generic routines and implementation data makes software reuse possible. A testability-analysis tool and test and DFT libraries support the designer in guaranteeing testability. Hierarchical backtrace procedures, in combination with an expert system and fault libraries, assist the designer during mixed-signal chip debugging.
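
    The hierarchical backtrace idea lends itself to a small illustration. The Python sketch below is a hypothetical rendition, not the MISSED implementation: starting from a failing top-level test, it descends only into sub-blocks whose own tests fail and consults a toy fault library at the leaves. All names (Block, FAULT_LIBRARY, the ADC hierarchy) are invented for illustration.

        class Block:
            def __init__(self, name, test, children=()):
                self.name = name          # instance name in the design hierarchy
                self.test = test          # callable () -> bool; True means "passes"
                self.children = list(children)

        # Toy fault library: maps a failing leaf block to known failure causes.
        FAULT_LIBRARY = {
            "bias_dac": ["reference drift", "stuck code"],
            "comparator": ["offset out of spec"],
        }

        def backtrace(block):
            """Collect suspected leaves by descending only into failing sub-blocks."""
            if block.test():
                return []                      # this subtree verifies; prune it
            if not block.children:             # failing leaf: consult fault library
                return [(block.name, FAULT_LIBRARY.get(block.name, ["unknown"]))]
            suspects = []
            for child in block.children:
                suspects.extend(backtrace(child))
            return suspects

        # Example hierarchy: the ADC fails because its bias DAC fails.
        adc = Block("adc", lambda: False, [
            Block("bias_dac", lambda: False),
            Block("comparator", lambda: True),
        ])
        print(backtrace(adc))  # [('bias_dac', ['reference drift', 'stuck code'])]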

    Design for validation: An approach to systems validation

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and the system life cycle) are provided, and it is shown how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

    SOFTWARE TESTABILITY MEASURE FOR SAE ARCHITECTURE ANALYSIS AND DESIGN LANGUAGE (AADL)

    Testability is an important quality attribute of software, especially for critical systems such as avionics, medical, and automotive systems. Improving testability early, at the level of the software architecture (the first artifact of the software system), helps reduce issues and costs later in the development process. AADL, an architecture analysis and design language suitable for critical embedded, real-time systems, can be used for design documentation, analysis, and code generation. Because the capabilities of AADL can be extended, it is possible to add new analyses to its core language. Tools such as the Open Source AADL Tool Environment (OSATE) provide plugins for processing AADL models. Although adding new plugins to OSATE extends AADL, there currently exists no AADL extension for testability measurement. The purpose of this thesis is to propose a method for measuring the testability of AADL models and to develop a testability plugin for OSATE. Much research has been conducted on the testability of hardware, software, and embedded systems, resulting in several approaches for measuring this quality attribute. Among them, the approach that measures testability as the product of controllability and observability using an information transfer graph (ITG) is the most applicable to AADL models. This thesis proposes a method that applies this approach to AADL models. A complete testability-measure plugin for OSATE was developed based on this approach, and detailed examples are given in the thesis to demonstrate its applicability.
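
    The measure the thesis builds on, testability as the product of controllability and observability, can be sketched on a toy information transfer graph. The propagation rule below (a fixed attenuation per edge, keeping the best path) is an illustrative assumption rather than the thesis's exact ITG computation, and all node names are hypothetical.

        GRAPH = {                    # node -> successor nodes in the toy ITG
            "in":  ["f1"],
            "f1":  ["f2", "f3", "dbg"],
            "f2":  ["out"],
            "f3":  ["out"],
            "dbg": [],               # internal node with no path to an output
            "out": [],
        }
        INPUTS, OUTPUTS = {"in"}, {"out"}
        ATTENUATION = 0.9            # assumed information loss per edge

        def reverse(graph):
            rev = {n: [] for n in graph}
            for n, succs in graph.items():
                for s in succs:
                    rev[s].append(n)
            return rev

        def scores(graph, sources):
            """Best-path score from any source, decaying per edge traversed."""
            score = {n: (1.0 if n in sources else 0.0) for n in graph}
            for _ in graph:                    # relax |V| times; graph is tiny
                for n, succs in graph.items():
                    for s in succs:
                        score[s] = max(score[s], score[n] * ATTENUATION)
            return score

        controllability = scores(GRAPH, INPUTS)          # forward from inputs
        observability = scores(reverse(GRAPH), OUTPUTS)  # backward from outputs

        for node in GRAPH:           # "dbg" scores 0: controllable, unobservable
            print(node, round(controllability[node] * observability[node], 3))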

    A SOFTWARE TESTING ASSESSMENT TO MANAGE PROJECT TESTABILITY

    The demand for testing services is, to a large extent, a "derived demand" influenced directly by the manner in which earlier development activities are undertaken. The early stages of a structured software development life cycle (SDLC) project can often run behind schedule, shrinking the time available for adequate testing, especially when software release deadlines have to be met. This situation creates the need to influence pre-testing activities and to manage the testing effort efficiently. Our research examines how to measure the testability of an SDLC project before testing begins. It builds on the "design for testability" perspective by introducing a "manage for testability" perspective. Software testability focuses on whether the activities of the SDLC process are progressing in ways that enable the testing team to find software product defects if they exist. To address this challenge, we develop a software testing assessment. This assessment is designed to provide testing managers with the information needed to: (1) influence pre-testing activities in ways that ultimately increase testing efficiency and effectiveness, and (2) plan testing resources for efficient and effective testing. We developed specific software testing assessment measures through interviews with key informants. We present data collected for the measures on large-scale structured software development projects to illustrate the assessment's usefulness and application.

    A survey on software testability

    Context: Software testability is the degree to which a software system or a unit under test supports its own testing. To predict and improve software testability, a large number of techniques and metrics have been proposed by both practitioners and researchers over the last several decades. Reviewing and getting an overview of the entire state of the art and state of the practice in this area is often challenging for a practitioner or a new researcher. Objective: Our objective is to summarize the body of knowledge in this area and to help readers (both practitioners and researchers) prepare for, measure, and improve software testability. Method: To address this need, the authors conducted a survey in the form of a systematic literature mapping (classification) to find out what we as a community know about this topic. After compiling an initial pool of 303 papers and applying a set of inclusion/exclusion criteria, our final pool included 208 papers. Results: The area of software testability has been comprehensively studied by researchers and practitioners. Approaches for measuring and improving testability are the most frequently addressed topics in the papers. The two most often mentioned factors affecting testability are observability and controllability. Common ways to improve testability are testability transformation, improving observability, adding assertions, and improving controllability. Conclusion: This paper serves both researchers and practitioners as an "index" to the vast body of knowledge in the area of testability. The results can help practitioners measure and improve software testability in their projects.
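
    Two of the improvement techniques the survey names, improving controllability and adding assertions, can be shown in a few lines. The Python sketch below is a generic illustration, not taken from any surveyed paper: a hidden time dependency is injected so a test can control it, and an assertion exposes an internal condition to observation.

        import time

        # Hard to test: the time source is hidden, so a test cannot control it.
        def is_expired_hard(deadline):
            return time.time() > deadline

        # More testable: the clock is injected (controllability), and an assertion
        # surfaces an internal consistency condition (observability).
        def is_expired(deadline, now=None):
            now = time.time() if now is None else now
            assert deadline >= 0, "deadline must be a non-negative timestamp"
            return now > deadline

        # A test can now pin the clock to any instant it likes.
        assert is_expired(deadline=100.0, now=101.0) is True
        assert is_expired(deadline=100.0, now=99.0) is False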

    Survey of source code metrics for evaluating testability of object oriented systems

    Software testing is costly in terms of time and funds. Testability is a software characteristic that aims at producing systems that are easy to test. Several metrics have been proposed to identify testability weaknesses, but it is sometimes difficult to be convinced that those metrics are really related to testability. This article is a critical survey of the source-code-based metrics proposed in the literature for object-oriented software testability. It underlines the need for testability metrics that are proven to be intuitive and adequate for testing-cost prediction.
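
    As a concrete illustration of the kind of source-code metric such surveys cover, the sketch below counts two simple class-level quantities often linked to testing cost, method count and fan-out, using Python's ast module. The metric definitions here are simplified stand-ins, not the specific metrics the article surveys.

        import ast

        SOURCE = '''
        class Account:
            def deposit(self, n): self.log(n); self.balance += n
            def withdraw(self, n): self.log(-n); self.balance -= n
            def log(self, n): print(n)
        '''

        def class_metrics(source):
            """Per-class method count and fan-out (distinct attribute callees)."""
            metrics = {}
            for node in ast.walk(ast.parse(source)):
                if isinstance(node, ast.ClassDef):
                    methods = [m for m in node.body
                               if isinstance(m, ast.FunctionDef)]
                    callees = {c.func.attr for c in ast.walk(node)
                               if isinstance(c, ast.Call)
                               and isinstance(c.func, ast.Attribute)}
                    metrics[node.name] = {"methods": len(methods),
                                          "fan_out": len(callees)}
            return metrics

        # Example run on the snippet above.
        print(class_metrics(SOURCE))  # {'Account': {'methods': 3, 'fan_out': 1}}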

    A research review of quality assessment for software

    Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored, and methods of evaluating those factors are discussed. The quality factors important to software reuse are correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because documented facts about a software component, such as its efficiency, portability, and development history, constitute a class of factors that are important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. There are a few quantitative measures that have been shown to indicate software quality. However, it is believed that many factors indicating quality have not been empirically validated because of their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.

    Definition, implementation and validation of energy code smells: an exploratory study on an embedded system

    Optimizing software for energy efficiency is one of the challenges that both research and industry will have to face in the next few years. We consider energy efficiency a software product quality characteristic, to be improved through the refactoring of appropriate code patterns: the aim of this work is to identify those code patterns, defined here as Energy Code Smells, that might increase the impact of software on power consumption. For this purpose, we perform an experiment consisting of the execution of several code patterns on an embedded system. Each code pattern is executed in two versions: the first contains a code issue that could negatively impact power consumption; the other is refactored to remove the issue. We measure the power consumption of the embedded device during the execution of each code pattern. We also track the execution time to investigate whether Energy Code Smells are also Performance Smells. Our results show that some Energy Code Smells do have an impact on power consumption, on the order of microwatts. Moreover, those Smells did not introduce a performance decrease.
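
    The paired-pattern setup can be sketched as follows. This is a hypothetical illustration in Python rather than the study's actual benchmark code: the same workload appears in a smelly version (loop-invariant work redone on every iteration) and a refactored one, with execution time tracked as the performance proxy. Actual power consumption would be measured on the device with external instrumentation.

        import time

        DATA = list(range(200_000))

        def smelly():
            total = 0
            for x in DATA:
                limit = sum((1, 2, 3, 4, 5))   # invariant, recomputed every pass
                if x % limit == 0:
                    total += x
            return total

        def refactored():
            limit = sum((1, 2, 3, 4, 5))       # hoisted out of the loop
            return sum(x for x in DATA if x % limit == 0)

        # Run both versions and report result plus elapsed wall-clock time.
        for fn in (smelly, refactored):
            t0 = time.perf_counter()
            result = fn()
            print(fn.__name__, result, f"{time.perf_counter() - t0:.4f}s")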

    Software Users Manual (SUM): Extended Testability Analysis (ETA) Tool

    This software user manual describes the implementation and use of the Extended Testability Analysis (ETA) Tool. The ETA Tool is a software program that augments the analysis and reporting capabilities of a commercial off-the-shelf (COTS) testability analysis package called the Testability Engineering And Maintenance System (TEAMS) Designer. An initial diagnostic assessment is performed by the TEAMS Designer software using a qualitative, directed-graph model of the system being analyzed. The ETA Tool uses system design information captured within the diagnostic model, together with testability analysis output from TEAMS Designer, to create a series of six reports for various systems engineering needs. The ETA Tool also lets the user perform additional studies on the testability analysis results by determining the sensitivity of fault detection to the loss of particular sensors or tests. The ETA Tool was developed to support design and development of the NASA Ares I Crew Launch Vehicle, where its diagnostic analysis proved to be valuable systems engineering output that brought consistency to the verification of systems engineering requirements. This manual describes each output report generated by the ETA Tool, as well as the example diagnostic model and supporting documentation, also provided with the ETA Tool software release package, that were used to generate the reports presented in the manual.
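
    The sensor-loss sensitivity study the ETA Tool supports can be illustrated with a toy qualitative model. The sketch below is hypothetical (the fault and sensor names are invented, and real TEAMS Designer models are far richer): it recomputes fault-detection coverage with each test removed in turn.

        # Toy fault-to-test reachability model: test/sensor -> detectable faults.
        DETECTS = {
            "pressure_sensor": {"leak", "valve_stuck"},
            "temp_sensor":     {"overheat"},
            "flow_sensor":     {"leak"},
        }
        FAULTS = {"leak", "valve_stuck", "overheat"}

        def coverage(tests):
            """Fraction of faults detected by the given set of tests."""
            detected = set().union(*(DETECTS[t] for t in tests)) if tests else set()
            return len(detected & FAULTS) / len(FAULTS)

        print(f"baseline coverage: {coverage(DETECTS):.0%}")
        for lost in DETECTS:                     # drop each sensor in turn
            remaining = [t for t in DETECTS if t != lost]
            print(f"without {lost}: {coverage(remaining):.0%}")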