
    A SOFTWARE TESTING ASSESSMENT TO MANAGE PROJECT TESTABILITY

    The demand for testing services is, to a large extent, a "derived demand" influenced directly by the manner in which earlier development activities are undertaken. The early stages of a structured software development life cycle (SDLC) project often run behind schedule, shrinking the time available for adequate testing, especially when software release deadlines must be met. This situation creates a need to influence pre-testing activities and to manage the testing effort efficiently. Our research examines how to measure the testability of an SDLC project before testing begins. It builds on the "design for testability" perspective by introducing a "manage for testability" perspective. Software testability here concerns whether the activities of the SDLC process are progressing in ways that enable the testing team to find software product defects if they exist. To address this challenge, we develop a software testing assessment designed to provide testing managers with the information needed to: (1) influence pre-testing activities in ways that ultimately increase testing efficiency and effectiveness, and (2) plan testing resources for efficient and effective testing. We developed specific assessment measures through interviews with key informants, and we present data collected for those measures on large-scale structured software development projects to illustrate the assessment's usefulness and application.
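    As an illustration of what such an assessment might look like in practice, here is a minimal sketch of a pre-testing testability scorecard in Python; the measure names, weights, and 1-to-5 scoring scale are assumptions for illustration, not the measures the authors elicited from their key informants.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    weight: float   # relative importance; weights need not sum to 1
    score: int      # assessor rating on a 1 (poor) to 5 (strong) scale

def testability_index(measures: list[Measure]) -> float:
    """Weighted average of measure scores, normalised to the 0..1 range."""
    total_weight = sum(m.weight for m in measures)
    raw = sum(m.weight * m.score for m in measures) / total_weight
    return (raw - 1) / 4  # map the 1..5 scale onto 0..1

# Hypothetical pre-testing measures a test manager might rate:
scorecard = [
    Measure("Requirements are traceable to test cases", 0.4, 3),
    Measure("Design documents are complete and current", 0.3, 4),
    Measure("Defect-prone modules are identified early", 0.3, 2),
]
print(f"Testability index: {testability_index(scorecard):.2f}")  # 0.50
```

    A low index would signal that pre-testing activities need intervention before the testing window opens.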

    A Testability Analysis Framework for Non-Functional Properties

    This paper presents the background, the basic steps, and an example of a testability analysis framework for non-functional properties.

    A research review of quality assessment for software

    Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored, and methods of evaluating those factors are discussed. The quality factors important to software reuse are: correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because documented facts about a software component, such as its efficiency, portability, and development history, constitute a class of factors that are important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. A few quantitative measures have been shown to indicate software quality; however, many factors believed to indicate quality have not been empirically validated because of their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.
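    To make the certifiability idea concrete, here is a minimal sketch, under assumed field names, of how documented facts about a component might be recorded so that each reuser applies their own criteria rather than a fixed quality judgement; the record contents are hypothetical.

```python
# A component record stores documented facts (efficiency, portability,
# development history) without judging them; each reuser decides which
# facts matter. All names and values below are illustrative assumptions.
component_record = {
    "name": "matrix_ops",
    "language": "Ada",
    "development_history": ["v1.0 1989-04", "v1.1 1990-01"],
    "portability": {"compilers_tested": ["VADS", "DEC Ada"]},
    "efficiency": {"benchmark": "1000x1000 multiply", "seconds": 12.4},
}

def certifiable(record: dict, required_fields: set[str]) -> bool:
    """Reuser-specific check: the component is 'certifiable' for this
    user if it documents every fact that this user cares about."""
    return required_fields <= record.keys()

print(certifiable(component_record, {"portability", "efficiency"}))  # True
print(certifiable(component_record, {"formal_proof"}))               # False
```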

    A Reference Model for Software and System Inspections. White Paper

    Software Quality Assurance (SQA) is an important component of the software development process. SQA processes provide assurance that the software products and processes in the project life cycle conform to their specified requirements by planning, enacting, and performing a set of activities that provide adequate confidence that quality is being built into the software. Typical techniques include: (1) testing, (2) simulation, (3) model checking, (4) symbolic execution, (5) management reviews, (6) technical reviews, (7) inspections, (8) walk-throughs, (9) audits, (10) analysis (complexity analysis, control flow analysis, algorithmic analysis), and (11) formal methods. Our work over the last few years has resulted in substantial knowledge about SQA techniques, especially in the areas of technical reviews and inspections. But can we apply the same QA techniques to the system development process? If yes, what kind of tailoring is needed before applying them in the systems engineering context? If not, what types of QA techniques are actually used at the system level? And is there any room for improvement? After a brief examination of the systems engineering literature (focused especially on NASA and DoD guidance), we found that: (1) the system and software development processes interact with each other at different phases throughout the development life cycle; (2) reviews are emphasized in both system and software development, and for some reviews (e.g. SRR, PDR, CDR) there are both system and software versions; (3) analysis techniques are emphasized (e.g. Fault Tree Analysis, Preliminary Hazard Analysis), with some detail given on how to apply them; and (4) reviews are expected to use the outputs of the analysis techniques; in other words, these analyses are usually conducted in preparation for (before) reviews. The goal of our work is to explore the interaction between quality assurance (QA) techniques at the system level and the software level.
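    As a concrete instance of one technique named in the list above, complexity analysis, the sketch below computes McCabe's cyclomatic complexity, M = E - N + 2P, for a control-flow graph with E edges, N nodes, and P connected components; the example graph is hypothetical.

```python
# Cyclomatic complexity from a control-flow graph: M = E - N + 2P.
def cyclomatic_complexity(edges: list[tuple[str, str]], nodes: set[str],
                          components: int = 1) -> int:
    return len(edges) - len(nodes) + 2 * components

# Control-flow graph of a function containing a single if/else branch:
nodes = {"entry", "cond", "then", "else", "exit"}
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, nodes))  # 5 - 5 + 2*1 = 2
```

    A complexity of 2 matches the intuition that one decision point yields two independent paths through the function.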

    A Case Study on Artefact-based RE Improvement in Practice

    Most requirements engineering (RE) process improvement approaches are solution-driven and activity-based. They focus on assessing a company's RE against an external norm of best practices. A consequence is that practitioners often have to rely on an improvement approach that skips a profound problem analysis and results in an RE approach that may be alien to the organisation's needs. In recent years, we have developed an RE improvement approach (called ArtREPI) that guides a holistic RE improvement against a company's individual goals, putting primary attention on the quality of the artefacts. In this paper, we aim to explore ArtREPI's benefits and limitations. We contribute an industrial evaluation of ArtREPI based on case study research. Our results suggest that ArtREPI is well suited to establishing an RE that reflects a specific organisational culture, but to some extent at the cost of efficiency, owing to intensive discussions on a terminology that suits all involved stakeholders. Our results reveal first benefits and limitations, but we also conclude that longitudinal and independent investigations are needed, for which we herewith lay the foundation.

    ISO-Standardized Requirements Activities for Very Small Entities

    The use of software engineering standards can promote recognized and valuable engineering practices in Very Small Entities (VSEs), but these standards do not fit VSEs' needs. The ISO/IEC Working Group 24 (WG24) is developing the ISO/IEC 29110 standard, Lifecycle Profiles for Very Small Entities; the standard is due for approval in June 2010. A pilot project on the use of ISO 29110 was established between our software engineering group and a 14-person company that builds and sells counting systems measuring visitor traffic at public and private sites. The pilot project aims to help the VSE deliver the Software Requirements Specification, Test Cases, and Test Procedures for a new web-based system intended to manage fleets of counting systems. As the project progressed, it became apparent that the 29110 set of documents was not up to the task of sustaining this VSE in its engineering activities. We supported the VSE in two ways: (i) a training session based on the 29110 Requirements Analysis activity, and (ii) self-training packages, a set of resources intended to develop experience and skills in requirements identification and software requirements specification (SRS). Our inspiration stems from the 15504-5 standard, with the aim of providing software engineers with an exemplar set of base practices defining the tasks and activities needed to fulfil the process (e.g. requirements) outcomes; each task definition is collected on a task card, as sketched below. The results of this pilot study provide the VSE with a roadmap through the requirements activity that is compatible with the ISO/IEC 29110 standard.
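    The following is a minimal sketch of the task-card idea: one record per base practice, stating the task and the outcomes it helps fulfil. The field names and sample content are illustrative assumptions, not actual ISO/IEC 29110 or 15504-5 wording.

```python
from dataclasses import dataclass, field

# Hypothetical task card: fields and content are assumptions for
# illustration, not taken from the 29110 or 15504-5 documents.
@dataclass
class TaskCard:
    practice: str
    description: str
    inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)

card = TaskCard(
    practice="Identify requirements",
    description="Elicit and record stakeholder requirements for the system.",
    inputs=["Statement of work", "Stakeholder interviews"],
    outputs=["Draft Software Requirements Specification (SRS)"],
)
print(card.practice, "->", card.outputs[0])
```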