2 research outputs found

    A survey on software testability

    Context: Software testability is the degree to which a software system or a unit under test supports its own testing. To predict and improve software testability, a large number of techniques and metrics have been proposed by both practitioners and researchers in the last several decades. Reviewing and getting an overview of the entire state-of-the-art and state-of-the-practice in this area is often challenging for a practitioner or a new researcher. Objective: Our objective is to summarize the body of knowledge in this area and to benefit readers (both practitioners and researchers) in preparing for, measuring, and improving software testability. Method: To address the above need, the authors conducted a survey in the form of a systematic literature mapping (classification) to find out what we as a community know about this topic. After compiling an initial pool of 303 papers and applying a set of inclusion/exclusion criteria, our final pool included 208 papers. Results: The area of software testability has been comprehensively studied by researchers and practitioners. Approaches for measuring and improving testability are the most frequently addressed topics in the papers. The two most often mentioned factors affecting testability are observability and controllability. Common ways to improve testability are testability transformation, improving observability, adding assertions, and improving controllability. Conclusion: This paper serves both researchers and practitioners as an "index" to the vast body of knowledge in the area of testability. The results could help practitioners measure and improve software testability in their projects.
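
    The survey's list of improvement techniques (controllability, observability, assertions) can be made concrete with a small sketch. The following Python example is illustrative only and is not drawn from any of the surveyed papers; the Clock, FixedClock, and DiscountService names are hypothetical. An injected clock gives tests control over an otherwise hidden input (controllability), the exposed last_rate attribute lets tests inspect internal state (observability), and an assertion documents an invariant the tests can rely on.

        import datetime

        class Clock:
            """Injectable time source: making time an explicit dependency
            improves controllability, because tests can supply fixed values."""
            def now(self) -> datetime.datetime:
                return datetime.datetime.now()

        class FixedClock(Clock):
            """Test double that always returns a predetermined instant."""
            def __init__(self, instant: datetime.datetime) -> None:
                self.instant = instant

            def now(self) -> datetime.datetime:
                return self.instant

        class DiscountService:
            """Unit under test; clock injection and last_rate exist to aid testing."""
            def __init__(self, clock: Clock) -> None:
                self.clock = clock
                self.last_rate = None  # exposed internal state (observability)

            def rate(self) -> float:
                # Weekend purchases get a higher discount in this toy example.
                is_weekend = self.clock.now().weekday() >= 5
                self.last_rate = 0.15 if is_weekend else 0.05
                assert 0.0 <= self.last_rate <= 1.0  # documents the invariant
                return self.last_rate

        # A test controls the input and observes the outcome deterministically.
        saturday = datetime.datetime(2024, 6, 1, 12, 0)
        service = DiscountService(FixedClock(saturday))
        assert service.rate() == 0.15
        assert service.last_rate == 0.15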

    Investigating Communicability and Testability with the Signifying APIs tool

    Given the diversity of information systems today, communication between services requires that APIs be well designed and understood by both service producers and consumers. Poorly documented APIs lead to misunderstandings by developer and tester teams, who end up designing ineffective test cases. As a result, they may produce software with low quality and avoidable errors. This study investigates the ability of the SigniFYIng APIs tool to support the testability of applications that consume APIs. In this paper, we propose a process to support APIs' testability with the SigniFYIng APIs tool. We validated the process with a real case study based on two Brazilian federal government APIs: the leniency agreement API and the federal servants API. As a result, it was possible to develop better test cases for the chosen APIs, bringing evidence that the proposed process can support designing more suitable test cases for APIs and improving the testability of the software to be produced.
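
    To make the idea of "more suitable test cases" for APIs tangible, here is a minimal pytest-style sketch in Python. It is not taken from the paper and does not use the SigniFYIng APIs tool; the base URL, endpoint path, and field names are hypothetical placeholders for whatever the target API's documentation specifies. The point is the shape of the test: it asserts on the documented contract of the response (status, structure, required fields) rather than on incidental data.

        import requests

        # Hypothetical base URL and route; substitute the documented paths of the
        # real leniency agreement API before running this against anything.
        BASE_URL = "https://api.example.gov.br"

        def fetch_leniency_agreements(page: int = 1) -> list:
            """Query a paginated leniency-agreement listing (illustrative only)."""
            response = requests.get(
                f"{BASE_URL}/leniency-agreements",
                params={"page": page},
                timeout=10,
            )
            response.raise_for_status()
            return response.json()

        def test_leniency_agreements_have_expected_fields():
            """Checks the contract of the response rather than specific values;
            the field names below are assumptions, not the API's real schema."""
            agreements = fetch_leniency_agreements(page=1)
            assert isinstance(agreements, list)
            for agreement in agreements:
                assert "id" in agreement
                assert "sanctionedCompany" in agreement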