
    Private API Access and Functional Mocking in Automated Unit Test Generation

    Not all object-oriented code is easily testable: dependency objects might be difficult or even impossible to instantiate, and object-oriented encapsulation makes even potentially simple code difficult to test if it cannot easily be accessed. When this happens, developers can resort to mock objects that simulate the complex dependencies, or circumvent encapsulation and access private APIs directly, for example through Java reflection. Can automated unit test generation benefit from these techniques as well? In this paper we investigate this question by extending the EvoSuite unit test generation tool with the ability to directly access private APIs and to create mock objects using the popular Mockito framework. However, care needs to be taken that this does not impact the usefulness of the generated tests: for example, a test accessing a private field could later fail if that field is renamed, even if the renaming is part of a semantics-preserving refactoring. Such a failure does not reveal a true regression bug; it is a false positive that wastes the developer's time investigating and fixing the test. Our experiments on the SF110 and Defects4J benchmarks confirm the anticipated improvements in code coverage and bug finding, but also confirm the existence of false positives. However, by ensuring the test generator uses mocking and reflection only when there is no other way to reach some part of the code, their number remains small.
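
    To make the two techniques concrete, here is a minimal sketch of what a generated test using them could look like. It is not EvoSuite output; the Account and RateService types are hypothetical stand-ins.

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.lang.reflect.Field;
import org.junit.Test;

public class AccountTest {

    // Hypothetical class under test: private state plus a dependency.
    static class Account {
        private int balance = 100;
        int balanceVia(RateService rates) { return balance * rates.factor(); }
    }

    interface RateService { int factor(); }

    @Test
    public void readsPrivateStateAndMocksDependency() throws Exception {
        Account account = new Account();

        // Circumvent encapsulation: read the private field via reflection.
        Field f = Account.class.getDeclaredField("balance");
        f.setAccessible(true);
        assertEquals(100, f.getInt(account));

        // Simulate the dependency instead of instantiating a real one.
        RateService rates = mock(RateService.class);
        when(rates.factor()).thenReturn(2);
        assertEquals(200, account.balanceVia(rates));
    }
}
```

    Note that if balance were renamed in a semantics-preserving refactoring, the reflective lookup would throw NoSuchFieldException and the test would fail. That is exactly the kind of false positive the paper measures.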

    An anti-malware product test orchestration solution for multiple pluggable environments

    The term automation gets thrown around a lot these days in the software industry. However, the recent change in test automation in the software engineering process is driven by multiple factors: environmental factors, both external and internal, as well as industry-driven factors. Put simply, automation is the use of technology to perform a task. The choice of the right tools, be they in-house or third-party software, can increase the effectiveness, efficiency, and coverage of security product testing. Test environments are often maintained at various stages in the testing process: developer test, dedicated test, integration test, and pre-production or business-readiness test are some common terms in software testing. At the same time, abstraction is often introduced between architectural layers, and the providers of virtualization platforms used as test execution environments, such as VMware, OpenStack, and AWS, keep changing, each with a different state of maintainability. As there is an obvious mismatch in configuration between development, testing, and production environments, the software testing process is often slow and tedious for many organizations due to the lack of collaboration between IT Operations and Software Development teams. Because of this, identifying and addressing test environment-related compatibility issues becomes a major concern for QA teams. In this context, this thesis presents a DevOps approach and an implementation of an automated test execution solution named OneTA that can interact with multiple test environments, including isolated malware test environments. The study identifies a common way of preparing test environments on in-house and publicly available virtualization platforms where distributed tests can run on a regular basis. The resulting solution allows security product testing in multiple pluggable environments in a single setup, utilizing modern DevOps practice to minimize effort. This thesis project was carried out in collaboration with F-Secure, a leading cyber security company in Finland. The project deals with the company's internal environments for test execution and explores the available infrastructure so that the software development team can use the solution as a test execution tool.
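
    A sketch of the kind of pluggable-environment abstraction such a solution implies, assuming a simple provision/run/teardown lifecycle; the interface and class names are invented for illustration and are not OneTA's actual API.

```java
import java.util.Map;

// Hypothetical plug-in contract: each backend (VMware, OpenStack, AWS,
// an isolated malware lab) sits behind the same lifecycle.
interface TestEnvironment {
    void provision(Map<String, String> config);
    int runSuite(String suiteId);   // exit code of the distributed test run
    void teardown();
}

// Trivial local stand-in showing the shape of one plug-in.
class LocalEnvironment implements TestEnvironment {
    @Override public void provision(Map<String, String> config) {
        System.out.println("provisioning local sandbox: " + config);
    }
    @Override public int runSuite(String suiteId) {
        System.out.println("running suite " + suiteId);
        return 0; // success
    }
    @Override public void teardown() {
        System.out.println("tearing down sandbox");
    }
}
```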

    The State of Practice for Security Unit Testing: Towards Data Driven Strategies to Shift Security into Developer's Automated Testing Workflows

    The pressing need to “shift security left” in the software development lifecycle has motivated efforts to adapt the iterative and continuous process models used in practice today. Security unit testing is praised by practitioners and recommended by expert groups, usually in the context of DevSecOps and achieving “continuous security”. In addition to vulnerability testing and standards adherence, this technique can help developers verify that security controls are implemented correctly, i.e., functional security testing. Further, the means by which security unit testing can be integrated into developer workflows is unlike that of standalone tools, as it adapts practices and infrastructure developers are already familiar with. Yet, software engineering researchers have so far failed to include this technique in their empirical studies on secure development, and little is known about the state of practice for security unit testing. This dissertation is motivated by the disconnect between the promotion of security unit testing and the lack of empirical evidence on how it is and can be applied. The goal of this work was to address this disconnect by identifying actionable strategies to promote wider adoption and mitigate observed challenges. Three mixed-method empirical studies were conducted wherein practitioner-authored unit test code, Q&A posts, and grey literature were analyzed through three lenses: Practices (what they do), Perspectives and Guidelines (what and how they think it should be done), and Pain Points (what challenges they face), to incorporate both the technical and human factors of this phenomenon. Accordingly, this work contributes novel and important insights into how developers write functional unit tests for at least nine security controls, including a taxonomy of 53 authentication unit test cases derived from real code and a detailed analysis of seven unique pain points that developers seek help with from peers on Q&A sites. Recommendations given herein for conducting and adopting security unit testing, including mitigating challenges and addressing gaps between available and needed support, are grounded in the guidelines and perspectives on the benefits, limitations, use cases, and integration strategies shared in grey literature authored by practitioners.
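
    As an illustration of functional security unit testing, the hedged JUnit sketch below verifies an account-lockout control rather than hunting for vulnerabilities; the AuthService class is a hypothetical stand-in, not taken from the dissertation's taxonomy.

```java
import static org.junit.Assert.assertFalse;

import org.junit.Test;

public class LockoutPolicyTest {

    // Hypothetical service implementing a lockout security control.
    static class AuthService {
        private static final int MAX_FAILURES = 3;
        private int failures = 0;

        boolean login(String user, String password) {
            if (failures >= MAX_FAILURES) return false;   // account locked
            boolean ok = "s3cret".equals(password);
            failures = ok ? 0 : failures + 1;
            return ok;
        }
    }

    @Test
    public void lockoutAfterThreeFailedAttempts() {
        AuthService auth = new AuthService();
        for (int i = 0; i < 3; i++) {
            assertFalse(auth.login("alice", "wrong"));
        }
        // The correct password must now be rejected: the test exercises
        // the security control itself, not the credential check.
        assertFalse(auth.login("alice", "s3cret"));
    }
}
```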

    StubCoder: Automated Generation and Repair of Stub Code for Mock Objects

    Mocking is an essential unit testing technique for isolating the class under test (CUT) from its dependencies. Developers often leverage mocking frameworks to develop stub code that specifies the behaviors of mock objects. However, developing and maintaining stub code is labor-intensive and error-prone. In this paper, we present StubCoder, which automatically generates and repairs stub code for regression testing. StubCoder implements a novel evolutionary algorithm that synthesizes test-passing stub code guided by the runtime behavior of test cases. We evaluated the proposed approach on 59 test cases from 13 open-source projects. The evaluation results show that StubCoder can effectively generate stub code for incomplete test cases without stub code and repair obsolete test cases with broken stub code.

    Comment: This paper was accepted by the ACM Transactions on Software Engineering and Methodology (TOSEM) in July 2023.
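
    For readers unfamiliar with the term, the sketch below shows the kind of Mockito stubbing statements meant by "stub code"; it is not StubCoder output, and the Database and Report types are invented for illustration.

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ReportTest {

    interface Database { String lookup(int id); }

    static class Report {
        String render(Database db, int id) { return "name=" + db.lookup(id); }
    }

    @Test
    public void rendersLookedUpName() {
        Database db = mock(Database.class);
        // The stub code: without this line, lookup() returns null and the
        // test cannot pass. Synthesizing and repairing such statements is
        // the task the paper automates.
        when(db.lookup(42)).thenReturn("alice");

        assertEquals("name=alice", new Report().render(db, 42));
        verify(db).lookup(42);
    }
}
```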

    Generating mock skeletons for lightweight Web service testing: a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Manawatū, New Zealand

    Modern application development allows applications to be composed using lightweight HTTP services. Testing such an application requires the availability of the services it makes requests to. However, continued access to dependent services during testing may be constrained, making adequate testing a significant and non-trivial engineering challenge. The concept of service virtualisation is gaining popularity for testing such applications in isolation. It is the practice of simulating the behaviour of dependent services by synthesising responses using semantic models inferred from recorded traffic. Replacing services with their respective mocks is therefore useful to address their absence and proceed with application testing. In reality, however, it is unlikely that fully automated service virtualisation solutions can produce highly accurate proxies. Therefore, we recommend using service virtualisation to infer some attributes of HTTP service responses. We further acknowledge that engineers often want to fine-tune the results, which requires algorithms that produce readily interpretable and customisable models. We assume that if service virtualisation is based on simple logical rules, engineers would be able to understand and customise those rules. In this regard, symbolic machine learning approaches are worth investigating because of the traceable provenance of their results. Accordingly, this thesis examines the suitability of symbolic machine learning algorithms for automatically synthesising mock skeletons of HTTP services from network traffic recordings. We consider four commonly used symbolic techniques: the C4.5 decision tree algorithm, the RIPPER and PART rule learners, and the OCEL description logic learning algorithm. The experiments are performed on network traffic datasets extracted from several successful, large-scale HTTP services, and the experimental design further focuses on producing reproducible results. The chosen algorithms demonstrate the suitability of training highly accurate and human-readable semantic models for predicting key aspects of HTTP service responses, such as the status and response headers. Human-readable logic makes interpretation of the response properties simpler, and the resulting mock skeletons can then be easily customised to create mocks that generate service responses suitable for testing.
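
    A minimal sketch of what a customisable mock skeleton could look like, using only the JDK's built-in HttpServer; the path and the status rule here are invented for illustration, not learned from real traffic.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class MockSkeleton {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/api/users", exchange -> {
            // Inferred, human-readable rule (illustrative): GET -> 200,
            // anything else -> 405. This is the part a learner predicts.
            int status = "GET".equals(exchange.getRequestMethod()) ? 200 : 405;
            byte[] body = "{}".getBytes();   // engineer customises the body
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(status, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

    Because the rule is a single readable condition rather than an opaque model, an engineer can inspect it and adjust the skeleton before using it as a test double.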

    Continuous, Evolutionary and Large-Scale: A New Perspective for Automated Mobile App Testing

    Mobile app development involves a unique set of challenges, including device fragmentation and rapidly evolving platforms, that make testing a difficult task. The design space for a comprehensive mobile testing strategy includes features, inputs, potential contextual app states, and large combinations of devices and underlying platforms. Therefore, automated testing is an essential activity of the development process. However, the current state of the art in automated testing tools for mobile apps has limitations that have driven a preference for manual testing in practice. As of today, there is no comprehensive automated solution for mobile testing that overcomes fundamental issues such as automated oracles, history awareness in test cases, or automated evolution of test cases. In this perspective paper we survey the current state of the art in terms of the frameworks, tools, and services available to developers to aid in mobile testing, highlighting present shortcomings. Next, we provide commentary on the key challenges that currently restrict the possibility of a comprehensive, effective, and practical automated testing solution. Finally, we offer our vision of a comprehensive mobile app testing framework, complete with a research agenda, that is succinctly summarized along three principles: Continuous, Evolutionary, and Large-scale (CEL).

    Comment: 12 pages, accepted to the Proceedings of the 33rd IEEE International Conference on Software Maintenance and Evolution (ICSME'17).

    Automated Unit Testing of Evolving Software

    As software programs evolve, developers need to ensure that new changes do not affect the originally intended functionality of the program. To increase their confidence, developers commonly write unit tests along with the program and execute them after a change is made. However, manually writing these unit tests is difficult and time-consuming, and as their number increases, so does the cost of executing and maintaining them. Automated test generation techniques have been proposed in the literature to assist developers in writing these tests. However, it remains an open question how well these tools can help with fault finding in practice, and maintaining automatically generated tests may require extra effort compared to human-written ones. This thesis evaluates the effectiveness of a number of existing automated unit test generation techniques at detecting real faults, and explores how these techniques can be improved. In particular, we present a novel multi-objective search-based approach for generating tests that reveal changes across two versions of a program. We then investigate whether these tests can be used such that no maintenance effort is necessary. Our results show that, overall, state-of-the-art test generation tools can indeed be effective at detecting real faults: collectively, the tools revealed more than half of the bugs we studied. We also show that our proposed technique, which is better suited to the problem of revealing changes, can detect more faults and does so more frequently. However, we also find that for a majority of object-oriented programs, even a random search can achieve good results. Finally, we show that such change-revealing tests can be generated on demand in practice, without requiring them to be maintained over time.
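
    To illustrate what a change-revealing test is, the sketch below drives two hypothetical versions of the same method with one input and asserts that their outputs diverge; real tooling would execute two checked-out program versions rather than inlined classes like these.

```java
import static org.junit.Assert.assertNotEquals;

import org.junit.Test;

public class ChangeRevealingTest {

    // Hypothetical stand-ins for the old and new program versions.
    static class V1 { int price(int qty) { return qty * 10; } }
    static class V2 { int price(int qty) { return qty * 10 + (qty > 5 ? -5 : 0); } }

    @Test
    public void revealsBehaviouralChange() {
        // An input the search would aim for: it executes the changed code
        // and propagates the difference to an observable output.
        int qty = 7;
        assertNotEquals(new V1().price(qty), new V2().price(qty));
    }
}
```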