
    Test Cases Evolution of Mobile Applications: Model Driven Approach

Mobile application developers, given the large freedom afforded to them, focus on satisfying market requirements and pleasing consumers' desires. They are forced to be creative and productive in a short period of time; as a result, billions of powerful mobile applications are published every day. Every mobile application therefore needs to change continually and evolve incrementally in order to survive and preserve its ranking among the top applications in the market. Mobile app testers carry a heavy responsibility: the intrinsically swift, agile change of mobile apps pushes them to be meticulous, to be aware that things can be different at any time, and to be prepared for unpredicted crashes. Consequently, generating or creating test cases from scratch and selecting the overridden or overloaded test cases each time is a tedious operation. In software testing, the time allocated to testing and correcting defects is significant in every software development effort (regularly half the total time). This time can be reduced by introducing tools and adopting new testing methods. In the field of mobile development, new concerns must be taken into account; among the most important are the heterogeneity of execution environments and the fragmentation of terminals, which have different impacts on functionality, performance, and connectivity. This project studies the evolution of mobile applications and its impact on the evolution of test cases, from their creation until their expiration stage. A detailed case study of a native open-source Android application is provided, describing many aspects of design, development, and testing, in addition to an analysis of the process of mobile app evolution. The project is based on a model-driven engineering approach in which the models are serialized using the standard XMI. It presents a protocol for the adaptation of test cases under certain restrictions.
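
    As a rough illustration of the model-driven setup the abstract describes, the sketch below loads a test-case model serialized as XMI using Eclipse EMF. The file name `testcases.xmi` and the idea of a dedicated test-case metamodel are assumptions for illustration; the project's actual metamodel is not given in the abstract, and in practice its Ecore package would also need to be registered before loading.

    ```java
    // Minimal sketch: reading an XMI-serialized model with Eclipse EMF.
    // Assumes a hypothetical testcases.xmi file; the corresponding Ecore
    // metamodel package must be registered for real models to load.
    import org.eclipse.emf.common.util.URI;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.resource.Resource;
    import org.eclipse.emf.ecore.resource.ResourceSet;
    import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
    import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

    public class LoadTestCaseModel {
        public static void main(String[] args) {
            ResourceSet rs = new ResourceSetImpl();
            // Register the XMI parser for *.xmi files.
            rs.getResourceFactoryRegistry().getExtensionToFactoryMap()
              .put("xmi", new XMIResourceFactoryImpl());
            // Load the (hypothetical) serialized test-case model.
            Resource res = rs.getResource(URI.createFileURI("testcases.xmi"), true);
            for (EObject obj : res.getContents()) {
                System.out.println("Loaded model element: " + obj);
            }
        }
    }
    ```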

    StubCoder: Automated Generation and Repair of Stub Code for Mock Objects

Mocking is an essential unit testing technique for isolating the class under test (CUT) from its dependencies. Developers often leverage mocking frameworks to develop stub code that specifies the behaviors of mock objects. However, developing and maintaining stub code is labor-intensive and error-prone. In this paper, we present StubCoder to automatically generate and repair stub code for regression testing. StubCoder implements a novel evolutionary algorithm that synthesizes test-passing stub code guided by the runtime behavior of test cases. We evaluated our proposed approach on 59 test cases from 13 open-source projects. Our evaluation results show that StubCoder can effectively generate stub code for incomplete test cases without stub code and repair obsolete test cases with broken stub code.

    Comment: This paper was accepted by the ACM Transactions on Software Engineering and Methodology (TOSEM) in July 202
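
    To make "stub code" concrete, here is a minimal sketch of the kind of hand-written stubbing that StubCoder aims to generate and repair automatically. The `CheckoutService` and `PaymentGateway` types are invented for illustration; only the Mockito calls (`mock`, `when`, `thenReturn`, `verify`) are real framework API.

    ```java
    // Sketch of manual stub code with Mockito; the domain classes are hypothetical.
    import static org.mockito.Mockito.*;
    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;

    class CheckoutServiceTest {
        interface PaymentGateway { boolean charge(double amount); }

        static class CheckoutService {
            private final PaymentGateway gateway;
            CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }
            boolean checkout(double amount) { return gateway.charge(amount); }
        }

        @Test
        void checkoutChargesTheGateway() {
            PaymentGateway gateway = mock(PaymentGateway.class);
            // Stub code: specify the mock's behavior for this test scenario.
            when(gateway.charge(100.0)).thenReturn(true);

            assertTrue(new CheckoutService(gateway).checkout(100.0));
            // Stub code also commonly verifies interactions with the mock.
            verify(gateway).charge(100.0);
        }
    }
    ```

    When the CUT's interaction with its dependency changes, stubs like these silently become obsolete, which is the repair scenario the paper targets.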

    Why Do Record/Replay Tests of Web Applications Break?

    Software engineers often use record/replay tools to enable the automated testing of web applications. Tests created in this manner can then be used to regression test new versions of the web applications as they evolve. Web application tests recorded by record/replay tools, however, can be quite brittle; they can easily break as applications change. For this reason, researchers have begun to seek approaches for automatically repairing record/replay tests. To date, however, there have been no comprehensive attempts to characterize the causes of breakages in record/replay tests for web applications. In this work, we present a taxonomy classifying the ways in which record/replay tests for web applications break, based on an analysis of 453 versions of popular web applications for which 1065 individual test breakages were recognized. The resulting taxonomy can help direct researchers in their attempts to repair such tests. It can also help practitioners by suggesting best practices when creating tests or modifying programs, and can help researchers with other tasks such as test robustness analysis and IDE design
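
    A minimal sketch of why such recorded tests are brittle, using plain Selenium WebDriver; the URL and element ids are hypothetical. A recorded test hard-codes locators from the page as it existed at recording time, so a cosmetic change to the page can break the test even though the feature still works.

    ```java
    // Sketch of a recorded login test; page and ids are invented for illustration.
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class RecordedLoginTest {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("https://example.com/login");
                // The recorded locator hard-codes the element id. If a new
                // version renames it (e.g. to "user-name"), this line throws
                // NoSuchElementException and the test "breaks", even though
                // the login feature itself still works.
                driver.findElement(By.id("username")).sendKeys("alice");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("login-button")).click();
            } finally {
                driver.quit();
            }
        }
    }
    ```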

    Understanding and Generating Patches for Bugs Introduced by Third-party Library Upgrades

During software development, developers rely heavily on third-party libraries to provide functionality and features in their projects. However, developers face the challenge of managing dependencies as a project evolves, and one of the most difficult problems is handling issues caused by dependency upgrades. To better understand these issues, in this thesis we conduct a comprehensive study of bugs caused by Third-party Library Upgrades (TLUs). The study covers a collection of 8,952 open-source Java projects from GitHub and 304 Java projects in Apache Software Foundation (ASF) JIRA systems, from which we collect 83 bugs caused by inappropriate TLUs. Our inspection shows that TLUs are performed for different reasons. The most common is that a project is preparing for release and wants to keep its dependencies up to date (62.3%); another common reason is that the older version of a dependency is incompatible with other dependencies (15.3%). Our inspection also indicates that the problems introduced by inappropriate dependency upgrades fall into different types, i.e., program failures that are detectable statically and dynamically. We then investigate developers' efforts to repair bugs caused by inappropriate TLUs. We notice that 32.53% of these bugs can be fixed by modifying only the build scripts (which we call TLU-build bugs), 20.48% can be fixed by modifying only the source code (TLU-code bugs), and 16.87% require modifications to multiple sources. We explore TLU-build and TLU-code bugs, the two most common types, in more depth. For TLU-code bugs, we summarize the common ways of fixing them and study whether current repair techniques can fix them efficiently: for the 14 TLU-code bugs that cause test failures and runtime failures, existing automated program repair tools can work on only 6, and while each tool fixes only some of these 6 bugs, their union fixes 5 of the 6. For TLU-build bugs, leveraging the knowledge from our study, we summarize common patterns for fixing build scripts and propose a technique to fix them automatically. Our evaluation shows that the proposed technique successfully fixes 9 out of 14 TLU-build bugs.
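
    As a concrete illustration of a TLU-code bug fix, the sketch below uses the well-known JUnit 4 to JUnit 5 migration as a stand-in example; the thesis studies many such upgrades across real GitHub/ASF projects, not this one specifically. The upgrade moves annotation and assertion classes, so call sites must be adapted in source, exactly the statically detectable failure category described above.

    ```java
    // Sketch of a source-level fix after a third-party library upgrade.
    //
    // Before the upgrade (JUnit 4):
    //   import org.junit.Test;
    //   import static org.junit.Assert.assertEquals;
    //
    // After the upgrade (JUnit 5), the annotation and assertion classes moved:
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class CalculatorTest {
        @Test
        void addsTwoNumbers() {
            // Keeping the old JUnit 4 imports after the upgrade would fail to
            // compile; adapting them is a minimal example of a TLU-code fix.
            assertEquals(4, 2 + 2);
        }
    }
    ```

    TLU-build bugs, by contrast, are typically fixed entirely in the build script, for example by pinning or excluding a transitive dependency version.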

    Search Techniques for Code Generation

This dissertation explores techniques that synthesize and generate program fragments and test inputs. The main goal of these techniques is to improve and support automation in program synthesis and test input generation. This matters because performing these processes manually is often tedious, time-consuming, and error-prone. The main challenge these techniques face is exploring the search space efficiently and scalably.

    In the first part of the dissertation, we present the tools InSynth and PolySynth, which interactively synthesize code fragments. They take as input a partial program and automatically extract type information, the desired type, and the set of visible declarations. They use this input to synthesize a ranked list of expressions of the desired type, and present the expressions to the developer much like code completions in modern IDEs. InSynth is the first tool to use a complete algorithm to generate expressions with first-class and higher-order functions. We present the theoretical foundation of the InSynth problem, which is based on type inhabitation, and the type-based backward search algorithm that solves it. PolySynth uses a type-driven, resolution-based algorithm that handles polymorphic types (generics). Both tools are unique in that their algorithms operate using corpus statistics, which steer the search space exploration toward the most relevant solutions.

    In the second part of the dissertation, we present the tool anyCode, which synthesizes expressions from natural language input. It accepts English words or Java programming language constructs, allowing a developer to encode her intuition about the desired expression using words or an expression that approximates the desired structure. Thanks to this flexibility, anyCode can also repair broken expressions. It analyzes the input with a pipeline of natural language and related-word tools, which helps it identify the most relevant components and reduce the size of the search space. To further reduce the search space and create the most relevant expressions, anyCode uses two statistical models: a unigram model and a probabilistic context-free grammar.

    Finally, in the last part of the dissertation, we present UDITA, a Java-like language with support for non-determinism that allows a user to write test generation programs. Test generation programs run on top of Java PathFinder (JPF), a popular explicit-state model checker with a built-in backtracking mechanism and support for non-determinism; JPF executes UDITA programs to generate test inputs. The first benefit of UDITA is that non-determinism lets a user describe many test inputs as easily as a single one. The second is flexibility: a user can describe test generation programs by arbitrarily combining filters and generators. UDITA reduces the size of the search space with an algorithm that limits the number of complex isomorphic structures generated and delays non-deterministic choices.
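
    A minimal sketch of the non-deterministic style UDITA builds on, written directly against JPF's `Verify` API; UDITA's own language layer adds more (e.g., object pools and delayed choices) that this sketch does not model, and the package name assumes a recent JPF release (older ones used `gov.nasa.jpf.jvm.Verify`).

    ```java
    // Sketch: one program denotes a whole family of test inputs under JPF.
    import gov.nasa.jpf.vm.Verify;

    public class SortedPairGenerator {
        public static void main(String[] args) {
            // Each Verify.getInt(lo, hi) is a non-deterministic choice point:
            // JPF's backtracking explores every value in [lo, hi].
            int a = Verify.getInt(0, 3);
            int b = Verify.getInt(0, 3);
            // A filter in UDITA style: prune executions that do not yield
            // the test inputs we want (here: only sorted pairs).
            Verify.ignoreIf(a > b);
            System.out.println("test input: (" + a + ", " + b + ")");
        }
    }
    ```

    Run under JPF, the program enumerates all sorted pairs in the range, which is the sense in which writing one non-deterministic program is as easy as describing a single test input.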