
    Continuous, Evolutionary and Large-Scale: A New Perspective for Automated Mobile App Testing

    Mobile app development involves a unique set of challenges, including device fragmentation and rapidly evolving platforms, making testing a difficult task. The design space for a comprehensive mobile testing strategy includes features, inputs, potential contextual app states, and large combinations of devices and underlying platforms. Therefore, automated testing is an essential activity of the development process. However, current state-of-the-art automated testing tools for mobile apps have limitations that have driven a preference for manual testing in practice. As of today, there is no comprehensive automated solution for mobile testing that overcomes fundamental issues such as automated oracles, history awareness in test cases, or automated evolution of test cases. In this perspective paper we survey the current state of the art in terms of the frameworks, tools, and services available to developers to aid in mobile testing, highlighting present shortcomings. Next, we provide commentary on current key challenges that restrict the possibility of a comprehensive, effective, and practical automated testing solution. Finally, we offer our vision of a comprehensive mobile app testing framework, complete with a research agenda, that is succinctly summarized along three principles: Continuous, Evolutionary and Large-scale (CEL). Comment: 12 pages, accepted to the Proceedings of the 33rd IEEE International Conference on Software Maintenance and Evolution (ICSME'17).

    Automatically Discovering, Reporting and Reproducing Android Application Crashes

    Mobile developers face unique challenges when detecting and reporting crashes in apps due to their prevailing GUI event-driven nature and additional sources of inputs (e.g., sensor readings). To support developers in these tasks, we introduce a novel, automated approach called CRASHSCOPE. This tool explores a given Android app using systematic input generation, according to several strategies informed by static and dynamic analyses, with the intrinsic goal of triggering crashes. When a crash is detected, CRASHSCOPE generates an augmented crash report containing screenshots, detailed crash reproduction steps, the captured exception stack trace, and a fully replayable script that automatically reproduces the crash on a target device. We evaluated CRASHSCOPE's effectiveness in discovering crashes as compared to five state-of-the-art Android input generation tools on 61 applications. The results demonstrate that CRASHSCOPE performs about as well as current tools for detecting crashes and provides more detailed fault information. Additionally, in a study analyzing eight real-world Android app crashes, we found that CRASHSCOPE's reports are easily readable and allow for reliable reproduction of crashes by presenting more explicit information than human-written reports. Comment: 12 pages, in Proceedings of the 9th IEEE International Conference on Software Testing, Verification and Validation (ICST'16), Chicago, IL, April 10-15, 2016, pp. 33-4
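
    As a rough illustration of the kind of exploration loop described above (systematic input generation with crash detection and report capture), the Kotlin sketch below uses hypothetical Device, UiEvent and CrashReport types as stand-ins; it is not CRASHSCOPE's API or algorithm.

```kotlin
// Hypothetical driver-side interfaces; stand-ins for a real device driver, not CRASHSCOPE's API.
interface Device {
    fun enumerateEvents(): List<UiEvent>   // clickable/editable widgets on the current screen
    fun perform(event: UiEvent)            // inject the event into the running app
    fun pendingCrash(): String?            // exception stack trace captured from the log, or null
    fun screenshot(): ByteArray
}

data class UiEvent(val description: String)

class CrashReport(
    val steps: List<UiEvent>,              // reproduction steps executed so far
    val stackTrace: String,
    val screenshot: ByteArray
)

// Systematic exploration loop: keep injecting events and stop with a report when a crash appears.
fun explore(device: Device, maxSteps: Int = 500): CrashReport? {
    val steps = mutableListOf<UiEvent>()
    repeat(maxSteps) {
        val candidates = device.enumerateEvents()
        if (candidates.isEmpty()) return null
        val next = candidates.first()      // a real tool would rank events by exploration strategy
        device.perform(next)
        steps += next
        device.pendingCrash()?.let { trace ->
            return CrashReport(steps.toList(), trace, device.screenshot())
        }
    }
    return null
}
```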

    Automating Test Case Generation for Android Applications using Model-based Testing

    Testing mobile applications (apps) has its quirks, as numerous events need to be tested. Mobile app testing, being an evolving domain, carries certain challenges that should be accounted for in the overall testing process. Since smartphone apps are moderate in size, we consider that model-based testing (MBT) using state machines and statecharts could be a promising option for ensuring maximum coverage and completeness of test cases. Using a model-based testing approach, we can automate the tedious phase of test case generation, which not only saves time in the overall testing process but also minimizes defects and ensures maximum test case coverage and completeness. In this paper, we explore and model the most critical modules of the mobile app to generate test cases and ascertain the efficiency and impact of using model-based testing. Test cases for the targeted model of the application under test were generated on a real device. The experimental results indicate that our framework reduced the time required to execute all the generated test cases by 50%. The experimental setup and results are reported herein.
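
    To make the idea of deriving test cases from a state-machine model concrete, the Kotlin sketch below generates event sequences by walking a small hypothetical login/settings model; the model and the bounded-depth traversal are illustrative assumptions, not the paper's actual framework.

```kotlin
// A tiny, hypothetical state-machine model of an app flow (not the model used in the paper).
data class Transition(val from: String, val event: String, val to: String)

val model = listOf(
    Transition("Login",    "submitValidCredentials", "Home"),
    Transition("Login",    "submitBadCredentials",   "Login"),
    Transition("Home",     "openSettings",           "Settings"),
    Transition("Settings", "pressBack",              "Home"),
    Transition("Home",     "logout",                 "Login")
)

// Generate test cases as bounded-length event sequences walked over the model.
fun generateTests(state: String, depth: Int): List<List<String>> {
    if (depth == 0) return listOf(emptyList())
    val outgoing = model.filter { it.from == state }
    if (outgoing.isEmpty()) return listOf(emptyList())
    return outgoing.flatMap { t ->
        generateTests(t.to, depth - 1).map { rest -> listOf(t.event) + rest }
    }
}

fun main() {
    // Each generated sequence would be replayed as concrete UI events against the app under test.
    generateTests("Login", depth = 3).forEach(::println)
}
```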

    The perceived usability of automated testing tools for mobile applications

    Mobile application development is a fast-emerging area in software development. The testing of mobile applications is very significant, and many tools are available for testing such applications, particularly for Android and iOS. This paper presents the most frequently used automated testing tools for mobile applications. In this study, we found that Android app developers used automated testing tools such as JUnit, MonkeyTalk, Robotium, Appium, and Robolectric. However, they often prefer to test their apps manually, whereas Windows app developers prefer to use in-house tools such as Visual Studio and Microsoft Test Manager. Both Android and Windows app developers face many challenges, such as time constraints, compatibility issues, lack of exposure, and cumbersome tools. Software testing tools are key assets of a project that can help improve productivity and software quality. A survey method was used to assess the perceived usability of automated testing tools, with forty (40) respondents as participants. The results indicate that JUnit has the highest perceived usability. The study's results will benefit practitioners and researchers in the research and development of usable automated testing tools for mobile applications.
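
    Since JUnit ranked highest in perceived usability, the following minimal JUnit 4 test (written in Kotlin against a hypothetical LoginValidator class that is not from the study) illustrates the style of test such tools support.

```kotlin
import org.junit.Assert.assertFalse
import org.junit.Assert.assertTrue
import org.junit.Test

// Hypothetical class under test; any small piece of app logic would serve the same purpose.
class LoginValidator {
    fun isValidEmail(email: String): Boolean =
        email.contains("@") && email.substringAfter("@").contains(".")
}

class LoginValidatorTest {
    private val validator = LoginValidator()

    @Test
    fun acceptsWellFormedEmail() {
        assertTrue(validator.isValidEmail("user@example.com"))
    }

    @Test
    fun rejectsEmailWithoutDomain() {
        assertFalse(validator.isValidEmail("user@"))
    }
}
```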

    Sapienz: Multi-objective automated testing for android applications

    We introduce Sapienz, an approach to Android testing that uses multi-objective search-based testing to automatically explore and optimise test sequences, minimising length, while simultaneously maximising coverage and fault revelation. Sapienz combines random fuzzing, systematic and search-based exploration, exploiting seeding and multi-level instrumentation. Sapienz significantly outperforms (with large effect size) both the state-of-the-art technique Dynodroid and the widely-used tool, Android Monkey, in 7/10 experiments for coverage, 7/10 for fault detection and 10/10 for fault-revealing sequence length. When applied to the top 1,000 Google Play apps, Sapienz found 558 unique, previously unknown crashes. So far we have managed to make contact with the developers of 27 crashing apps. Of these, 14 have confirmed that the crashes are caused by real faults. Of those 14, six already have developer-confirmed fixes.
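
    At the heart of multi-objective search of this kind is a Pareto-dominance comparison over the objectives the abstract names (maximise coverage and faults found, minimise sequence length). The Kotlin sketch below shows that comparison with illustrative types and values; it is not Sapienz's actual representation or algorithm.

```kotlin
// One candidate test sequence, scored on the three objectives named in the abstract.
// Values and fields are illustrative assumptions, not Sapienz's internal representation.
data class Candidate(val coverage: Double, val crashesFound: Int, val length: Int)

// Pareto dominance: `a` is no worse on every objective and strictly better on at least one.
fun dominates(a: Candidate, b: Candidate): Boolean {
    val noWorse = a.coverage >= b.coverage &&
                  a.crashesFound >= b.crashesFound &&
                  a.length <= b.length
    val strictlyBetter = a.coverage > b.coverage ||
                         a.crashesFound > b.crashesFound ||
                         a.length < b.length
    return noWorse && strictlyBetter
}

// Keep only the non-dominated candidates (the Pareto front) from a population.
fun paretoFront(population: List<Candidate>): List<Candidate> =
    population.filter { c -> population.none { other -> dominates(other, c) } }

fun main() {
    val pop = listOf(
        Candidate(coverage = 0.42, crashesFound = 1, length = 120),
        Candidate(coverage = 0.40, crashesFound = 1, length = 200),  // dominated by the first
        Candidate(coverage = 0.55, crashesFound = 0, length = 300)
    )
    println(paretoFront(pop))
}
```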

    Automated Testing of Android Apps: A Systematic Literature Review

    Automated testing of Android apps is essential for app users, app developers and market maintainer communities alike. Given the widespread adoption of Android and the specificities of its development model, the literature has proposed various testing approaches for ensuring that not only functional requirements but also non-functional requirements are satisfied. In this paper, we aim to provide a clear overview of the state-of-the-art works around the topic of Android app testing, in an attempt to highlight the main trends, pinpoint the main methodologies applied, and enumerate the challenges faced by Android testing approaches as well as the directions where community effort is still needed. To this end, we conducted a Systematic Literature Review (SLR) through which we identified 103 relevant research papers published in leading conferences and journals up to 2016. Our thorough examination of the relevant literature has led to several findings and highlighted the challenges that Android testing researchers should strive to address in the future. Building on these findings, we further propose a few concrete research directions where testing approaches are needed to address recurrent issues related to app updates, the continuous increase in app sizes, and Android ecosystem fragmentation.

    Testing Nearby Peer-to-Peer Mobile Apps at Large

    While mobile devices are widely adopted across the population, most of their remote interactions usually go through the Internet. However, this indirect interaction model can be inefficient and sensitive when it comes to communicating with neighboring devices. To overcome these weaknesses, nearby peer-to-peer (P2P) communications are now included in mobile devices to enable device-to-device communication over standard wireless technologies (WiFi, Bluetooth). While this capability supports the design of collaborative whiteboards, local multiplayer gaming, multi-screen gaming, offline file transfers and many other applications, mobile apps using P2P still suffer from app crashes, battery issues, and poor user reviews and rankings in app stores. We believe that this lack of quality can be partly attributed to the lack of tool support for testing P2P mobile apps at large. In this paper, we introduce a testing framework that allows developers to automate reproducible testing of nearby P2P apps. Beyond the identification of P2P-related bugs, our approach also helps estimate the discovery settings that optimize the deployment of P2P apps.
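
    As a sketch of what reproducible testing of nearby P2P apps could look like, the Kotlin fragment below drives simulated peers under controlled discovery settings and a simulated clock; all types and names are hypothetical assumptions, not the framework described in the paper.

```kotlin
// Hypothetical harness types; these are not the API of the framework described in the paper.
data class DiscoverySettings(val advertiseIntervalMs: Long, val scanWindowMs: Long)

interface SimulatedPeer {
    val id: String
    fun startAdvertising(settings: DiscoverySettings)
    fun discoveredPeers(): Set<String>
}

interface P2pTestBed {
    fun spawnPeers(count: Int): List<SimulatedPeer>
    fun advanceTime(millis: Long)   // deterministic simulated clock, so runs are reproducible
}

// A reproducible scenario: all peers advertise, then every peer should discover every other peer.
fun runDiscoveryScenario(testBed: P2pTestBed, settings: DiscoverySettings): Boolean {
    val peers = testBed.spawnPeers(count = 4)
    peers.forEach { it.startAdvertising(settings) }
    testBed.advanceTime(10_000L)
    return peers.all { peer ->
        peer.discoveredPeers() == peers.map { it.id }.toSet() - peer.id
    }
}
```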