Who you gonna call? Analyzing Web Requests in Android Applications
Relying on ubiquitous Internet connectivity, applications on mobile devices
frequently perform web requests during their execution. They fetch data for
users to interact with, invoke remote functionalities, or send user-generated
content or meta-data. These requests collectively reveal common practices of
mobile application development, like what external services are used and how,
and they point to possible negative effects like security and privacy
violations, or impacts on battery life. In this paper, we assess different ways
to analyze what web requests Android applications make. We start by presenting
dynamic data collected from running 20 randomly selected Android applications
and observing their network activity. Next, we present a static analysis tool,
Stringoid, that analyzes string concatenations in Android applications to
estimate constructed URL strings. Using Stringoid, we extract URLs from 30,000
Android applications and compare its performance with a simpler constant
extraction analysis. Finally, we present a discussion of the advantages and
limitations of dynamic and static analyses when extracting URLs, as we compare
the data extracted by Stringoid from the same 20 applications with the
dynamically collected data.

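The string-concatenation idea behind a tool like Stringoid can be illustrated with a toy sketch: propagate string values through straight-line assignments and flag anything URL-shaped. Everything below (the mini statement format, the "{?}" placeholder for statically unknown values, the example endpoint) is an illustrative assumption, not Stringoid's actual algorithm or output format.

```python
import re

def extract_urls(statements):
    """Toy intraprocedural string propagation: track string variables
    through assignments and concatenations, then report any value that
    looks like a URL. A heavy simplification of analyzing real bytecode
    (no branches, loops, or inter-procedural flow)."""
    env = {}       # variable name -> best-known string value
    urls = set()
    for target, parts in statements:
        # Each part is either a quoted literal or a variable name.
        value = "".join(
            p[1:-1] if p.startswith('"') else env.get(p, "{?}")
            for p in parts
        )
        env[target] = value
        if re.match(r"https?://", value):
            urls.add(value)
    return urls

# Hypothetical mini-program; user_id is unknown at analysis time.
program = [
    ("base", ['"https://api.example.com/"']),
    ("path", ['"v1/users/"']),
    ("uid",  ["user_id"]),
    ("url",  ["base", "path", "uid"]),
]
print(extract_urls(program))
```

Unknown variables surface as the "{?}" placeholder, so a partially constructed URL is still recoverable as a template rather than being lost entirely.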
Automatically Discovering, Reporting and Reproducing Android Application Crashes
Mobile developers face unique challenges when detecting and reporting crashes
in apps due to their prevailing GUI event-driven nature and additional sources
of inputs (e.g., sensor readings). To support developers in these tasks, we
introduce a novel, automated approach called CRASHSCOPE. This tool explores a
given Android app using systematic input generation, according to several
strategies informed by static and dynamic analyses, with the intrinsic goal of
triggering crashes. When a crash is detected, CRASHSCOPE generates an augmented
crash report containing screenshots, detailed crash reproduction steps, the
captured exception stack trace, and a fully replayable script that
automatically reproduces the crash on one or more target devices. We evaluated
CRASHSCOPE's effectiveness in discovering crashes as compared to five
state-of-the-art Android input generation tools on 61 applications. The results
demonstrate that CRASHSCOPE performs about as well as current tools for
detecting crashes and provides more detailed fault information. Additionally,
in a study analyzing eight real-world Android app crashes, we found that
CRASHSCOPE's reports are easily readable and allow for reliable reproduction of
crashes by presenting more explicit information than human-written reports.

Comment: 12 pages, in Proceedings of the 9th IEEE International Conference on
Software Testing, Verification and Validation (ICST'16), Chicago, IL, April
10-15, 2016, pp. 33-4

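The notion of a fully replayable crash script can be sketched with adb's standard shell `input` commands, which inject taps, swipes, and text on a connected device. The step tuple format below is a hypothetical stand-in; CrashScope's actual script format is not described in this abstract.

```python
import subprocess

# Hypothetical replay steps leading up to a crash: (kind, args...).
STEPS = [
    ("tap", 540, 960),
    ("swipe", 540, 1500, 540, 500),
    ("text", "hello"),
]

def to_adb_command(step):
    """Translate one replay step into an adb invocation."""
    kind = step[0]
    if kind == "tap":
        _, x, y = step
        return ["adb", "shell", "input", "tap", str(x), str(y)]
    if kind == "swipe":
        _, x1, y1, x2, y2 = step
        return ["adb", "shell", "input", "swipe",
                str(x1), str(y1), str(x2), str(y2)]
    if kind == "text":
        return ["adb", "shell", "input", "text", step[1]]
    raise ValueError(f"unknown step kind: {kind}")

def replay(steps, run=subprocess.run):
    """Execute each step on the connected device, failing fast on error."""
    for step in steps:
        run(to_adb_command(step), check=True)

# Preview the commands without a device attached:
for s in STEPS:
    print(" ".join(to_adb_command(s)))
```

Because each step is data rather than code, the same script can be re-run verbatim on another device to check whether the crash reproduces there.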
How do Developers Test Android Applications?
Enabling fully automated testing of mobile applications has recently become
an important topic of study for both researchers and practitioners. A plethora
of tools and approaches have been proposed to aid mobile developers both by
augmenting manual testing practices and by automating various parts of the
testing process. However, current approaches for automated testing fall short
in convincing developers about their benefits, leading to a majority of mobile
testing being performed manually. With the goal of helping researchers and
practitioners - who design approaches supporting mobile testing - to understand
developers' needs, we analyzed survey responses from 102 open source
contributors to Android projects about their practices when performing testing.
The survey focused on questions regarding practices and preferences of
developers/testers in-the-wild for (i) designing and generating test cases,
(ii) automated testing practices, and (iii) perceptions of quality metrics such
as code coverage for determining test quality. Analyzing the information
gleaned from this survey, we compile a body of knowledge to help guide
researchers and professionals toward tailoring new automated testing approaches
to the needs of a diverse set of open source developers.

Comment: 11 pages, accepted to the Proceedings of the 33rd IEEE International
Conference on Software Maintenance and Evolution (ICSME'17)

Translating Video Recordings of Mobile App Usages into Replayable Scenarios
Screen recordings of mobile applications are easy to obtain and capture a
wealth of information pertinent to software developers (e.g., bugs or feature
requests), making them a popular mechanism for crowdsourced app feedback. Thus,
these videos are becoming a common artifact that developers must manage. In
light of unique mobile development constraints, including swift release cycles
and rapidly evolving platforms, automated techniques for analyzing all types of
rich software artifacts provide benefit to mobile developers. Unfortunately,
automatically analyzing screen recordings presents serious challenges, due to
their graphical nature, compared to other types of (textual) artifacts. To
address these challenges, this paper introduces V2S, a lightweight, automated
approach for translating video recordings of Android app usages into replayable
scenarios. V2S is based primarily on computer vision techniques and adapts
recent solutions for object detection and image classification to detect and
classify user actions captured in a video, and convert these into a replayable
test scenario. We performed an extensive evaluation of V2S involving 175 videos
depicting 3,534 GUI-based actions collected from users exercising features and
reproducing bugs from over 80 popular Android apps. Our results illustrate that
V2S can accurately replay scenarios from screen recordings, and is capable of
reproducing 89% of our collected videos with minimal overhead. A case
study with three industrial partners illustrates the potential usefulness of
V2S from the viewpoint of developers.

Comment: In proceedings of the 42nd International Conference on Software
Engineering (ICSE'20), 13 pages
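The action-classification step can be sketched in miniature: given per-frame touch detections (frame index, x, y) recovered from a screen recording, group consecutive frames into gestures and label each as a tap, long-press, or swipe. The thresholds and frame rate below are illustrative assumptions, not the values V2S actually uses.

```python
# Toy gesture classifier in the spirit of V2S's pipeline.
FPS = 30
LONG_PRESS_FRAMES = FPS // 2   # assumed: held >= 0.5 s in place
MOVE_THRESHOLD = 20            # assumed: pixels of travel => swipe

def classify(detections):
    """detections: list of (frame_idx, x, y), sorted by frame index.
    Consecutive frames form one gesture; a gap in frames splits gestures."""
    gestures, current = [], []
    for det in detections:
        if current and det[0] - current[-1][0] > 1:
            gestures.append(current)
            current = []
        current.append(det)
    if current:
        gestures.append(current)

    actions = []
    for g in gestures:
        (f0, x0, y0), (f1, x1, y1) = g[0], g[-1]
        travel = abs(x1 - x0) + abs(y1 - y0)
        if travel > MOVE_THRESHOLD:
            actions.append(("swipe", (x0, y0), (x1, y1)))
        elif f1 - f0 >= LONG_PRESS_FRAMES:
            actions.append(("long_press", (x0, y0)))
        else:
            actions.append(("tap", (x0, y0)))
    return actions

frames = [(1, 100, 200), (2, 101, 200),                    # quick tap
          (10, 300, 900), (11, 300, 700), (12, 300, 500)]  # upward swipe
print(classify(frames))
# → [('tap', (100, 200)), ('swipe', (300, 900), (300, 500))]
```

Once labeled, each action maps naturally onto a replayable device command (e.g. an adb `input tap` or `input swipe`), which is what turns a passive video into an executable scenario.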