Automatically Discovering, Reporting and Reproducing Android Application Crashes
Mobile developers face unique challenges when detecting and reporting crashes
in apps due to their prevailing GUI event-driven nature and additional sources
of inputs (e.g., sensor readings). To support developers in these tasks, we
introduce a novel, automated approach called CRASHSCOPE. This tool explores a
given Android app using systematic input generation, according to several
strategies informed by static and dynamic analyses, with the intrinsic goal of
triggering crashes. When a crash is detected, CRASHSCOPE generates an augmented
crash report containing screenshots, detailed crash reproduction steps, the
captured exception stack trace, and a fully replayable script that
automatically reproduces the crash on target devices. We evaluated
CRASHSCOPE's effectiveness in discovering crashes as compared to five
state-of-the-art Android input generation tools on 61 applications. The results
demonstrate that CRASHSCOPE performs about as well as current tools for
detecting crashes and provides more detailed fault information. Additionally,
in a study analyzing eight real-world Android app crashes, we found that
CRASHSCOPE's reports are easily readable and allow for reliable reproduction of
crashes by presenting more explicit information than human-written reports.
Comment: 12 pages, in Proceedings of 9th IEEE International Conference on
Software Testing, Verification and Validation (ICST'16), Chicago, IL, April
10-15, 2016, pp. 33-4
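The crash-hunting loop described in this abstract can be sketched roughly as follows. The `App` model, its crash condition, and the report fields are invented for illustration; this is not CrashScope's actual code, only a minimal picture of systematic input generation that, on a crash, emits reproduction steps, the captured exception, and a replayable script:

```python
from itertools import product

class App:
    """Toy app model: tapping 'b2' right after 'b1' crashes (illustrative)."""
    def __init__(self):
        self.history = []

    def tap(self, widget):
        self.history.append(widget)
        if self.history[-2:] == ["b1", "b2"]:
            raise RuntimeError("java.lang.NullPointerException in onClick")

def explore(widgets, depth=2):
    """Systematically exercise every event sequence up to a fixed depth;
    on a crash, return an augmented report with reproduction steps,
    the captured exception, and a replayable step script."""
    for seq in product(widgets, repeat=depth):
        app, steps = App(), []
        try:
            for w in seq:
                steps.append(f"tap({w})")
                app.tap(w)
        except RuntimeError as exc:
            return {"exception": str(exc), "steps": steps,
                    "replay_script": "\n".join(steps)}
    return None  # no crash found within the given depth

report = explore(["b1", "b2", "b3"])
```

The real tool additionally attaches screenshots and drives a device rather than a toy model, but the shape of the output report is the same: steps plus exception plus replay script.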
Automated Test Input Generation for Android: Are We There Yet?
Mobile applications, often simply called "apps", are increasingly widespread,
and we use them daily to perform a number of activities. Like all software,
apps must be adequately tested to gain confidence that they behave correctly.
Therefore, in recent years, researchers and practitioners alike have begun to
investigate ways to automate app testing. In particular, because of Android's
open source nature and its large share of the market, a great deal of research
has been performed on input generation techniques for apps that run on the
Android operating system. At this point in time, there are in fact a number of
such techniques in the literature, which differ in the way they generate
inputs, the strategy they use to explore the behavior of the app under test,
and the specific heuristics they use. To better understand the strengths and
weaknesses of these existing approaches, and get general insight on ways they
could be made more effective, in this paper we perform a thorough comparison of
the main existing test input generation tools for Android. In our comparison,
we evaluate the effectiveness of these tools, and their corresponding
techniques, according to four metrics: code coverage, ability to detect faults,
ability to work on multiple platforms, and ease of use. Our results provide a
clear picture of the state of the art in input generation for Android apps and
identify future research directions that, if suitably investigated, could lead
to more effective and efficient testing tools for Android.
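A tool comparison along the four metrics this abstract names can be organized as a simple per-metric ranking. The tool names and scores below are entirely hypothetical placeholders, not the paper's data:

```python
# Illustrative scores for three hypothetical tools (not the paper's results).
results = {
    "ToolA": {"coverage": 0.48, "faults_found": 14, "ease_of_use": 3},
    "ToolB": {"coverage": 0.55, "faults_found": 17, "ease_of_use": 1},
    "ToolC": {"coverage": 0.40, "faults_found": 10, "ease_of_use": 2},
}

def rank_by(metric):
    """Order tools by one metric, best first; a comparison like the
    paper's would repeat this per metric and contrast the rankings."""
    return sorted(results, key=lambda tool: results[tool][metric],
                  reverse=True)
```

Contrasting the rankings across metrics is what exposes trade-offs: a tool that wins on coverage may lose on ease of use or platform support.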
Target Directed Event Sequence Generation for Android Applications
Testing is a commonly used approach to ensuring software quality, and
model-based testing is a popular technique for GUI programs such as Android
applications (apps). Existing approaches mainly either dynamically construct a
model that only contains the GUI information, or build a model in the view of
code that may fail to describe the changes of GUI widgets during runtime.
Besides, most of these models do not support the back stack, a mechanism
particular to Android. Therefore, this paper proposes a model, LATTE, which is
constructed dynamically with consideration of the view information in the
widgets as well as the back stack, to describe the transition between GUI
widgets. We also propose a label set to link the elements of the LATTE model to
program snippets. The user can define a subset of the label set as a target for
the testing requirements that need to cover some specific parts of the code. To
avoid the state explosion problem during model construction, we introduce a
notion of "state similarity" to balance model accuracy against analysis cost.
Based on this model, a target directed test generation method is presented to
generate event sequences to effectively cover the target. The experiments on
several real-world apps indicate that the generated test cases based on LATTE
can reach high coverage, and that the model can cover a given target with
short event sequences.
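One plausible way to realize a state-similarity check like the one this abstract motivates is to treat a GUI state as its set of widget identifiers and merge states whose overlap passes a threshold. The abstract does not give LATTE's actual definition, so the Jaccard-based rule below is only an illustrative proxy:

```python
def similar(state_a, state_b, threshold=0.7):
    """Judge two GUI states similar when the Jaccard overlap of their
    widget sets meets a threshold. This is a plausible proxy for a
    state-similarity definition, not LATTE's actual one."""
    a, b = set(state_a), set(state_b)
    if not a and not b:
        return True  # two empty states are trivially similar
    return len(a & b) / len(a | b) >= threshold
```

Merging similar states keeps the model small (avoiding state explosion) at the cost of some accuracy, which is exactly the trade-off the threshold tunes.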
The liminality of trajectory shifts in institutional entrepreneurship
In this paper, we develop a process model of trajectory shifts in institutional entrepreneurship. We focus on the liminal periods experienced by institutional entrepreneurs when they, unlike the rest of the organization, recognize limits in the present and seek to shift a familiar past into an unfamiliar and uncertain future. Such periods involve a situation where the new possible future, not yet fully formed, exists side-by-side with established innovation trajectories. Trajectory shifts are moments of truth for institutional entrepreneurs, but little is known about the underlying mechanisms of how entrepreneurs reflectively deal with liminality to conceive and bring forth new innovation trajectories. Our in-depth case study research at CarCorp traces three such mechanisms (reflective dissension, imaginative projection, and eliminatory exploration) and builds the basis for understanding the liminality of trajectory shifts. The paper offers theoretical implications for the institutional entrepreneurship literature.
Overcoming Language Dichotomies: Toward Effective Program Comprehension for Mobile App Development
Mobile devices and platforms have become an established target for modern
software developers due to performant hardware and a large and growing user
base numbering in the billions. Despite their popularity, the software
development process for mobile apps comes with a set of unique, domain-specific
challenges rooted in program comprehension. Many of these challenges stem from
developer difficulties in reasoning about different representations of a
program, a phenomenon we define as a "language dichotomy". In this paper, we
reflect upon the various language dichotomies that contribute to open problems
in program comprehension and development for mobile apps. Furthermore, to help
guide the research community towards effective solutions for these problems, we
provide a roadmap of directions for future work.
Comment: Invited Keynote Paper for the 26th IEEE/ACM International Conference
on Program Comprehension (ICPC'18)
Scripted GUI Testing of Android Apps: A Study on Diffusion, Evolution and Fragility
Background. Evidence suggests that mobile applications are not as thoroughly
tested as their desktop counterparts. In particular, GUI testing is generally
limited. Like web-based applications, mobile apps suffer from GUI test
fragility, i.e. GUI test classes failing due to minor modifications in the GUI,
without the application functionalities being altered.
Aims. The objective of our study is to examine the diffusion of GUI testing
on Android, and the amount of changes required to keep test classes up to date,
and in particular the changes due to GUI test fragility. We define metrics to
characterize the modifications and evolution of test classes and test methods,
and proxies to estimate fragility-induced changes.
Method. To perform our experiments, we selected six widely used open-source
tools for scripted GUI testing of mobile applications previously described in
the literature. We have mined the repositories on GitHub that used those tools,
and computed our set of metrics.
Results. We found that none of the considered GUI testing frameworks achieved
a major diffusion among the open-source Android projects available on GitHub.
For projects with GUI tests, we found that test suites have to be modified
often: specifically, 5%-10% of developers' modified LOCs belong to tests, and
a relevant portion (60% on average) of such modifications is induced by
fragility.
Conclusions. Fragility of GUI test classes constitutes a relevant concern,
possibly being an obstacle for developers to adopt automated scripted GUI
tests. This first evaluation and measure of fragility of Android scripted GUI
testing can constitute a benchmark for developers, and the basis for the
definition of a taxonomy of fragility causes, and actionable guidelines to
mitigate the issue.
Comment: PROMISE'17 Conference, Best Paper Award