Supporting ethnographic studies of ubiquitous computing in the wild
Ethnography has become a staple feature of IT research over the last twenty years, shaping our understanding of the social character of computing systems and informing their design in a wide variety of settings. The emergence of ubiquitous computing, however, raises new challenges for ethnography, distributing interaction across a burgeoning array of small, mobile devices and online environments that exploit invisible sensing systems. Understanding interaction requires ethnographers to reconcile interactions that are, for example, distributed across devices on the street with online interactions in order to assemble coherent understandings of the social character and purchase of ubiquitous computing systems. We draw upon four recent studies to show how ethnographers are replaying system recordings of interaction alongside existing resources such as video recordings to do this, and we identify key challenges that need to be met to support the ethnographic study of ubiquitous computing in the wild.
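The reconciliation work described above is, at bottom, a timeline-alignment problem: heterogeneous records of the same episode must be merged onto a common clock before they can be replayed side by side. The following minimal Python sketch illustrates that idea only; it is not from the paper, and the file names and two-column CSV format (timestamp in seconds, free-text description) are invented for the example.

    # Hypothetical sketch: merge timestamped system logs and hand-coded video
    # annotations into one ordered timeline for side-by-side ethnographic replay.
    # The file names and (timestamp, description) CSV layout are assumptions.
    import csv
    from operator import itemgetter

    def load_events(path, source):
        """Read (timestamp_seconds, description) rows from a two-column CSV log."""
        with open(path, newline="") as f:
            return [(float(ts), source, desc) for ts, desc in csv.reader(f)]

    # One log from the sensing system, one from hand-coded video annotations.
    timeline = sorted(
        load_events("system_events.csv", "system")
        + load_events("video_annotations.csv", "video"),
        key=itemgetter(0),  # order all events on the shared clock
    )

    for ts, source, desc in timeline:
        print(f"{ts:9.2f}s [{source:6}] {desc}")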
Scripted GUI Testing of Android Apps: A Study on Diffusion, Evolution and Fragility
Background. Evidence suggests that mobile applications are not as thoroughly tested as their desktop counterparts. In particular, GUI testing is generally limited. Like web-based applications, mobile apps suffer from GUI test fragility, i.e. GUI test classes failing due to minor modifications in the GUI, without the application's functionality being altered.
Aims. The objective of our study is to examine the diffusion of GUI testing on Android and the amount of change required to keep test classes up to date, in particular the changes due to GUI test fragility. We define metrics to characterize the modifications and evolution of test classes and test methods, and proxies to estimate fragility-induced changes.
Method. For our experiments, we selected six widely used open-source tools for scripted GUI testing of mobile applications, previously described in the literature. We mined the GitHub repositories that used those tools and computed our set of metrics.
Results. We found that none of the considered GUI testing frameworks achieved a major diffusion among the open-source Android projects available on GitHub. For projects with GUI tests, we found that test suites have to be modified often: 5%-10% of developers' modified LOC belong to tests, and a relevant portion (60% on average) of those modifications is induced by fragility.
Conclusions. The fragility of GUI test classes constitutes a relevant concern, possibly an obstacle to developers adopting automated scripted GUI tests. This first evaluation and measurement of the fragility of Android scripted GUI testing can serve as a benchmark for developers and as the basis for a taxonomy of fragility causes and actionable guidelines to mitigate the issue.
Comment: PROMISE'17 Conference, Best Paper Award
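As a rough, assumption-laden illustration of the kind of proxy the study defines, the sketch below estimates one such quantity: the share of a repository's modified LOC that falls in test code. It is not the paper's metric definition; the "androidTest" path marker is an assumed layout convention of many Android projects, and the computation relies on plain git log --numstat output.

    # Hedged sketch of a fragility-style proxy: share of modified LOC in tests.
    # Assumes test code lives under paths containing "androidTest"; this is a
    # common Android convention, not the paper's exact metric definition.
    import subprocess
    from collections import defaultdict

    def modified_loc_by_kind(repo_path, test_marker="androidTest"):
        """Sum added+deleted LOC over all commits, split into test vs. other."""
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "--numstat", "--format="],
            capture_output=True, text=True, check=True,
        ).stdout
        totals = defaultdict(int)
        for line in out.splitlines():
            parts = line.split("\t")
            if len(parts) != 3 or parts[0] == "-":  # skip blanks and binaries
                continue
            added, deleted, path = int(parts[0]), int(parts[1]), parts[2]
            totals["test" if test_marker in path else "other"] += added + deleted
        return totals

    totals = modified_loc_by_kind(".")
    share = totals["test"] / max(1, totals["test"] + totals["other"])
    print(f"test share of all modified LOC: {share:.1%}")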
Automatically Discovering, Reporting and Reproducing Android Application Crashes
Mobile developers face unique challenges when detecting and reporting crashes
in apps due to their prevailing GUI event-driven nature and additional sources
of inputs (e.g., sensor readings). To support developers in these tasks, we
introduce a novel, automated approach called CRASHSCOPE. This tool explores a
given Android app using systematic input generation, according to several
strategies informed by static and dynamic analyses, with the intrinsic goal of
triggering crashes. When a crash is detected, CRASHSCOPE generates an augmented
crash report containing screenshots, detailed crash reproduction steps, the
captured exception stack trace, and a fully replayable script that
automatically reproduces the crash on a target device. We evaluated
CRASHSCOPE's effectiveness in discovering crashes as compared to five
state-of-the-art Android input generation tools on 61 applications. The results
demonstrate that CRASHSCOPE performs about as well as current tools for
detecting crashes and provides more detailed fault information. Additionally,
in a study analyzing eight real-world Android app crashes, we found that CRASHSCOPE's reports are easily readable and allow for reliable reproduction of crashes by presenting more explicit information than human-written reports.
Comment: 12 pages, in Proceedings of the 9th IEEE International Conference on Software Testing, Verification and Validation (ICST'16), Chicago, IL, April 10-15, 2016, pp. 33-4
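To make the shape of such an augmented crash report concrete, here is a minimal Python sketch; it is not CRASHSCOPE's implementation, and the step dictionaries and report fields are invented for illustration. It scans logcat output for a fatal exception and bundles the executed input steps with the captured stack trace.

    # Illustrative sketch, not CRASHSCOPE itself: detect a crash in logcat
    # output and package the executed input steps into an augmented report.
    # The step format and report fields are assumptions for the example.
    import json

    def build_crash_report(logcat_lines, executed_steps):
        """Return a report dict if the log shows a fatal exception, else None."""
        crash = [i for i, line in enumerate(logcat_lines)
                 if "FATAL EXCEPTION" in line]
        if not crash:
            return None  # no crash observed during this run
        return {
            "stack_trace": "\n".join(logcat_lines[crash[0]:]),
            "reproduction_steps": executed_steps,  # replayable input sequence
            "screenshots": [step["screenshot"] for step in executed_steps],
        }

    steps = [{"action": "tap", "widget": "btn_save", "screenshot": "step1.png"}]
    log = ["...", "FATAL EXCEPTION: main", "java.lang.NullPointerException"]
    print(json.dumps(build_crash_report(log, steps), indent=2))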
Out-Of-Place debugging: a debugging architecture to reduce debugging interference
Context. Recent studies show that developers spend most of their programming
time testing, verifying and debugging software. As applications become more and
more complex, developers demand more advanced debugging support to ease the
software development process.
Inquiry. Since the 1970s, many debugging solutions have been introduced. Among them, online debuggers provide good insight into the conditions that led to a bug, allowing inspection of and interaction with the variables of the program. However, most online debugging solutions introduce debugging interference into the execution of the program, i.e. pauses, latency, and evaluation of code containing side effects.
Approach. This paper investigates a novel debugging technique called out-of-place debugging. The goal is to minimize the debugging interference characteristic of online debugging while retaining online remote capabilities. An out-of-place debugger transfers the program execution and application state from the debugged application to the debugger application, each running in a different process.
Knowledge. On the one hand, out-of-place debugging allows developers to debug applications remotely, overcoming the need for physical access to the machine where the debugged application is running. On the other hand, debugging happens locally in the debugger process, avoiding latency. This makes the technique suitable for deployment on a distributed system and for handling the debugging of several processes running in parallel.
Grounding. We implemented a concrete out-of-place debugger for the Pharo Smalltalk programming language. We show that our approach is practical through several benchmarks comparing it with a classic remote online debugger. Our prototype outperforms a traditional remote debugger by a factor of 1000 in several scenarios. Moreover, we show that the presence of our debugger does not impact the overall performance of the application.
Importance. This work combines remote debugging with the debugging experience of a local online debugger. Out-of-place debugging is the first online debugging technique that can minimize debugging interference while debugging a remote application, yet it keeps the benefits of online debugging (e.g. step-by-step execution). This makes the technique suitable for modern applications, which are increasingly parallel, distributed, and reactive to streams of data from various sources such as sensors, UIs, and the network.
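As a minimal sketch of the out-of-place idea, and only under strong simplifying assumptions (the Pharo implementation transfers execution contexts, not this simplified JSON snapshot, and the debugger address is invented), the debuggee below serializes the failing stack, ships it to a separate debugger process, and resumes, so inspection happens out of place instead of pausing the application.

    # Hypothetical sketch: on failure, snapshot the stack and locals and send
    # them to a separate debugger process, then let the debuggee carry on.
    # DEBUGGER_ADDR and the JSON snapshot format are assumptions.
    import json
    import socket

    DEBUGGER_ADDR = ("localhost", 9999)  # assumed address of the debugger

    def capture_state(exc):
        """Snapshot the stack frames and local variables at the failure point."""
        frames, tb = [], exc.__traceback__
        while tb is not None:
            frames.append({
                "function": tb.tb_frame.f_code.co_name,
                "line": tb.tb_lineno,
                "locals": {k: repr(v) for k, v in tb.tb_frame.f_locals.items()},
            })
            tb = tb.tb_next
        return {"exception": repr(exc), "frames": frames}

    def run_guarded(fn, *args):
        try:
            return fn(*args)
        except Exception as exc:
            with socket.create_connection(DEBUGGER_ADDR) as s:
                s.sendall(json.dumps(capture_state(exc)).encode())
            # The debuggee resumes immediately; state inspection happens in
            # the debugger process, out of place.

    # Usage: run_guarded(lambda: 1 / 0)  # ships a ZeroDivisionError snapshot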