3,215 research outputs found
Overcoming Language Dichotomies: Toward Effective Program Comprehension for Mobile App Development
Mobile devices and platforms have become an established target for modern
software developers due to performant hardware and a large and growing user
base numbering in the billions. Despite their popularity, the software
development process for mobile apps comes with a set of unique, domain-specific
challenges rooted in program comprehension. Many of these challenges stem from
developer difficulties in reasoning about different representations of a
program, a phenomenon we define as a "language dichotomy". In this paper, we
reflect upon the various language dichotomies that contribute to open problems
in program comprehension and development for mobile apps. Furthermore, to help
guide the research community towards effective solutions for these problems, we
provide a roadmap of directions for future work.
Comment: Invited Keynote Paper for the 26th IEEE/ACM International Conference on Program Comprehension (ICPC'18)
How do Developers Test Android Applications?
Enabling fully automated testing of mobile applications has recently become
an important topic of study for both researchers and practitioners. A plethora
of tools and approaches have been proposed to aid mobile developers both by
augmenting manual testing practices and by automating various parts of the
testing process. However, current approaches for automated testing fall short
in convincing developers about their benefits, leading to a majority of mobile
testing being performed manually. With the goal of helping researchers and
practitioners - who design approaches supporting mobile testing - to understand
developers' needs, we analyzed survey responses from 102 open source
contributors to Android projects about their practices when performing testing.
The survey focused on questions regarding practices and preferences of
developers/testers in-the-wild for (i) designing and generating test cases,
(ii) automated testing practices, and (iii) perceptions of quality metrics such
as code coverage for determining test quality. Analyzing the information
gleaned from this survey, we compile a body of knowledge to help guide
researchers and professionals toward tailoring new automated testing approaches
to the needs of a diverse set of open source developers.
Comment: 11 pages, accepted to the Proceedings of the 33rd IEEE International Conference on Software Maintenance and Evolution (ICSME'17)
Translating Video Recordings of Mobile App Usages into Replayable Scenarios
Screen recordings of mobile applications are easy to obtain and capture a
wealth of information pertinent to software developers (e.g., bugs or feature
requests), making them a popular mechanism for crowdsourced app feedback. Thus,
these videos are becoming a common artifact that developers must manage. In
light of unique mobile development constraints, including swift release cycles
and rapidly evolving platforms, automated techniques for analyzing all types of
rich software artifacts provide benefit to mobile developers. Unfortunately,
automatically analyzing screen recordings presents serious challenges, due to
their graphical nature, compared to other types of (textual) artifacts. To
address these challenges, this paper introduces V2S, a lightweight, automated
approach for translating video recordings of Android app usages into replayable
scenarios. V2S is based primarily on computer vision techniques and adapts
recent solutions for object detection and image classification to detect and
classify user actions captured in a video, and convert these into a replayable
test scenario. We performed an extensive evaluation of V2S involving 175 videos
depicting 3,534 GUI-based actions collected from users exercising features and
reproducing bugs from over 80 popular Android apps. Our results illustrate that
V2S can accurately replay scenarios from screen recordings, and is capable of
reproducing 89% of our collected videos with minimal overhead. A case
study with three industrial partners illustrates the potential usefulness of
V2S from the viewpoint of developers.
Comment: In Proceedings of the 42nd International Conference on Software Engineering (ICSE'20), 13 pages
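The abstract's final pipeline stage, converting detected on-screen actions into a replayable scenario, can be illustrated with a small sketch. This is not V2S's actual code: the `DetectedAction` record and the use of `adb shell input` commands are illustrative assumptions standing in for the paper's detection output and replay engine.

```python
# Hypothetical sketch of the last V2S stage: given user actions already
# detected and classified from video frames (the computer-vision stages are
# not reproduced here), emit adb commands that replay them on a device.
from dataclasses import dataclass

@dataclass
class DetectedAction:
    kind: str      # "tap", "long_tap", or "swipe" (assumed action taxonomy)
    x: int
    y: int
    x2: int = 0    # end coordinates, used only for swipes
    y2: int = 0
    duration_ms: int = 300

def to_adb_commands(actions):
    """Translate detected actions into `adb shell input` commands."""
    cmds = []
    for a in actions:
        if a.kind == "tap":
            cmds.append(f"adb shell input tap {a.x} {a.y}")
        elif a.kind == "long_tap":
            # A long press can be replayed as a swipe that starts and
            # ends at the same point, held for the given duration.
            cmds.append(f"adb shell input swipe {a.x} {a.y} {a.x} {a.y} {a.duration_ms}")
        elif a.kind == "swipe":
            cmds.append(f"adb shell input swipe {a.x} {a.y} {a.x2} {a.y2} {a.duration_ms}")
    return cmds

script = to_adb_commands([
    DetectedAction("tap", 540, 960),
    DetectedAction("swipe", 540, 1600, 540, 400),
])
print("\n".join(script))
```

Emitting a plain command script keeps the replay artifact inspectable and editable by developers before it is run against a device.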
Self-Control in Cyberspace: Applying Dual Systems Theory to a Review of Digital Self-Control Tools
Many people struggle to control their use of digital devices. However, our
understanding of the design mechanisms that support user self-control remains
limited. In this paper, we make two contributions to HCI research in this
space: first, we analyse 367 apps and browser extensions from the Google Play,
Chrome Web, and Apple App stores to identify common core design features and
intervention strategies afforded by current tools for digital self-control.
Second, we adapt and apply an integrative dual systems model of self-regulation
as a framework for organising and evaluating the design features found. Our
analysis aims to help the design of better tools in two ways: (i) by
identifying how, through a well-established model of self-regulation, current
tools overlap and differ in how they support self-control; and (ii) by using
the model to reveal underexplored cognitive mechanisms that could aid the
design of new tools.
Comment: 11.5 pages (excl. references), 6 figures, 1 table
App Review Driven Collaborative Bug Finding
Software development teams generally welcome any effort to expose bugs in
their code base. In this work, we build on the hypothesis that mobile apps from
the same category (e.g., two web browser apps) may be affected by similar bugs
in their evolution process. It is therefore possible to transfer the experience
of one historical app to quickly find bugs in its new counterparts. This has
been referred to as collaborative bug finding in the literature. Our novelty is
that we guide the bug finding process by considering that existing bugs have
been hinted within app reviews. Concretely, we design the BugRMSys approach to
recommend bug reports for a target app by matching historical bug reports from
apps in the same category with user app reviews of the target app. We
experimentally show that this approach enables us to quickly expose and report
dozens of bugs for targeted apps such as Brave (web browser app). BugRMSys's
implementation relies on DistilBERT to produce natural language text
embeddings. Our pipeline considers similarities between bug reports and app
reviews to identify relevant bugs. We then focus on the app review as well as
potential reproduction steps in the historical bug report (from a same-category
app) to reproduce the bugs.
Overall, after applying BugRMSys to six popular apps, we were able to
identify, reproduce, and report 20 new bugs: among these, 9 reports have
already been triaged, 6 have been confirmed, and 4 have been fixed by official
development teams.
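The matching step at the heart of BugRMSys can be sketched as follows. The paper embeds text with DistilBERT; here, a simple bag-of-words vector and cosine similarity stand in so the sketch stays dependency-free, and the threshold value is an illustrative assumption.

```python
# Toy sketch of the BugRMSys matching idea: embed historical bug reports and
# target-app reviews, then rank report/review pairs by cosine similarity.
# BugRMSys uses DistilBERT embeddings; a bag-of-words Counter is a
# stand-in placeholder here.
import math
from collections import Counter

def embed(text):
    # Placeholder embedding: term-frequency vector over whitespace tokens.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(bug_reports, reviews, threshold=0.3):
    """Return (report, review, score) pairs above the similarity threshold,
    most similar first."""
    matches = []
    for report in bug_reports:
        for review in reviews:
            score = cosine(embed(report), embed(review))
            if score >= threshold:
                matches.append((report, review, score))
    return sorted(matches, key=lambda m: -m[2])

reports = ["video playback freezes after rotating the screen"]
reviews = ["the app freezes when rotating during video playback",
           "love the new dark theme"]
print(recommend(reports, reviews))
```

With a learned embedding in place of the token counts, the same ranking loop surfaces which historical bugs a target app's reviews appear to be hinting at.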
Automating Software Development for Mobile Computing Platforms
Mobile devices such as smartphones and tablets have become ubiquitous in today's computing landscape. These devices have ushered in entirely new populations of users, and mobile operating systems are now outpacing more traditional desktop systems in terms of market share. The applications that run on these mobile devices (often referred to as "apps") have become a primary means of computing for millions of users and, as such, have garnered immense developer interest. These apps allow for unique, personal software experiences through touch-based UIs and a complex assortment of sensors. However, designing and implementing high-quality mobile apps can be a difficult process, primarily due to challenges unique to mobile development, including change-prone APIs and platform fragmentation, to name a few. In this dissertation we develop techniques that aid developers in overcoming these challenges by automating and improving current software design and testing practices for mobile apps.
More specifically, we first introduce a technique, called Gvt, that improves the quality of graphical user interfaces (GUIs) for mobile apps by automatically detecting instances where a GUI was not implemented to its intended specifications. Gvt does this by constructing hierarchical models of mobile GUIs from metadata associated with both graphical mock-ups (i.e., created by designers using photo-editing software) and running instances of the GUI from the corresponding implementation.
Second, we develop an approach, called ReDraw, that completely automates prototyping of GUIs for mobile apps. ReDraw is able to transform an image of a mobile app GUI into runnable code by detecting discrete GUI components using computer vision techniques, classifying these components into proper functional categories (e.g., button, dropdown menu) using a Convolutional Neural Network (CNN), and assembling these components into realistic code.
Finally, we design a novel approach for automated testing of mobile apps, called CrashScope, that explores a given Android app using systematic input generation with the intrinsic goal of triggering crashes. The GUI-based input generation engine is driven by a combination of static and dynamic analyses that create a model of an app's GUI and targets common, empirically derived root causes of crashes in Android apps. We illustrate that the techniques presented in this dissertation represent significant advancements in mobile development processes through a series of empirical investigations, user studies, and industrial case studies that demonstrate the effectiveness of these approaches and the benefit they provide to developers.
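The Gvt idea of checking an implementation against its design specification can be sketched as a comparison of two GUI hierarchies. The node format (component id mapped to pixel bounds) and the tolerance value are illustrative assumptions, not Gvt's actual data model.

```python
# Hedged sketch of the Gvt comparison step: check each component in the
# designer mock-up's hierarchy against the running implementation's
# hierarchy, flagging components that are missing or whose bounds deviate
# from the specification by more than a small tolerance.
def diff_guis(mockup, impl, tolerance=5):
    """Report components absent from `impl` or whose bounds differ from the
    mock-up by more than `tolerance` pixels.

    Each hierarchy is a dict mapping a component id to its
    (x, y, width, height) bounds.
    """
    violations = []
    for cid, spec in mockup.items():
        actual = impl.get(cid)
        if actual is None:
            violations.append((cid, "missing in implementation"))
            continue
        deltas = [abs(s - a) for s, a in zip(spec, actual)]
        if max(deltas) > tolerance:
            violations.append((cid, f"bounds off by up to {max(deltas)}px"))
    return violations

mock = {"login_button": (40, 600, 300, 48), "logo": (90, 80, 200, 200)}
real = {"login_button": (40, 640, 300, 48), "logo": (90, 80, 200, 200)}
print(diff_guis(mock, real))
```

A small tolerance absorbs harmless rendering differences (e.g., device-specific rounding) while still surfacing genuine deviations from the mock-up.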