Understanding the Test Automation Culture of App Developers
Abstract—Smartphone applications (apps) have recently gained popularity. Millions of apps are available on different app stores, which gives users a plethora of options to choose from; however, it also raises the concern of whether these apps are adequately tested before they are released for public use. In this study, we want to understand the test automation culture prevalent among app developers. Specifically, we want to examine the current state of testing of apps, the tools that are commonly used by app developers, and the problems faced by them. To gain insight into the test automation culture, we conduct two different studies. In the first study, we analyse over 600 Android apps collected from F-Droid, one of the largest repositories containing information about open-source Android apps. We check for the presence of test cases and calculate code coverage to measure the adequacy of testing in these apps. We also survey developers who have hosted their applications on GitHub to understand the testing practices they follow. We ask developers about the tools that they use and the "pain points" that they face while testing Android apps. For the second study, based on the responses from the Android developers, we improve our survey questions and send the survey to Windows app developers within Microsoft. We conclude that many Android apps are poorly tested: only about 14% of the apps contain test cases, and only about 9% of the apps that have executable test cases have coverage above 40%. We also find that although Android app developers use automated testing tools such as JUnit, Monkeyrunner, Robotium, and Robolectric, they often prefer to test their apps manually, whereas Windows app developers prefer in-house tools such as Visual Studio and Microsoft Test Manager. Both Android and Windows app developers face many challenges, such as time constraints, compatibility issues, lack of exposure, and cumbersome tools. We give suggestions to improve the test automation culture in the growing app community.
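The abstract does not say how the presence of test cases was detected. As an illustration only, a repository scan of the kind the study implies might look like the following sketch; the directory conventions and the `has_test_cases` helper are assumptions, not the authors' actual implementation:

```python
import os

# Conventional Android test source locations (an assumption of this
# sketch, not necessarily the paper's exact heuristic).
TEST_MARKERS = ("/src/test/", "/src/androidTest/", "/test/", "/tests/")

def has_test_cases(repo_root):
    """Heuristically decide whether a cloned repo contains test code:
    any .java/.kt file named *Test* under a conventional test directory."""
    for dirpath, _dirs, files in os.walk(repo_root):
        rel = "/" + os.path.relpath(dirpath, repo_root).replace(os.sep, "/") + "/"
        if any(marker in rel for marker in TEST_MARKERS):
            if any(f.endswith((".java", ".kt")) and "Test" in f for f in files):
                return True
    return False
```

Applied over the 600+ F-Droid apps, a predicate like this yields the "contains test cases" fraction the abstract reports; coverage would additionally require executing the tests under an instrumentation tool.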
Security Evaluation of Cyber-Physical Systems in Society-Critical Internet of Things
In this paper, we present an evaluation of the security awareness of developers and users of cyber-physical systems. Our study includes interviews, workshops, surveys and one practical evaluation. We conducted 15 interviews and a survey with 55 respondents coming primarily from industry. Furthermore, we performed a practical evaluation of the current state of practice for a society-critical application, a commercial vehicle, and reconfirmed our findings by discussing an attack vector for an off-line society-critical facility. More work is necessary to increase the usage of security strategies, available methods, processes and standards. Security information, currently often insufficient, should be provided in the user manuals of products and services to protect system users. We confirmed this recently when we conducted an additional survey of users, who feel left out in their quest for their own security and privacy. Finally, hardware-related security questions are beginning to come up on the agenda, with a general increase of interest in, and awareness of, the hardware contribution to overall cyber-physical security. At the end of this paper we discuss possible countermeasures for dealing with threats in infrastructures, highlighting the role of authorities in this quest.
FraudDroid: Automated Ad Fraud Detection for Android Apps
Although mobile ad frauds have been widespread, state-of-the-art approaches
in the literature have mainly focused on detecting the so-called static
placement frauds, where only a single UI state is involved and can be
identified based on static information such as the size or location of ad
views. Other types of fraud exist that involve multiple UI states and are
performed dynamically while users interact with the app. Such dynamic
interaction frauds, although now widespread in apps, have not yet been
explored or addressed in the literature. In this work, we investigate a wide
range of mobile ad frauds to provide a comprehensive taxonomy to the research
community. We then propose FraudDroid, a novel hybrid approach to detect ad
frauds in mobile Android apps. FraudDroid analyses apps dynamically to build UI
state transition graphs and collects their associated runtime network traffic,
which is then leveraged to check against a set of heuristic-based rules for
identifying fraudulent ad behaviours. We show empirically that FraudDroid
detects ad frauds with high precision (93%) and recall (92%). Experimental
results further show that FraudDroid is capable of detecting ad frauds across
the spectrum of fraud types. By analysing 12,000 ad-supported Android apps,
FraudDroid identified 335 cases of fraud associated with 20 ad networks that
are further confirmed to be true positive results and are shared with our
fellow researchers to promote advanced ad fraud detection.
Comment: 12 pages, 10 figures
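To give a flavour of what a heuristic-based rule for the simpler static placement frauds can look like, here is a minimal sketch; the `AdView` fields, screen size, and conditions are illustrative assumptions, not FraudDroid's actual rules (which also cover dynamic interaction frauds via UI state transition graphs and network traffic):

```python
from dataclasses import dataclass

# Illustrative ad-view geometry; field names and the fixed screen size
# are assumptions of this sketch, not FraudDroid's real data model.
@dataclass
class AdView:
    x: int       # left edge in pixels
    y: int       # top edge in pixels
    width: int
    height: int

SCREEN_W, SCREEN_H = 1080, 1920

def static_placement_fraud(ad: AdView) -> bool:
    """Flag two classic static placement frauds: a hidden (zero-area)
    ad, or an ad rendered partly or entirely outside the visible screen."""
    hidden = ad.width == 0 or ad.height == 0
    outside = (ad.x < 0 or ad.y < 0 or
               ad.x + ad.width > SCREEN_W or
               ad.y + ad.height > SCREEN_H)
    return hidden or outside
```

A rule like this needs only one UI state; the dynamic interaction frauds the paper targets require comparing ad behaviour across several states of the transition graph.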
Automatically Discovering, Reporting and Reproducing Android Application Crashes
Mobile developers face unique challenges when detecting and reporting crashes
in apps due to their prevailing GUI event-driven nature and additional sources
of inputs (e.g., sensor readings). To support developers in these tasks, we
introduce a novel, automated approach called CRASHSCOPE. This tool explores a
given Android app using systematic input generation, according to several
strategies informed by static and dynamic analyses, with the intrinsic goal of
triggering crashes. When a crash is detected, CRASHSCOPE generates an augmented
crash report containing screenshots, detailed crash reproduction steps, the
captured exception stack trace, and a fully replayable script that
automatically reproduces the crash on target devices. We evaluated
CRASHSCOPE's effectiveness in discovering crashes as compared to five
state-of-the-art Android input generation tools on 61 applications. The results
demonstrate that CRASHSCOPE performs about as well as current tools for
detecting crashes and provides more detailed fault information. Additionally,
in a study analyzing eight real-world Android app crashes, we found that
CRASHSCOPE's reports are easily readable and allow for reliable reproduction of
crashes by presenting more explicit information than human-written reports.
Comment: 12 pages, in Proceedings of the 9th IEEE International Conference on
Software Testing, Verification and Validation (ICST'16), Chicago, IL, April
10-15, 2016, pp. 33-4
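The core loop of such a tool (drive the app with generated events, catch the first crash, and package the reproduction steps with the captured stack trace) can be sketched as follows; the `app.handle(event)` interface is an assumption of this sketch, not CRASHSCOPE's real instrumentation:

```python
import traceback

def explore_for_crashes(app, events, max_steps=100):
    """Drive a GUI-event-driven app with a generated event sequence and,
    on the first uncaught exception, emit an augmented crash report:
    the reproduction steps so far plus the captured stack trace.
    (`app` is assumed to expose a handle(event) method for this sketch.)"""
    steps = []
    for event in events[:max_steps]:
        steps.append(event)
        try:
            app.handle(event)
        except Exception:
            return {"crashed": True,
                    "reproduction_steps": list(steps),
                    "stack_trace": traceback.format_exc()}
    return {"crashed": False, "reproduction_steps": steps, "stack_trace": None}
```

A real tool would additionally capture screenshots at each step and emit the step list as a replayable script, as the abstract describes.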
Scripted GUI Testing of Android Apps: A Study on Diffusion, Evolution and Fragility
Background. Evidence suggests that mobile applications are not as thoroughly
tested as their desktop counterparts. In particular, GUI testing is generally
limited. Like web-based applications, mobile apps suffer from GUI test
fragility, i.e. GUI test classes failing due to minor modifications in the GUI
without the application functionalities being altered.
Aims. The objective of our study is to examine the diffusion of GUI testing
on Android and the amount of change required to keep test classes up to date,
in particular the changes due to GUI test fragility. We define metrics to
characterize the modifications and evolution of test classes and test methods,
and proxies to estimate fragility-induced changes.
Method. To perform our experiments, we selected six widely used open-source
tools for scripted GUI testing of mobile applications previously described in
the literature. We have mined the repositories on GitHub that used those tools,
and computed our set of metrics.
Results. We found that none of the considered GUI testing frameworks achieved
a major diffusion among the open-source Android projects available on GitHub.
For projects with GUI tests, we found that test suites have to be modified
often; specifically, 5%-10% of developers' modified LOCs belong to tests, and
that a relevant portion (60% on average) of such modifications is induced by
fragility.
Conclusions. Fragility of GUI test classes constitutes a relevant concern,
possibly being an obstacle for developers to adopt automated scripted GUI
tests. This first evaluation and measurement of the fragility of Android
scripted GUI testing can constitute a benchmark for developers, and the basis
for the definition of a taxonomy of fragility causes and actionable guidelines
to mitigate the issue.
Comment: PROMISE'17 Conference, Best Paper Award
DevOps for Digital Leaders
DevOps; continuous delivery; software lifecycle; concurrent parallel testing; service management; ITIL; GRC; PaaS; containerization; API management; lean principles; technical debt; end-to-end automation; automation
Store submission automation: effects of user-centred design on organizational learning
In this thesis, we study an automation tool implemented using the user-centred design paradigm. The aim of this thesis is to study how user-centred design affects organisational learning. The tool is used for uploading application packages and marketing assets to the Apple and Google digital distribution services.
User-centred design focuses on understanding users' tasks and requirements. Organisational learning describes the learning that happens inside an organisation at the individual or group level, which helps the organisation accumulate long-lasting knowledge. An initial literature search found no earlier research focusing on this particular question. In this thesis, we will go through in detail the organisational structure, the requirements set by the digital distribution services, and the implementation of the automation tool for this case. This will enable us to scrutinise the interviews and results in this context.
This thesis was carried out using qualitative research methodology. Interviews were conducted with users from the company for which the tool was implemented. These interviews suggest no strong correlation between organisational learning and user-centred design. However, the results indicate that further inspection of the subject could be worthwhile.
Finnish abstract (translated): This thesis examines a company-internal automation tool designed in a user-centred way, whose purpose is to upload application files and marketing assets to the Apple and Google digital content services. The aim is to determine whether user-centred design has an effect on organisational learning.
In user-centred design, the starting point is understanding users' needs and defining requirements. Organisational learning, in turn, refers to the growth of information or competence within an organisation, either in groups or at the individual level. An initial literature search found no earlier research on the intersection of these two. The thesis describes in detail the structure of the organisation, the requirements set by the digital content services, and how the tool was implemented to meet the resulting demand, so that interpreting the topic and the interviews in this context is as detailed as possible.
This thesis takes qualitative research methodology as its starting point. In connection with the study, users from the company for which the tool was implemented were interviewed. Based on the interviews, no direct correlation was found between user-centred design and organisational learning. However, small indications of a connection were found, suggesting that further research on the subject would be warranted.