Automated Test Input Generation for Android: Are We There Yet?
Mobile applications, often simply called "apps", are increasingly widespread,
and we use them daily to perform a number of activities. Like all software,
apps must be adequately tested to gain confidence that they behave correctly.
Therefore, in recent years, researchers and practitioners alike have begun to
investigate ways to automate app testing. In particular, because of Android's
open source nature and its large share of the market, a great deal of research
has been performed on input generation techniques for apps that run on the
Android operating system. At this point in time, there are in fact a number of
such techniques in the literature, which differ in the way they generate
inputs, the strategy they use to explore the behavior of the app under test,
and the specific heuristics they use. To better understand the strengths and
weaknesses of these existing approaches, and get general insight on ways they
could be made more effective, in this paper we perform a thorough comparison of
the main existing test input generation tools for Android. In our comparison,
we evaluate the effectiveness of these tools, and their corresponding
techniques, according to four metrics: code coverage, ability to detect faults,
ability to work on multiple platforms, and ease of use. Our results provide a
clear picture of the state of the art in input generation for Android apps and
identify future research directions that, if suitably investigated, could lead
to more effective and efficient testing tools for Android.
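
For context, the simplest family of techniques in this space is random input generation, exemplified by Android's built-in Monkey tool. The sketch below is a minimal illustration of that baseline strategy, not any specific research tool from the comparison; it assumes adb is on the PATH with a connected device or emulator, and com.example.app is a placeholder package name.

    import subprocess

    def run_monkey(package, events=500, seed=42):
        """Drive an installed app with pseudo-random UI events using Android's
        stock Monkey tool (assumes adb is on PATH and a device or emulator is
        connected)."""
        cmd = [
            "adb", "shell", "monkey",
            "-p", package,        # constrain events to this package
            "-s", str(seed),      # fixed seed -> reproducible event stream
            "--throttle", "300",  # pause between events, in milliseconds
            "-v",                 # verbose logging
            str(events),          # total number of events to inject
        ]
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    if __name__ == "__main__":
        # Placeholder package name; substitute the app under test.
        print(run_monkey("com.example.app"))

Fixing the seed makes runs repeatable, which is what allows random tools to be compared fairly on metrics such as code coverage.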
Cross-platform testing and maintenance of web and mobile applications
Modern software applications need to run on a variety of web and mobile platforms with diverse software- and hardware-level features. Thus, developers of such software need to duplicate the testing and maintenance effort across a wide range of platforms. Often developers are not able to cope with this increasing demand and release software that is broken on certain platforms, thereby affecting the class of customers using those platforms. Hence, there is a need to automate these duplicated activities and assist developers in coping with the ever-increasing demand. The goal of my work is to improve the testing and maintenance of cross-platform web and mobile applications by developing automated techniques for comparing and matching the behavior of such applications across different platforms.
To achieve this goal, I have identified three problems that are relevant in the context of cross-platform testing and maintenance: 1) automated identification of inconsistencies in the same application's behavior across multiple platforms, 2) detection of features that are present in the application on one platform but missing on another platform version of the same application, and 3) automated migration of test suites and possibly other software artifacts across platforms. I present three different scenarios for the development of cross-platform web and mobile applications, and formulate each of the three problems in the scenario where it is most relevant. To address and mitigate these problems in their corresponding scenarios, I present the principled design, development, and evaluation of two techniques, along with a third, preliminary technique that highlights the research challenges of test migration. The first technique, X-pert, identifies inconsistencies in a web application running on multiple web browsers. The second technique, FMAP, matches features between the desktop and mobile versions of a web application and reports any features found missing on either platform version. The final technique, MigraTest, attempts to automatically migrate test cases from a mobile application on one platform to its counterpart on another platform.
To evaluate these techniques, I implemented them as prototype tools and ran these tools on real-world subject applications. The empirical evaluation of X-pert shows that it is accurate and effective in detecting real-world inconsistencies in web applications. In the case of FMAP, the results of my evaluation show that it was able to correctly identify missing features between desktop and mobile versions of the web applications considered, as confirmed by my analysis of user reports and software fixes for these applications. The third technique, MigraTest, was able to efficiently migrate test cases between two mobile platform versions of the subject applications.
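
To make the feature-matching idea concrete, here is a deliberately simplified sketch in the spirit of FMAP, not its actual implementation: it approximates a page's features by its visible link labels and reports labels present in the desktop version but absent from the mobile version, which is requested by sending a mobile User-Agent string. The URL is a placeholder, and the requests and beautifulsoup4 packages are assumed.

    import requests
    from bs4 import BeautifulSoup

    # Pretend to be a phone so the server returns its mobile version
    # (assumption: the site varies its content by User-Agent; many do).
    MOBILE_UA = ("Mozilla/5.0 (Linux; Android 10) AppleWebKit/537.36 "
                 "(KHTML, like Gecko) Chrome/90.0 Mobile Safari/537.36")

    def link_features(url, user_agent=None):
        """Approximate a page's 'features' by the set of its visible link labels."""
        headers = {"User-Agent": user_agent} if user_agent else {}
        html = requests.get(url, headers=headers, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        return {a.get_text(strip=True) for a in soup.find_all("a")
                if a.get_text(strip=True)}

    def missing_on_mobile(url):
        """Link labels on the desktop page that are absent from the mobile page,
        a crude stand-in for FMAP-style feature matching."""
        return link_features(url) - link_features(url, user_agent=MOBILE_UA)

    if __name__ == "__main__":
        # Placeholder URL; substitute the application under test.
        for feature in sorted(missing_on_mobile("https://example.com")):
            print("possibly missing on mobile:", feature)

A real feature matcher must of course tolerate renamed labels and restructured layouts; this sketch only conveys the comparison step.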
A Cross-browser Web Application Testing Tool
Web applications have gained increased popularity in the past decade due to the ubiquity of the web browser across platforms. With the rapid evolution of web technologies, the complexity of web applications has also grown, making maintenance tasks harder. In particular, maintaining cross-browser compliance is a challenging task for web developers, as they must test their application on a variety of browsers and platforms. Existing tools provide some support for this kind of testing, but developers are still required to identify and fix cross-browser issues mainly through manual inspection. Our WEBDIFF tool addresses the limitations of existing tools by (1) automatically comparing the structural and visual characteristics of web pages when they are rendered in different browsers, and (2) reporting potential differences to developers. When used on nine real web pages, WEBDIFF automatically identified 121 issues, out of which 100 were actual problems. In this demo, we will present WEBDIFF, its underlying technology, and several examples of its use on real applications.
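
The core mechanism this abstract describes, rendering the same page in different browsers and comparing the results, can be approximated in a few lines. The sketch below is an illustrative stand-in, not WEBDIFF itself: it screenshots a page in Chrome and Firefox via Selenium and flags it when the fraction of differing pixels exceeds an arbitrary tolerance. Selenium, Pillow, and both browser drivers are assumed to be installed; the URL and the 2% threshold are placeholders.

    from selenium import webdriver
    from PIL import Image, ImageChops

    def screenshot(driver, url, path):
        """Render a page in the given browser and save a screenshot."""
        driver.get(url)
        driver.save_screenshot(path)
        return path

    def pixel_diff_ratio(path_a, path_b):
        """Fraction of pixels that differ between two screenshots (the second
        image is resized to the first, a simplification real tools avoid)."""
        a = Image.open(path_a).convert("RGB")
        b = Image.open(path_b).convert("RGB").resize(a.size)
        diff = ImageChops.difference(a, b)
        changed = sum(1 for px in diff.getdata() if px != (0, 0, 0))
        return changed / float(a.size[0] * a.size[1])

    if __name__ == "__main__":
        url = "https://example.com"  # placeholder page under test
        chrome, firefox = webdriver.Chrome(), webdriver.Firefox()
        try:
            ratio = pixel_diff_ratio(
                screenshot(chrome, url, "chrome.png"),
                screenshot(firefox, url, "firefox.png"),
            )
            # 2% is an arbitrary tolerance, not a value from the paper.
            print("%.1f%% of pixels differ" % (100 * ratio),
                  "-> potential XBI" if ratio > 0.02 else "")
        finally:
            chrome.quit()
            firefox.quit()

Raw pixel comparison is noisy (fonts and anti-aliasing differ legitimately across browsers), which is why WEBDIFF pairs the visual check with a structural one.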
CROSSCHECK: Combining Crawling and Differencing To Better Detect Cross-browser Incompatibilities in Web Applications
One of the consequences of the continuous and rapid evolution of web technologies is the number of inconsistencies among web browsers' implementations. Such inconsistencies can result in cross-browser incompatibilities (XBIs): situations in which the same web application behaves differently when run on different browsers. In some cases, XBIs consist of tolerable cosmetic differences. In other cases, however, they may completely prevent users from accessing part of a web application's functionality. Despite the prevalence of XBIs, there are hardly any tools that can help web developers detect and correct such issues. In fact, most existing approaches for dealing with XBIs involve a considerable amount of manual effort and are consequently time-consuming and error-prone. In recent work, we presented two complementary approaches, WEBDIFF and CROSST, for automatically detecting and reporting XBIs. In this paper, we present CROSSCHECK, a more powerful and comprehensive technique and tool for XBI detection that combines and adapts these two approaches in a way that leverages their respective strengths. The paper also presents an empirical evaluation of CROSSCHECK on a set of real-world web applications. The results of our experiments show that CROSSCHECK is both effective and efficient in detecting XBIs, and that it can outperform existing techniques.
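
The combination the paper describes, crawling to cover an application's states and differencing to compare those states across browsers, can be sketched as follows. This toy example is not the CROSSCHECK tool: it crawls a handful of same-site pages in one browser, fingerprints each page by its sequence of HTML tag names, and reports pages whose fingerprints differ in a second browser. Selenium, beautifulsoup4, both browser drivers, and the placeholder start URL are all assumptions.

    from urllib.parse import urljoin, urlparse
    from bs4 import BeautifulSoup
    from selenium import webdriver

    def tag_sequence(html):
        """Structural fingerprint of a page: its HTML tag names in document order."""
        return [el.name for el in BeautifulSoup(html, "html.parser").find_all(True)]

    def crawl(driver, start_url, limit=10):
        """Breadth-first crawl of same-site links, fingerprinting each page."""
        site = urlparse(start_url).netloc
        queue, pages = [start_url], {}
        while queue and len(pages) < limit:
            url = queue.pop(0)
            if url in pages:
                continue
            driver.get(url)
            pages[url] = tag_sequence(driver.page_source)
            soup = BeautifulSoup(driver.page_source, "html.parser")
            for a in soup.find_all("a", href=True):
                link = urljoin(url, a["href"])
                if urlparse(link).netloc == site and link not in pages:
                    queue.append(link)
        return pages

    if __name__ == "__main__":
        # Placeholder start URL; substitute the application under test.
        chrome, firefox = webdriver.Chrome(), webdriver.Firefox()
        try:
            for url, fingerprint in crawl(chrome, "https://example.com").items():
                firefox.get(url)
                if tag_sequence(firefox.page_source) != fingerprint:
                    print("structural XBI candidate:", url)
        finally:
            chrome.quit()
            firefox.quit()

Comparing tag sequences catches structural divergence but not layout or behavioral XBIs; CROSSCHECK's contribution is precisely in combining richer state models with machine-learned differencing, which this sketch does not attempt.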