
    Translating Video Recordings of Mobile App Usages into Replayable Scenarios

    Screen recordings of mobile applications are easy to obtain and capture a wealth of information pertinent to software developers (e.g., bugs or feature requests), making them a popular mechanism for crowdsourced app feedback. Thus, these videos are becoming a common artifact that developers must manage. In light of unique mobile development constraints, including swift release cycles and rapidly evolving platforms, automated techniques for analyzing all types of rich software artifacts provide benefit to mobile developers. Unfortunately, automatically analyzing screen recordings presents serious challenges, due to their graphical nature, compared to other types of (textual) artifacts. To address these challenges, this paper introduces V2S, a lightweight, automated approach for translating video recordings of Android app usages into replayable scenarios. V2S is based primarily on computer vision techniques and adapts recent solutions for object detection and image classification to detect and classify user actions captured in a video, and to convert these into a replayable test scenario. We performed an extensive evaluation of V2S involving 175 videos depicting 3,534 GUI-based actions collected from users exercising features and reproducing bugs from over 80 popular Android apps. Our results illustrate that V2S can accurately replay scenarios from screen recordings, and is capable of reproducing approximately 89% of our collected videos with minimal overhead. A case study with three industrial partners illustrates the potential usefulness of V2S from the viewpoint of developers. Comment: In proceedings of the 42nd International Conference on Software Engineering (ICSE'20), 13 pages
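    As an illustration of the general idea (and not the authors' actual V2S pipeline, which relies on trained object-detection and classification models), the sketch below locates Android's on-screen touch indicator in each video frame with OpenCV template matching and replays the detected taps through adb. The template image, threshold, and file names are hypothetical placeholders.

```python
# Minimal sketch, not the V2S implementation: find the Android "show touches"
# indicator in each frame via template matching, then replay taps over adb.
import subprocess
import cv2

TOUCH_TEMPLATE = cv2.imread("touch_indicator.png", cv2.IMREAD_GRAYSCALE)  # assumed asset
MATCH_THRESHOLD = 0.8  # assumed threshold

def detect_taps(video_path):
    """Yield (frame_index, x, y) for frames where the touch indicator appears."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        result = cv2.matchTemplate(gray, TOUCH_TEMPLATE, cv2.TM_CCOEFF_NORMED)
        _, score, _, (x, y) = cv2.minMaxLoc(result)
        if score >= MATCH_THRESHOLD:
            h, w = TOUCH_TEMPLATE.shape
            yield index, x + w // 2, y + h // 2  # centre of the matched indicator
        index += 1
    capture.release()

def replay(taps):
    """Send each detected tap to a connected device via adb."""
    for _, x, y in taps:
        subprocess.run(["adb", "shell", "input", "tap", str(x), str(y)], check=True)

if __name__ == "__main__":
    replay(detect_taps("recording.mp4"))
```

    In practice, consecutive per-frame detections belonging to a single press would need to be merged into one action, and gestures other than taps handled separately; the real approach goes well beyond this sketch.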

    The Need to Support of Data Flow Graph Visualization of Forensic Lucid Programs, Forensic Evidence, and their Evaluation by GIPSY

    Lucid programs are data-flow programs and can be visually represented as data flow graphs (DFGs) and composed visually. Forensic Lucid, a Lucid dialect, is a language to specify and reason about cyberforensic cases. It includes the encoding of the evidence (representing the context of evaluation) and the crime scene modeling in order to validate claims against the model and perform event reconstruction, potentially within large swaths of digital evidence. To aid investigators in modeling the scene and evaluating it, instead of typing a Forensic Lucid program, we propose to extend the design and implementation of Lucid DFG programming to Forensic Lucid case modeling and specification, to enhance the usability of the language, the system, and its behavior. We briefly discuss the related work on visual programming and DFG modeling in an attempt to define and select one approach, or a composition of approaches, for Forensic Lucid based on various criteria such as previous implementation, wide use, and formal backing in terms of semantics and translation. In the end, we solicit the readers' constructive opinions, feedback, comments, and recommendations within the context of this short discussion. Comment: 11 pages, 7 figures, index; extended abstract presented at VizSec'10 at http://www.vizsec2010.org/posters ; short paper accepted at PST'1
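    To make the DFG view concrete, here is a toy sketch that renders an invented dataflow graph as Graphviz DOT text, the kind of output a visual editor could display. The node names are illustrative only and are not Forensic Lucid syntax or GIPSY's internal representation.

```python
# Toy illustration of the DFG view: nodes are operators/streams, edges carry data flow.
dfg_nodes = {"evidence", "context", "where_clause", "result"}
dfg_edges = [
    ("evidence", "where_clause"),   # evidential statement feeds the evaluation
    ("context", "where_clause"),
    ("where_clause", "result"),     # evaluation yields the reconstructed events
]

def render_dot(nodes, edges):
    """Emit a Graphviz DOT description that a visual editor could render."""
    lines = ["digraph forensic_lucid_dfg {"]
    lines += [f'  "{node}";' for node in sorted(nodes)]
    lines += [f'  "{src}" -> "{dst}";' for src, dst in edges]
    lines.append("}")
    return "\n".join(lines)

print(render_dot(dfg_nodes, dfg_edges))
```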

    UI-Design driven model-based testing

    Testing interactive systems is notoriously difficult. Not only do we need to ensure that the functionality of the developed system is correct with respect to the requirements and specifications, we also need to ensure that the user interface to the system is correct (enables a user to access the functionality correctly) and is usable. These different requirements of interactive system testing are not easily combined within a single testing strategy. We investigate the use of models of interactive systems, which have been derived from design artefacts, as the basis for generating tests for an implemented system. We give a model-based method for testing interactive systems which has low overhead in terms of the models required and which enables testing of UI and system functionality from the perspective of user interaction.
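    A minimal sketch of this style of test generation, using an invented login-dialog model (the states and events below are illustrative, not taken from the paper): the behavioural model is a small state machine, and abstract test cases are enumerated as event sequences up to a bounded length.

```python
# Sketch of model-based test generation from a UI behaviour model.
from collections import deque

# State machine: state -> {event: next_state}  (invented example model)
UI_MODEL = {
    "login": {"enter_credentials": "login_filled", "cancel": "closed"},
    "login_filled": {"submit": "home", "clear": "login"},
    "home": {"logout": "login"},
    "closed": {},
}

def generate_tests(model, start, max_length):
    """Enumerate event sequences (abstract test cases) up to max_length events."""
    tests = []
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if path:
            tests.append(path)
        if len(path) == max_length:
            continue
        for event, target in model[state].items():
            queue.append((target, path + [event]))
    return tests

for test in generate_tests(UI_MODEL, "login", 3):
    print(" -> ".join(test))
```

    Each generated sequence would then be mapped onto concrete UI actions against the implemented system, which is where an oracle comparing observed and modelled behaviour comes in.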

    Automated Test Input Generation for Android: Are We There Yet?

    Mobile applications, often simply called "apps", are increasingly widespread, and we use them daily to perform a number of activities. Like all software, apps must be adequately tested to gain confidence that they behave correctly. Therefore, in recent years, researchers and practitioners alike have begun to investigate ways to automate app testing. In particular, because of Android's open source nature and its large share of the market, a great deal of research has been performed on input generation techniques for apps that run on the Android operating system. At this point in time, there are in fact a number of such techniques in the literature, which differ in the way they generate inputs, the strategy they use to explore the behavior of the app under test, and the specific heuristics they use. To better understand the strengths and weaknesses of these existing approaches, and to get general insight on ways they could be made more effective, in this paper we perform a thorough comparison of the main existing test input generation tools for Android. In our comparison, we evaluate the effectiveness of these tools, and their corresponding techniques, according to four metrics: code coverage, ability to detect faults, ability to work on multiple platforms, and ease of use. Our results provide a clear picture of the state of the art in input generation for Android apps and identify future research directions that, if suitably investigated, could lead to more effective and efficient testing tools for Android.
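    As a rough baseline in the spirit of the tools compared here, the sketch below drives an app with Android's built-in Monkey fuzzer and flags crashes reported in its output. The package name, seed, and event budget are placeholders, and real evaluations would also instrument code coverage, which is omitted.

```python
# Sketch: random input generation with Android's Monkey, with basic crash detection.
import subprocess

PACKAGE = "com.example.app"  # hypothetical app under test
EVENTS = 1000
SEED = 42

def run_monkey(package, events, seed):
    """Run Monkey against the package and report whether a crash/ANR was logged."""
    result = subprocess.run(
        ["adb", "shell", "monkey", "-p", package, "-s", str(seed),
         "--throttle", "100", str(events)],
        capture_output=True, text=True)
    crashed = "CRASH" in result.stdout or "NOT RESPONDING" in result.stdout
    return crashed, result.stdout

if __name__ == "__main__":
    crashed, log = run_monkey(PACKAGE, EVENTS, SEED)
    print("crash detected" if crashed else "no crash detected")
```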

    Towards a catalog of usability smells

    This paper presents a catalog of smells in the context of interactive applications. These so-called usability smells are indicators of poor design in an application's user interface, with the potential to hinder not only its usability but also its maintenance and evolution. To eliminate such usability smells we discuss a set of program/usability refactorings. In order to validate the presented usability smells catalog, and the associated refactorings, we present a preliminary empirical study with software developers in the context of a real open source hospital management application. Moreover, a tool that computes graphical user interface behavior models, given the application's source code, is used to automatically detect usability smells at the model level. This work was partially funded by the ERDF - European Regional Development Fund through the COMPETE Programme (operational programme for competitiveness) and by National Funds through the FCT (Portuguese Foundation for Science and Technology), within projects reference FCOMP-01-0124-FEDER-020484 (J. Saraiva) and FCOMP-01-0124-FEDER-020554 (J. C. Campos).
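    As an illustration of model-level smell detection, the sketch below flags windows that expose more actions than a fixed threshold in a simple behaviour model. The "overloaded window" smell, the threshold, and the model are invented examples rather than entries from the paper's catalog.

```python
# Toy detector over a GUI behaviour model (window -> list of (action, target window)).
MAX_ACTIONS = 3  # assumed threshold

GUI_MODEL = {
    "MainWindow": [("open_patient", "PatientForm"), ("open_billing", "Billing"),
                   ("open_reports", "Reports"), ("open_settings", "Settings")],
    "PatientForm": [("save", "MainWindow"), ("cancel", "MainWindow")],
}

def detect_overloaded_windows(model, max_actions=MAX_ACTIONS):
    """Flag windows exposing more actions than the threshold."""
    return [window for window, actions in model.items() if len(actions) > max_actions]

for window in detect_overloaded_windows(GUI_MODEL):
    print(f"usability smell: {window} exposes too many actions")
```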

    Combining formal methods and functional strategies regarding the reverse engineering of interactive applications

    Graphical user interfaces (GUIs) make software easy to use by providing the user with visual controls. Therefore, correctness of the GUI's code is essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their more important aspects. This paper describes our approach to reverse engineer an abstract model of a user interface directly from the GUI's legacy code. We also present results from a case study. These results are encouraging and give evidence that the goal of reverse engineering user interfaces can be met with more work on this technique. Fundação para a Ciência e a Tecnologia (FCT); Fundo Europeu de Desenvolvimento Regional (FEDER)
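    A much cruder pass than the combination of formal methods and functional strategies described here, but it conveys the idea: a regex scan over Swing-style source code that recovers (widget, listener) pairs as a first abstract model of the interface. The pattern and sample code are illustrative.

```python
# Sketch: recover (widget, listener) pairs from Swing-style listener registrations.
import re

LISTENER_PATTERN = re.compile(r"(\w+)\.addActionListener\(\s*new\s+(\w+)")

def extract_ui_model(java_source):
    """Return a list of (widget, listener class) pairs found in the source text."""
    return LISTENER_PATTERN.findall(java_source)

sample = """
    okButton.addActionListener(new SaveHandler());
    cancelButton.addActionListener(new CloseHandler());
"""
for widget, handler in extract_ui_model(sample):
    print(f"{widget} -> {handler}")
```

    A real reverse-engineering pass would work on an abstract syntax tree and follow the behaviour of each handler, rather than matching text, but the output shape, an abstract widget-to-behaviour model, is the same.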