
    Translating Video Recordings of Mobile App Usages into Replayable Scenarios

    Screen recordings of mobile applications are easy to obtain and capture a wealth of information pertinent to software developers (e.g., bugs or feature requests), making them a popular mechanism for crowdsourced app feedback. Thus, these videos are becoming a common artifact that developers must manage. In light of unique mobile development constraints, including swift release cycles and rapidly evolving platforms, automated techniques for analyzing all types of rich software artifacts benefit mobile developers. Unfortunately, automatically analyzing screen recordings presents serious challenges, due to their graphical nature, compared to other types of (textual) artifacts. To address these challenges, this paper introduces V2S, a lightweight, automated approach for translating video recordings of Android app usages into replayable scenarios. V2S is based primarily on computer vision techniques and adapts recent solutions for object detection and image classification to detect and classify user actions captured in a video, and to convert these into a replayable test scenario. We performed an extensive evaluation of V2S involving 175 videos depicting 3,534 GUI-based actions collected from users exercising features and reproducing bugs from over 80 popular Android apps. Our results illustrate that V2S can accurately replay scenarios from screen recordings, and is capable of reproducing ≈89% of our collected videos with minimal overhead. A case study with three industrial partners illustrates the potential usefulness of V2S from the viewpoint of developers. Comment: In proceedings of the 42nd International Conference on Software Engineering (ICSE'20), 13 pages
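    The abstract's pipeline — detect on-screen touch indicators frame by frame, group and classify them into high-level actions, then emit a replayable script — can be sketched roughly as below. This is an illustrative sketch, not V2S's actual code: the function names, the grouping heuristic, and the `adb shell input` replay format are all assumptions.

    ```python
    # Rough sketch of a video-to-scenario pipeline in the spirit of V2S.
    # A real system would obtain per-frame Touch detections from an object
    # detector trained on the Android touch indicator; here they are given.
    from dataclasses import dataclass

    @dataclass
    class Touch:
        frame: int  # video frame index where the touch indicator was detected
        x: int
        y: int

    def classify(group):
        """Classify a group of consecutive detections as a tap or a swipe."""
        if len(group) <= 2:
            return ("tap", group[0].x, group[0].y)
        start, end = group[0], group[-1]
        return ("swipe", start.x, start.y, end.x, end.y)

    def group_into_actions(touches, max_gap=3):
        """Group per-frame detections into actions: detections separated by
        more than max_gap frames are assumed to belong to different actions."""
        actions, current = [], []
        for t in touches:
            if current and t.frame - current[-1].frame > max_gap:
                actions.append(classify(current))
                current = []
            current.append(t)
        if current:
            actions.append(classify(current))
        return actions

    def to_replay_script(actions):
        """Emit adb shell input commands that replay the recognized actions."""
        lines = []
        for a in actions:
            if a[0] == "tap":
                lines.append(f"adb shell input tap {a[1]} {a[2]}")
            else:
                lines.append(f"adb shell input swipe {a[1]} {a[2]} {a[3]} {a[4]}")
        return lines

    touches = [Touch(10, 540, 960), Touch(11, 540, 960),
               Touch(30, 200, 1500), Touch(33, 400, 1200), Touch(36, 600, 900)]
    print(to_replay_script(group_into_actions(touches)))
    ```

    The key design point the paper exploits is that the replay target (a test script of taps and swipes) is far lower-dimensional than the input video, so per-frame detections only need to be grouped and classified, not perfectly localized.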

    Stabilising touch interactions in cockpits, aerospace, and vibrating environments

    © Springer International Publishing AG, part of Springer Nature 2018. Incorporating touch screen interaction into cockpit flight systems is increasingly gaining traction, given its potential advantages for design as well as usability for pilots. However, perturbations to the user input are prevalent in such environments due to vibrations, turbulence and high accelerations. This poses particular challenges for interacting with displays in the cockpit, for example, accidental activation during turbulence or high levels of distraction from the primary task of airplane control to accomplish selection tasks. Predictive displays, on the other hand, have emerged as a solution to minimise the effort as well as the cognitive, visual and physical workload associated with using in-vehicle displays under perturbations induced by road and driving conditions. This technology employs 3D gesture tracking and potentially eye-gaze as well as other sensory data to substantially facilitate the acquisition (pointing and selection) of an interface component by predicting the item the user intends to select on the display, early in the movement towards the screen. A key aspect is utilising principled Bayesian modelling to incorporate and treat the present perturbation, making it a software-based solution that has shown promising results in automotive applications. This paper explores the potential of applying this technology in aerospace and vibrating environments in general, and presents design recommendations for such an approach to enhance interaction accuracy as well as safety.
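    The Bayesian intent-prediction idea described above can be illustrated with a minimal sketch: maintain a posterior over candidate on-screen targets and update it with each noisy (perturbed) pointing observation. The Gaussian likelihood and the `sigma` value are illustrative assumptions, not the paper's actual model, which would also incorporate trajectory dynamics.

    ```python
    # Minimal Bayesian intent inference for a predictive display:
    # P(target | obs) ∝ P(obs | target) * P(target), with a Gaussian
    # likelihood of the observed finger position around each target.
    import math

    def update_posterior(prior, targets, obs, sigma=80.0):
        """One Bayesian update of the belief over candidate targets."""
        likelihoods = []
        for (tx, ty) in targets:
            d2 = (obs[0] - tx) ** 2 + (obs[1] - ty) ** 2
            likelihoods.append(math.exp(-d2 / (2 * sigma ** 2)))
        unnorm = [l * p for l, p in zip(likelihoods, prior)]
        z = sum(unnorm)
        return [u / z for u in unnorm]

    targets = [(100, 100), (400, 100), (250, 300)]  # candidate UI items (px)
    belief = [1 / 3] * 3                            # uniform prior

    # A perturbed pointing trajectory heading toward the second target:
    for obs in [(250, 200), (320, 150), (380, 120)]:
        belief = update_posterior(belief, targets, obs)

    best = max(range(len(targets)), key=lambda i: belief[i])
    print(best, [round(b, 3) for b in belief])
    ```

    Because the posterior concentrates on the intended item before the finger reaches the screen, the interface can enlarge or auto-select that item early, which is what makes the approach robust to vibration-induced perturbation.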

    Literature Survey on Interaction Techniques for Large Displays

    When designing for large screen displays, designers are forced to deal with cursor tracking issues, interaction over distances, and space management issues. Because the screen can cover a large portion of the user's visual angle, it may be hard for users to begin and complete search tasks for basic items such as cursors or icons. In addition, maneuvering over long distances and acquiring small targets understandably takes more time than the same interactions on conventionally sized screens. To deal with these issues, large display researchers have developed increasingly unconventional devices, methods and widgets for interaction, and systems for space and task management. For tracking cursors there are techniques that deal with the size and shape of the cursor, as well as the "density" of the cursor. Other techniques help direct the attention of the user to the cursor. For target acquisition on large screens, many researchers have tried to augment existing 2D GUI metaphors, exploiting Fitts' law, which predicts that acquisition time grows with target distance and shrinks with target width. Some techniques seek to enlarge targets while others enlarge the cursor itself; still others develop ways of closing the distances on large screen displays. However, many researchers feel that existing 2D metaphors do not and will not work for large screens, and that the community should move to more unconventional devices and metaphors, including eye-tracking, laser-pointing, hand-tracking, two-handed touchscreen techniques, and other high-DOF devices. In the end, many of these techniques do provide effective means for interaction on large displays. However, we need to quantify the benefits of these methods and understand them better: the more we understand their advantages and disadvantages, the easier it will be to employ them in working large screen systems.
We also need to put into place a kind of interaction standard for these large screen systems. This could mean simply supporting desktop events such as pointing and clicking. It may also mean that we need to identify the needs of each domain that large screens are used for and tailor the interaction techniques to the domain.
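    The Fitts' law reasoning the survey's techniques build on is captured by the Shannon formulation MT = a + b·log2(D/W + 1): predicted movement time grows with distance D and shrinks with target width W. The snippet below shows why both target enlargement and distance-closing techniques reduce predicted acquisition time; the a and b constants are illustrative regression values, not data from the survey.

    ```python
    # Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1).
    # a, b are device/user-dependent constants fit by regression;
    # the values used here are purely illustrative.
    import math

    def fitts_mt(distance, width, a=0.1, b=0.15):
        """Predicted movement time (seconds) to acquire a target."""
        return a + b * math.log2(distance / width + 1)

    baseline = fitts_mt(distance=2000, width=20)  # far, small target
    enlarged = fitts_mt(distance=2000, width=80)  # target-expansion technique
    closer   = fitts_mt(distance=400,  width=20)  # distance-closing technique
    print(round(baseline, 3), round(enlarged, 3), round(closer, 3))
    ```

    Both interventions shrink the log term (the index of difficulty), which is why such different-looking techniques (expanding cursors, dragging distant targets closer, portal widgets) all attack the same two variables.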

    Human–Machine Interface in Transport Systems: An Industrial Overview for More Extended Rail Applications

    This paper provides an overview of Human Machine Interface (HMI) design and command systems in commercial or experimental operation across transport modes. It presents and comments on different HMIs from the perspective of vehicle automation equipment and simulators in different application domains. Considering the fields of cognition and automation, this investigation highlights human factors and the experiences of different industries according to industrial and literature reviews. Moreover, to better focus the objectives and extend the investigated industrial panorama, the analysis covers the most effective simulators in operation across various transport modes, used for the training of operators as well as for research in the fields of safety and ergonomics. Special focus is given to new technologies that are potentially applicable in future train cabins, e.g., visual displays and haptic shared controls. Finally, a synthesis of human factors and their limits regarding support for monitoring or driving assistance is proposed.

    glueTK: A Framework for Multi-modal, Multi-display Interaction

    This thesis describes glueTK, a framework for human machine interaction, that allows the integration of multiple input modalities and the interaction across different displays. Building upon the framework, several contributions to integrate pointing gestures into interactive systems are presented. To address the design of interfaces for the wide range of supported displays, a concept for transferring interaction performance from one system to another is defined

    Vision based 3D Gesture Tracking using Augmented Reality and Virtual Reality for Improved Learning Applications

    3D gesture recognition and tracking for augmented reality and virtual reality have attracted substantial research interest owing to advances in smartphone technology. By interacting with 3D objects in augmented reality and virtual reality, users gain a better understanding of the subject matter, although this has typically required customized hardware support and satisfactory overall experimental performance. This research surveys current vision-based 3D gesture architectures for augmented reality and virtual reality. Its core goal is to present an analysis of methods and frameworks, followed by the experimental performance of recognizing and tracking hand gestures and interacting with virtual objects on smartphones. The experimental evaluation of existing methods is organized into three categories: hardware requirements, documentation before the actual experiment, and datasets. These categories are expected to ensure robust validation for the practical use of 3D gesture tracking in augmented reality and virtual reality. Hardware setup covers types of gloves, fingerprint sensing, and types of sensors. Documentation covers classroom setup manuals, questionnaires, recordings for improvement, and stress-test applications. The last part of the experimental section covers the datasets used in existing research. This comprehensive illustration of methods, frameworks and experimental aspects can significantly contribute to 3D gesture recognition and tracking for augmented reality and virtual reality. Peer reviewed