
    Investigating Performance and Usage of Input Methods for Soft Keyboard Hotkeys

    Touch-based devices, despite their mainstream availability, do not support a unified and efficient command selection mechanism available on every platform and application. We advocate that hotkeys, conventionally used as a shortcut mechanism on desktop computers, could be generalized into a command selection mechanism for touch-based devices, even for keyboard-less applications. In this paper, we investigate the performance and usage of soft keyboard shortcuts, or hotkeys (abbreviated SoftCuts), through two studies comparing different input methods across sitting, standing and walking conditions. Our results suggest that SoftCuts are not only appreciated by participants but also support rapid command selection with different devices and hand configurations. We also found no evidence that walking degrades their performance when using the Once input method. (17+2 pages, published at MobileHCI 2020.)
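
    The core mechanism is easy to picture in code: while an on-screen modifier is held (or, for the Once method, after a single tap on it), letter keys are reinterpreted as commands. Below is a minimal sketch of that remapping; the class, method names and command table are illustrative assumptions, not an API from the paper.

```python
# Minimal sketch of SoftCuts-style command selection on a soft keyboard.
# All names (SoftKeyboard, on_modifier, COMMANDS) are hypothetical
# illustrations, not the paper's implementation.

COMMANDS = {          # hotkey -> command, mirroring desktop conventions
    "c": "copy",
    "v": "paste",
    "z": "undo",
    "b": "bold",
}

class SoftKeyboard:
    def __init__(self):
        self.modifier_down = False   # True while the on-screen modifier is held

    def on_modifier(self, pressed: bool):
        self.modifier_down = pressed
        # A real implementation would relabel the keys with command names
        # here, so users can browse available commands before committing.

    def on_key(self, key: str) -> str:
        if self.modifier_down and key in COMMANDS:
            return f"execute:{COMMANDS[key]}"
        return f"type:{key}"

kb = SoftKeyboard()
kb.on_modifier(True)
print(kb.on_key("c"))   # -> execute:copy
```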

    MenuCraft: Interactive Menu System Design with Large Language Models

    Menu system design is a challenging task involving many design options and various human factors. For example, one crucial factor designers need to consider is the semantic and systematic relation between menu commands; capturing these relations can be challenging due to limited available resources. With the advancement of neural language models, large language models can apply their vast pre-existing knowledge to designing and refining menu systems. In this paper, we propose MenuCraft, an AI-assisted designer for menu design that enables collaboration between the designer and a dialogue system to design menus. MenuCraft offers an interactive language-based menu design tool that simplifies the menu design process and enables easy customization of design options. MenuCraft supports a variety of interactions through dialogue, allowing zero- and few-shot learning.
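
    The zero/few-shot angle can be illustrated with a small sketch: a prompt that shows the model a couple of example menu systems before asking it to organize a new application's commands. The prompt format and the call_llm() stub below are assumptions for illustration, not MenuCraft's actual implementation.

```python
# Sketch of few-shot prompting for menu design in the spirit of MenuCraft.
# The example data, prompt format and call_llm() stub are illustrative
# assumptions, not the paper's implementation.

FEW_SHOT_EXAMPLES = [
    ("image editor", ["File", "Edit", "Image", "Layer", "Filter", "Help"]),
    ("text editor",  ["File", "Edit", "View", "Tools", "Help"]),
]

def build_prompt(app_description: str, commands: list[str]) -> str:
    lines = ["Group the commands of an application into a coherent menu system."]
    for app, menus in FEW_SHOT_EXAMPLES:   # few-shot demonstrations
        lines.append(f"Application: {app}\nTop-level menus: {', '.join(menus)}")
    lines.append(f"Application: {app_description}\n"
                 f"Commands: {', '.join(commands)}\n"
                 f"Top-level menus:")
    return "\n\n".join(lines)

def call_llm(prompt: str) -> str:
    # Plug in any chat-completion API here; left abstract on purpose.
    raise NotImplementedError

prompt = build_prompt("vector drawing tool",
                      ["open", "save", "undo", "align", "group", "export"])
print(prompt)
```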

    Legitimizing video-sharing practices on local and global platforms: A multimodal analysis of menu design, folk genres and taxonomy

    There have been extensive public and academic debates on the role platform algorithms play in shaping social media (sub)cultures. Little attention, however, has been paid to how platform (sub)cultures are discursively constructed by the design of the platform interface. This study examines Bilibili, a leading Chinese video platform, and investigates how it discursively frames video-sharing culture through platform menu design. We developed a three-level analytical framework that includes: 1) a multimodal social semiotic analysis of Bilibili’s menu design; 2) a contrastive analysis of YouTube’s video menu; and 3) a focused analysis of guichu or kichiku videos (as a linguistic phenomenon, a transcultural practice, and a multimodal semiotic artifact). Our findings reveal that Bilibili discursively frames and legitimizes video-sharing practices by establishing a folk taxonomy of video genres and integrating subculture into its menu design. Furthermore, Bilibili controls access to cultural knowledge through explicit (gatekeeping) and implicit (semiotic) measures, in contrast to YouTube’s visual and superficial taxonomy. This study unveils the different discursive strategies platforms use to shape unique online video cultures.

    MARLUI: Multi-Agent Reinforcement Learning for Adaptive UIs

    Adaptive user interfaces (UIs) automatically change an interface to better support users' tasks. Recently, machine learning techniques have enabled the transition to more powerful and complex adaptive UIs. However, a core challenge for adaptive UIs is their reliance on high-quality user data that has to be collected offline for each task. We formulate UI adaptation as a multi-agent reinforcement learning problem to overcome this challenge. In our formulation, a user agent mimics a real user and learns to interact with a UI. Simultaneously, an interface agent learns UI adaptations that maximize the user agent's performance. The interface agent learns the task structure from the user agent's behavior and, based on that, can support the user agent in completing its task. Our method produces adaptation policies that are learned in simulation only, and it therefore needs no real user data. Our experiments show that the learned policies generalize to real users and achieve on-par performance with data-driven supervised learning baselines.
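
    The two-agent setup can be sketched in a few lines: a simulated user pursues task goals while an interface agent adapts the UI from observed behavior so that the user's selection cost drops. The toy loop below uses simple frequency-based promotion rather than the paper's reinforcement learning algorithm; all names and the cost scheme are illustrative assumptions.

```python
# Toy sketch of the two-agent loop described above: a simulated user agent
# selects commands while an interface agent adapts the UI to speed it up.
# Frequency-based promotion stands in for the learned adaptation policy.

import random

COMMANDS = ["open", "save", "copy", "paste", "undo"]

class UserAgent:
    """Mimics a user: selects a target command, faster if it was promoted."""
    def act(self, target: str, promoted: list[str]) -> int:
        # Selection cost: cheap if the adaptive UI surfaced the target.
        return 1 if target in promoted else 3

class InterfaceAgent:
    """Adapts the UI based on the user agent's observed behavior."""
    def __init__(self):
        self.counts = {c: 0 for c in COMMANDS}
    def adapt(self) -> list[str]:
        # Promote the two historically most-used commands.
        return sorted(self.counts, key=self.counts.get, reverse=True)[:2]
    def observe(self, target: str):
        self.counts[target] += 1

user, ui = UserAgent(), InterfaceAgent()
total_cost = 0
for step in range(100):
    target = random.choices(COMMANDS, weights=[5, 3, 1, 1, 1])[0]
    promoted = ui.adapt()
    total_cost += user.act(target, promoted)  # interface reward ~ -cost
    ui.observe(target)
print("average selection cost:", total_cost / 100)
```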

    Optimization Approaches to Adaptive Menus

    Graphical menus are vital components of today’s graphical interfaces, offering essential controls. However, few studies have modelled the performance of a menu, and previously proposed menu optimization methods have largely concentrated on reshaping the layout of the whole menu system. To model menu performance, this thesis extends the Search-Decision-Pointing model with two additional factors: a cost function and a semantic function. The cost function is a penalty that decreases the user’s expertise with a menu layout according to the degree of modification applied to the menu. The semantic function is a reward that encourages strongly related items to be positioned close to each other. Centered on this menu performance model, several optimization methods have been implemented, each improving menu performance through a distinctive strategy such as increasing item size or reducing item pointing distance. Three test cases were used to evaluate the optimization methods in simulation software that displays graphical user interfaces and emulates the menu usage of real users. The results show that the fundamental heuristic search algorithm successfully improved menu performance in all test cases, and that the other optimization methods further increased menu performance by 3% to 8% depending on the test case. In addition, increasing the size of an item offers surprisingly little benefit, whereas reducing item pointing distance greatly improves menu performance. Positioning items by their semantic relations may also enhance group saliency. On the other hand, optimization methods may not always produce usable menus due to design constraints; hence, menu performance optimization should be exercised carefully, with the entire graphical user interface in mind.
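
    The shape of such an objective is worth making concrete: predicted selection time, plus a weighted penalty for disturbing a layout the user already knows, minus a weighted reward for placing related items together, minimized by heuristic search. The sketch below uses hill climbing over item swaps; the weights and component formulas are illustrative assumptions, not the thesis's exact definitions.

```python
# Sketch of the extended performance model described above: a selection-time
# term plus a relearning-cost penalty and a semantic reward, minimized by
# simple hill climbing. All weights and formulas are illustrative.

import random

FREQ = {"open": 5, "save": 4, "copy": 3, "paste": 3, "undo": 2, "redo": 1}

def pointing_time(layout):
    # Stand-in for a search/pointing term: frequent items should sit near
    # the top, where they are cheaper to reach.
    return sum(FREQ[item] * (pos + 1) * 0.1 for pos, item in enumerate(layout))

def cost(layout, original):
    # Penalty for moving items away from positions the user already knows.
    return sum(1.0 for a, b in zip(layout, original) if a != b)

def semantic_reward(layout, related):
    # Reward adjacent placement of semantically related item pairs.
    return sum(1.0 for a, b in zip(layout, layout[1:])
               if (a, b) in related or (b, a) in related)

def objective(layout, original, related, w_cost=0.3, w_sem=0.5):
    return (pointing_time(layout)
            + w_cost * cost(layout, original)
            - w_sem * semantic_reward(layout, related))

original = ["open", "save", "copy", "paste", "undo", "redo"]
related = {("copy", "paste"), ("undo", "redo")}
layout = original[:]
for _ in range(2000):                      # hill climbing over item swaps
    i, j = random.sample(range(len(layout)), 2)
    candidate = layout[:]
    candidate[i], candidate[j] = candidate[j], candidate[i]
    if objective(candidate, original, related) < objective(layout, original, related):
        layout = candidate
print(layout)
```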


    TouchEditor: Interaction design and evaluation of a flexible touchpad for text editing of head-mounted displays in speech-unfriendly environments

    A text editing solution that adapts to speech-unfriendly environments (where it is inconvenient to speak or difficult to recognize speech) is essential for head-mounted displays (HMDs) to work universally. Existing schemes, e.g., touch bar, virtual keyboard and physical keyboard, suffer from shortcomings such as insufficient speed, uncomfortable experience or restrictions on user location and posture. To mitigate these restrictions, we propose TouchEditor, a novel text editing system for HMDs based on a flexible piezoresistive film sensor, supporting cursor positioning, text selection, text retyping and editing commands (e.g., Copy, Paste and Delete). Through a literature overview and a heuristic study, we design a pressure-controlled menu and a shortcut gesture set for entering editing commands, and propose an area-and-pressure-based method for cursor positioning and text selection that maps gestures in different areas and with different strengths to cursor movements with different directions and granularities. The evaluation results show that TouchEditor i) adapts well to various contents and scenes, with a stable correction speed of 0.075 corrections per second; ii) achieves 95.4% gesture recognition accuracy; and iii) performs comparably to a mobile phone in text selection tasks. Comparison results with the speech-dependent EYEditor and the built-in touch bar further demonstrate the flexibility and robustness of TouchEditor in speech-unfriendly environments.
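
    The area-and-pressure mapping can be sketched directly: the touched region of the pad selects the cursor's direction, while press strength selects the granularity of each step. The thresholds and region names below are illustrative assumptions, not TouchEditor's calibrated values.

```python
# Sketch of an area-and-pressure-based cursor mapping like the one described
# above: touch region -> movement direction, pressure -> step granularity.
# Thresholds and region names are illustrative assumptions.

PRESSURE_THRESHOLD = 0.5   # normalized [0, 1]; above counts as a hard press

def cursor_step(region: str, pressure: float) -> tuple[str, str]:
    """Map (touch region, pressure) to (direction, granularity)."""
    direction = {"left": "backward", "right": "forward",
                 "top": "up", "bottom": "down"}[region]
    # Light presses move character by character; hard presses jump by word.
    granularity = "word" if pressure > PRESSURE_THRESHOLD else "character"
    return direction, granularity

print(cursor_step("right", 0.2))  # -> ('forward', 'character')
print(cursor_step("left", 0.8))   # -> ('backward', 'word')
```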