24 research outputs found

    A Framework for Computational Design and Adaptation of Extended Reality User Interfaces

    To facilitate high-quality interaction during the regular use of computing systems, it is essential that the user interface (UI) deliver content and components in an appropriate manner. Although extended reality (XR) is emerging as a new computing platform, we still have a limited understanding of how best to design and present interactive content to users in such immersive environments. Adaptive UIs offer a promising approach for optimal presentation in XR, as the user's environment, tasks, capabilities, and preferences vary under changing context. In this position paper, we present a design framework for adapting various characteristics of content presented in XR. We frame these as five considerations that need to be taken into account for adaptive XR UIs: What?, How Much?, Where?, How?, and When?. With this framework, we review literature on UI design and adaptation, reflecting on approaches that have been adopted or developed in the past, to identify current gaps, challenges, and opportunities for applying such approaches in XR. Using our framework, future work could identify and develop novel computational approaches for achieving successful adaptive user interfaces in such immersive environments.
    Comment: 5 pages, CHI 2023 Workshop on The Future of Computational Approaches for Understanding and Adapting User Interfaces
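
    The five considerations above lend themselves to a structured representation. Below is a minimal sketch, with field names and value types that are assumptions rather than the paper's, of how an adaptive XR engine could record one such decision per UI element.

```typescript
// Hypothetical sketch: the five considerations (What?, How Much?, Where?,
// How?, When?) expressed as a record an adaptive XR UI engine might fill in
// per element. All names and value ranges here are illustrative assumptions.
interface XrAdaptationDecision {
  what: string[];                  // which content/components to present
  howMuch: number;                 // level of detail, e.g. 0 (minimal) .. 1 (full)
  where: { frame: "world" | "body" | "head"; position: [number, number, number] };
  how: "visual" | "audio" | "haptic";
  when: { trigger: "immediate" | "onIdle" | "onGaze"; delayMs: number };
}

// Example: a notification shown as a short visual summary anchored near the
// user's body, deferred until the user is idle.
const decision: XrAdaptationDecision = {
  what: ["notification.summary"],
  howMuch: 0.3,
  where: { frame: "body", position: [0.2, -0.1, 0.4] },
  how: "visual",
  when: { trigger: "onIdle", delayMs: 500 },
};

console.log(decision);
```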

    Computational Adaptation of XR Interfaces Through Interaction Simulation

    Adaptive and intelligent user interfaces have been proposed as a critical component of a successful extended reality (XR) system. In particular, a predictive system can make inferences about a user and provide them with task-relevant recommendations or adaptations. However, we believe such adaptive interfaces should carefully consider the overall cost of interactions to better address the uncertainty of predictions. In this position paper, we discuss a computational approach to adapt XR interfaces, with the goal of improving user experience and performance. Our novel model, applied to menu selection tasks, simulates user interactions by considering both cognitive and motor costs. In contrast to greedy algorithms that adapt based on predictions alone, our model holistically accounts for the costs and benefits of adaptations when adapting the interface and providing optimal recommendations to the user.
    Comment: 5 pages, 1 figure, 1 table. CHI 2022 Workshop on Computational Approaches for Understanding, Generating, and Adapting User Interfaces
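
    A minimal sketch of the cost-aware selection idea described above, with hypothetical cost terms (the authors' simulation model is more detailed): an adaptation is applied only when its expected benefit outweighs its cognitive and motor costs; otherwise the interface is left unchanged.

```typescript
// Not the authors' code: an illustrative expected-utility comparison that
// contrasts with a greedy "always adapt to the prediction" strategy.
interface CandidateAdaptation {
  name: string;
  pCorrect: number;       // probability the underlying prediction is right
  benefit: number;        // time saved (s) if the prediction is right
  penaltyIfWrong: number; // extra time (s) if it is wrong, e.g. re-finding a moved item
  cognitiveCost: number;  // cost of noticing and relearning the changed layout (s)
  motorCost: number;      // extra pointing cost introduced by the change (s)
}

function expectedUtility(a: CandidateAdaptation): number {
  return (
    a.pCorrect * a.benefit -
    (1 - a.pCorrect) * a.penaltyIfWrong -
    a.cognitiveCost -
    a.motorCost
  );
}

function chooseAdaptation(candidates: CandidateAdaptation[]): CandidateAdaptation | null {
  // "null" means leaving the menu as it is, which wins whenever every
  // candidate has non-positive expected utility.
  let best: CandidateAdaptation | null = null;
  let bestU = 0;
  for (const c of candidates) {
    const u = expectedUtility(c);
    if (u > bestU) {
      best = c;
      bestU = u;
    }
  }
  return best;
}
```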

    XAIR: A Framework of Explainable AI in Augmented Reality

    Explainable AI (XAI) has established itself as an important component of AI-driven interactive systems. With Augmented Reality (AR) becoming more integrated into daily lives, the role of XAI also becomes essential in AR because end-users will frequently interact with intelligent services. However, it is unclear how to design effective XAI experiences for AR. We propose XAIR, a design framework that addresses "when", "what", and "how" to provide explanations of AI output in AR. The framework was based on a multi-disciplinary literature review of XAI and HCI research, a large-scale survey probing 500+ end-users' preferences for AR-based explanations, and three workshops with 12 experts collecting their insights about XAI design in AR. XAIR's utility and effectiveness were verified via a study with 10 designers and another study with 12 end-users. XAIR can provide guidelines for designers, inspiring them to identify new design opportunities and achieve effective XAI designs in AR.
    Comment: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
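
    As an illustration only (XAIR's actual guidelines are in the paper), the "when / what / how" decision could be represented roughly as below; all context fields, thresholds, and rules are assumptions.

```typescript
// Illustrative sketch of a when/what/how explanation plan for AR, not XAIR's
// framework itself. Thresholds and field names are invented for the example.
interface ArContext {
  aiConfidence: number;        // 0..1 confidence of the AI output
  outputIsUnexpected: boolean; // does the output contradict user expectations?
  userIsBusy: boolean;         // e.g. walking or in conversation
}

interface ExplanationPlan {
  when: "now" | "onDemand";
  what: "confidence" | "rationale";
  how: "glanceableIcon" | "detailedPanel";
}

function planExplanation(ctx: ArContext): ExplanationPlan {
  if (!ctx.outputIsUnexpected && ctx.aiConfidence > 0.9) {
    // Output is plausible and confident: only explain if the user asks.
    return { when: "onDemand", what: "confidence", how: "glanceableIcon" };
  }
  if (ctx.userIsBusy) {
    // Surface something now, but keep it glanceable so it does not distract.
    return { when: "now", what: "confidence", how: "glanceableIcon" };
  }
  return { when: "now", what: "rationale", how: "detailedPanel" };
}
```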

    Suit Up!: Enabling Eyes-Free Interactions on Jacket Buttons

    We present a new interaction space for wearables by integrating interactive elements, in the form of buttons, into outdoor clothing, specifically jackets and coats. Interactive buttons, or "iButtons", allow users to perform specific tasks using subtle, inconspicuous gestures. They are intended for outdoor settings, where reaching for a mobile phone or another device may not be convenient or appropriate. Different types of buttons serve dedicated functions, and appropriate placement of these buttons makes them easily accessible without requiring visual contact. By adding context sensitivity, these buttons can also be repurposed for other functions. By linking multiple buttons, it is possible to create workflows for specific tasks. We provide a description of an initial iButton design space and highlight some scenarios to illustrate the envisioned usage of interactive buttons.
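
    As a hypothetical illustration of the context-sensitive repurposing mentioned above (none of these names or mappings come from the paper), a per-context gesture-to-action table could look like this:

```typescript
// Hypothetical sketch: each jacket button maps a simple gesture to an action,
// and the mapping is swapped when the detected context changes.
type Gesture = "press" | "doublePress" | "twist";
type Action = () => void;

const byContext: Record<string, Record<Gesture, Action>> = {
  commuting: {
    press: () => console.log("play/pause music"),
    doublePress: () => console.log("next track"),
    twist: () => console.log("adjust volume"),
  },
  cycling: {
    press: () => console.log("announce navigation step"),
    doublePress: () => console.log("report ETA"),
    twist: () => console.log("zoom map on paired device"),
  },
};

function handleButton(context: string, gesture: Gesture): void {
  byContext[context]?.[gesture]?.();
}

handleButton("cycling", "press"); // -> "announce navigation step"
```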

    Understanding finger input above desktop devices

    Using the space above desktop input devices adds a rich new input channel to desktop interaction. Input in this elevated layer has previously been used to modify the granularity of a 2D slider, navigate layers of a 3D body scan above a multitouch table, and access vertically stacked menus. However, designing these interactions is challenging because the lack of haptic and direct visual feedback easily leads to input errors. For bare-finger input, the user's fingers need to reliably enter and stay inside the interactive layer, and engagement techniques such as midair clicking have to be disambiguated from leaving the layer. These issues have been addressed for interactions in which users operate other devices in midair, but there is little guidance for the design of bare-finger input in this space. In this paper, we present the results of two user studies that inform the design of finger input above desktop devices. Our studies show that 2 cm is the minimum thickness of the above-surface volume that users can reliably remain within. We also found that, when accessing midair layers, users do not automatically move to the same height. To address this, we introduce a technique that dynamically determines the height at which the layer is placed, depending on the velocity profile of the user's initial finger movement into midair. Finally, we propose a technique that reliably distinguishes clicking from homing movements based on the user's hand shape. We structure the presentation of our findings using Buxton's three-state input model, adding states and transitions for above-surface interactions.
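
    A rough sketch of the dynamic layer-placement idea follows, using the reported 2 cm minimum thickness; the velocity threshold and the way the settling height is detected are assumptions, not the paper's exact algorithm.

```typescript
// Sketch under stated assumptions: place a 2 cm thick interaction layer
// centred where the finger's upward velocity has dropped, taken as a proxy
// for the end of the initial movement into midair.
interface FingerSample {
  timeMs: number;
  heightCm: number; // finger height above the device surface
}

const LAYER_THICKNESS_CM = 2; // minimum thickness users could reliably stay within

// `samples` is assumed non-empty and ordered by increasing timeMs.
function placeLayer(
  samples: FingerSample[],
  velocityThresholdCmPerS = 5
): { bottomCm: number; topCm: number } {
  // Find the first sample where upward velocity falls below the threshold;
  // fall back to the final sample if the finger never slows down.
  let settleHeight = samples[samples.length - 1].heightCm;
  for (let i = 1; i < samples.length; i++) {
    const dt = (samples[i].timeMs - samples[i - 1].timeMs) / 1000;
    const v = (samples[i].heightCm - samples[i - 1].heightCm) / dt;
    if (v < velocityThresholdCmPerS) {
      settleHeight = samples[i].heightCm;
      break;
    }
  }
  return {
    bottomCm: settleHeight - LAYER_THICKNESS_CM / 2,
    topCm: settleHeight + LAYER_THICKNESS_CM / 2,
  };
}
```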

    Sketchplorer

    This workshop paper discusses our mixed-initiative approach that enables designers to rapidly sketch and explore interactive layout designs. Although optimisation methods can attack very complex design problems, their insistence on precise objectives and a point optimum is a poor fit with sketching practices. Typical optimisation tools also fail to incorporate the human in the loop. Sketchplorer is a mixed-initiative sketching tool that uses a real-time layout optimiser. It automatically infers the designer's task to search for both local improvements to the current design and global (radical) alternatives. Using predictive models of sensorimotor performance and perception, it generates suggestions that interactively steer the designer towards more usable and aesthetic layouts without overriding them or demanding extensive input. While this position paper summarises our work from the mixed-initiative perspective, further details can be found in the original publication [4].
    Peer reviewed
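
    As a hedged sketch of what a real-time layout scoring function might look like (Sketchplorer's actual predictive models and optimiser are described in the original publication [4]), a Fitts'-law-style pointing cost can be combined with a simple alignment term; an optimiser would then search for layouts that minimise this score.

```typescript
// Illustrative only: a frequency-weighted Fitts'-law pointing cost plus a
// crude alignment proxy. Coefficients and the aesthetics term are assumptions.
interface LayoutElement {
  x: number; y: number;       // top-left position (px)
  width: number; height: number;
  clickFrequency: number;     // how often users select this element
}

function fittsTime(distance: number, width: number, a = 0.1, b = 0.15): number {
  return a + b * Math.log2(distance / width + 1);
}

// Expected pointing cost from a nominal start point, weighted by frequency.
function pointingCost(layout: LayoutElement[], startX = 0, startY = 0): number {
  return layout.reduce((sum, e) => {
    const cx = e.x + e.width / 2;
    const cy = e.y + e.height / 2;
    const d = Math.hypot(cx - startX, cy - startY);
    return sum + e.clickFrequency * fittsTime(d, Math.min(e.width, e.height));
  }, 0);
}

// Crude aesthetics proxy: fewer distinct left edges means better alignment.
function misalignment(layout: LayoutElement[]): number {
  return new Set(layout.map((e) => e.x)).size;
}

function layoutScore(layout: LayoutElement[]): number {
  return pointingCost(layout) + 0.05 * misalignment(layout);
}
```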

    An Interactive Design Space for Wearable Displays

    Funding Information: This work was partly funded by the Department of Communications and Networking (Comnet, Aalto University) and the Academy of Finland projects 'Human Automata' and 'BAD'. This research was partially funded by Flanders Make, the strategic research centre for the manufacturing industry, through its projects 'SmartHandler' and 'Ergo-EyeHand'. Publisher Copyright: © 2021 ACM.
    The promise of on-body interactions has led to widespread development of wearable displays. They manifest themselves in highly variable shapes and forms, and are realized using technologies with fundamentally different properties. Through an extensive survey of the field of wearable displays, we characterize existing systems based on key qualities of displays and wearables, such as location on the body, intended viewers or audience, and the information density of rendered content. We present the results of this analysis in an open, web-based interactive design space that supports exploration and refinement along various parameters. The design space, which currently encapsulates 129 cases of wearable displays, aims to inform researchers and practitioners about existing solutions and designs, and to enable the identification of gaps and opportunities for novel research and applications. Further, it seeks to provide them with a thinking tool for deliberating on how displayed content should be adapted based on key design parameters. Through this work, we aim to facilitate progress in wearable displays, informed by existing solutions, by providing researchers with an interactive platform for discovery and reflection.
    Peer reviewed
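
    Purely as an illustration of how entries in such a design space could be encoded and queried, the sketch below uses dimensions named in the abstract; the example cases and value taxonomy are assumptions, not the paper's.

```typescript
// Hypothetical encoding of design-space entries; dimensions mirror the
// qualities named in the abstract (body location, intended viewers,
// information density), but the concrete values are invented examples.
interface WearableDisplayCase {
  name: string;
  bodyLocation: "wrist" | "torso" | "head" | "hand" | "other";
  intendedViewer: "wearer" | "others" | "both";
  informationDensity: "low" | "medium" | "high";
}

const cases: WearableDisplayCase[] = [
  { name: "LED cap brim", bodyLocation: "head", intendedViewer: "others", informationDensity: "low" },
  { name: "Forearm e-ink sleeve", bodyLocation: "wrist", intendedViewer: "wearer", informationDensity: "medium" },
];

// Example query: public-facing displays that carry little information,
// i.e. candidates for ambient or expressive output.
const publicAmbient = cases.filter(
  (c) => c.intendedViewer === "others" && c.informationDensity === "low"
);
console.log(publicAmbient.map((c) => c.name));
```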

    SAM: A Modular Framework for Self-Adapting Web Menus

    openaire: EC/H2020/637991/EU//COMPUTED
    This paper presents SAM, a modular and extensible JavaScript framework for self-adapting menus on webpages. SAM allows control of two elementary aspects of adapting web menus: (1) the target policy, which assigns scores to menu items for adaptation, and (2) the adaptation style, which specifies how they are adapted on display. By decoupling them, SAM enables the exploration of different combinations independently. Several policies from the literature are readily implemented and paired with adaptation styles such as reordering and highlighting. The process, including user data logging, is local, offering privacy benefits and eliminating the need for server-side modifications. Researchers can use SAM to experiment with adaptation policies and styles, and to benchmark techniques in an ecological setting with real webpages. Practitioners can make websites self-adapting, and end-users can dynamically personalise typically static web menus.
    Peer reviewed
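
    The policy/style decoupling can be illustrated with a short sketch. This is not SAM's actual API; it only shows how a scoring policy and an adaptation style can vary independently.

```typescript
// Illustrative decoupling of target policy (scoring) from adaptation style
// (display change). Names and types are assumptions, not SAM's interface.
interface MenuItem {
  label: string;
  clicks: number; // logged locally, in line with the privacy argument above
}

type TargetPolicy = (items: MenuItem[]) => Map<string, number>; // label -> score
type AdaptationStyle = (items: MenuItem[], scores: Map<string, number>) => MenuItem[];

// Policy: most frequently used items get the highest scores.
const frequencyPolicy: TargetPolicy = (items) =>
  new Map(items.map((i): [string, number] => [i.label, i.clicks]));

// Style 1: reorder items by descending score.
const reorder: AdaptationStyle = (items, scores) =>
  [...items].sort((a, b) => (scores.get(b.label) ?? 0) - (scores.get(a.label) ?? 0));

// Style 2: keep the order, mark the top-scored item for highlighting.
const highlightTop: AdaptationStyle = (items, scores) => {
  const top = [...scores.entries()].sort((a, b) => b[1] - a[1])[0]?.[0];
  return items.map((i) => (i.label === top ? { ...i, label: `*${i.label}*` } : i));
};

const menu: MenuItem[] = [
  { label: "Home", clicks: 3 },
  { label: "Reports", clicks: 12 },
  { label: "Settings", clicks: 5 },
];

console.log(reorder(menu, frequencyPolicy(menu)).map((i) => i.label));      // Reports, Settings, Home
console.log(highlightTop(menu, frequencyPolicy(menu)).map((i) => i.label)); // Home, *Reports*, Settings
```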