3,870 research outputs found

    Comparing Evaluation Methods for Encumbrance and Walking on Interaction with Touchscreen Mobile Devices

    In this paper, two walking evaluation methods were compared to examine the effects of encumbrance while the preferred walking speed (PWS) was controlled. Users frequently carry cumbersome objects (e.g. shopping bags) while using mobile devices at the same time, which can cause interaction difficulties and erroneous input. The two methods used to control the PWS were walking on a treadmill and walking around a predefined route on the ground while following a pacesetter. The results from our target acquisition experiment showed that for ground walking at 100% of PWS, accuracy dropped to 36% when carrying a bag in the dominant hand and to 34% when holding a box under the dominant arm. We also discuss the advantages and limitations of each evaluation method for examining encumbrance, and suggest that treadmill walking is not the most suitable approach if walking speed is an important factor in future mobile studies.

    User experience, performance, and social acceptability: usable multimodal mobile interaction

    This thesis explores the social acceptability of multimodal interaction in public places with respect to acceptance, adoption and appropriation. Previous work in multimodal interaction has mainly focused on recognition and detection issues without thoroughly considering the willingness of users to adopt these kinds of interactions in their everyday lives. This thesis presents a novel approach to user experience that is theoretically motivated by phenomenology, practised with mixed methods, and analysed through dramaturgical metaphors. In order to explore the acceptance of multimodal interfaces, this thesis presents three studies that look at users' initial reactions to multimodal interaction techniques: a survey study focusing on gestures, an on-the-street user study, and a follow-up survey study looking at gesture- and voice-based interaction. The adoption of multimodal interaction is explored through two studies: an in situ user study of a performative interface and a focus group study using experience prototypes. The appropriation of multimodal interaction is explored by demonstrating the complete design process of a multimodal interface using the performative approach to user experience presented in this thesis.

    Chapter 3 looks at users' initial reactions to and acceptance of multimodal interactions. The results of the first survey identified location and audience as factors that influence how individuals behave in public places. Participants in the on-the-street study described the visual aspects of the gestures as playful, cool, or embarrassing, and discussed how gestures could be hidden as everyday actions. These results begin to explain why users accepted or rejected the gestures from the first survey. The second survey demonstrated that the presence of familiar spectators made interaction significantly more acceptable. This result indicates that performative interaction could be made more acceptable by interfaces that support collaborative or social interaction.

    Chapter 4 explores how users place interactions into a usability context for use in real-world settings. In the first user study, participants took advantage of the wide variety of possible performances and created a wide variety of input, from highly performative to hidden actions, based on location. The ability of this interface to support flexible interactions allowed users to demonstrate the purpose of their actions differently based on the immediately co-located spectators. Participants in the focus group study discussed how they would go about placing multimodal interactions into real-world contexts, using three approaches: relationship to the device, personal meaning, and relationship to functionality. These results demonstrate how users view interaction within a usability context and how that might affect social acceptability.

    Chapter 5 examines the appropriation of multimodal interaction through the completion of an entire design process. The results of an initial survey were used as a baseline of comparison from which to design the following focus group study. Participants in the focus groups had similar motives for accepting multimodal interactions, although the ways in which these were expressed resulted in very different preferences. The desire to use technology in a comfortable and satisfying way meant different things in these different settings. During the 'in the wild' user study, participants adapted their performance in order to make interaction acceptable in different contexts. In some cases, performance was hidden in public places or shared with familiar spectators in order to successfully incorporate interaction into public places.

    Smartphone Based 3D Navigation Techniques in an Astronomical Observatory Context: Implementation and Evaluation in a Software Platform

    3D Virtual Environments (3DVEs) are a good solution for transmitting knowledge in a museum exhibit. In such contexts, providing interaction techniques that are easy to learn and use, and that facilitate handling inside a 3DVE, is crucial to maximize knowledge transfer. We took the opportunity to design and implement a software platform for explaining the behavior of the Telescope Bernard-Lyot to museum visitors at the top of the Pic du Midi. Beyond the popularization of a complex piece of scientific equipment, this platform constitutes an open software environment into which different 3D interaction techniques can easily be plugged. The recent popularity of the smartphone as a personal handheld computer led us to envision using a mobile device as an interaction support for these 3DVEs. Accordingly, we designed and propose a way to use the smartphone as a tangible object for navigating inside a 3DVE. To demonstrate the interest of using smartphones, we compared our solution with available alternatives: keyboard and mouse, and a 3D mouse. User experiments confirmed our hypothesis and in particular showed that visitors find our solution more attractive and stimulating. Finally, we illustrate the benefits of our software framework by plugging in alternative interaction techniques supporting selection and manipulation tasks in 3D.
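    The paper's abstract does not publish its tangible mapping, but the core idea of using phone tilt to drive a 3D camera can be sketched as a simple transfer function; the axis names, gain, and dead zone below are illustrative assumptions, not the authors' actual design:

    ```python
    def tilt_to_camera_delta(pitch_deg, roll_deg, gain=0.5, dead_zone=5.0):
        """Hypothetical mapping from phone tilt to camera rotation increments:
        tilting forward/back pitches the 3D camera, tilting sideways yaws it.
        A dead zone keeps small hand tremor from moving the view."""
        def shape(angle):
            if abs(angle) < dead_zone:
                return 0.0  # ignore tremor-sized tilts
            # subtract the dead zone so motion ramps up smoothly from zero
            return gain * (angle - dead_zone if angle > 0 else angle + dead_zone)
        return shape(pitch_deg), shape(roll_deg)

    # 20 degrees of forward tilt and 3 degrees of sideways wobble:
    # the camera pitches while the wobble is filtered out entirely.
    d_pitch, d_yaw = tilt_to_camera_delta(20.0, 3.0)
    ```

    A dead zone plus linear gain is one common shaping choice for tilt input; the studied system may well use a different transfer function.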

    Exploring the use of hand-to-face input for interacting with head-worn displays

    We propose Hand-to-Face input, a method for interacting with head-worn displays (HWDs) that involves contact with the face. We explore Hand-to-Face interaction to find suitable techniques for common mobile tasks, evaluate this form of interaction on document navigation tasks, and examine its social acceptability. In a first study, users identified the cheek and forehead as the predominant areas for interaction and agreed on gestures for tasks involving continuous input, such as document navigation. These results guided the design of several Hand-to-Face navigation techniques and revealed that gestures performed on the cheek are more efficient and less tiring than interactions directly on the HWD. Initial results on the social acceptability of Hand-to-Face input allowed us to further refine our design choices and revealed unforeseen findings: some gestures are considered culturally inappropriate, and gender plays a role in the selection of specific Hand-to-Face interactions. From our overall results, we provide a set of guidelines for developing effective Hand-to-Face interaction techniques.

    Kinematically-Decoupled Impedance Control for Fast Object Visual Servoing and Grasping on Quadruped Manipulators

    Full text link
    We propose a control pipeline for SAG (Searching, Approaching, and Grasping) of objects, based on a decoupled arm kinematic chain and impedance control, integrated with image-based visual servoing (IBVS). The kinematic decoupling allows fast end-effector motions and recovery, which leads to robust visual servoing. The approach and pipeline generalize to any mobile platform (wheeled or tracked vehicles), but are most suitable for dynamically moving quadruped manipulators thanks to their reactivity against disturbances. The compliance of the impedance controller makes the robot safer in interactions with humans and the environment. We demonstrate the performance and robustness of the proposed approach in various experiments on our 140 kg HyQReal quadruped robot equipped with a 7-DoF manipulator arm. The experiments cover dynamic locomotion, tracking under external disturbances, and fast motions of the target object.
    Accepted as a contributed paper at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023).
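    The abstract does not spell out its controller, but the compliance it describes comes from the standard Cartesian impedance idea: a virtual spring-damper pulling the end-effector toward a target while yielding to external pushes. A minimal sketch of that generic law (gains and positions below are illustrative, not the paper's values):

    ```python
    import numpy as np

    def impedance_force(x_des, x, x_dot, K, D):
        """Generic Cartesian impedance law F = K (x_des - x) - D x_dot:
        a virtual spring toward the target plus a damper on velocity,
        so the end-effector tracks softly instead of rigidly."""
        return K @ (x_des - x) - D @ x_dot

    # Illustrative diagonal gains: stiffer in the horizontal plane,
    # softer vertically (units: N/m and N*s/m).
    K = np.diag([800.0, 800.0, 400.0])
    D = np.diag([60.0, 60.0, 40.0])

    # End-effector 2 cm behind the target along x, at rest:
    # the spring term alone applies 800 * 0.02 = 16 N along x.
    f = impedance_force(np.array([0.50, 0.0, 0.30]),
                        np.array([0.48, 0.0, 0.30]),
                        np.zeros(3), K, D)
    ```

    Because the commanded force grows only with tracking error, a human pushing the arm simply deflects the virtual spring rather than fighting a stiff position loop, which is the safety property the abstract highlights.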

    LineFORM: Actuated Curve Interfaces for Display, Interaction, and Constraint

    In this paper we explore the design space of actuated curve interfaces, a novel class of shape-changing interfaces. Physical curves have several interesting characteristics from the perspective of interaction design: they have a variety of inherent affordances; they can easily represent abstract data; and they can act as constraints, boundaries, or borderlines. By utilizing these aspects of lines and curves, together with the added capability of shape change, new possibilities for display, interaction, and body constraint become possible. To investigate these possibilities we have implemented actuated curve interfaces at different scales. LineFORM, our implementation, inspired by serpentine robotics, is comprised of a serial chain of 1-DOF servo motors with integrated sensors for direct manipulation. To motivate this work we present various applications such as shape-changing cords, mobiles, body constraints, and data manipulation tools.
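    The serial chain of 1-DOF joints described above can be modelled, for illustration, as planar forward kinematics: each servo contributes a relative bend angle, and the curve's shape is the running sum of those angles along equal-length links. The link length and angles here are assumptions for the sketch, not LineFORM's actual geometry:

    ```python
    import math

    def chain_points(joint_angles, link_len=1.0):
        """Planar forward kinematics for a serial chain of 1-DOF joints:
        accumulate each relative joint angle into a heading, then extend
        one link in that direction. Returns the joint positions in order."""
        x, y, heading = 0.0, 0.0, 0.0
        pts = [(x, y)]
        for a in joint_angles:
            heading += a                      # relative bend at this servo
            x += link_len * math.cos(heading) # step one link forward
            y += link_len * math.sin(heading)
            pts.append((x, y))
        return pts

    # Four joints each bent 90 degrees: the chain closes into a unit square,
    # illustrating how commanding joint angles "draws" a target curve.
    square = chain_points([math.pi / 2] * 4)
    ```

    Inverting this model, i.e. choosing joint angles so the chain approximates a desired curve, is the kind of computation a shape display driving such a chain would need; the paper itself does not detail its control scheme in the abstract.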