
    Exploring user-defined gestures for alternate interaction space for smartphones and smartwatches

    In smartphones and smartwatches, the input space is limited due to their small form factor. Although many studies have highlighted the possibility of expanding the interaction space for these devices, limited work has explored end-user preferences for gestures in the proposed interaction spaces. In this dissertation, I present the results of two elicitation studies that explore end-user preferences for creating gestures in the proposed alternate interaction spaces for smartphones and smartwatches. Using the data collected from the two elicitation studies, I present the gestures end-users prefer for common tasks performed on smartphones and smartwatches. I also present end-user mental models for interaction in the proposed interaction spaces for these devices, and highlight common user motivations and preferences for the suggested gestures. Based on these findings, I present design implications for incorporating the proposed alternate interaction spaces into smartphones and smartwatches.

    Creating mobile gesture-based interaction design patterns for older adults: a study of tap and swipe gestures with Portuguese seniors

    Master's thesis in Multimedia. Faculdade de Engenharia, Universidade do Porto. 201

    Neuromotor Control of the Hand During Smartphone Manipulation

    The primary focus of this dissertation was to understand the motor control strategy used by our neuromuscular system for the multi-layered motor tasks involved in smartphone manipulation. To understand this control strategy, we recorded the kinematics and multi-muscle activation patterns of the right limb during smartphone manipulation, including grasping with/without tapping, movement conditions (MCOND), and arm heights. In the first study (chapter 2), we examined the neuromuscular control strategy of the upper limb during grasping with/without tapping on a smartphone by evaluating muscle-activation patterns of the upper limb under different movement conditions (MCOND). Muscle activity changed with MCOND and movement segments. We concluded that our neuromuscular system generates a motor strategy that allows smartphone manipulation involving grasping and tapping while maintaining MCOND, by generating continuous and distinct multi-muscle activation patterns in the upper limb muscles. In the second study (chapter 3), we examined the muscle activity of the upper limb when the smartphone was manipulated at two arm heights, shoulder (SHD) and abdomen (ABD), to understand the influence of arm height on the neuromuscular control strategy of the upper limb. Some muscles showed a significant effect for ABD, while others showed a significant effect for SHD. We concluded that the motor control strategy was influenced by arm height, as there were changes in the shoulder and elbow joint angles along with the muscular activity of the upper limb. Further, the shoulder position helped in holding the head upright, while the abdomen position reduced the moment arm and moment, and ultimately the muscle loading, compared to the shoulder. Overall, our neuromuscular system generates motor commands by activating a multi-muscle activation pattern in the upper limb that depends upon task demands such as grasping with/without tapping, MCOND, and arm heights. Similarly, our neuromuscular system does not appear to increase muscle activation when MCOND and arm heights are combined; instead, it utilizes a simple control strategy that selects appropriate muscles and activates them based on the levels of MCOND and arm height.

    Accessible On-Body Interaction for People With Visual Impairments

    While mobile devices offer new opportunities to gain independence in everyday activities for people with disabilities, modern touchscreen-based interfaces can present accessibility challenges for low vision and blind users. Even with state-of-the-art screen readers, it can be difficult or time-consuming to select specific items without visual feedback. The smooth surface of the touchscreen provides little tactile feedback compared to physical button-based phones. Furthermore, in a mobile context, hand-held devices present additional accessibility issues when both of the users’ hands are not available for interaction (e.g., one hand may be holding a cane or a dog leash). To improve mobile accessibility for people with visual impairments, I investigate on-body interaction, which employs the user’s own skin surface as the input space. On-body interaction may offer an alternative or complementary means of mobile interaction for people with visual impairments by enabling non-visual interaction with extra tactile and proprioceptive feedback compared to a touchscreen. In addition, on-body input may free users’ hands and offer efficient interaction, as it can eliminate the need to pull out or hold the device. Despite this potential, little work has investigated the accessibility of on-body interaction for people with visual impairments. Thus, I begin by identifying needs and preferences for accessible on-body interaction. From there, I evaluate user performance in target acquisition and shape drawing tasks on the hand compared to on a touchscreen. Building on these studies, I focus on the design, implementation, and evaluation of an accessible on-body interaction system for visually impaired users. The contributions of this dissertation are: (1) identification of perceived advantages and limitations of on-body input compared to a touchscreen phone, (2) empirical evidence of the performance benefits of on-body input over touchscreen input in terms of speed and accuracy, (3) implementation and evaluation of an on-body gesture recognizer using finger- and wrist-mounted sensors, and (4) design implications for accessible non-visual on-body interaction for people with visual impairments.
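
    To make the third contribution concrete: one plausible way to build an on-body gesture recognizer from wrist-mounted motion sensors is template matching with dynamic time warping (DTW), which tolerates the speed differences between users that rigid frame-by-frame comparison would not. The sketch below is illustrative only, not the dissertation's recognizer; the accelerometer input format, the magnitude feature, and the template labels are assumptions.

    import math

    def dtw_distance(a, b):
        """Dynamic-time-warping distance between two 1-D float sequences."""
        n, m = len(a), len(b)
        inf = float("inf")
        cost = [[inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                # Cheapest way to align the prefixes a[:i] and b[:j].
                cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
        return cost[n][m]

    def magnitude(samples):
        """Collapse (x, y, z) accelerometer samples into a 1-D magnitude trace."""
        return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

    def classify(trace, templates):
        """Return the label of the template nearest to the trace under DTW."""
        return min(templates, key=lambda label: dtw_distance(trace, templates[label]))

    # Hypothetical usage: record one template trace per gesture, then classify.
    # templates = {"tap-on-palm": magnitude(tap_rec), "swipe-on-arm": magnitude(swipe_rec)}
    # label = classify(magnitude(live_rec), templates)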

    Investigating How Smartphone Movement is Affected by Body Posture


    Press-n-Paste : Copy-and-Paste Operations with Pressure-sensitive Caret Navigation for Miniaturized Surface in Mobile Augmented Reality

    Copy-and-paste operations are among the most frequently used features on computing devices such as desktop computers, smartphones and tablets. However, copy-and-paste operations are not sufficiently addressed on the Augmented Reality (AR) smartglasses designed for real-time interaction with text in physical environments. This paper proposes two system solutions, namely Granularity Scrolling (GS) and Two Ends (TE), for copy-and-paste operations on AR smartglasses. By leveraging a thumb-size button on a touch-sensitive and pressure-sensitive surface, both multi-step solutions can capture the target text through indirect manipulation and subsequently enable copy-and-paste operations. Based on these system solutions, we implemented an experimental prototype named Press-n-Paste (PnP). In an eight-session evaluation capturing 1,296 copy-and-paste operations, 18 participants using GS and TE achieved peak performance of 17,574 ms and 13,951 ms per copy-and-paste operation, with 93.21% and 98.15% accuracy rates respectively, which is as good as commercial solutions using direct manipulation on touchscreen devices. The user footprints also show that PnP has a distinctive feature: a miniaturized interaction area within 12.65 mm × 14.48 mm. PnP not only proves the feasibility of copy-and-paste operations with the flexibility of various granularities on AR smartglasses, but also has significant implications for the design space of pressure widgets as well as input design on smart wearables.
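
    As a concrete reading of the Granularity Scrolling idea, a sketch like the following maps normalized button pressure to a caret granularity (character, word, or sentence) and steps the caret by that unit, so two successive caret placements can mark the ends of the span to copy. This is a hedged illustration under assumed pressure thresholds and a simplified text model, not the authors' implementation.

    import re

    def granularity_from_pressure(pressure, thresholds=(0.33, 0.66)):
        """Map a normalized pressure reading in [0, 1] to a caret granularity."""
        if pressure < thresholds[0]:
            return "char"
        if pressure < thresholds[1]:
            return "word"
        return "sentence"

    def boundaries(text, granularity):
        """Valid caret positions (indices into text) for a granularity."""
        if granularity == "char":
            return list(range(len(text) + 1))
        pattern = r"\S+" if granularity == "word" else r"[^.!?]+[.!?]?"
        spans = (m.span() for m in re.finditer(pattern, text))
        return sorted({index for span in spans for index in span})

    def step_caret(text, caret, direction, pressure):
        """Move the caret one unit left (-1) or right (+1) at the pressure-selected granularity."""
        stops = boundaries(text, granularity_from_pressure(pressure))
        if direction > 0:
            ahead = [s for s in stops if s > caret]
            return ahead[0] if ahead else caret
        behind = [s for s in stops if s < caret]
        return behind[-1] if behind else caret

    # Hypothetical usage: a light press scrolls by characters, a firm press by sentences.
    # caret = step_caret("Copy this. Paste there.", caret=0, direction=+1, pressure=0.9)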

    Finger orientation as an additional input dimension for touchscreens

    Since the first digital computer in 1941 and the first personal computer in 1975, the way we interact with computers has changed radically. The keyboard is still one of the two main input devices for desktop computers, accompanied most of the time by a mouse or trackpad. However, interaction with desktop and laptop computers today makes up only a small share of our interaction with computing devices. Today, we mostly interact with ubiquitous computing devices, and while the first ubiquitous devices were controlled via buttons, this changed with the invention of touchscreens. The phone, as the most prominent ubiquitous computing device, relies heavily on touch as its dominant input mode. Through direct touch, users can interact directly with graphical user interfaces (GUIs): GUI controls can be manipulated by simply touching them. However, current touch devices reduce the richness of touch input to a two-dimensional position on the screen. In this thesis, we investigate the potential of enriching a simple touch with additional information about the finger touching the screen. We propose to use the user's finger orientation as two additional input dimensions. We investigate four key areas which make up the foundation for fully understanding finger orientation as an additional input technique. With these insights, we provide designers with the foundation to design new gesture sets and use cases that take finger orientation into account. First, we investigate approaches to recognize finger orientation input and provide ready-to-deploy models to recognize the orientation. Second, we present design guidelines for comfortable use of finger orientation. Third, we present a method for analyzing applications in social settings, so that use cases can be designed with possible conversation disruption in mind. Lastly, we present three ways in which new interaction techniques like finger orientation input can be communicated to the user. This thesis contributes these four key insights for fully understanding finger orientation as an additional input technique. Moreover, we combine the key insights to lay the foundation for evaluating every new interaction technique with the same in-depth evaluation.
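
    To make the recognition problem tangible: one classic, lightweight way to estimate a finger's yaw from a raw capacitive image is to fit the principal axis of the touch blob using second-order image moments. The sketch below is a minimal illustration of that idea under an assumed input format (a small 2-D capacitance array); the thesis itself contributes learned, ready-to-deploy models, and a moment fit of this kind recovers only yaw, not pitch.

    import numpy as np

    def finger_yaw(cap_image, threshold=0.2):
        """Estimate finger yaw (radians) from a 2-D capacitance array."""
        img = np.asarray(cap_image, dtype=float)
        img = np.where(img > threshold * img.max(), img, 0.0)  # suppress sensor noise
        ys, xs = np.nonzero(img)
        weights = img[ys, xs]
        cx = np.average(xs, weights=weights)
        cy = np.average(ys, weights=weights)
        # Weighted second-order central moments of the touch blob.
        mu20 = np.average((xs - cx) ** 2, weights=weights)
        mu02 = np.average((ys - cy) ** 2, weights=weights)
        mu11 = np.average((xs - cx) * (ys - cy), weights=weights)
        # Orientation of the principal axis of the equivalent ellipse.
        return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

    # Hypothetical usage with a toy 5x5 blob elongated along the diagonal.
    # blob = np.eye(5) + 0.5 * np.eye(5, k=1) + 0.5 * np.eye(5, k=-1)
    # print(finger_yaw(blob))  # roughly pi/4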

    Breaking the Screen: Interaction Across Touchscreen Boundaries in Virtual Reality for Mobile Knowledge Workers

    Virtual Reality (VR) has the potential to transform knowledge work. One advantage of VR knowledge work is that it allows extending 2D displays into the third dimension, enabling new operations such as selecting overlapping objects or displaying additional layers of information. On the other hand, mobile knowledge workers often work on established mobile devices, such as tablets, limiting interaction with those devices to a small input space. This challenge of a constrained input space is intensified in situations where VR knowledge work takes place in cramped environments, such as airplanes and touchdown spaces. In this paper, we investigate the feasibility of interacting jointly between an immersive VR head-mounted display and a tablet within the context of knowledge work. Specifically, we 1) design, implement and study how to interact with information that reaches beyond a single physical touchscreen in VR; 2) design and evaluate a set of interaction concepts; and 3) build example applications and gather user feedback on those applications.