
    Designing an Adaptive Web Navigation Interface for Users with Variable Pointing Performance

    Many online services and products require users to point at and interact with user interface elements. For individuals who experience variable pointing ability due to physical impairments, environmental factors, or age, using an input device (e.g., a computer mouse) to select elements on a website can be difficult. Adaptive user interfaces dynamically change their functionality in response to user behavior. They can support individuals with variable pointing abilities by 1) adapting dynamically to make element selection easier when a user is experiencing pointing difficulties, and 2) informing users about these pointing errors. While adaptive interfaces are increasingly prevalent on the Web, little is known about the preferences and expectations of users with variable pointing abilities, or how to design systems that dynamically support them given these preferences. We conducted an investigation with 27 individuals who intermittently experience pointing problems to inform the design of an adaptive interface for web navigation. We used a functional high-fidelity prototype as a probe to gather information about user preferences and expectations. Our participants expected the system to recognize and integrate their preferences for how pointing tasks were carried out, preferred to receive information about system functionality, and wanted to be in control of the interaction. We used findings from the study to inform the design of an adaptive Web navigation interface, PINATA, which tracks user pointing performance over time and provides dynamic notifications and assistance tailored to the user's specifications. Our work contributes to a better understanding of users' preferences and expectations for the design of an adaptive pointing system.
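
    As a rough illustration of the kind of behaviour such an adaptive pointing system could implement, the sketch below tracks a rolling pointing error rate and offers assistance only when the user has opted in; the class, parameters, and threshold are hypothetical and are not taken from the published PINATA design.

        # Hypothetical sketch, not the published PINATA implementation.
        from collections import deque

        class PointingMonitor:
            """Tracks recent pointing outcomes and decides when to offer assistance."""

            def __init__(self, window=20, error_threshold=0.3):
                self.window = deque(maxlen=window)   # rolling record of hit/miss outcomes
                self.error_threshold = error_threshold

            def record(self, hit):
                self.window.append(hit)

            def error_rate(self):
                if not self.window:
                    return 0.0
                return 1.0 - sum(self.window) / len(self.window)

            def should_assist(self, user_opted_in):
                # Assistance is offered only when the user has opted in,
                # keeping the user in control of the interaction.
                return user_opted_in and self.error_rate() > self.error_threshold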

    Stabilising touch interactions in cockpits, aerospace, and vibrating environments

    © Springer International Publishing AG, part of Springer Nature 2018. Incorporating touch screen interaction into cockpit flight systems is gaining traction given its potential advantages for both system design and pilot usability. However, perturbations to user input are prevalent in such environments due to vibrations, turbulence, and high accelerations. This poses particular challenges for interacting with cockpit displays, for example, accidental activation during turbulence, or high levels of distraction from the primary task of controlling the aircraft while accomplishing selection tasks. Predictive displays have emerged as a solution to minimize the effort as well as the cognitive, visual, and physical workload associated with using in-vehicle displays under perturbations induced by road and driving conditions. This technology employs 3D gesture tracking, and potentially eye gaze and other sensory data, to substantially facilitate the acquisition (pointing and selection) of an interface component by predicting the item the user intends to select on the display, early in the movement towards the screen. A key aspect is utilising principled Bayesian modelling to incorporate and account for the present perturbation; it is thus a software-based solution that has shown promising results in automotive applications. This paper explores the potential of applying this technology to aerospace and vibrating environments in general, and presents design recommendations for such an approach to enhance interaction accuracy as well as safety.
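
    The predictive-display idea rests on inferring which on-screen target the user intends to select from a noisy, perturbed pointing trajectory. A minimal Bayesian sketch of that inference is given below; it is illustrative only and does not reproduce the paper's actual model, sensors, or treatment of perturbations, and the noise parameter sigma is an assumption.

        # Illustrative Bayesian intent inference over on-screen targets.
        import numpy as np

        def update_posterior(prior, cursor_positions, targets, sigma=50.0):
            """Posterior over targets given observed (possibly perturbed) pointer samples.

            prior            : (K,) prior probability of each target
            cursor_positions : (T, 2) observed pointer/finger samples
            targets          : (K, 2) target centre coordinates
            sigma            : assumed observation noise in pixels (covers vibration)
            """
            log_post = np.log(prior)
            for pos in cursor_positions:
                # Samples that lie closer to a target are treated as more
                # consistent with the user intending that target.
                d2 = np.sum((targets - pos) ** 2, axis=1)
                log_post += -d2 / (2.0 * sigma ** 2)
            log_post -= log_post.max()           # numerical stability
            post = np.exp(log_post)
            return post / post.sum()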

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, and we provide a brief discussion of why cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.

    Predicting and Reducing the Impact of Errors in Character-Based Text Entry

    This dissertation focuses on the effect of errors in character-based text entry techniques. The effect of errors is targeted from theoretical, behavioral, and practical standpoints. This document starts with a review of the existing literature. It then presents results of a user study that investigated the effect of different error correction conditions on popular text entry performance metrics. Results showed that the way errors are handled has a significant effect on all frequently used error metrics. The outcomes also provided an understanding of how users notice and correct errors. Building on this, the dissertation then presents a new high-level and method-agnostic model for predicting the cost of error correction with a given text entry technique. Unlike existing models, it accounts for both human and system factors and is general enough to be used with most character-based techniques. A user study verified the model by measuring the effects of a faulty keyboard on text entry performance. Subsequently, the work explores potential user adaptation to a gesture recognizer's misrecognitions in two user studies. Results revealed that users gradually adapt to misrecognition errors by replacing the erroneous gestures with alternative ones, if available. Also, users adapt to a misrecognized gesture faster if it occurs more frequently than the other error-prone gestures. Finally, this work presents a new hybrid approach to simulate pressure detection on standard touchscreens. The new approach combines the existing touch-point- and time-based methods. Results of two user studies showed that it can simulate pressure detection more reliably for at least two pressure levels: regular (~1 N) and extra (~3 N). Then, a new pressure-based text entry technique is presented that does not require tapping outside the virtual keyboard to reject an incorrect or unwanted prediction. Instead, the technique requires users to apply extra pressure for the tap on the next target key. The performance of the new technique was compared with the conventional technique in a user study. Results showed that for inputting short English phrases with 10% non-dictionary words, the new technique increases entry speed by 9% and decreases error rates by 25%. Also, most users (83%) favor the new technique over the conventional one. Together, the research presented in this dissertation gives more insight into how errors affect text entry and also presents improved text entry methods.
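
    As a concrete illustration of how a hybrid touch-point- and time-based pressure check could look, the sketch below labels a tap as regular (~1 N) or extra (~3 N) pressure; the features and thresholds are invented placeholders, not the values or exact method from the dissertation.

        # Hypothetical hybrid pressure classifier for a standard touchscreen.
        def classify_pressure(dwell_time_ms, touch_point_shift_px,
                              time_threshold_ms=150.0, shift_threshold_px=4.0):
            """Label a tap as 'regular' (~1 N) or 'extra' (~3 N) pressure.

            dwell_time_ms         : how long the finger stayed in contact
            touch_point_shift_px  : how far the reported touch point drifted during
                                    contact (pressing harder flattens the fingertip)
            """
            time_vote = dwell_time_ms > time_threshold_ms            # time-based cue
            shift_vote = touch_point_shift_px > shift_threshold_px   # touch-point cue
            return "extra" if (time_vote and shift_vote) else "regular"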

    A computational approach to gestural interactions of the upper limb on planar surfaces

    There are many compelling reasons for proposing new gestural interactions: one might want to use a novel sensor that affords access to data that could not previously be captured, or transpose a well-known task into a different, unexplored scenario. After an initial design phase, the creation, optimisation, or understanding of new interactions nonetheless remains a challenge. Models have been used to foresee interaction properties: Fitts' law, for example, accurately predicts movement time in pointing and steering tasks. But what happens when no existing models apply? The core assertion of this work is that a computational approach provides the frameworks and associated tools needed to model such interactions. This is supported through three research projects, in which discriminative models are used to enable interactions, optimisation is included as an integral part of their design, and reinforcement learning is used to explore the motions users produce in such interactions.
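
    Fitts' law, mentioned above as an example of a predictive model, can be stated compactly: movement time grows linearly with the index of difficulty log2(D/W + 1). The snippet below shows the standard Shannon formulation; the coefficients a and b are placeholder values and would normally be fitted to data for a specific device and task.

        import math

        def fitts_movement_time(distance, width, a=0.1, b=0.15):
            """Predicted movement time (s) for a target of the given width (W)
            at the given distance (D), using fitted intercept a and slope b."""
            index_of_difficulty = math.log2(distance / width + 1.0)   # in bits
            return a + b * index_of_difficulty

        # Example: a 40 px wide target 600 px away has ID = log2(16) = 4 bits,
        # giving a predicted movement time of 0.1 + 0.15 * 4 = 0.7 s.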

    Predicting Continuous Locomotion Modes via Multidimensional Feature Learning from sEMG

    Walking-assistive devices require adaptive control methods to ensure smooth transitions between various modes of locomotion. For this purpose, detecting human locomotion modes (e.g., level walking or stair ascent) in advance is crucial for improving the intelligence and transparency of such robotic systems. This study proposes Deep-STF, a unified end-to-end deep learning model designed for integrated feature extraction in the spatial, temporal, and frequency dimensions from surface electromyography (sEMG) signals. Our model enables accurate and robust continuous prediction of nine locomotion modes and 15 transitions at varying prediction time intervals, ranging from 100 to 500 ms. In addition, we introduce the concept of 'stable prediction time' as a distinct metric to quantify prediction efficiency. This term refers to the duration during which consistent and accurate predictions of mode transitions are made, measured from the time of the fifth correct prediction to the occurrence of the critical event leading to the task transition. The distinction between stable prediction time and prediction time is vital, as it underscores our focus on the precision and reliability of mode transition predictions. Experimental results showcased Deep-STF's cutting-edge prediction performance across diverse locomotion modes and transitions, relying solely on sEMG data. When forecasting 100 ms ahead, Deep-STF surpassed CNN-based and other machine learning techniques, achieving an outstanding average prediction accuracy of 96.48%. Even with an extended 500 ms prediction horizon, accuracy decreased only marginally, to 93.00%. The average stable prediction times for detecting upcoming transitions spanned from 28.15 to 372.21 ms across the 100-500 ms prediction horizons.
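
    One plausible reading of the 'stable prediction time' metric described above is sketched below: the time from the fifth correct prediction of the upcoming mode to the critical event that triggers the transition. This is an illustration of the definition given in the abstract, not the authors' evaluation code, and all names are hypothetical.

        def stable_prediction_time(timestamps_ms, predictions, true_next_mode, event_time_ms):
            """Return the stable prediction time in ms, or None if five correct
            predictions were not reached before the critical event.

            timestamps_ms  : times of the classifier outputs
            predictions    : predicted locomotion mode at each output
            true_next_mode : the mode the user actually transitions into
            event_time_ms  : time of the critical event leading to the transition
            """
            correct = 0
            for t, pred in zip(timestamps_ms, predictions):
                if t >= event_time_ms:
                    break
                if pred == true_next_mode:
                    correct += 1
                    if correct == 5:
                        return event_time_ms - t
            return None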

    Fall Prediction and Prevention Systems: Recent Trends, Challenges, and Future Research Directions.

    Fall prediction is a multifaceted problem that involves complex interactions between physiological, behavioral, and environmental factors. Existing fall detection and prediction systems mainly focus on physiological factors such as gait, vision, and cognition, and do not address the multifactorial nature of falls. In addition, these systems lack efficient user interfaces and feedback for preventing future falls. Recent advances in the internet of things (IoT) and mobile technologies offer ample opportunities for integrating contextual information about patient behavior and environment along with physiological health data for predicting falls. This article reviews the state of the art in fall detection and prediction systems. It also describes the challenges, limitations, and future directions in the design and implementation of effective fall prediction and prevention systems.
