    Latency guidelines for touchscreen virtual button feedback

    Touchscreens are very widely used, especially in mobile phones. They support many interaction methods, pressing a virtual button being one of the most popular. In addition to its inherent visual feedback, a virtual button can provide audio and tactile feedback. Since mobile phones are essentially computers, their processing introduces latency into the interaction. However, it has not been known whether latency is an issue in mobile touchscreen virtual button interaction, or what the latency recommendations for visual, audio and tactile feedback should be. The research in this thesis investigated multimodal latency in mobile touchscreen virtual button interaction. For the first time, an affordable but accurate tool was built to measure all three feedback latencies in touchscreens. For the first time, simultaneity perception of touch and feedback, as well as the effect of latency on the perceived quality of virtual buttons, was studied, and thresholds were found for both unimodal and bimodal feedback. The results from these studies were combined into latency guidelines for the first time. These guidelines enable interaction designers to set requirements for mobile phone engineers to optimise latencies to the right level.

    The latency measurement tool consisted of a high-speed camera, a microphone and an accelerometer for visual, audio and tactile feedback measurements. It was built from off-the-shelf components and was portable, so it could be replicated at low cost or moved wherever needed. The tool enables touchscreen interaction designers to validate latencies in their experiments, making their results more valuable and accurate. It could also benefit touchscreen phone manufacturers, since it lets engineers validate latencies during mobile phone development. The tool has been used in mobile phone R&D within Nokia Corporation and for validating a research device at the University of Glasgow.

    The guidelines established for unimodal feedback were as follows: visual feedback latency should be between 30 and 85 ms, audio between 20 and 70 ms and tactile between 5 and 50 ms. The guidelines differed for bimodal feedback: visual feedback latency should be 95 ms and audio 70 ms when the feedback was visual-audio; visual 100 ms and tactile 55 ms when the feedback was visual-tactile; and tactile 25 ms and audio 100 ms when the feedback was tactile-audio. These guidelines will help engineers and interaction designers select and optimise latencies that are low enough, but not too low. Designers using them can ensure that most users will both perceive the feedback as simultaneous with their touch and experience high-quality virtual buttons.

    The results of this thesis show that latency has a substantial effect on touchscreen virtual buttons and is a key part of virtual button feedback design. These novel results enable researchers, designers and engineers to master the effect of latency in research and development, leading to more accurate and reliable research results and helping mobile phone manufacturers make better products.
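
    The guideline numbers above are concrete enough to encode directly. Below is a minimal sketch, not from the thesis itself, that stores them in Python so that latencies measured with a tool like the one described can be checked programmatically; whether the bimodal figures are upper bounds or targets is not stated in the abstract, so they are kept as single recommended values.

        # Latency guidelines from the abstract, in milliseconds.
        UNIMODAL_RANGE_MS = {
            "visual": (30, 85),
            "audio": (20, 70),
            "tactile": (5, 50),
        }
        # Bimodal figures: single recommended values per modality (see note above).
        BIMODAL_RECOMMENDED_MS = {
            ("visual", "audio"): {"visual": 95, "audio": 70},
            ("visual", "tactile"): {"visual": 100, "tactile": 55},
            ("tactile", "audio"): {"tactile": 25, "audio": 100},
        }

        def within_unimodal_guideline(modality: str, latency_ms: float) -> bool:
            """True if a measured feedback latency falls in the recommended range."""
            low, high = UNIMODAL_RANGE_MS[modality]
            return low <= latency_ms <= high

        print(within_unimodal_guideline("tactile", 42))   # True: inside 5-50 ms
        print(within_unimodal_guideline("visual", 120))   # False: above 85 ms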

    The Motion-Lab: A Virtual Reality Laboratory for Spatial Updating Experiments

    The main question addressed in the Motion-Lab is: how do we know where we are? Normally, humans know where they are with respect to their immediate surroundings. The overall perception of this environment results from the integration of multiple sensory modalities. Here we use Virtual Reality to study the interaction of the visual, vestibular, and proprioceptive senses and to explore how these senses might be integrated into a coherent perception of spatial orientation and location. This Technical Report describes a Virtual Reality laboratory, its technical implementation as a distributed network of computers, and discusses its usability for experiments designed to investigate questions of spatial orientation.
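
    The abstract does not name an integration model, but a common account of multisensory integration is reliability-weighted (maximum-likelihood) cue combination. A minimal sketch under that assumption, with made-up numbers; this is an illustration of the model class, not the Motion-Lab's actual method.

        import numpy as np

        def fuse_cues(estimates, variances):
            """Fuse independent Gaussian cues: weights are inverse variances."""
            var = np.asarray(variances, dtype=float)
            w = (1.0 / var) / np.sum(1.0 / var)
            fused = float(np.dot(w, estimates))
            fused_var = 1.0 / float(np.sum(1.0 / var))
            return fused, fused_var

        # Hypothetical heading estimates in degrees from three senses:
        heading, var = fuse_cues(
            estimates=[12.0, 8.0, 10.0],  # visual, vestibular, proprioceptive
            variances=[4.0, 16.0, 9.0],   # noisier cues receive lower weight
        )
        print(round(heading, 1), round(var, 1))  # 10.9 2.4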

    Multilayer haptic feedback for pen-based tablet interaction

    We present a novel, multilayer interaction approach that enables state transitions between a spatially above-screen feedback layer and a 2D on-screen one. This approach supports the exploration of haptic features that are hard to simulate using rigid 2D screens. We accomplish this by adding a haptic layer above the screen that can be actuated and interacted with (pressed on) while the user interacts with on-screen content using pen input. The haptic layer provides variable firmness and contour feedback, while its membrane functionality affords additional tactile cues such as texture feedback. Through two user studies, we examine how users can use the layer in haptic exploration tasks, showing that they discriminate well between different firmness levels and can perceive object contour characteristics. The results, also demonstrated through an art application, show the potential of multilayer feedback to extend on-screen feedback with additional widget, tool and surface properties, and for user guidance.
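
    A minimal sketch of the layer hand-off and firmness rendering described above; the membrane travel, the spring law and all names are illustrative assumptions, not the paper's actual actuation model.

        from enum import Enum, auto

        # Hypothetical model of the pen's hand-off between the actuated
        # membrane layer and the rigid screen beneath it.
        MEMBRANE_TRAVEL_MM = 5.0   # assumed maximum membrane displacement

        class Layer(Enum):
            ABOVE_SCREEN = auto()  # pen is deforming the haptic membrane
            ON_SCREEN = auto()     # pen has bottomed out on the 2D screen

        def layer_state(pen_depth_mm: float) -> Layer:
            """State transition: past full travel, input belongs to the screen."""
            if pen_depth_mm >= MEMBRANE_TRAVEL_MM:
                return Layer.ON_SCREEN
            return Layer.ABOVE_SCREEN

        def membrane_force_n(pen_depth_mm: float, stiffness_n_per_mm: float = 0.8) -> float:
            """Variable firmness as a simple spring: stiffer setting feels firmer."""
            return stiffness_n_per_mm * min(max(pen_depth_mm, 0.0), MEMBRANE_TRAVEL_MM)

        print(layer_state(2.0), membrane_force_n(2.0))  # Layer.ABOVE_SCREEN 1.6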

    Multimodal Human-Machine Interface For Haptic-Controlled Excavators

    The goal of this research is to develop a human-excavator interface for the haptic-controlled excavator that makes use of multiple human sensing modalities (visual, auditory, haptic) and efficiently integrates them to provide an intuitive, efficient interface that is easy to learn and use, and is responsive to operator commands. Two empirical studies were conducted to investigate conflict in the haptic-controlled excavator interface and to identify the level of force feedback that yields the best operator performance.

    Enhancing the use of Haptic Devices in Education and Entertainment

    This research was part of the two-year Horizon 2020 European project "weDRAW". The project was built around the idea that "specific sensory systems have specific roles to learn specific concepts". This work explores the use of the haptic modality, stimulated by means of force-feedback devices, to convey abstract concepts inside virtual reality. After a review of the current use of haptic devices in education, of available haptic software, and of game engines, we focus on the implementation of a haptic plugin for game engines (HPGE, based on the state-of-the-art rendering library CHAI3D) and its evaluation in experiments on human perception and multisensory integration.
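
    CHAI3D itself is a C++ library, so the Python sketch below only illustrates the generic force-feedback servo-loop pattern such libraries implement (here, a stiff virtual wall); the device interface is a hypothetical mock, not HPGE's or CHAI3D's actual API.

        # Penalty-based rendering of a virtual wall: the device is pushed
        # back in proportion to how far it penetrates the surface. Real
        # haptic loops run this at ~1 kHz for stable, crisp contact.
        STIFFNESS = 500.0   # N/m, assumed virtual wall stiffness
        WALL_Z = 0.0        # wall surface located at z = 0 m

        def wall_force(z: float) -> float:
            """Force along z: proportional to penetration, zero otherwise."""
            penetration = WALL_Z - z
            return STIFFNESS * penetration if penetration > 0 else 0.0

        class MockDevice:
            """Hypothetical stand-in for a force-feedback device handle."""
            def position(self):
                return (0.0, 0.0, -0.002)       # 2 mm inside the wall
            def apply_force(self, f):
                print(f"commanded force: {f}")  # a real device would render this

        def haptic_step(device) -> None:
            """One iteration of the servo loop."""
            z = device.position()[2]
            device.apply_force((0.0, 0.0, wall_force(z)))

        haptic_step(MockDevice())  # -> commanded force: (0.0, 0.0, 1.0)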

    WearPut: Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Department of Biomedical Engineering (Human Factors Engineering)

    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, commercially successful smartwatches worn on the wrist drive market growth by sharing the roles of smartphones and health management. The emerging Head-Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact application areas such as video games, education, simulation, and productivity tools. However, these powerful wearables face interaction challenges because of the inevitably limited space for input and output imposed by form factors specialized for fitting body parts. To complement the constrained interaction experience, many wearable devices still rely on other, larger devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices can limit the viability of wearables in many usage scenarios by tethering users' hands to physical hardware. This thesis argues that developing novel human-computer interaction techniques for specialized wearable form factors is vital for wearables to be reliable standalone products. It seeks to address the constrained interaction experience by exploring finger motions during input for the specialized form factors of wearable devices.

    Several characteristics of finger input motions promise to increase the expressiveness of input on the physically limited input space of wearable devices. First, finger input techniques are prevalent on many large form factor devices (e.g., touchscreens or physical keyboards) thanks to fast, accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or hand tracking system) to detect finger motions, enabling novel interaction systems without additional sensors or devices. Third, the specialized form factors of wearable devices create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices.

    This thesis demonstrates the general claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, it explored the comfort range of static and dynamic touch input with angles on the touchscreen of smartwatches. The results showed specific comfort ranges across fingers, finger regions, and poses, owing to the unique input context in which the touching hand approaches a small, fixed touchscreen with a limited range of angles. Finger region-aware systems that recognize the flat and the side of the finger were then constructed from the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input.

    In the second scenario, this thesis revealed distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to finger identification systems for distinguishing two or three fingers, and two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification for increasing the expressiveness of touch input techniques (sketched below). This thesis further supports the general claim across wearable scenarios by exploring finger input motions in the air. In the third scenario, it investigated in-air finger stroking during unconstrained in-air typing for HMDs. An observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger stroke relationships, and individual in-air keys; the in-depth analysis led to a practical guideline for developing robust in-air typing systems with finger stroking. Lastly, this thesis examined the viable locations of in-air thumb touch input to virtual targets above the palm. It confirmed that fast and accurate sequential thumb touches can be achieved at a total of 8 key locations with the built-in hand tracking system of a commercial HMD, and final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs. The objective and subjective results and novel interaction techniques across these scenarios support the general claim that understanding how users move their fingers during input enables more expressive interaction techniques for resource-limited wearable devices. The thesis concludes with its contributions, design considerations, and the scope of future research, for researchers and developers implementing robust finger-based interaction systems on various types of wearable devices.
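
    The touch-based finger identification mentioned above can be sketched as a small classifier over contact-area features; the feature names, values and classifier choice here are illustrative assumptions, not the thesis's actual pipeline.

        # Finger/pose identification from touch contact-area features,
        # sketched with a nearest-neighbour classifier over made-up data.
        from sklearn.neighbors import KNeighborsClassifier

        # Each touch summarised as (major_axis_mm, minor_axis_mm, orientation_deg):
        train_X = [
            (9.5, 7.0, 10.0),   # index finger, flat pose
            (12.0, 5.0, 80.0),  # index finger, side pose
            (8.0, 6.5, 12.0),   # middle finger, flat pose
        ]
        train_y = ["index-flat", "index-side", "middle-flat"]

        clf = KNeighborsClassifier(n_neighbors=1).fit(train_X, train_y)
        print(clf.predict([(9.2, 6.8, 11.0)]))  # -> ['index-flat']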