28 research outputs found

    Design of a serious game for learning vibrotactile messages

    To prevent accidental falls, we have designed an augmented shoe that assists the user while walking. The risk level represented by the current situation (low, medium, high or very high) is conveyed to the user through vibrotactile messages. In this paper, we describe the design of a serious game dedicated to learning these signals. The game is centred on a virtual maze whose areas are associated with the four risk levels. Wearing a pair of the augmented shoes, the user explores this maze by walking around a completely empty room whose dimensions are mapped to those of the virtual maze. As the user moves, the signal corresponding to each explored area is delivered through the augmented shoes. An initial experiment confirmed that vibrotactile messages can serve to communicate the level of risk
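
    As a rough illustration of the mapping the abstract describes, the sketch below (hypothetical zone layout, signal patterns and function names, all assumed) scales a walker's position in the empty room onto a virtual maze grid and looks up the vibrotactile message for that zone's risk level.

        # Minimal sketch (hypothetical names): map a position in the empty room onto
        # the virtual maze grid and pick the vibrotactile message for that zone's risk.
        RISK_TACTONS = {            # one signal pattern per risk level (made-up patterns)
            "low":       [(100, 900)],                 # (pulse_ms, pause_ms) pairs
            "medium":    [(100, 400), (100, 400)],
            "high":      [(200, 200)] * 3,
            "very_high": [(400, 100)] * 4,
        }

        def zone_for_position(x_m, y_m, room_w, room_h, maze):
            """Scale room coordinates (metres) to the maze grid and return its risk level."""
            col = min(int(x_m / room_w * len(maze[0])), len(maze[0]) - 1)
            row = min(int(y_m / room_h * len(maze)), len(maze) - 1)
            return maze[row][col]

        # 2x2 toy maze whose cells carry risk levels; the real maze layout is not given.
        maze = [["low", "medium"],
                ["high", "very_high"]]
        risk = zone_for_position(1.2, 3.4, room_w=4.0, room_h=4.0, maze=maze)
        print(risk, RISK_TACTONS[risk])   # pattern the augmented shoes would play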

    Vibed: a prototyping tool for haptic game interfaces

    Haptics in the form of vibrations in game interfaces has the potential to strengthen visual and audio components and to improve accessibility for certain populations, such as people with deafblindness. However, building vibrotactile game interfaces is difficult and time-consuming. Our research problem was how to build a tool that facilitates the prototyping of vibrotactile game interfaces for phones and gamepads. The results include a description of the prototyping tool we built, called VibEd. It allows designers to draw vibrotactile patterns, referred to as vibes, that can easily be tested on phones and gamepads and exported to code that can be used in game development. Based on user tests, we conclude that a haptic game interface prototyping tool such as VibEd can facilitate haptic game interface design and development and thereby contribute to game accessibility for persons with deafblindness
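
    To make the workflow concrete, here is a minimal sketch of one plausible data model, not VibEd's actual export format: a drawn "vibe" is treated as a list of time/amplitude points and resampled into the duration/amplitude arrays that phone vibration APIs typically accept.

        # Minimal sketch (assumed data model, not VibEd's actual export format):
        # a drawn "vibe" is a list of (time_ms, amplitude 0..1) points; resample it
        # into fixed-length segments like those used by phone/gamepad vibration APIs.
        def export_vibe(points, step_ms=50):
            """Turn a drawn amplitude curve into (durations_ms, amplitudes_0_255) arrays."""
            end = points[-1][0]
            durations, amplitudes = [], []
            t = 0
            while t < end:
                # take the amplitude of the last drawn point at or before time t
                amp = next(a for pt, a in reversed(points) if pt <= t)
                durations.append(step_ms)
                amplitudes.append(int(round(amp * 255)))
                t += step_ms
            return durations, amplitudes

        drawn = [(0, 0.0), (100, 1.0), (300, 0.3), (500, 0.0)]   # a ramp-up/fade-out vibe
        print(export_vibe(drawn))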

    The Effect of Vibrotactile Feedback on Remote Manual Task Performance

    Vibrotactile feedback offers a unique opportunity to augment or reconstruct impaired tactile sensations, whether in the form of enhanced prosthetics or specialized protective clothing. Losing important information about temperature and object slippage can endanger the human operator or the equipment. This thesis presents three experiments that investigate amplitude-modulated vibrotactile signals as a scalar dimension of roughness and the effect those signals and their locations (finger pad, forearm, bicep) have on the performance of two tasks: sensing temperatures simulated by vibrotactile signals and gripping an object with a simulated surface texture. The results show that task performance increases when the feedback and the site of action are co-located for sensory tasks and decreases for manipulatory tasks
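
    The stimuli described are amplitude-modulated vibrations; the sketch below shows one common way such a signal could be generated. The carrier and modulation frequencies, and the use of modulation depth as the roughness scalar, are assumptions for illustration, not the thesis parameters.

        # Minimal sketch, assuming a common construction (not necessarily the thesis
        # stimuli): a ~250 Hz vibrotactile carrier amplitude-modulated at a low
        # frequency, with modulation depth used as the scalar "roughness" parameter.
        import numpy as np

        def am_signal(duration_s=1.0, fs=8000, carrier_hz=250.0, mod_hz=30.0, depth=0.5):
            t = np.arange(int(duration_s * fs)) / fs
            envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)   # depth in [0, 1]
            return envelope / (1.0 + depth) * np.sin(2 * np.pi * carrier_hz * t)

        smooth = am_signal(depth=0.1)   # shallow modulation: perceived as smoother
        rough = am_signal(depth=0.9)    # deep modulation: perceived as rougher
        print(smooth.shape, rough.max())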

    Mapping information to audio and tactile icons

    We report the results of a study focusing on the meanings that can be conveyed by audio and tactile icons. Our research considers the following question: how can audio and tactile icons be designed to optimise congruence between crossmodal feedback and the type of information this feedback is intended to convey? For example, if we have a set of system warnings, confirmations, progress updates and errors, what audio and tactile representations best match each type of message? Is one modality more appropriate than the other for presenting certain types of information? The results of this study indicate that certain parameters of the audio and tactile modalities, such as rhythm, texture and tempo, play an important role in the creation of congruent sets of feedback for a given type of information. We argue that a combination of audio and tactile parameters derived from our results allows the same type of information to be conveyed through touch and sound with an intuitive match to the content of the message
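
    As an illustration only, the sketch below pairs hypothetical message types with rhythm, texture and tempo settings shared across the two modalities; these pairings are invented for the example and are not the mappings reported by the study.

        # Illustrative sketch only; these pairings are hypothetical, not the mappings
        # reported in the study. Each message type gets rhythm/texture/tempo settings
        # that would be rendered the same way in both the audio and tactile modality.
        CROSSMODAL_ICONS = {
            "warning":      {"rhythm": "long-long",   "texture": "rough",  "tempo_bpm": 160},
            "error":        {"rhythm": "short-short", "texture": "rough",  "tempo_bpm": 200},
            "confirmation": {"rhythm": "short-long",  "texture": "smooth", "tempo_bpm": 120},
            "progress":     {"rhythm": "short",       "texture": "smooth", "tempo_bpm": 80},
        }

        def icon_for(message_type):
            """Return the parameter set shared by the audio and tactile icon."""
            return CROSSMODAL_ICONS[message_type]

        print(icon_for("warning"))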

    Augmenting the Spatial Perception Capabilities of Users Who Are Blind

    People who are blind face a series of challenges and limitations resulting from their inability to see, forcing them either to seek the assistance of a sighted individual or to work around the challenge with an inefficient adaptation (e.g. following the walls of a room to reach a door rather than walking to it in a straight line). These challenges are directly related to blind users' lack of the spatial perception capabilities normally provided by the human visual system. To overcome these spatial perception challenges, modern technologies can be used to convey spatial perception data through sensory substitution interfaces. This work is the culmination of several projects that address different spatial perception problems for blind users. First, we consider the development of non-visual natural user interfaces for interacting with large displays. This work explores the haptic interaction space in order to find useful and efficient haptic encodings for the spatial layout of items on large displays. Multiple interaction techniques are presented which build on prior research (Folmer et al. 2012), and the efficiency and usability of the most efficient of these encodings is evaluated with blind children. Next, we evaluate the use of wearable technology to aid blind individuals in navigating large open spaces that lack the tactile landmarks used during traditional white cane navigation. We explore the design of a computer vision application with an unobtrusive aural interface that minimizes veering while the user crosses a large open space. Together, these projects represent an exploration of how modern technology can augment the spatial perception capabilities of blind users

    Supporting Eyes-Free Human–Computer Interaction with Vibrotactile Haptification

    The sense of touch is crucial when we use our hands in complex tasks. Some tasks we even learn to do without sight, using only the sense of touch in our fingers and hands. Modern touchscreen devices, however, have lost some of that tactile feeling by removing physical controls from the interaction. Touch is also a sense that is underutilized in interactions with technology and could provide new ways to support users. In certain situations, users of information technology cannot focus completely on the interaction, either visually or mentally. Humans could use their sense of touch more comprehensively and learn to understand tactile information while interacting with information technology. This thesis introduces a set of experiments that evaluate human capabilities to notice and understand tactile information provided by current actuator technology, and it further introduces examples of haptic user interfaces (HUIs) for eyes-free use scenarios. These experiments evaluate the benefits of such interfaces for users, and the thesis concludes with guidelines and methods for creating this kind of user interface. The experiments can be divided into three groups. In the first group, comprising the first two experiments, the detection of vibrotactile stimuli and the interpretation of the abstract meaning of vibrotactile feedback were evaluated. The experiments in the second group evaluated how to design rhythmic vibrotactile tactons to serve as basic vibrotactile primitives for HUIs. The last group of two experiments evaluated how these HUIs benefit users in distracted, eyes-free interaction scenarios. The primary aim of this series of experiments was to evaluate whether the current level of actuation technology could be used more comprehensively than in present-day solutions with simple haptic alerts and notifications, and thus to find out whether the comprehensive use of vibrotactile feedback in interactions provides additional benefits for users compared with current haptic and non-haptic interaction methods. The main finding of this research is that with more comprehensive HUIs in eyes-free, distracted-use scenarios, such as while driving a car, the user's main task, driving, is performed better. Furthermore, users liked the comprehensively haptified user interfaces
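
    For concreteness, a rhythmic tacton can be sketched as a reusable on/off sequence, as below; the pattern names and timings are made up for illustration and do not come from the thesis.

        # Minimal sketch of a rhythmic tacton, for illustration only: a named sequence
        # of (vibrate_ms, pause_ms) pairs that a HUI could reuse as a primitive and
        # play back on any on/off vibration actuator.
        TACTONS = {
            "notify": [(80, 120), (80, 400)],          # two quick pulses
            "confirm": [(200, 0)],                     # one long pulse
            "alert": [(60, 60)] * 4,                   # rapid burst
        }

        def schedule(tacton):
            """Expand a tacton into absolute (start_ms, stop_ms) actuator commands."""
            t, events = 0, []
            for on_ms, off_ms in tacton:
                events.append((t, t + on_ms))
                t += on_ms + off_ms
            return events

        print(schedule(TACTONS["notify"]))   # [(0, 80), (200, 280)]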

    Haptic Socks

    Smartphones have spread faster than any other technology, and with them the search for alternative interaction techniques that are quick and responsive. Which feedback modality works best for mobile use has long been debated, and it is difficult to conclude that any single form of feedback will be effective in all environments. In navigation applications, two modalities are typically used as feedback: visual and auditory. This thesis presents work, experiments and results on implementing a third modality, haptic feedback. The basic purpose of this work is to find out how effective wearable haptic feedback can be compared with visual or auditory feedback for navigation. Using hand-held GPS navigation while walking or driving can be dangerous if the user focuses more on the device than on the road. The concept of Haptic Socks can be used as a secondary interaction technique for navigation, so that the user can rely on other interaction techniques to perform their primary tasks or daily routines. The Haptic Socks consist of actuators embedded at specific positions on the foot that give tactile feedback, helping the user with turn-by-turn navigation. The controlling device will most likely be the user's smartphone, and the socks connect to it wirelessly, in this research study via Bluetooth. Furthermore, if the feedback results are positive, it will be easier to discuss how effective the approach can be made for people who are deafblind
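
    A minimal sketch of the turn-by-turn idea, with hypothetical names and a stubbed transport in place of the Bluetooth link the abstract mentions.

        # Minimal sketch with hypothetical names; the Bluetooth transport is stubbed
        # out because the actual protocol between phone and socks is not described.
        TURN_PATTERNS = {
            "turn_left":  ("left",  [(150, 100)] * 2),   # two pulses on the left sock
            "turn_right": ("right", [(150, 100)] * 2),   # two pulses on the right sock
            "arrived":    ("both",  [(400, 0)]),         # one long pulse on both socks
        }

        def send_to_sock(side, pattern):
            """Stand-in for sending a vibration pattern over the Bluetooth link."""
            print(f"-> {side} sock: {pattern}")

        def cue(instruction):
            side, pattern = TURN_PATTERNS[instruction]
            targets = ["left", "right"] if side == "both" else [side]
            for sock in targets:
                send_to_sock(sock, pattern)

        cue("turn_left")    # the phone's navigation app would call this at each turn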

    Quantifying Cognitive Efficiency of Display in Human-Machine Systems

    As a side effect of fast-growing information technology, information overload has become prevalent in the operation of many human-machine systems. Overwhelming information can degrade operational performance because it imposes a large mental workload on human operators. One way to address this issue is to improve the cognitive efficiency of the display. A cognitively efficient display should be more informative while demanding fewer mental resources, so that an operator can process more displayed information with their limited working memory and achieve better performance. To quantitatively evaluate this display property, a Cognitive Efficiency (CE) metric is formulated as the ratio of the measures of two dimensions: display informativeness and required mental resources (each dimension can be affected by display, human, and contextual factors). The first segment of the dissertation discusses the measurement techniques available to construct the CE metric and initially validates it with basic discrete displays. The second segment demonstrates that displays with higher cognitive efficiency improve multitask performance and identifies the version of the CE metric that is most predictive of multitask performance. The last segment applies the CE metric in driving scenarios to evaluate novel speedometer displays; it finds, however, that the most efficient display may not better enhance concurrent tracking performance while driving. Although the findings of the dissertation have several limitations, they provide valuable insight into the complicated relationship among display, human cognition, and multitask performance in human-machine systems
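
    Stated as a formula, the ratio described above might be written as follows; the symbols are placeholders rather than the dissertation's own notation, and the worked numbers are purely illustrative.

        % Hedged restatement of the ratio described in the abstract; I (measured
        % display informativeness) and R (measured required mental resources) are
        % placeholder symbols, not the dissertation's notation.
        \[
          \mathrm{CE} = \frac{I}{R},
          \qquad
          \text{e.g. } \mathrm{CE}_A = \frac{4}{2} = 2 \;>\; \mathrm{CE}_B = \frac{5}{4} = 1.25,
        \]
        % so display A would count as the more cognitively efficient of the two
        % under this metric, all else being equal.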

    Effectiveness of Vibration-based Haptic Feedback Effects for 3D Object Manipulation

    This research explores the development of vibration-based haptic feedback for a mouse-like computer input device. The haptic feedback is intended to be used in 3D virtual environments to provide users of the environment with information that is difficult to convey visually, such as collisions between objects. Previous research into vibrotactile haptic feedback can generally be split into two broad categories: single tactor handheld devices; and multiple tactor devices that are attached to the body. This research details the development of a vibrotactile feedback device that merges the two categories, creating a handheld device with multiple tactors. Building on previous research, a prototype device was developed. The device consisted of a semi-sphere with a radius of 34 mm, mounted on a PVC disk with a radius of 34 mm and a height of 18 mm. Four tactors were placed equidistantly about the equator of the PVC disk. Unfortunately, vibrations from a single tactor caused the entire device to shake due to the rigid plastic housing for the tactors. This made it difficult to accurately detect which tactor was vibrating. A second prototype was therefore developed with tactors attached to elastic bands. When a tactor vibrates, the elastic bands dampen the vibration, reducing the vibration in the rest of the device. The goal of the second prototype was to increase the accuracy in localizing the vibrating tactor. An experiment was performed to compare the two devices. The study participants grasped one of the device prototypes as they would hold a computer mouse. During each trial, a random tactor would vibrate. By pushing a key on the keyboard, the participants indicated when they detected vibration. They then pushed another key to indicate which tactor had been vibrating. The procedure was then repeated for the other device. Detection of the vibration was faster (p < 0.01) and more accurate (p < 0.001) with the soft shell design than with the hard shell design. In a post-experiment questionnaire, participants preferred the soft shell design to the hard shell design. Based on the results of the experiment, a mould was created for building future prototypes. The mould allows for the rapid creation of devices from silicone. Silicone was chosen as a material because it can easily be moulded and is available in different levels of hardness. The hardness of the silicone can be used to control the amount of damping of the vibrations. To increase the vibration damping, a softer silicone can be used. Several recommendations for future prototypes and experiments are made
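
    A simplified, console-only stand-in for one block of the comparison experiment is sketched below; the stimulus delivery, participant responses and accuracy rate are simulated, and all names are hypothetical.

        # Simplified stand-in for one block of the localization task: a random tactor
        # "vibrates", the reaction time to the first key press and the accuracy of the
        # reported tactor are logged. Hardware control and responses are stubbed out.
        import random, time

        def run_block(device_name, n_trials=5, n_tactors=4):
            results = []
            for _ in range(n_trials):
                target = random.randrange(n_tactors)
                t0 = time.monotonic()
                # In the real experiment the device vibrates here and the participant
                # presses keys; we simulate both with stub values.
                detected_at = t0 + random.uniform(0.3, 0.9)
                reported = target if random.random() < 0.8 else random.randrange(n_tactors)
                results.append({"device": device_name,
                                "rt_s": detected_at - t0,
                                "correct": reported == target})
            return results

        for row in run_block("soft_shell"):
            print(row)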

    Evaluation of fall risk based on gait parameters and study of a transmission method using tactile stimuli

    According to statistics, nearly one third of people aged 65 and over fall at least once a year, causing 60% of the injuries in this age group. Beyond physical harm, these falls can have a psychological impact due to the fear of falling again; consequently, even without injury, a fall can lead to a loss of confidence and reduced mobility. This observation underlines the need for means of fall prevention. To the best of our knowledge, although numerous fall-prevention programs exist, none of them seems to provide real-time assistance to people at risk of falling. With this in mind, two researchers at UQAC developed an instrumented shoe that detects risk and warns the user. This thesis therefore examines the ability of this system to detect and to warn. More specifically, the problem addressed is preventing a fall while an elderly person is walking, approached along three axes: detection, warning, and learning. In the first part of this thesis, various factors affecting fall risk are presented, including floor type, floor angle, outside temperature, humidity, fatigue, medication, fear of falling, and balance. Particular attention is paid to gait, one of the main fall factors. To define the fall-risk level associated with the user's gait, three algorithms are presented and evaluated. In the first, the STAT model, certain gait parameters are compared with their respective statistical means: the more a step differs from the user's normal gait, the higher the risk index. The other two algorithms, the ANN-RT and ANN-S models, use a neural network to process four gait parameters. The difference between these two models lies in parameter selection: whereas the ANN-RT model uses parameters computed over a short period (ten milliseconds), the ANN-S model computes its parameters from a complete stride. In two evaluations, the three algorithms demonstrated their ability to detect large changes in gait as well as gait variations induced by reduced visual acuity. These preliminary results support the use of all three algorithms. The second part of this thesis addresses warning the user of the risk level via tactile stimuli under the feet, thereby offloading the visual and auditory channels. More specifically, the work focuses on a study of tactile perception under the feet in order to select a set of easily distinguishable tactile messages. To do so, a perceptual map is first obtained through multidimensional scaling analysis, from which the six most distinguishable tactile messages are extracted. The validation test on this set shows a message recognition rate above 50% for an average learning time of less than four minutes, demonstrating the usefulness of the technique for selecting distinguishable tactile messages. The third part tackles the learning and recognition of tactile messages. To this end, it is proposed to embed, inside a serious game, a daily training program intended to teach a large number of tactile messages and to improve their recognition speed. The game consists of three parts: the activities, the reward, and message association. Through the various planned daily activities, the player is invited to recognize the meaning of the tactile messages presented. Moreover, to increase user engagement, a reward in the form of a crossword puzzle is given according to the player's success rate and consistency. A simplified version of the game was tested with four participants over eight days, during which an increase in tacton recognition speed was observed. In summary, the work presented in this thesis lays the foundations for the use of a smart shoe intended to provide immediate assistance to people at risk of falling. It was shown, through three algorithms, that a fall-risk level can be obtained by analyzing gait anomalies. Likewise, the perceptual study validated the use of tactile messages to communicate the fall-risk level and confirmed the use of a perceptual map for selecting distinguishable tactile messages. Finally, the serious game presented offers a complete training program for learning tactile messages, an essential step before such a shoe can be used in an uncontrolled environment. Future work can therefore focus on improving the system, notably by adding various sensors and actuators and by studying and detecting the different risk factors discussed in this thesis
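
    A hedged sketch of the idea behind the STAT model described above: each gait parameter of the current step is compared with its statistical mean, and the deviation is mapped to one of the four risk levels. The parameter names, z-score formulation and thresholds are assumptions for illustration, not the thesis implementation.

        # Hedged sketch of the STAT-model idea: compare each gait parameter of the
        # current step with its statistical mean and turn the deviation into a risk
        # level. Parameter names, z-score form and thresholds are assumptions.
        def risk_from_step(step, baseline):
            """step: dict of gait parameters; baseline: dict of (mean, std) per parameter."""
            deviations = [abs(step[p] - m) / s for p, (m, s) in baseline.items()]
            index = max(deviations)                 # worst-case deviation drives the risk
            if index < 1.0:
                return "low"
            if index < 2.0:
                return "medium"
            if index < 3.0:
                return "high"
            return "very high"

        baseline = {"stride_time_s": (1.10, 0.05), "stride_length_m": (1.30, 0.08)}
        print(risk_from_step({"stride_time_s": 1.12, "stride_length_m": 1.05}, baseline))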