
    Enhancing the use of Haptic Devices in Education and Entertainment

    This research was part of the two-year Horizon 2020 European project "weDRAW", which built on the premise that "specific sensory systems have specific roles to learn specific concepts". This work explores the use of the haptic modality, stimulated by means of force-feedback devices, to convey abstract concepts in virtual reality. After a review of the current use of haptic devices in education and of the available haptic software and game engines, we focus on the implementation of a haptic plugin for game engines (HPGE, based on the state-of-the-art rendering library CHAI3D) and on its evaluation in human perception and multisensory integration experiments.
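
    A haptic plugin of this kind typically runs a high-rate servo loop that reads the device position and writes back a force. The C++ sketch below shows such a loop using CHAI3D's documented device-handler pattern; the virtual-wall stiffness, update rate and overall structure are illustrative assumptions rather than HPGE's actual implementation.

        // Minimal haptic servo loop (illustrative sketch, not HPGE's real code).
        #include <chrono>
        #include <thread>
        #include "chai3d.h"

        int main() {
            chai3d::cHapticDeviceHandler handler;
            chai3d::cGenericHapticDevicePtr device;
            if (!handler.getDevice(device, 0)) return 1;  // first connected device
            device->open();
            device->calibrate();

            const double kWallStiffness = 400.0;          // assumed stiffness [N/m]
            while (true) {
                chai3d::cVector3d pos;
                device->getPosition(pos);                 // device tip position [m]

                // Penalty-based virtual wall at y = 0: push back when penetrating.
                chai3d::cVector3d force(0.0, 0.0, 0.0);
                if (pos.y() < 0.0) force.y(-kWallStiffness * pos.y());
                device->setForce(force);

                // Stable rendering of stiff contact needs roughly a 1 kHz update rate.
                std::this_thread::sleep_for(std::chrono::milliseconds(1));
            }
        }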

    Tactile discrimination of material properties: application to virtual buttons for professional appliances

    An experiment is described that tested the possibility of classifying wooden, plastic, and metallic objects based on reproduced auditory and vibrotactile stimuli. The results show that recognition rates are considerably above chance level with either unimodal auditory or vibrotactile feedback. Supported by those findings, the possibility of rendering virtual buttons for professional appliances with different tactile properties was tested. To this end, a touchscreen device was provided with various types of vibrotactile feedback in response to the sensed pressing force and location of a finger. Different virtual button designs were tested by user panels, who subjectively evaluated perceived tactile properties and materials. In a first implementation, virtual buttons were designed by reproducing the vibration recordings of the real materials used in the classification experiment: mainly due to hardware limitations of our prototype and the consequent impossibility of rendering complex vibratory signals, this approach did not prove successful. A second implementation was then optimized for the device capabilities, additionally introducing surface compliance effects and button release cues: the new design led to generally high quality ratings, clear discrimination of different buttons and unambiguous material classification. The lesson learned is that various material and physical properties of virtual buttons can be successfully rendered by characteristic frequency and decay cues, if correctly reproduced by the device.
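
    The closing observation suggests a simple rendering model: drive each virtual button with an exponentially decaying sinusoid whose frequency and decay time characterise the material. The sketch below illustrates this idea; the material parameters and force scaling are placeholder assumptions, not the values used in the study.

        // Material cue rendered as a decaying sinusoid (illustrative parameters).
        #include <cmath>
        #include <vector>

        struct MaterialCue {
            double frequencyHz;  // dominant modal frequency of the material
            double decayTimeS;   // time for the vibration to decay to ~37 %
        };

        // Placeholder examples: "metal" rings longer and higher than "wood".
        const MaterialCue kMetal{900.0, 0.20};
        const MaterialCue kWood{250.0, 0.05};

        // Waveform played by the actuator when the button is pressed.
        std::vector<float> buttonWaveform(const MaterialCue& m, double pressForceN,
                                          double sampleRate = 8000.0,
                                          double durationS = 0.3) {
            const double pi = 3.14159265358979323846;
            std::vector<float> out(static_cast<size_t>(sampleRate * durationS));
            for (size_t i = 0; i < out.size(); ++i) {
                double t = i / sampleRate;
                // Amplitude scales with the sensed pressing force; the envelope
                // decays exponentially with the material's decay time.
                out[i] = static_cast<float>(pressForceN * std::exp(-t / m.decayTimeS) *
                                            std::sin(2.0 * pi * m.frequencyHz * t));
            }
            return out;
        }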

    A multi-modal interface for road planning tasks using vision, haptics and sound

    The planning of transportation infrastructure requires analyzing many different types of geo-spatial information in the form of maps. Displaying too many of these maps at the same time can lead to visual clutter or information overload, which results in sub-optimal effectiveness. Multimodal interfaces (MMIs) try to address this visual overload and improve the user's interaction with large amounts of data by combining several sensory modalities. Previous research into MMIs indicates that, when used properly, multiple sensory modalities lead to more efficient human-computer interactions. Motivated by this previous work, this thesis describes a novel GIS system for road planning using vision, haptics and sound. The implementation of this virtual environment is discussed, including some of the design decisions made when determining how to map visual data to the other senses. A user study was performed to see how this type of system could be utilized, and the results of the study are presented.
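
    One concrete way to offload a map layer from the visual channel is to map its value at the probed location onto both a haptic and an auditory parameter. The sketch below shows such a mapping for a hypothetical "terrain slope" layer; the linear ranges and the choice of parameters are assumptions for illustration, not the thesis's actual design.

        // Map a normalised layer value onto haptic and audio parameters (illustrative).
        #include <algorithm>

        struct MultimodalCue {
            double hapticStiffness;  // resistance felt when dragging over the cell [N/m]
            double audioPitchHz;     // pitch of a probe tone for the cell
        };

        // value01: 0 = flat terrain, 1 = steepest slope (hypothetical layer).
        MultimodalCue mapLayerValue(double value01) {
            double v = std::clamp(value01, 0.0, 1.0);
            return MultimodalCue{
                100.0 + v * 900.0,   // stiffer drag over steep terrain
                220.0 + v * 660.0    // higher probe tone over steep terrain
            };
        }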

    Crossmodal audio and tactile interaction with mobile touchscreens

    Touchscreen mobile devices often use cut-down versions of desktop user interfaces, placing high demands on the visual sense that may prove awkward in mobile settings. The research in this thesis addresses the problems encountered by situationally impaired mobile users by using crossmodal interaction to exploit the abundant similarities between the audio and tactile modalities. By making information available to both senses, users can receive the information in the most suitable way, without having to abandon their primary task to look at the device. This thesis begins with a literature review of related work followed by a definition of crossmodal icons: two icons may be considered crossmodal if and only if they provide a common representation of data which is accessible interchangeably via different modalities. Two experiments investigated possible parameters for use in crossmodal icons, with results showing that rhythm, texture and spatial location are effective. A third experiment focused on learning multi-dimensional crossmodal icons and the extent to which this learning transfers between modalities. The results showed identification rates of 92% for three-dimensional audio crossmodal icons when trained in the tactile equivalents, and identification rates of 89% for tactile crossmodal icons when trained in the audio equivalents. Crossmodal icons were then incorporated into a mobile touchscreen QWERTY keyboard. Experiments showed that keyboards with audio or tactile feedback produce fewer errors and greater text-entry speeds than standard touchscreen keyboards. The next study examined how environmental variables affect user performance with the same keyboard. The data showed that each modality performs differently under varying levels of background noise or vibration, and the exact levels at which these performance decreases occur were established. The final study involved a longitudinal evaluation of a touchscreen application, CrossTrainer, focusing on longitudinal effects on performance with audio and tactile feedback, the impact of context on performance, and personal modality preference. The results show that crossmodal audio and tactile icons are a valid method of presenting information to situationally impaired mobile touchscreen users, with recognition rates of 100% over time. This thesis concludes with a set of guidelines on the design and application of crossmodal audio and tactile feedback, enabling application and interface designers to employ such feedback in their systems.
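
    The shared representation behind crossmodal icons (the same rhythm, texture and spatial-location parameters rendered to whichever sense is currently available) can be captured in a small data structure. The sketch below illustrates the idea; the concrete encodings and renderer behaviour are assumptions, not the thesis's exact designs.

        // One abstract icon, two interchangeable renderings (illustrative sketch).
        #include <iostream>
        #include <vector>

        struct CrossmodalIcon {
            std::vector<int> rhythmMs;  // pulse durations in ms (shared dimension)
            int texture;                // 0 = smooth, 1 = rough (amplitude-modulated)
            int spatialLocation;        // 0 = left, 1 = centre, 2 = right
        };

        // Placeholder renderers: real ones would drive audio output or tactile actuators.
        void renderAudio(const CrossmodalIcon& icon) {
            std::cout << "audio icon: " << icon.rhythmMs.size() << " pulses, location "
                      << icon.spatialLocation << "\n";
        }
        void renderTactile(const CrossmodalIcon& icon) {
            std::cout << "tactile icon: " << icon.rhythmMs.size() << " pulses, location "
                      << icon.spatialLocation << "\n";
        }

        // Present the same icon through whichever modality suits the situation.
        void presentIcon(const CrossmodalIcon& icon, bool deviceInPocket) {
            if (deviceInPocket) renderTactile(icon);
            else                renderAudio(icon);
        }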

    It Sounds Cool: Exploring Sonification of Mid-Air Haptic Textures Exploration on Texture Judgments, Body Perception, and Motor Behaviour

    Ultrasonic mid-air haptic technology allows for the perceptual rendering of textured surfaces onto the user's hand. Unlike real textured surfaces, however, mid-air haptic feedback lacks the implicit multisensory cues needed to reliably infer a texture's attributes (e.g., its roughness). In this paper, we combined mid-air haptic textures with congruent sound feedback to investigate how sonification could influence people's (1) explicit judgments of the texture attributes, (2) explicit sensations of their own hand, and (3) implicit motor behaviour during haptic exploration. Our results showed that audio cues (presented alone or combined with haptics) influenced participants' judgments of the texture attributes (roughness, hardness, moisture and viscosity), produced some hand sensations (the feeling of having a smoother, softer, looser, more flexible, colder, wetter and more natural hand), and changed participants' speed (moving faster or slower) while exploring the texture. We then conducted a principal component analysis to better understand and visualize these results, and we conclude with a short discussion on how audio-haptic associations can be used to create embodied experiences in emerging application scenarios in the metaverse.
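
    A congruent audio-haptic texture pair can be expressed as a coupled parameter mapping: one scalar texture attribute drives both the mid-air haptic modulation and the accompanying sound. The sketch below illustrates such a coupling for a roughness parameter; the parameter ranges and the specific coupling are assumptions for illustration, not the paper's stimulus design.

        // Couple one roughness value to haptic and audio parameters (illustrative).
        struct AudioHapticTexture {
            double hapticModulationHz;  // amplitude-modulation rate of the focal point
            double audioCutoffHz;       // low-pass cutoff of the accompanying sound
        };

        // roughness01: 0 = smoothest, 1 = roughest (assumed monotonic coupling).
        AudioHapticTexture makeTexture(double roughness01) {
            return AudioHapticTexture{
                40.0 + roughness01 * 160.0,    // assumed ultrasound AM-rate range
                500.0 + roughness01 * 3500.0   // assumed audio filter-cutoff range
            };
        }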

    Somatic ABC's: A Theoretical Framework for Designing, Developing and Evaluating the Building Blocks of Touch-Based Information Delivery

    Situations of sensory overload are steadily becoming more frequent as the ubiquity of technology approaches reality, particularly with the advent of socio-communicative smartphone applications and pervasive, high-speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities, those that today's computerized devices and displays largely engage, have become overloaded, creating possibilities for distraction, delay and high cognitive load; this in turn can lead to a loss of situational awareness and increase the chances of life-threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate given that the skin is our largest sensory organ, with impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations, including high learning curves, limited applicability and/or limited expression. This is largely due to the lack of a versatile, comprehensive design theory, specifically one that addresses the design of touch-based building blocks for expandable, efficient, rich and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified theoretical framework inspired by natural, spoken language is proposed, called Somatic ABC's, for Articulating (designing), Building (developing) and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation and evaluation theories were applied to create communication languages for two very different application areas: audio-described movies and motor learning. These applications were chosen because they presented opportunities for complementing communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development and evaluation of rich somatic languages with distinct and natural communication units.
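
    The "building block" idea at the core of Somatic ABC's, primitive tactile units composed into larger, word-like messages much as phonemes compose spoken words, can be sketched as a small data structure. The primitive set and composition rule below are purely illustrative assumptions; the dissertation defines its own units and rules.

        // Tactile primitives composed into word-like messages (illustrative sketch).
        #include <vector>

        struct TactilePrimitive {
            int actuator;      // which body-site actuator fires
            int durationMs;    // pulse length in milliseconds
            double intensity;  // drive amplitude, 0..1
        };

        using TactileWord = std::vector<TactilePrimitive>;

        // Compose a message by concatenating primitives, like letters into a word.
        TactileWord compose(const std::vector<TactilePrimitive>& alphabet,
                            const std::vector<int>& letterIndices) {
            TactileWord word;
            for (int idx : letterIndices) word.push_back(alphabet.at(idx));
            return word;
        }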