75 research outputs found

    Haptic and audio interaction design

    Haptics, audio and human-computer interaction are three scientific disciplines that share interests, issues and methodologies. Despite these common points, interactions between these communities are sparse, because each of them has its own publication venues, meeting places, etc. A venue to foster interaction between these three communities, the Haptic and Audio Interaction Design workshop (HAID), was created in 2006 to provide a meeting place for researchers in these areas. HAID was held yearly from 2006 to 2013, then discontinued. Having worked at the intersection of these areas for several years, we felt the need to revive this event and decided to organize a HAID edition in 2019 in Lille, France. HAID 2019 was attended by more than 100 university, industry and artistic researchers and practitioners, showing the continued interest in such a unique venue. This special issue gathers extended versions of a selection of papers presented at the 2019 workshop. These papers cover several directions of research on haptics, audio and HCI, including perceptual studies and the design, evaluation and use of vibrotactile and force-feedback devices in audio, musical and game applications.

    Crossmodal audio and tactile interaction with mobile touchscreens

    Touchscreen mobile devices often use cut-down versions of desktop user interfaces, placing high demands on the visual sense that may prove awkward in mobile settings. The research in this thesis addresses the problems encountered by situationally impaired mobile users by using crossmodal interaction to exploit the abundant similarities between the audio and tactile modalities. By making information available to both senses, users can receive the information in the most suitable way, without having to abandon their primary task to look at the device. This thesis begins with a literature review of related work, followed by a definition of crossmodal icons. Two icons may be considered crossmodal if and only if they provide a common representation of data which is accessible interchangeably via different modalities. Two experiments investigated possible parameters for use in crossmodal icons, with results showing that rhythm, texture and spatial location are effective. A third experiment focused on learning multi-dimensional crossmodal icons and the extent to which this learning transfers between modalities. The results showed identification rates of 92% for three-dimensional audio crossmodal icons when trained with the tactile equivalents, and identification rates of 89% for tactile crossmodal icons when trained with the audio equivalents. Crossmodal icons were then incorporated into a mobile touchscreen QWERTY keyboard. Experiments showed that keyboards with audio or tactile feedback produce fewer errors and greater text entry speeds than standard touchscreen keyboards. The next study examined how environmental variables affect user performance with the same keyboard. The data showed that each modality performed differently under varying levels of background noise or vibration, and the exact levels at which these performance decreases occur were established. The final study involved a longitudinal evaluation of a touchscreen application, CrossTrainer, focusing on longitudinal effects on performance with audio and tactile feedback, the impact of context on performance, and personal modality preference. The results show that crossmodal audio and tactile icons are a valid method of presenting information to situationally impaired mobile touchscreen users, with recognition rates of 100% over time. This thesis concludes with a set of guidelines on the design and application of crossmodal audio and tactile feedback to enable application and interface designers to employ such feedback in all systems.
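    As a rough illustration of the crossmodal icon concept described above, the sketch below (hypothetical names, not code from the thesis) shows one way the three shared parameters, rhythm, texture and spatial location, could be mapped interchangeably onto audio and tactile presentation parameters.

```python
# Hypothetical sketch of a crossmodal icon: the same three parameters can be
# rendered either as an audio Earcon or as a tactile Tacton.
from dataclasses import dataclass

@dataclass
class CrossmodalIcon:
    rhythm: tuple          # e.g. inter-onset intervals in milliseconds
    texture: str           # e.g. "smooth" or "rough"
    location: str          # e.g. "left", "middle", "right"

    def as_earcon(self) -> dict:
        """Map the shared parameters onto audio presentation parameters."""
        timbre = {"smooth": "sine", "rough": "sawtooth"}[self.texture]
        pan = {"left": -1.0, "middle": 0.0, "right": 1.0}[self.location]
        return {"onsets_ms": self.rhythm, "timbre": timbre, "stereo_pan": pan}

    def as_tacton(self) -> dict:
        """Map the same parameters onto vibrotactile presentation parameters."""
        waveform = {"smooth": "constant", "rough": "amplitude_modulated"}[self.texture]
        actuator = {"left": 0, "middle": 1, "right": 2}[self.location]
        return {"onsets_ms": self.rhythm, "waveform": waveform, "actuator_index": actuator}

icon = CrossmodalIcon(rhythm=(120, 120, 240), texture="rough", location="left")
print(icon.as_earcon())
print(icon.as_tacton())
```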

    DOKY: A Multi-Modal User Interface for Non-Visual Presentation, Navigation and Manipulation of Structured Documents on Mobile and Wearable Devices

    There are a large number of highly structured documents available on the Internet. The logical document structure is very important for readers in order to handle the document content efficiently. In graphical user interfaces, each logical structure element is presented by a specific visualisation, a graphical icon. This representation allows visual readers to recognise the structure at a glance. Another advantage is that it enables direct navigation and manipulation. Blind and visually impaired persons are unable to use graphical user interfaces, and for the emerging category of mobile and wearable devices, where only small visual displays are available or no visual display at all, a non-visual alternative is also required. A multi-modal user interface for non-visual presentation, navigation and manipulation of structured documents on mobile and wearable devices such as smart phones, smart watches or smart tablets has been developed as a result of inductive research among 205 blind and visually impaired participants. It enables the user to get a fast overview of the document structure and to efficiently skim and scan the document content by identifying the type, level, position, length, relationship and content text of each element, as well as to focus, select, activate, move, remove and insert structure elements or text. These interactions are presented in a non-visual way using Earcons, Tactons and synthetic speech utterances, serving the auditory and tactile human senses. Navigation and manipulation are provided using the multi-touch, motion (linear acceleration and rotation) or speech recognition input modalities. It is a complete solution for reading, creating and editing structured documents in a non-visual way, and no special hardware is required. The name DOKY is derived from a short form of the terms document and accessibility. A flexible, platform-independent and event-driven software architecture implementing the DOKY user interface is presented, together with the automated structured observation research method employed to investigate the effectiveness of the proposed user interface. Because it is platform- and language-neutral, it can be used in a wide variety of platforms, environments and applications for mobile and wearable devices. Each component is defined by interfaces and abstract classes only, so that it can be easily changed or extended, and is grouped in a semantically self-contained package. An investigation into the effectiveness of the proposed DOKY user interface has been carried out to see whether the proposed user interface and user interaction design concepts are effective means for non-visual presentation, navigation and manipulation of structured documents on mobile and wearable devices, by automated structured observations of 876 blind and visually impaired research subjects performing 19 exercises on a highly structured example document using the DOKY Structured Observation App on their own mobile or wearable device remotely over the Internet. The results showed that the proposed user interface design concepts for presentation and navigation and the user interaction design concepts for manipulation are effective, and that their effectiveness depends on the input modality and hardware device employed as well as on the use of screen readers.
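    As a loose illustration of the interface-only, event-driven component design described above, the sketch below (hypothetical names, not the DOKY code base) shows how a focused structure element could be announced through whichever output modalities are registered.

```python
# Hypothetical sketch: structure elements are presented through abstract output
# modalities (speech, earcons, ...) registered with an event-driven presenter.
from abc import ABC, abstractmethod

class StructureElement:
    def __init__(self, kind: str, level: int, text: str):
        self.kind, self.level, self.text = kind, level, text

class OutputModality(ABC):
    @abstractmethod
    def present(self, element: StructureElement) -> None: ...

class SpeechOutput(OutputModality):
    def present(self, element: StructureElement) -> None:
        print(f"[speech] {element.kind} level {element.level}: {element.text}")

class EarconOutput(OutputModality):
    def present(self, element: StructureElement) -> None:
        print(f"[earcon] play motif for '{element.kind}'")

class DocumentPresenter:
    """Dispatches focus events to every registered output modality."""
    def __init__(self):
        self._outputs = []
    def register(self, output: OutputModality) -> None:
        self._outputs.append(output)
    def on_focus(self, element: StructureElement) -> None:
        for output in self._outputs:
            output.present(element)

presenter = DocumentPresenter()
presenter.register(SpeechOutput())
presenter.register(EarconOutput())
presenter.on_focus(StructureElement("heading", 2, "Related Work"))
```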

    Printgets: an Open-Source Toolbox for Designing Vibrotactile Widgets with Industrial-Grade Printed Actuators and Sensors

    New technologies for printing sensors and actuators combine the flexibility of touchscreen interface layouts with localized vibrotactile feedback, but their fabrication still requires industrial-grade facilities. Until these technologies become easily replicable, interaction designers need material for ideation. We propose an open-source hardware and software toolbox providing maker-grade tools for iterative design of vibrotactile widgets with industrial-grade printed sensors and actuators. Our hardware toolbox provides a mechanical structure to clamp and stretch printed sheets, and electronic boards to drive sensors and actuators. Our software toolbox expands the design space of haptic interaction techniques by reusing the wide palette of available audio processing algorithms to generate real-time vibrotactile signals. We validate our toolbox with the implementation of three exemplar interface elements with tactile feedback: buttons, sliders and touchpads.
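    As a minimal illustration of the idea of reusing audio synthesis to drive vibrotactile actuators, the sketch below (not part of the Printgets toolbox; frequency and duration are illustrative assumptions) generates a short enveloped sine burst as a buffer that could be sent to an actuator the same way an audio buffer is sent to a sound card.

```python
# Hypothetical sketch: synthesize a brief vibrotactile burst with ordinary
# audio DSP primitives (sine carrier plus a smooth amplitude envelope).
import numpy as np

def vibrotactile_burst(freq_hz=250.0, duration_s=0.05, sample_rate=48000):
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    carrier = np.sin(2 * np.pi * freq_hz * t)
    envelope = np.hanning(t.size)   # smooth attack/decay to avoid harsh onsets
    return (carrier * envelope).astype(np.float32)

buffer = vibrotactile_burst()
print(buffer.shape, float(buffer.min()), float(buffer.max()))
```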

    Somatic ABC's: A Theoretical Framework for Designing, Developing and Evaluating the Building Blocks of Touch-Based Information Delivery

    Situations of sensory overload are steadily becoming more frequent as the ubiquity of technology approaches reality, particularly with the advent of socio-communicative smartphone applications and pervasive, high-speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities, the modalities that today's computerized devices and displays largely engage, have become overloaded, creating possibilities for distraction, delay and high cognitive load, which in turn can lead to a loss of situational awareness and increase the chances of life-threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate given that the skin is our largest sensory organ, with impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations, including high learning curves, limited applicability and/or limited expressiveness. This is largely due to the lack of a versatile, comprehensive design theory: specifically, a theory that addresses the design of touch-based building blocks for expandable, efficient, rich and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified theoretical framework inspired by natural, spoken language is proposed, called Somatic ABC's, for Articulating (designing), Building (developing) and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation and evaluation theories were applied to create communication languages for two very distinct application areas: audio-described movies and motor learning. These applications were chosen because they presented opportunities for complementing communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development and evaluation of rich somatic languages with distinct and natural communication units.

    Visual-Tactile Image Representation For The Visually Impaired Using Braille Device

    Nowadays, Internet usage is increasing dramatically all over the world, and information dissemination and acquisition is easy for sighted users. Unfortunately, visually impaired users still face difficulties in interacting with websites. In particular, screen readers are unable to help these users identify images such as basic geometric shapes. The inability to identify the shapes displayed on the screen restricts visually impaired users from interacting with and comprehending the content of websites. This project therefore examines earlier research and eases web interaction for blind people by identifying the shape of a visual image and converting it into a tactile representation on a Braille device. To further investigate the hypotheses, qualitative and quantitative methods are used. The study findings are used to build a system that tackles the issue that screen readers are unable to address. System evaluation, comprising user testing, is carried out once the prototype of the system has been produced. The system is expected to improve understanding of webpage content and enhance the interaction of visually impaired users with the web. Future recommendations and further findings will be discussed once the system prototype milestone is fulfilled.
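    As a purely hypothetical illustration of the final step described above, the sketch below maps an already-recognised basic shape onto a coarse dot pattern that a refreshable tactile display could raise; the patterns and cell size are illustrative assumptions, not part of the described system.

```python
# Hypothetical sketch: translate a recognised shape name into a small matrix of
# raised/lowered pin states for a tactile display.
SHAPE_PATTERNS = {
    "square":   ["########",
                 "#      #",
                 "#      #",
                 "########"],
    "triangle": ["   ##   ",
                 "  #  #  ",
                 " #    # ",
                 "########"],
}

def to_dot_matrix(shape: str):
    """Return the pattern as rows of 0/1 pin states (1 = pin raised)."""
    return [[1 if ch == "#" else 0 for ch in row] for row in SHAPE_PATTERNS[shape]]

for row in to_dot_matrix("triangle"):
    print(row)
```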

    Non-visual feedback for pen-based interaction with digital graphs


    Interactive maps for visually impaired people : design, usability and spatial cognition

    Knowing the geography of an urban environment is crucial for visually impaired people. Tactile relief maps are generally used, but they have significant limitations (limited amount of information, reliance on a braille legend, etc.). Recent technological progress allows the development of innovative solutions that overcome these limitations. In this thesis, we present the design of an accessible interactive map developed through a participatory design process. This map combines a multi-touch screen with a tactile map overlay and speech output. It provides auditory information when the user double-taps on map elements. We demonstrated in an experiment that our prototype was more effective and more satisfying for visually impaired users than a simple raised-line map. We also explored and tested different types of advanced non-visual interaction for exploring the map. This thesis demonstrates the importance of interactive tactile maps for visually impaired people and their spatial cognition.
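    As a minimal illustration of the core interaction described above, the sketch below (hypothetical data, not the prototype's code) hit-tests a double tap against map element bounding boxes and returns the description that would be passed to speech output.

```python
# Hypothetical sketch: map a double-tap position on the tactile overlay to a
# map element, whose name would then be spoken aloud.
MAP_ELEMENTS = [
    {"name": "town hall", "bbox": (10, 10, 60, 40)},  # (x_min, y_min, x_max, y_max) in mm
    {"name": "bus stop",  "bbox": (80, 20, 95, 35)},
]

def element_at(x: float, y: float):
    for element in MAP_ELEMENTS:
        x0, y0, x1, y1 = element["bbox"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return element
    return None

def on_double_tap(x: float, y: float) -> str:
    element = element_at(x, y)
    return element["name"] if element else "no map element here"

print(on_double_tap(85, 25))   # -> "bus stop"
```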

    Voice and Touch Diagrams (VATagrams): Diagrams for the Visually Impaired

    If a picture is worth a thousand words, would you rather read two pages of text or simply view the image? Most would choose to view the image; however, for the visually impaired this isn't always an option. Diagrams assist people in visualizing relationships between objects. Most often these diagrams act as a source for quickly referencing information about relationships. Diagrams are highly visual, and as such there are few tools to support diagram creation by visually impaired individuals. To allow the visually impaired to share the same advantages in school and work as their sighted colleagues, an accessible diagram tool is needed. A suitable tool for the visually impaired to create diagrams should allow these individuals to: 1. easily define the type of relationship-based diagram to be created, 2. easily create the components of a relationship-based diagram, 3. easily modify the components of a relationship-based diagram, 4. quickly understand the structure of a relationship-based diagram, 5. create a visual representation which can be used by the sighted, and 6. easily access reference points for tracking diagram components. To do this, a series of prototypes of a tool were developed that allow visually impaired users to read, create, modify and share relationship-based diagrams using sound and gestural touches. This was accomplished by creating a series of applications that can be run on an iPad using an overlay that restricts the areas in which a user can perform gestures. These prototypes were tested for usability using measures of efficiency, effectiveness and satisfaction. The prototypes were tested with visually impaired, blindfolded and sighted participants. The results of the evaluation indicate that the prototypes contain the main building blocks needed to complete a fully functioning application for the iPad.
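    As a rough illustration of the kind of relationship-based diagram structure described above, the sketch below (hypothetical, not the VATagrams prototypes) holds nodes and labelled edges and produces the sort of spoken summary a non-visual reader could navigate.

```python
# Hypothetical sketch: a relationship-based diagram as nodes plus labelled
# edges, with a helper that summarises a node's outgoing relationships.
class Diagram:
    def __init__(self):
        self.nodes = set()
        self.edges = []            # (source, label, target)

    def add_node(self, name: str):
        self.nodes.add(name)

    def relate(self, source: str, label: str, target: str):
        self.edges.append((source, label, target))

    def describe(self, node: str) -> str:
        outgoing = [f"{label} {target}" for s, label, target in self.edges if s == node]
        return f"{node}: " + ("; ".join(outgoing) if outgoing else "no outgoing relationships")

d = Diagram()
for n in ("Customer", "Order", "Invoice"):
    d.add_node(n)
d.relate("Customer", "places", "Order")
d.relate("Order", "generates", "Invoice")
print(d.describe("Customer"))    # -> "Customer: places Order"
```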