160 research outputs found

    Crossmodal audio and tactile interaction with mobile touchscreens

    Touchscreen mobile devices often use cut-down versions of desktop user interfaces that place high demands on the visual sense, which may prove awkward in mobile settings. The research in this thesis addresses the problems encountered by situationally impaired mobile users by using crossmodal interaction to exploit the abundant similarities between the audio and tactile modalities. By making information available to both senses, users can receive the information in the most suitable way, without having to abandon their primary task to look at the device. This thesis begins with a literature review of related work, followed by a definition of crossmodal icons: two icons may be considered crossmodal if and only if they provide a common representation of data that is accessible interchangeably via different modalities. Two experiments investigated possible parameters for use in crossmodal icons, with results showing that rhythm, texture and spatial location are effective. A third experiment focused on learning multi-dimensional crossmodal icons and the extent to which this learning transfers between modalities. The results showed identification rates of 92% for three-dimensional audio crossmodal icons when trained with the tactile equivalents, and of 89% for tactile crossmodal icons when trained with the audio equivalents. Crossmodal icons were then incorporated into a mobile touchscreen QWERTY keyboard. Experiments showed that keyboards with audio or tactile feedback produce fewer errors and faster text entry than standard touchscreen keyboards. The next study examined how environmental variables affect user performance with the same keyboard. The data showed that each modality performs differently under varying levels of background noise or vibration, and the levels at which these performance decreases occur were established.
The final study involved a longitudinal evaluation of a touchscreen application, CrossTrainer, focusing on long-term effects on performance with audio and tactile feedback, the impact of context on performance, and personal modality preference. The results show that crossmodal audio and tactile icons are a valid method of presenting information to situationally impaired mobile touchscreen users, with recognition rates of 100% over time. This thesis concludes with a set of guidelines on the design and application of crossmodal audio and tactile feedback to enable application and interface designers to employ such feedback in all systems.
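The crossmodal-icon definition above can be made concrete as a small data structure: one shared parameter set (rhythm, texture, spatial location) rendered to either modality. The sketch below is a minimal illustration under assumed names and parameter values; it is not the thesis's actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CrossmodalIcon:
    """One abstract message, renderable interchangeably as audio or tactile.

    All field and key names here are illustrative assumptions.
    """
    rhythm: tuple    # pulse durations in ms, shared by both renderings
    texture: str     # e.g. "smooth" or "rough"
    location: str    # spatial position: "left", "centre", "right"

    def to_audio(self) -> dict:
        # Map the shared parameters onto audio dimensions (earcon-style).
        return {"modality": "audio",
                "note_durations_ms": self.rhythm,
                "timbre": self.texture,
                "stereo_pan": self.location}

    def to_tactile(self) -> dict:
        # Same parameters mapped onto vibrotactile dimensions (tacton-style).
        return {"modality": "tactile",
                "pulse_durations_ms": self.rhythm,
                "waveform_roughness": self.texture,
                "actuator_position": self.location}

# Both renderings derive from one underlying representation, which is what
# makes the icon "crossmodal" in the sense defined above.
icon = CrossmodalIcon(rhythm=(120, 120, 360), texture="rough", location="left")
```

Because the two renderings share every parameter, training a user on one modality plausibly transfers to the other, which is the effect the third experiment measures.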

    Mapping information to audio and tactile icons

    We report the results of a study focusing on the meanings that can be conveyed by audio and tactile icons. Our research considers the following question: how can audio and tactile icons be designed to optimise congruence between crossmodal feedback and the type of information this feedback is intended to convey? For example, given a set of system warnings, confirmations, progress updates and errors: what audio and tactile representations best match each type of message? Is one modality more appropriate than the other for presenting certain types of information? The results of this study indicate that certain parameters of the audio and tactile modalities, such as rhythm, texture and tempo, play an important role in the creation of congruent sets of feedback for a given type of information. We argue that a combination of audio or tactile parameters derived from our results allows the same type of information to be conveyed through touch and sound with an intuitive match to the content of the message.
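In effect, the design question above amounts to building a lookup from message type to one shared parameter set used by both renderers. The sketch below is a hypothetical illustration: the specific pairings are placeholders, not the study's empirically derived mapping.

```python
# Hypothetical congruence table: message type -> shared feedback parameters.
# The pairings below are placeholders; the study derives such pairings
# empirically from user judgements of congruence.
CONGRUENCE = {
    "warning":         {"rhythm": "syncopated", "texture": "rough",  "tempo": "fast"},
    "error":           {"rhythm": "repeating",  "texture": "rough",  "tempo": "fast"},
    "confirmation":    {"rhythm": "two-pulse",  "texture": "smooth", "tempo": "slow"},
    "progress-update": {"rhythm": "periodic",   "texture": "smooth", "tempo": "medium"},
}

def feedback_for(message_type: str) -> dict:
    """Return the single parameter set used for both audio and tactile rendering."""
    return CONGRUENCE[message_type]
```

Because the audio and the tactile renderer read the same table entry, the two presentations of a message remain congruent by construction, which is the property the study aims to optimise.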

    Haptic and audio interaction design

    Haptics, audio and human-computer interaction are three scientific disciplines that share interests, issues and methodologies. Despite these common points, interaction between these communities is sparse, because each has its own publication venues, meeting places, etc. A venue to foster interaction between these three communities was created in 2006: the Haptic and Audio Interaction Design workshop (HAID), aiming to provide a meeting place for researchers in these areas. HAID was held yearly from 2006 to 2013, then discontinued. Having worked at the intersection of these areas for several years, we felt the need to revive this event and decided to organize a HAID edition in 2019 in Lille, France. HAID 2019 was attended by more than 100 university, industry and artistic researchers and practitioners, showing the continued interest in such a unique venue. This special issue gathers extended versions of a selection of papers presented at the 2019 workshop. These papers cover several directions of research on haptics, audio and HCI, including perceptual studies and the design, evaluation and use of vibrotactile and force-feedback devices in audio, musical and game applications.

    Is Multimedia Multisensorial? - A Review of Mulsemedia Systems

    © 2018 Copyright held by the owner/author(s). Mulsemedia - multiple sensorial media - makes possible the inclusion of layered sensory stimulation and interaction through multiple sensory channels. The recent upsurge in technology and wearables provides mulsemedia researchers a vehicle for potentially boundless choice. However, in order to build systems that integrate various senses, there are still some issues that need to be addressed. This review deals with mulsemedia topics that remained insufficiently explored by previous work, with a focus on the multi-multi (multiple media - multiple senses) perspective, where multiple types of media engage multiple senses. Moreover, it addresses the evolution of previously identified challenges in this area and formulates new exploration directions. This article was funded by the European Union's Horizon 2020 Research and Innovation program under Grant Agreement no. 688503.

    Multimodal flexibility in a mobile text entry task

    The mobile usability of an interface depends on the amount of information a user is able to retrieve or transmit while on the move. Furthermore, the information transmission capacity and successful transmissions depend on how flexibly usable the interface is across varying real-world contexts. Research on multimodal flexibility has mostly focused on the facilitation of modalities in the interface, and most evaluative studies have measured the effects that concurrent interactions cause to each other. However, assessing these effects under a limited number of conditions does not generalize to other possible conditions in the real world. Moreover, studies have often compared single-task conditions to dual-tasking, measuring the trade-off between the tasks rather than the actual effects the interactions cause. To contribute to the paradigm of measuring multimodal flexibility, this thesis isolates the effect of modality utilization in the interaction with the interface: instead of using a secondary task, modalities are withdrawn from the interaction. The multimodal flexibility method [1] was applied in this study to assess the utilization of three sensory modalities (vision, audition and tactition) in a text input task with three mobile interfaces: a 12-digit keypad (ITU-12), a physical Qwerty keyboard and a touchscreen virtual Qwerty keyboard.
The goal of the study was to compare the multimodal flexibility of these interfaces, assess the value of each sensory modality to the interaction, and examine the cooperation of modalities in a text input task. The results imply that the alphabetical 12-digit keypad is the most multimodally flexible of the three interfaces: although it is relatively inefficient for typing when all modalities are free to be allocated to the interaction, it is the most flexible in performing under the constraints that the real world may set on sensory modalities. In addition, all the interfaces were shown to be highly dependent on vision. The performance of both Qwerty keyboards dropped by approximately 80% when vision was withdrawn from the interaction, while the performance of the ITU-12 suffered by approximately 50%. Examining the cooperation of the modalities in the text input task, vision was shown to work in synergy with tactition, but audition did not provide any extra value for the interaction.