
    Crossmodal audio and tactile interaction with mobile touchscreens

    Touchscreen mobile devices often use cut-down versions of desktop user interfaces, placing high demands on the visual sense that may prove awkward in mobile settings. The research in this thesis addresses the problems encountered by situationally impaired mobile users by using crossmodal interaction to exploit the many similarities between the audio and tactile modalities. By making information available to both senses, users can receive it in the most suitable way without having to abandon their primary task to look at the device. The thesis begins with a literature review of related work, followed by a definition of crossmodal icons: two icons are crossmodal if and only if they provide a common representation of data that is accessible interchangeably via different modalities. Two experiments investigated possible parameters for crossmodal icons, with results showing that rhythm, texture and spatial location are effective. A third experiment focused on learning multi-dimensional crossmodal icons and the extent to which this learning transfers between modalities. The results showed identification rates of 92% for three-dimensional audio crossmodal icons when participants were trained on the tactile equivalents, and of 89% for tactile crossmodal icons when trained on the audio equivalents. Crossmodal icons were then incorporated into a mobile touchscreen QWERTY keyboard. Experiments showed that keyboards with audio or tactile feedback produce fewer errors and faster text entry than standard touchscreen keyboards. The next study examined how environmental variables affect user performance with the same keyboard: each modality performed differently under varying levels of background noise or vibration, and the exact levels at which these performance decreases occur were established. The final study was a longitudinal evaluation of a touchscreen application, CrossTrainer, focusing on longitudinal effects on performance with audio and tactile feedback, the impact of context on performance, and personal modality preference. The results show that crossmodal audio and tactile icons are a valid method of presenting information to situationally impaired mobile touchscreen users, with recognition rates of 100% over time. The thesis concludes with a set of guidelines on the design and application of crossmodal audio and tactile feedback to enable application and interface designers to employ such feedback in their systems.
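    The core idea above, a single shared parameter set (rhythm, texture, spatial location) rendered interchangeably as an audio or a tactile cue, can be sketched as a small data structure. The following is a minimal illustration only; all names and the specific parameter mappings are assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch of a crossmodal icon: one shared representation that
# can be rendered to either the audio or the tactile modality. The concrete
# mappings (texture -> timbre/waveform, location -> pan/actuator) are
# invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class CrossmodalIcon:
    rhythm: tuple   # pulse durations in ms, identical across modalities
    texture: str    # "smooth" or "rough"
    location: str   # "left", "middle", or "right"

    def to_audio(self) -> dict:
        # Assumed mapping: texture drives timbre, location drives stereo pan.
        timbre = {"smooth": "sine", "rough": "sawtooth"}[self.texture]
        pan = {"left": -1.0, "middle": 0.0, "right": 1.0}[self.location]
        return {"durations_ms": self.rhythm, "timbre": timbre, "pan": pan}

    def to_tactile(self) -> dict:
        # Assumed mapping: texture drives the vibrotactile waveform,
        # location selects one of three actuators on the device.
        waveform = {"smooth": "steady", "rough": "amplitude_modulated"}[self.texture]
        actuator = {"left": 0, "middle": 1, "right": 2}[self.location]
        return {"durations_ms": self.rhythm, "waveform": waveform,
                "actuator": actuator}

# The same icon yields an equivalent cue in either modality:
icon = CrossmodalIcon(rhythm=(200, 100, 400), texture="rough", location="left")
print(icon.to_audio())
print(icon.to_tactile())
```

    Because both renderings derive from one representation, training in one modality can transfer to the other, which is the property the transfer experiments above measure.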

    Music Listening, Music Therapy, Phenomenology and Neuroscience

    An integrative computational modelling of music structure apprehension

    Multi-Sensory Interaction for Blind and Visually Impaired People

    This book conveys the visual elements of artwork to visually impaired people through various sensory elements, opening a new perspective for appreciating visual artwork. It explores a technique for expressing a color code by integrating patterns, temperatures, scents, music, and vibrations, and presents future research topics. A holistic experience using multi-sensory interaction conveys the meaning and contents of a work through rich multi-sensory appreciation. A method that allows people with visual impairments to engage with artwork through a variety of senses, including touch, temperature, tactile pattern, and sound, helps them appreciate artwork at a deeper level than hearing or touch alone can achieve. The development of such art appreciation aids for visually impaired people will ultimately improve their cultural enjoyment and strengthen their access to culture and the arts. These new aids also expand opportunities for sighted as well as visually impaired people to enjoy works of art and, through continuous efforts to enhance accessibility, break down the boundaries between disabled and non-disabled people in the field of culture and the arts. In addition, the multi-sensory expression and delivery tool developed here can be used as an educational tool to increase the accessibility and usability of products and artwork through multi-modal interaction. Training the multi-sensory experiences introduced in this book may lead to more vivid visual imagery, or seeing with the mind’s eye.
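    The color code described above is, in effect, a lookup from each color to a bundle of non-visual cues. The sketch below shows the shape of such a mapping; the specific pairings are invented placeholders, not the book's actual encodings.

```python
# Illustrative multi-sensory color code: each color maps to a tactile
# pattern, a temperature cue, a scent, and a musical motif. All concrete
# pairings here are assumptions for demonstration purposes.
MULTISENSORY_COLOR_CODE = {
    "red":    {"pattern": "dense dots", "temperature_c": 40,
               "scent": "cinnamon", "sound": "trumpet"},
    "blue":   {"pattern": "wavy lines", "temperature_c": 15,
               "scent": "mint", "sound": "flute"},
    "yellow": {"pattern": "small grid", "temperature_c": 30,
               "scent": "lemon", "sound": "glockenspiel"},
}

def describe_color(color: str) -> str:
    # Compose the non-visual cues into a single description of the color.
    cues = MULTISENSORY_COLOR_CODE[color]
    return (f"{color}: raised {cues['pattern']}, surface at "
            f"{cues['temperature_c']} °C, {cues['scent']} scent, "
            f"{cues['sound']} motif")

for color in MULTISENSORY_COLOR_CODE:
    print(describe_color(color))
```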

    ESCOM 2017 Proceedings

    Measuring the effects of display design and individual differences on the utilization of multi-stream sonifications

    Previous work in the auditory display community has discussed the impact of both display design and individual listener differences on how successfully listeners can use a sonification. This dissertation extends past findings and explores the effects of display and individual differences on listeners’ ability to use a sonification for an analytical listening task when multiple variables are presented simultaneously. This is a more complicated task that pushes listeners’ perceptual abilities, but it is necessary when sonifications are to display more detailed information about a dataset. The study used a two-by-two between-subjects design to measure the effects of display design and domain mapping. Acoustic parameters were assigned to either the weather or the health domain, and these mappings were either created by an expert sound designer or arbitrarily assigned. Because the acoustic parameters were originally selected for the weather domain, those display conditions were expected to yield higher listener accuracy. Results showed that the expert-mapped weather sonification led to higher mean listener accuracy than the arbitrarily mapped health display when listeners had no time to practice; however, with less than an hour of practice the significant main effects of design and domain mapping disappeared and mean accuracy scores rose to a similar level. The dissertation introduces two models for predicting listener accuracy: the first uses musical sophistication and self-reported motivation scores to predict accuracy on the task before practice, and the second adds listening discrimination scores to predict accuracy on the sonification task after practice.
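    The two models described above have the form of regressions of accuracy on individual-difference scores. The sketch below fits both with ordinary least squares on synthetic data; the predictor scales, coefficients, and data are invented placeholders, and the dissertation's actual fitted models may differ in form and values.

```python
# Hedged sketch of the two accuracy-prediction models: OLS regressions on
# musical sophistication and motivation (pre-practice), with listening
# discrimination added for the post-practice model. All data are synthetic.
import numpy as np

def fit_ols(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Return OLS coefficients (intercept first) via least squares."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 40
sophistication = rng.uniform(0, 100, n)  # assumed Gold-MSI-style scale
motivation = rng.uniform(1, 7, n)        # assumed self-report Likert rating
discrimination = rng.uniform(0, 1, n)    # assumed proportion correct on a
                                         # listening discrimination test
accuracy_pre = (0.3 + 0.002 * sophistication + 0.04 * motivation
                + rng.normal(0, 0.05, n))
accuracy_post = accuracy_pre + 0.2 * discrimination + rng.normal(0, 0.05, n)

beta_pre = fit_ols(np.column_stack([sophistication, motivation]),
                   accuracy_pre)
beta_post = fit_ols(np.column_stack([sophistication, motivation,
                                     discrimination]), accuracy_post)
print("pre-practice model coefficients:", beta_pre)
print("post-practice model coefficients:", beta_post)
```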