
    How touch and hearing influence visual processing in sensory substitution, synaesthesia and cross-modal correspondences

    Sensory substitution devices (SSDs) systematically convert visual dimensions into patterns of tactile or auditory stimulation. After training, a user of these devices learns to translate these auditory or tactile sensations back into a mental visual picture. Most previous SSDs translate greyscale images using intuitive cross-sensory mappings to help users learn the devices. However, more recent SSDs have started to incorporate additional colour dimensions such as saturation and hue. Chapter two examines how previous SSDs have translated the complexities of colour into hearing or touch. The chapter explores whether colour is useful for SSD users, how SSD and veridical colour perception differ, and how optimal cross-sensory mappings might be determined. After long-term training, some blind users of SSDs report visual sensations from tactile or auditory stimulation. A related phenomenon is synaesthesia, a condition in which stimulation of one modality (e.g. touch) produces an automatic, consistent and vivid sensation in another modality (e.g. vision). Tactile-visual synaesthesia is an extremely rare variant that can shed light on how the tactile-visual system is altered when touch can elicit visual sensations. Chapter three reports a series of investigations into the tactile discrimination abilities and phenomenology of tactile-vision synaesthetes, alongside questionnaire data from synaesthetes unavailable for testing. Chapter four introduces a new SSD to test whether the presentation of colour information in sensory substitution affects object and colour discrimination. Chapter five presents experiments on intuitive auditory-colour mappings across a wide variety of sounds. These findings are used to predict the colour hallucinations reported when listening to these sounds under LSD. Chapter six uses a new sensory substitution device designed to test the utility of these intuitive sound-colour links for visual processing. These findings are discussed with reference to how cross-sensory links, LSD and synaesthesia can inform optimal SSD design for visual processing.

    Seeing a singer helps comprehension of the song's lyrics

    When listening to speech, we often benefit from also seeing the speaker talk. If this benefit is not domain-specific to speech, then the recognition of sung lyrics should likewise benefit from seeing the singer. Nevertheless, previous research failed to obtain a substantial improvement in that domain. Our study shows that this failure was due not to inherent differences between singing and speaking but to less informative visual presentations. By presenting a professional singer, we found a substantial audiovisual benefit of about 35% improvement in lyrics recognition. This benefit was also robust across participants, phrases, and repetitions of the test materials. Our results provide the first evidence that lyrics recognition, just like speech and music perception, is a multimodal process.

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for systems designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    Multisensory Integration Design in Music for Cochlear Implant Users

    Cochlear implant (CI) users experience several challenges when listening to music. However, their hearing abilities are highly diverse, and their musical experiences may vary significantly from one another. In this research, we investigate this diversity in CI users' musical experiences, preferences, and practices. We integrate multisensory feedback into their listening experiences to support the perception of specific musical features and elements. Four installations are implemented, each exploring a different sensory modality assisting or supporting CI users' listening experience. We study these installations through semi-structured and exploratory workshops with participants. We report the results of our process-oriented assessment of CI users' experience with music. Because the CI community is a minority participant group in music, its musical instrument design frameworks and practices differ from those of hearing cultures. We share guidelines for designing multisensory integration, derived from our studies with individual CI users and specifically aimed at enriching their experiences.

    Investigating the Effects of Physiology-driven Vibro-tactile Biofeedback for Mitigating State Anxiety during Public Speaking

    For some, public speaking can cause heightened moments of stress while giving a speech or presentation. These moments are quantifiable through one's physiology and vocal characteristics, measurable through sensor-enabled smart technology. Through these measurements, we can assess the current state of the individual to determine opportune moments to deliver interventions that alleviate symptoms of stressful moments. Recent work in wrist-worn vibrotactile biofeedback suggests that it is a promising intervention for reducing state-based anxiety during public speaking. However, since the vibrotactile stimulus is delivered constantly, adaptation could diminish its relieving effects. Therefore, we administer vibrotactile biofeedback as a just-in-time adaptive intervention during in-the-moment heightened levels of stress. We evaluate two types of vibrotactile feedback delivery mechanisms in a between-subjects design: one that delivers the stimulus randomly and one that delivers the stimulus during moments of heightened physiological reactivity, as determined by changes in electrodermal activity. The results from these interventions indicate that vibrotactile biofeedback administered during high physiological arousal appears to improve stress-related measures early on, but these effects diminish over time. However, we also observe no significant differences in self-reported state anxiety scores between experiment groups. In the latter half of this thesis, we explore methods for personalizing machine learning models that detect the onset of heightened moments of stress in real time. Results indicate that baseline-norming, fine-tuning on participant-specific data, and providing individual-specific trait information are all helpful techniques for improving stress-detection performance.

    The role of somatosensation in vocal motor control for singing

    Extensive research on the human voice, with its sensory and motor systems, has converged on the idea that the auditory system is critical for vocal production, yet recent reports suggest that the somatosensory system contributes more substantially to vocal motor control than currently recognized. This thesis assessed the modulatory influence of primary somatosensory cortex (S1) on vocal pitch-matching with transcranial magnetic stimulation (TMS), applied to right larynx-S1 and a dorsal-S1 control area in untrained singers. In Experiment I, participants sang before and after TMS with normal auditory feedback, whereas in Experiment II, auditory feedback was masked with noise. TMS showed no effects on singing in Experiment I. However, when auditory feedback was masked, larynx-S1 stimulation significantly improved both initial pitch accuracy and final pitch stability in contrast to dorsal-S1 stimulation. Positive effects of larynx-S1 stimulation on initial and final pitch accuracy were more pronounced in participants who sang less accurately prior to iTBS (intermittent theta-burst stimulation). Moreover, masking had more adverse effects on pitch control in participants with higher pitch-discrimination thresholds. Conversely, these participants also profited more from larynx-S1 stimulation in initial and final pitch accuracy. These data provide the first evidence for a critical involvement of larynx-S1 in pitch motor control, independent of prior singing experience.