Tactile cognition in rodents
Since the discovery 50 years ago of the precisely ordered representation of the whiskers in somatosensory cortex, the rodent tactile sensory system has been a fertile ground for the study of sensory processing. With the growing sophistication of touch-based behavioral paradigms, together with advances in neurophysiological methodology, a new approach is emerging. By posing increasingly complex perceptual and memory problems, in many cases analogous to human psychophysical tasks, investigators now explore the operations underlying rodent problem solving. We define the neural basis of tactile cognition as the transformation from a stage in which neuronal activity encodes elemental features, local in space and in time, to a stage in which neuronal activity is an explicit representation of the behavioral operations underlying the current task. Selecting a set of whisker-based behavioral tasks, we show that rodents achieve high-level performance through the workings of neuronal circuits that are accessible, decodable, and manipulable. As a means towards exploring tactile cognition, this review presents leading psychophysical paradigms and, where known, their neural correlates.
Engineering data compendium. Human perception and performance. User's guide
The concept underlying the Engineering Data Compendium was the product of a research and development program (Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.
Multisensory perception and decision-making with a new sensory skill
It is clear that people can learn a new sensory skill – a new way of mapping sensory inputs onto world states. It remains unclear how flexibly a new sensory skill can become embedded in multisensory perception and decision-making. To address this, we trained typically-sighted participants (N=12) to use a new echo-like auditory cue to distance in a virtual world, together with a noisy visual cue. Using model-based analyses, we tested for key markers of efficient multisensory perception and decision-making with the new skill. We found that twelve of fourteen participants learned to judge distance using the novel auditory cue. Their use of this new sensory skill showed three key features: (1) it enhanced the speed of timed decisions; (2) it largely resisted interference from a simultaneous digit span task; and (3) it integrated with vision in a Bayes-like manner to improve precision. We also show some limits following this relatively short training: precision benefits were lower than the Bayes-optimal prediction, and there was no forced fusion of signals. We conclude that people can already embed new sensory skills in flexible multisensory perception and decision-making after a short training period. A key application of these insights is to the development of sensory augmentation systems that can enhance human perceptual abilities in novel ways. The limitations we reveal (sub-optimality, lack of fusion) provide a foundation for further investigations of the limits of these abilities and their brain basis.
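The "Bayes-like" integration and "Bayes-optimal prediction" referred to in this abstract follow the standard minimum-variance cue-combination model, in which each cue is weighted by its reliability (inverse variance). A minimal sketch, with illustrative values not taken from the study:

```python
import math

def combine_cues(mu_a, sigma_a, mu_b, sigma_b):
    """Minimum-variance (Bayes-optimal) fusion of two independent Gaussian cues.

    Each cue is weighted by its reliability (1 / variance); the fused
    estimate is more precise than either cue alone.
    """
    r_a = 1.0 / sigma_a ** 2          # reliability of cue A
    r_b = 1.0 / sigma_b ** 2          # reliability of cue B
    w_a = r_a / (r_a + r_b)           # weight on cue A
    mu = w_a * mu_a + (1.0 - w_a) * mu_b
    sigma = math.sqrt(1.0 / (r_a + r_b))   # predicted (smaller) fused s.d.
    return mu, sigma

# Example: a reliable visual distance cue and a noisier auditory cue
mu, sigma = combine_cues(2.0, 0.5, 2.4, 1.0)
```

Here the fused standard deviation is below that of the better single cue, which is the precision benefit the study tested against; "sub-optimality" means the measured benefit fell short of this prediction.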
The Speed, Precision and Accuracy of Human Multisensory Perception following Changes to the Visual Sense
Human adults can combine information from multiple senses to improve their perceptual judgments. Visual and multisensory experience plays an important role in the development of multisensory integration; however, it is unclear to what extent changes in vision impact multisensory processing later in life. In particular, it is not known whether adults account for changes to the relative reliability of their senses following sensory loss, treatment or training. Using psychophysical methods, this thesis studied the multisensory processing of individuals experiencing changes to the visual sense. Chapters 2 and 3 assessed whether patients implanted with a retinal prosthesis (having been blinded by a retinal degenerative disease) could use this new visual signal with non-visual information to improve their speed or precision on multisensory tasks. Due to large differences between the reliabilities of the visual and non-visual cues, patients were not always able to benefit from the new visual signal. Chapter 4 assessed whether patients with degenerative visual loss adjust the weight given to visual and non-visual cues during audio-visual localization as their relative reliabilities change. Although some patients adjusted their reliance on vision across the visual field in line with predictions based on cue relative reliability, others - patients with visual loss limited to their central visual field only - did not. Chapter 5 assessed whether training with either more reliable or less reliable visual feedback could enable normally sighted adults to overcome an auditory localization bias. Findings suggest that visual information, irrespective of reliability, can be used to overcome at least some non-visual biases. In summary, this thesis documents multisensory changes following changes to the visual sense. The results improve our understanding of adult multisensory plasticity and have implications for successful treatment and rehabilitation following sensory loss.
Crossmodal audio and tactile interaction with mobile touchscreens
Touchscreen mobile devices often use cut-down versions of desktop user interfaces, placing high demands on the visual sense that may prove awkward in mobile settings. The research in this thesis addresses the problems encountered by situationally impaired mobile users by using crossmodal interaction to exploit the abundant similarities between the audio and tactile modalities. By making information available to both senses, users can receive the information in the most suitable way, without having to abandon their primary task to look at the device.
This thesis begins with a literature review of related work followed by a definition of crossmodal icons. Two icons may be considered to be crossmodal if and only if they provide a common representation of data, which is accessible interchangeably via different modalities. Two experiments investigated possible parameters for use in crossmodal icons with results showing that rhythm, texture and spatial location are effective.
A third experiment focused on learning multi-dimensional crossmodal icons and the extent to which this learning transfers between modalities. The results showed identification rates of 92% for three-dimensional audio crossmodal icons when trained in the tactile equivalents, and identification rates of 89% for tactile crossmodal icons when trained in the audio equivalent.
Crossmodal icons were then incorporated into a mobile touchscreen QWERTY keyboard. Experiments showed that keyboards with audio or tactile feedback produce fewer errors and greater speeds of text entry compared to standard touchscreen keyboards. The next study examined how environmental variables affect user performance with the same keyboard. The data showed that each modality performs differently with varying levels of background noise or vibration and the exact levels at which these performance decreases occur were established.
The final study involved a longitudinal evaluation of a touchscreen application, CrossTrainer, focusing on longitudinal effects on performance with audio and tactile feedback, the impact of context on performance, and personal modality preference. The results show that crossmodal audio and tactile icons are a valid method of presenting information to situationally impaired mobile touchscreen users, with recognition rates of 100% over time. This thesis concludes with a set of guidelines on the design and application of crossmodal audio and tactile feedback to enable application and interface designers to employ such feedback in all systems.
Sonic Interactions in Virtual Environments
This open access book tackles the design of 3D spatial interactions from an audio-centered, audio-first perspective, providing the fundamental notions related to the creation and evaluation of immersive sonic experiences. The key elements that enhance the sensation of place in a virtual environment (VE) are: immersive audio, the computational aspects of the acoustical-space properties of Virtual Reality (VR) technologies; sonic interaction, the human-computer interplay through auditory feedback in VEs; and VR systems, which naturally support multimodal integration, impacting different application domains. Sonic Interactions in Virtual Environments features state-of-the-art research on real-time auralization, sonic interaction design in VR, quality of experience in multimodal scenarios, and applications. Contributors and editors include interdisciplinary experts from the fields of computer science, engineering, acoustics, psychology, design, the humanities, and beyond. Their mission is to shape an emerging field of study at the intersection of sonic interaction design and immersive media, embracing an archipelago of existing research spread across different audio communities, and to raise awareness among VR researchers and practitioners of the importance of sonic elements when designing immersive environments.