    Selective Tactile Attention Under Auditory Perceptual Load

    Previous research has demonstrated that detection of certain stimuli can be increased or decreased by manipulating attentional load during a target task. While much of this research focuses on sensory attention, there is debate regarding whether the effect operates across sensory modalities (Kahneman, 1973) or only within the same sensory modality (Wickens, 1980). Additionally, it is unclear whether the effect extends to audition. The purpose of the current study was to determine whether irrelevant tactile distractors would be ignored or detected under various levels of auditory stimulation (‘load’). It was predicted that vibrotactile distractors would be more frequently processed under low auditory load conditions, and more frequently missed under high auditory load conditions. A total of 80 participants (58 female) took part. Following Fairnie et al. (2016), auditory load took the form of an auditory search task that used various animal sounds. Load was manipulated across three conditions (high, low, control) through the number of animal sounds involved in the task. Simultaneously, participants were asked to report any detection of concurrent vibrotactile stimuli (Murphy & Dalton, 2016). A one-way ANCOVA revealed that participants were more likely to miss tactile stimuli under high or low load than in the control condition. This partially supported the study’s hypothesis, which had predicted that increased auditory load would decrease identification of tactile stimuli. Implications of these results are discussed.

    Multisensory visuo-tactile context learning enhances the guidance of unisensory visual search

    Does multisensory distractor-target context learning enhance visual search over and above unisensory learning? To address this, we had participants perform a visual search task under both uni- and multisensory conditions. Search arrays consisted of one Gabor target that differed from three homogeneous distractors in orientation; participants had to discriminate the target's orientation. In the multisensory session, additional tactile (vibration-pattern) stimulation was delivered to two fingers of each hand, with the odd-one-out tactile target and the distractors co-located with the corresponding visual items in half the trials; the other half presented the visual array only. In both sessions, the visual target was embedded within identical (repeated) spatial arrangements of distractors in half of the trials. The results revealed faster response times to targets in repeated versus non-repeated arrays, evidencing 'contextual cueing'. This effect was enhanced in the multisensory session; importantly, it held even when the visual arrays were presented without concurrent tactile stimulation. Drift-diffusion modeling confirmed that contextual cueing increased the rate at which task-relevant information was accumulated, as well as decreasing the amount of evidence required for a response decision. Importantly, multisensory learning selectively enhanced the evidence-accumulation rate, expediting target detection even when the context memories were triggered by visual stimuli alone.
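    The drift-diffusion claim above can be illustrated with a minimal simulation (a sketch under simple assumptions, not the authors' fitting procedure): evidence accumulates noisily at a drift rate toward a decision boundary, and either a higher drift rate or a lower boundary shortens the predicted response time. All parameter values below are illustrative.

```python
import random

def simulate_ddm_rt(drift, boundary, noise=1.0, dt=0.001, rng=None, max_t=5.0):
    """Simulate one diffusion trial; return decision time in seconds.

    Evidence starts at 0 and accumulates with mean `drift` per second plus
    Gaussian noise until it reaches +/- `boundary` (or `max_t` elapses).
    """
    rng = rng or random
    x, t = 0.0, 0.0
    step_sd = noise * dt ** 0.5
    while abs(x) < boundary and t < max_t:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return t

def mean_rt(drift, boundary, n=500, seed=0):
    """Mean simulated decision time over `n` trials with a fixed seed."""
    rng = random.Random(seed)
    return sum(simulate_ddm_rt(drift, boundary, rng=rng) for _ in range(n)) / n

# Repeated (learned) contexts modeled as a higher drift rate -> faster mean RT.
rt_repeated = mean_rt(drift=2.0, boundary=1.0)
rt_novel = mean_rt(drift=1.0, boundary=1.0)
```

    With these illustrative parameters, the higher-drift "repeated context" condition yields shorter simulated decision times, mirroring the reported effect of contextual cueing on the evidence-accumulation rate.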

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    How much spatial information is lost in the sensory substitution process? Comparing visual, tactile, and auditory approaches

    Sensory substitution devices (SSDs) can convey visuospatial information through spatialised auditory or tactile stimulation using wearable technology. However, the level of information loss associated with this transformation is unknown. In this study, novice users discriminated the location of two objects at 1.2 m using devices that transformed a 16 × 8 depth map into spatially distributed patterns of light, sound, or touch on the abdomen. Results showed that, through active sensing, participants could discriminate the vertical position of objects to a visual angle of 1°, 14°, and 21°, and their distance to 2 cm, 8 cm, and 29 cm, using the visual, auditory, and haptic SSDs respectively. The visual SSD significantly outperformed the auditory and tactile SSDs on vertical localisation, whereas for depth perception all devices significantly differed from one another (visual > auditory > haptic). Our findings highlight the high level of acuity possible for SSDs even with low spatial resolutions (e.g. 16 × 8) and quantify the level of information loss attributable to this transformation for the SSD user. Finally, we discuss ways of closing this ‘modality gap’ found in SSDs and conclude that this process is best benchmarked against performance with SSDs that return to their primary modality (e.g. visuospatial into visual).
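    The reported thresholds can be related to physical separations with basic trigonometry: an offset s viewed at distance d subtends a visual angle θ = atan(s/d), so s = d·tan(θ). The sketch below takes the viewing distance and threshold angles from the abstract; the conversion itself is a standard geometric assumption, not the authors' analysis code.

```python
import math

def offset_for_visual_angle(angle_deg, distance_m):
    """Physical offset (metres) that subtends `angle_deg` at `distance_m`."""
    return distance_m * math.tan(math.radians(angle_deg))

# Vertical-localisation thresholds at the study's 1.2 m viewing distance.
for angle in (1, 14, 21):  # visual, auditory, haptic SSD thresholds (degrees)
    s = offset_for_visual_angle(angle, 1.2)
    print(f"{angle:>2} deg at 1.2 m -> {s * 100:.1f} cm")
```

    So the 1° visual threshold corresponds to an offset of roughly 2 cm at 1.2 m, while the 21° haptic threshold corresponds to nearly half a metre.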

    Little engagement of attention by salient distractors defined in a different dimension or modality to the visual search target

    Singleton distractors may inadvertently capture attention, interfering with the task at hand. The neural mechanisms underlying how we prevent or handle distractor interference remain elusive. Here, we varied the type of salient distractor introduced in a visual search task: the distractor could be defined in the same (shape) dimension as the target, in a different (color) dimension, or in a different (tactile) modality (intra-dimensional, cross-dimensional, and cross-modal distractors, respectively, all matched for physical salience); besides behavioral interference, we measured lateralized electrophysiological indicators of attentional selectivity (the N2pc, Ppc, PD, CCN/CCP, CDA, and cCDA). The results revealed that the intra-dimensional distractor produced the strongest reaction-time interference, associated with the smallest target-elicited N2pc. In contrast, the cross-dimensional and cross-modal distractors did not engender any significant interference, and the target-elicited N2pc was comparable to the condition in which the search display contained only the target singleton, thus ruling out early attentional capture. Moreover, the cross-modal distractor elicited a significant early CCN/CCP but did not influence the target-elicited N2pc, suggesting that the tactile distractor is registered by the somatosensory system (rather than being proactively suppressed), without, however, engaging attention. Together, our findings indicate that, in contrast to distractors defined in the same dimension as the target, distractors singled out in a different dimension or modality can be effectively prevented from engaging attention, consistent with dimension- or modality-weighting accounts of attentional-priority computation.
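    Lateralized components such as the N2pc are conventionally computed as a contralateral-minus-ipsilateral difference wave at posterior electrode pairs. A minimal sketch of that computation; the amplitude values and the PO7/PO8 electrode pair are illustrative assumptions, not this study's montage:

```python
def lateralized_difference(left_elec, right_elec, target_side):
    """Contralateral-minus-ipsilateral difference wave, sample by sample.

    `left_elec` / `right_elec`: ERP time courses (microvolts) at a left/right
    posterior electrode pair; `target_side`: 'left' or 'right' hemifield.
    """
    if target_side == "left":
        contra, ipsi = right_elec, left_elec
    else:
        contra, ipsi = left_elec, right_elec
    return [c - i for c, i in zip(contra, ipsi)]

# Toy time courses: a more negative contralateral deflection mid-epoch,
# the signature of an N2pc to a right-hemifield target.
po7 = [0.0, -1.5, -2.5, -1.0]  # left electrode (contralateral to target)
po8 = [0.0, -0.5, -1.0, -0.5]  # right electrode (ipsilateral)
n2pc = lateralized_difference(po7, po8, target_side="right")
```

    A negative-going dip in the resulting difference wave around 200-300 ms post-stimulus is what is quantified as the N2pc; the same subtraction logic underlies the somatosensory CCN/CCP.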

    Audiotactile interactions in temporal perception


    Tactile discrimination of material properties: application to virtual buttons for professional appliances

    An experiment is described that tested whether wooden, plastic, and metallic objects could be classified based on reproduced auditory and vibrotactile stimuli. The results show that recognition rates are considerably above chance level with either unimodal auditory or vibrotactile feedback. Supported by those findings, the possibility of rendering virtual buttons with different tactile properties for professional appliances was tested. To this end, a touchscreen device was provided with various types of vibrotactile feedback in response to the sensed pressing force and location of a finger. Different virtual button designs were tested by user panels, who performed a subjective evaluation of perceived tactile properties and materials. In a first implementation, virtual buttons were designed to reproduce the vibration recordings of real materials used in the classification experiment: mainly due to hardware limitations of our prototype, and the consequent impossibility of rendering complex vibratory signals, this approach did not prove successful. A second implementation was then optimized for the device's capabilities, additionally introducing surface-compliance effects and button-release cues: the new design led to generally high quality ratings, clear discrimination of different buttons, and unambiguous material classification. The lesson learned was that various material and physical properties of virtual buttons can be successfully rendered by characteristic frequency and decay cues, if correctly reproduced by the device.

    Two spatially distinct posterior alpha sources fulfill different functional roles in attention

    Directing attention helps us extract relevant information and suppress distractors. Alpha brain oscillations (8-12 Hz) are crucial for this process, with power decreases facilitating processing of important information and power increases inhibiting brain regions that process irrelevant information. Evidence for this phenomenon arises from visual attention studies (Worden et al., 2000b); however, the effect also exists in other modalities, including the somatosensory system (Haegens et al., 2011) and inter-sensory attention tasks (Foxe and Snyder, 2011). We investigated in human participants (10 females, 10 males) the role of alpha oscillations in focused (0/100%) vs. divided (40/60%) attention, both across modalities (visual/somatosensory; Experiment 1) and within the same modality (visual domain: across hemifields; Experiment 2), while recording EEG from 128 scalp electrodes. In Experiment 1, participants divided their attention between the visual and somatosensory modalities to determine the temporal/spatial frequency of a target stimulus (vibrotactile stimulus/Gabor grating). In Experiment 2, participants divided attention between two visual hemifields to identify the orientation of a Gabor grating. In both experiments, pre-stimulus alpha power in visual areas decreased linearly with increasing attention to visual stimuli. In contrast, pre-stimulus alpha power in parietal areas was lower when attention was divided between modalities/hemifields than when it was focused. These results suggest there are two alpha sources, one reflecting the ‘visual spotlight of attention’ and the other reflecting attentional effort. To our knowledge, this is the first study to show that attention recruits two spatially distinct alpha sources in occipital and parietal brain regions, acting simultaneously but serving different functions in attention.
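    Pre-stimulus alpha power of the kind reported here is commonly estimated by summing spectral power over the 8-12 Hz band. The sketch below does this with a plain discrete Fourier transform on a synthetic epoch; the sampling rate, epoch length, and the 10 Hz test oscillation are illustrative assumptions, not the study's recording parameters.

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Summed DFT power over [f_lo, f_hi] Hz (naive O(n^2) transform)."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            coef = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                       for t, x in enumerate(signal))
            power += abs(coef) ** 2 / n
    return power

fs = 100  # Hz, illustrative sampling rate
# 2-second synthetic epoch containing a pure 10 Hz ("alpha") oscillation.
epoch = [math.sin(2 * math.pi * 10 * t / fs) for t in range(2 * fs)]
alpha = band_power(epoch, fs, 8, 12)
beta = band_power(epoch, fs, 15, 25)
```

    Comparing such band-power estimates across pre-stimulus intervals of different attention conditions is the core of the analysis the abstract describes; in practice a windowed FFT or wavelet transform would replace the naive DFT.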

    Multi-sensory working memory - in vision, audition and touch

    Our nervous systems can perform a vast variety of cognitive tasks, many involving several different senses. Although sensory systems provide a basis for the creation of mental representations, we rely on memory to form mental representations of information that is no longer present in our external world. Focussing on the initial stage of this process, working memory (WM), where information is retained actively over a short time course, the experiments included in this thesis were directed toward understanding the nature of sensory representations across the senses (vision, audition and touch). Instead of quantifying how many items one can hold in each sensory modality (all-or-none representations), new response methods were devised to capture the qualitative nature of sensory representations. Measuring the quality rather than the quantity of information held in WM has led to a re-evaluation of the nature of its underlying capacity limits. Rather than assuming that WM capacity is limited to a fixed number of items, it may be more suitable to describe WM as a resource which can be shared and flexibly distributed across sensory information. Thus, it has been proposed that at low loads we can hold information at a high resolution; however, as soon as memory load is increased, the quality at which each individual item can be represented in WM deteriorates. The resource model of WM has been applied to describe processes of visual WM, but has not been investigated for other sensory modalities. In the first part of my thesis I demonstrate behaviourally that the resource model can be extended to account for processes in auditory WM, associated with the storage of sound frequency (pitch, chapter 2) and speech sounds (phonemes, chapter 3). I then show that it can also be extended to account for the storage of tactile vibrational frequencies (chapter 4).
Overall, the results suggest that memory representations become noisier with an increase in information load, consistent with the concept that representations are coded as distributed patterns. A pattern may code for individual object features or entire objects. As the studies in chapters 2-4 each examined only a single type of feature in isolation, I next examined WM information storage for auditory objects composed of multiple features (chapter 5). Object formation involves binding of features, which become reorganized to create more complex unified representations of previously distributed information. The results revealed a clear feature-extraction cost when recall was tested on individual features rather than on integrated objects. One interpretation of these findings is that, at some level in the auditory system, sounds may be stored as integrated objects. In a final study, using fMRI with MVPA (multivoxel pattern analysis), memory traces represented as distributed patterns of brain activity were decoded from different regions of the auditory system (chapter 6). The major goal was to resolve the debate on the role of early sensory cortices in cognition: are they primarily involved in the perception of low-level stimulus features, or also in the maintenance of the same features in memory? I demonstrate that perception and memory share common neural substrates, with early auditory cortex serving as a substrate that accommodates both processes. Overall, in this thesis memory representations were characterized across the senses in three different ways: (1) measuring them in terms of their quality or resolution; (2) testing whether the preferred format is at the feature or integrated-object level; and (3) as patterns of brain activity. The findings converge on the concept that noisy representations actively held in WM are coded as distributed patterns in the brain.
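    The resource account sketched above makes a simple quantitative prediction: if a fixed precision budget J is split across N stored items, each item is encoded with precision J/N, so recall error grows with load. A minimal simulation of that prediction follows; the budget value and the Gaussian-noise form are illustrative assumptions, not the thesis's fitted model.

```python
import random
import statistics

def simulate_recall_errors(set_size, total_precision=10.0, n_trials=2000, seed=1):
    """Recall errors for one probed item when precision J = total / set_size.

    Each item is stored with zero-mean Gaussian noise of variance 1/J, so
    larger set sizes yield noisier (lower-resolution) representations.
    """
    rng = random.Random(seed)
    precision = total_precision / set_size
    sd = (1.0 / precision) ** 0.5
    return [rng.gauss(0.0, sd) for _ in range(n_trials)]

# Error spread (standard deviation of recall error) grows with memory load.
spread = {n: statistics.stdev(simulate_recall_errors(n)) for n in (1, 2, 4, 8)}
```

    This monotonic rise in error spread with set size, rather than an abrupt failure beyond a fixed item limit, is the behavioural signature that distinguishes resource models from slot models of WM.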