
    Effects of Delayed Visual Feedback on Grooved Pegboard Test Performance

    Using four experiments, this study investigates what amount of delay brings about maximal impairment under delayed visual feedback and whether a critical interval, such as that found in audition, also exists in vision. The first experiment measured Grooved Pegboard test performance as a function of visual feedback delays from 120 to 2120 ms in 16 steps. Performance decreased sharply until about 490 ms and then more gradually until 2120 ms, suggesting that two mechanisms were operating under delayed visual feedback. Delayed visual feedback differs from delayed auditory feedback in that the former induces not only temporal but also spatial displacement between motor and sensory feedback, so the mechanism behind the gradual decrease could be one that responds to spatial displacement. The second experiment was therefore conducted to provide simultaneous haptic feedback together with delayed visual feedback, informing the correct spatial position. The disruption was significantly ameliorated when information about spatial position was provided from a haptic source: the sharp decrease in performance up to approximately 300 ms was followed by almost flat performance, similar to the critical interval found in audition. Accordingly, the mechanism that caused the sharp decrease in performance in experiments 1 and 2 was probably responsible mainly for temporal disparity and is common across different modality–motor combinations, while the other mechanism, which caused the more gradual decrease in performance in experiment 1, was responsible mainly for spatial displacement. In experiments 3 and 4, the reliability of spatial information from the haptic source was reduced by wearing a glove or using a tool. When the reliability of spatial information was reduced, the data lay between those of experiments 1 and 2, and the gradual decrease in performance partially reappeared. These results further support the notion that two mechanisms operate under delayed visual feedback.
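
    The delay manipulation itself is straightforward to prototype in software. The sketch below is only a minimal illustration, not the apparatus used in the study: it assumes an OpenCV webcam pipeline, and the camera index, delay value, and key handling are placeholder choices.

        # Minimal sketch: show live camera frames only after a fixed delay,
        # approximating delayed visual feedback of the participant's hand.
        import collections
        import time
        import cv2

        DELAY_MS = 490  # illustrative delay; the study sampled 120-2120 ms in 16 steps

        cap = cv2.VideoCapture(0)      # placeholder camera index
        buffer = collections.deque()   # (timestamp, frame) pairs awaiting display

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            buffer.append((time.monotonic(), frame))
            # Display the oldest buffered frame once it is at least DELAY_MS old.
            if (time.monotonic() - buffer[0][0]) * 1000.0 >= DELAY_MS:
                _, delayed_frame = buffer.popleft()
                cv2.imshow("delayed visual feedback", delayed_frame)
            if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
                break

        cap.release()
        cv2.destroyAllWindows()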

    Audio-Visual Speech Timing Sensitivity Is Enhanced in Cluttered Conditions

    Events encoded in separate sensory modalities, such as audition and vision, can seem to be synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech, and for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and to seem synchronous as a consequence. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.

    Sensory Attribute Identification Time Cannot Explain the Common Temporal Limit of Binding Different Attributes and Modalities

    An informative performance measure of the brain's integration across different sensory attributes/modalities is the critical temporal rate of feature alternation (between, e.g., red and green) beyond which observers could not identify the feature value specified by a timing signal from another attribute (e.g., a pitch change). Interestingly, this limit, which we called the critical crowding frequency (CCF), is fairly low and nearly constant (∼2.5 Hz) regardless of the combination of attributes and modalities (Fujisaki & Nishida, 2010, IMRF). One may consider that the CCF reflects the processing time required for the brain to identify the specified feature value on the fly. According to this idea, the similarity in CCF could be ascribed to the similarity in identification time for the attributes we used (luminance, color, orientation, pitch, vibration). To test this idea, we estimated the identification time of each attribute from [Go/No-Go choice reaction time − simple reaction time]. In disagreement with the prediction, we found significant differences among attributes (e.g., ∼160 ms for orientation, ∼70 ms for pitch). The results are more consistent with our proposal (Fujisaki & Nishida, Proc Roy Soc B) that the CCF reflects the common rate limit of specifying what happens when (timing-content binding) by a central, presumably postdictive, mechanism.
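
    The identification-time estimate described above is a simple difference of mean reaction times. As a hedged illustration of that subtraction (the per-trial reaction times below are invented, not the study's data):

        # Identification time ~ mean Go/No-Go choice RT minus mean simple RT.
        import statistics

        def identification_time_ms(choice_rts_ms, simple_rts_ms):
            """Estimate identification time as the difference of mean RTs."""
            return statistics.mean(choice_rts_ms) - statistics.mean(simple_rts_ms)

        # Hypothetical per-trial reaction times (ms) for two attributes.
        orientation_est = identification_time_ms([495, 510, 480, 505], [340, 335, 345, 350])
        pitch_est = identification_time_ms([405, 395, 410, 400], [330, 335, 325, 340])
        print(f"orientation: ~{orientation_est:.0f} ms, pitch: ~{pitch_est:.0f} ms")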

    A common perceptual temporal limit of binding synchronous inputs across different sensory attributes and modalities

    The human brain processes different aspects of the surrounding environment through multiple sensory modalities, and each modality can be subdivided into multiple attribute-specific channels. When the brain rebinds sensory content information (‘what’) across different channels, temporal coincidence (‘when’) along with spatial coincidence (‘where’) provides a critical clue. It remains unknown, however, whether the neural mechanisms for binding synchronous attributes are specific to each attribute combination or are universal and central. In human psychophysical experiments, we examined how combinations of visual, auditory and tactile attributes affect the temporal frequency limit of synchrony-based binding. The results indicated that the upper limits of cross-attribute binding were lower than those of within-attribute binding, and surprisingly similar for any combination of visual, auditory and tactile attributes (2–3 Hz). They are unlikely to be the limits for judging synchrony, since the temporal limit of a cross-attribute synchrony judgement was higher and varied with the modality combination (4–9 Hz). These findings suggest that cross-attribute temporal binding is mediated by a slow central process that combines separately processed ‘what’ and ‘when’ properties of a single event. While the synchrony performance reflects temporal bottlenecks existing in ‘when’ processing, the binding performance reflects the central temporal limit of integrating ‘when’ and ‘what’ properties.
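
    The binding limit above is defined over stimuli in which two attributes alternate at a common rate. A rough sketch of how such an alternation schedule could be generated follows; the rate, duration, and in-phase assumption are illustrative, not the study's parameters.

        # Switch times for an attribute alternating between two values at rate_hz.
        def alternation_times(rate_hz, duration_s, phase_offset_s=0.0):
            half_period = 0.5 / rate_hz  # two state changes per cycle
            times, t = [], phase_offset_s
            while t < duration_s:
                times.append(round(t, 4))
                t += half_period
            return times

        # Visual and auditory attributes switching in phase at 2.5 Hz for 2 s.
        visual_switches = alternation_times(2.5, 2.0)
        auditory_switches = alternation_times(2.5, 2.0)
        print(visual_switches == auditory_switches)  # True: synchronous streams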

    The effect of a crunchy pseudo-chewing sound on perceived texture of softened foods

    Elderly individuals whose ability to chew and swallow has declined are often restricted to unpleasant diets of very soft food, leading to a poor appetite. To address this problem, we aimed to investigate the influence of altered auditory input of chewing sounds on the perception of food texture. Modified chewing sounds have been reported to influence the perception of food texture in normal foods. We investigated whether the perceived sensations of nursing care foods could be altered by providing altered auditory feedback of chewing sounds, even when the actual food texture is dull. Chewing sounds were generated using an electromyogram (EMG) of the masseter. When the frequency properties of the EMG signal are modified and it is heard as a sound, it resembles a “crunchy” sound, much like that emitted when chewing, for example, root vegetables (the EMG chewing sound). Thirty healthy adults took part in the experiment. In two conditions (with/without the EMG chewing sound), participants rated the taste, texture and evoked feelings of five kinds of nursing care foods using two questionnaires. When the “crunchy” EMG chewing sound was present, participants were more likely to evaluate food as having the property of stiffness. Moreover, foods were perceived as rougher and as having a greater number of ingredients in the condition with the EMG chewing sound, and satisfaction and pleasantness were also greater. In conclusion, the “crunchy” pseudo-chewing sound could influence the perception of food texture, even when the actual “crunchy” oral sensation is lacking. Considering the effect of altered auditory feedback while chewing, such a tool could be a useful technique to help people on texture-modified diets to enjoy their food.
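
    The abstract does not specify how the EMG's frequency properties were modified, so the following is only an illustrative guess at one way to render masseter EMG as an audible crunching sound: use the rectified, smoothed EMG as an amplitude envelope on broadband noise. The sampling rates, filter settings, and synthetic input are all assumptions, not the authors' method.

        # Hedged sketch: envelope-modulated noise driven by a masseter EMG trace.
        import numpy as np
        from scipy import signal
        from scipy.io import wavfile

        EMG_RATE = 1000      # Hz, assumed surface-EMG sampling rate
        AUDIO_RATE = 44100   # Hz, audio playback rate

        def emg_to_pseudo_chewing_sound(emg, emg_rate=EMG_RATE, audio_rate=AUDIO_RATE):
            # Rectify and smooth the EMG to obtain a muscle-activity envelope.
            envelope = np.abs(emg - np.mean(emg))
            b, a = signal.butter(2, 10, btype="lowpass", fs=emg_rate)
            envelope = signal.filtfilt(b, a, envelope)
            # Upsample the envelope to audio rate and use it to gate broadband
            # noise, so bursts of noise follow the chewing muscle activity.
            envelope = np.interp(
                np.arange(0, len(envelope), emg_rate / audio_rate),
                np.arange(len(envelope)), envelope)
            audio = envelope * np.random.randn(len(envelope))
            audio /= np.max(np.abs(audio)) + 1e-12
            return (audio * 32767).astype(np.int16)

        # Synthetic EMG-like input (noise bursts), purely for illustration.
        emg = np.random.randn(3000) * np.tile(np.hanning(300), 10)
        wavfile.write("pseudo_chewing.wav", AUDIO_RATE, emg_to_pseudo_chewing_sound(emg))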

    Recalibration of audiovisual simultaneity

    To perceive the auditory and visual aspects of a physical event as occurring simultaneously, the brain must adjust for differences between the two modalities in both physical transmission time and sensory processing time. One possible strategy to overcome this difficulty is to adaptively recalibrate the simultaneity point from daily experience of audiovisual events. Here we report that after exposure to a fixed audiovisual time lag for several minutes, human participants showed shifts in their subjective simultaneity responses toward that particular lag. This 'lag adaptation' also altered the temporal tuning of an auditory-induced visual illusion, suggesting that adaptation occurred via changes in sensory processing, rather than as a result of a cognitive shift while making task responses. Our findings suggest that the brain attempts to adjust subjective simultaneity across different modalities by detecting and reducing time lags between inputs that likely arise from the same physical events.
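
    A common way to quantify such a shift, though not necessarily the analysis used in this study, is to fit a simultaneity-response curve before and after adaptation and compare the fitted points of subjective simultaneity (PSS). The sketch below assumes a Gaussian-shaped curve and uses invented response proportions purely for illustration.

        # Fit Gaussians to "simultaneous" response rates and compare fitted PSS.
        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(lag, peak, pss, width):
            return peak * np.exp(-((lag - pss) ** 2) / (2 * width ** 2))

        lags_ms = np.array([-300, -200, -100, 0, 100, 200, 300])   # audio leads at negative lags
        p_simultaneous_before = np.array([0.10, 0.35, 0.80, 0.95, 0.75, 0.30, 0.08])
        p_simultaneous_after  = np.array([0.05, 0.20, 0.55, 0.90, 0.92, 0.60, 0.20])

        (_, pss_before, _), _ = curve_fit(gaussian, lags_ms, p_simultaneous_before,
                                          p0=[1.0, 0.0, 100.0])
        (_, pss_after, _), _ = curve_fit(gaussian, lags_ms, p_simultaneous_after,
                                         p0=[1.0, 0.0, 100.0])
        print(f"PSS before adaptation: {pss_before:.0f} ms, after: {pss_after:.0f} ms")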

    Visual search for a target changing in synchrony with an auditory signal

    Using a visual search paradigm, we examined whether the detection of audio-visual temporal synchrony is determined by a pre-attentive parallel process or by an attentive serial process. We found that detection of a visual target that changed in synchrony with an auditory stimulus was gradually impaired as the number of unsynchronized visual distractors increased (experiment 1), whereas synchrony discrimination of an attended target in a pre-cued location was unaffected by the presence of distractors (experiment 2). The effect of distractors cannot be ascribed to reduced target visibility, nor can the increase in false alarm rates be predicted by a noisy parallel processing model. Reaction times for target detection increased linearly with the number of distractors, with the slope being about twice as steep for target-absent trials as for target-present trials (experiment 3). Similar results were obtained regardless of whether the audio-visual stimulus consisted of visual flashes synchronized with amplitude-modulated pips, or of visual rotations synchronized with frequency-modulated up-down sweeps. All of the results indicate that audio-visual perceptual synchrony is judged by a serial process and are consistent with the suggestion that audio-visual temporal synchrony is detected by a 'mid-level' feature matching process. © 2005 The Royal Society
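
    The roughly 2:1 slope ratio is what a serial self-terminating search predicts: with N items and a per-item check time, a present target is found after (N + 1) / 2 checks on average, whereas an absent target requires checking all N items. The sketch below works through that arithmetic with illustrative, not fitted, parameter values.

        # Serial self-terminating search: expected RT as a function of set size.
        BASE_MS = 400        # assumed non-search component of RT
        PER_ITEM_MS = 60     # assumed time to check one item for synchrony

        def expected_rt_present(n_items):
            # On average, half the items (plus the target) are checked.
            return BASE_MS + PER_ITEM_MS * (n_items + 1) / 2.0

        def expected_rt_absent(n_items):
            # Every item must be checked before responding "absent".
            return BASE_MS + PER_ITEM_MS * n_items

        for n in (2, 4, 8):
            print(n, expected_rt_present(n), expected_rt_absent(n))
        # Slope with respect to set size: PER_ITEM_MS / 2 on target-present trials
        # versus PER_ITEM_MS on target-absent trials, i.e. the ~2:1 ratio above.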