Spatial consequences of bridging the saccadic gap
We report six experiments suggesting that conscious perception is actively redrafted to take account of events both before and after the event that is reported. When observers saccade to a stationary object they overestimate its duration, as if the brain were filling in the saccadic gap with the post-saccadic image. We first demonstrate that this illusion holds for moving objects, implying that the percepts of time, velocity, and distance traveled become mutually discrepant. We then show that this discrepancy is partially resolved up to 500 ms after a saccade: the perceived offset position of a post-saccadic moving stimulus shows a greater forward mislocalization when pursued after a saccade than during pursuit alone. These data are consistent with the idea that the temporal bias is resolved by a subsequent spatial adjustment, providing a percept that is coherent in its gist but inconsistent in its detail.
Perception Of Visual Speed While Moving
During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone (passive transport), and both biomechanical self-motion and physical translation together (walking). Their results show that each factor alone produces subtractive reductions in visual speed but that subtraction is greatest with both factors together, approximating the sum of the two separately. The similarity of results for biomechanical and passive self-motion supports H. B. Barlow's (1990) inhibition theory of sensory correlation as a mechanism for implementing H. Wallach's (1987) compensation for self-motion.
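The subtractive combination reported in this abstract can be sketched as a toy model. This is an illustration of the additive-subtraction idea only, not the authors' published model, and all numeric values below are hypothetical:

```python
# Illustrative sketch (hypothetical numbers, not the authors' data): each
# self-motion factor subtracts a roughly fixed amount from perceived
# visual speed, and the two factors combine approximately additively.

def perceived_speed(visual_speed, subtractions):
    """Perceived speed after subtractive compensation, floored at zero."""
    return max(0.0, visual_speed - sum(subtractions))

S_BIO = 0.8   # hypothetical subtraction for biomechanical self-motion (deg/s)
S_PHYS = 0.5  # hypothetical subtraction for passive transport (deg/s)

v = 5.0  # stimulus speed (deg/s)
bio_only = perceived_speed(v, [S_BIO])          # treadmill walking
phys_only = perceived_speed(v, [S_PHYS])        # passive transport
walking = perceived_speed(v, [S_BIO, S_PHYS])   # both factors together
```

In this toy version, the reduction during walking is exactly the sum of the two individual reductions, which is the pattern the abstract describes as approximate.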
Perceptual Calibration of F0 Production: Evidence from Feedback Perturbation
Hearing one’s own speech is important for language learning and maintenance of accurate articulation. For example, people with postlinguistically acquired deafness often show a gradual deterioration of many aspects of speech production. In this manuscript, data are presented that address the role played by acoustic feedback in the control of voice fundamental frequency (F0). Eighteen subjects produced vowels under a control (normal F0 feedback) and two experimental conditions: F0 shifted up and F0 shifted down. In each experimental condition subjects produced vowels during a training period in which their F0 was slowly shifted without their awareness. Following this exposure to transformed F0, their acoustic feedback was returned to normal. Two effects were observed. Subjects compensated for the change in F0 and showed negative aftereffects. When F0 feedback was returned to normal, the subjects modified their produced F0 in the opposite direction to the shift. The results suggest that fundamental frequency is controlled using auditory feedback and with reference to an internal pitch representation. This is consistent with current work on internal models of speech motor control.
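The compensation and negative-aftereffect pattern can be mimicked by a simple error-correcting adaptation loop. This is an assumed sketch, not the authors' model: it works in cents relative to baseline, treats the shift as additive, and (unlike real subjects, who compensate only partially) converges to full compensation at steady state:

```python
# Toy sensorimotor-adaptation loop (illustrative assumption, not the
# authors' model). Produced F0 is nudged so that the *heard* F0 returns
# toward baseline; when the feedback shift is removed, the adapted state
# persists briefly, producing a negative aftereffect.

def adapt_f0(shift=100.0, gain=0.1, n_shift=50, n_post=20):
    produced = 0.0  # produced F0, in cents relative to baseline
    trace = []
    for t in range(n_shift + n_post):
        s = shift if t < n_shift else 0.0  # shifted feedback, then normal
        heard = produced + s               # what the speaker hears
        produced -= gain * heard           # push heard F0 back toward baseline
        trace.append(produced)
    return trace

trace = adapt_f0()
# During the shift, produced F0 drifts opposite to the shift (compensation);
# just after normal feedback returns, it is still displaced (aftereffect),
# then decays back toward baseline.
```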
Learning Task Constraints from Demonstration for Hybrid Force/Position Control
We present a novel method for learning hybrid force/position control from
demonstration. We learn a dynamic constraint frame aligned to the direction of
desired force using Cartesian Dynamic Movement Primitives. In contrast to
approaches that utilize a fixed constraint frame, our approach easily
accommodates tasks with rapidly changing task constraints over time. We
activate only one degree of freedom for force control at any given time,
ensuring motion is always possible orthogonal to the direction of desired
force. Since we utilize demonstrated forces to learn the constraint frame, we
are able to compensate for forces not detected by methods that learn only from
the demonstrated kinematic motion, such as frictional forces between the
end-effector and the contact surface. We additionally propose novel extensions
to the Dynamic Movement Primitive (DMP) framework that encourage robust
transition from free-space motion to in-contact motion in spite of environment
uncertainty. We incorporate force feedback and a dynamically shifting goal to
reduce forces applied to the environment and retain stable contact while
enabling force control. Our methods exhibit low impact forces on contact and
low steady-state tracking error.
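At its core, a Dynamic Movement Primitive of the kind extended in this work is a damped spring system driven toward a goal, shaped by a learned forcing term. The following is a minimal one-dimensional sketch of the general DMP transformation system with the forcing term omitted and arbitrary gains, not the authors' Cartesian implementation:

```python
# Minimal 1-D discrete DMP sketch. A full DMP adds a learned forcing term
# f(s) that shapes the trajectory; here f = 0, so the rollout simply
# converges to the goal g. Gains are hypothetical, with beta = alpha / 4
# for critical damping.

def dmp_rollout(y0, g, tau=1.0, alpha=25.0, beta=25.0 / 4, dt=0.001, T=1.0):
    """Euler-integrate the transformation system tau*dv = alpha*(beta*(g - y) - v)."""
    y, v = y0, 0.0
    for _ in range(int(T / dt)):
        dv = alpha * (beta * (g - y) - v) / tau
        v += dv * dt
        y += (v / tau) * dt
    return y
```

A "dynamically shifting goal" of the kind the abstract proposes would correspond to updating `g` between integration steps, e.g. in response to force feedback at contact.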
Perceptual adjustment to time-compressed Speech: a cross-linguistic study
Previous research has shown that, when hearers listen to artificially speeded speech, their performance improves over the course of 10-15 sentences, as if their perceptual system were "adapting" to these fast rates of speech. In this paper, we further investigate the mechanisms that are responsible for such effects. In Experiment 1, we report that, for bilingual speakers of Catalan and Spanish, exposure to compressed sentences in either language improves performance on sentences in the other language. Experiment 2 reports that Catalan/Spanish transfer of performance occurs even in monolingual speakers of Spanish who do not understand Catalan. In Experiment 3, we study another pair of languages--namely, English and French--and report no transfer of adaptation between these two languages for English-French bilinguals. Experiment 4, with monolingual English speakers, assesses transfer of adaptation from French, Dutch, and English toward English. Here we find that there is no adaptation from French and intermediate adaptation from Dutch. We discuss the locus of the adaptation to compressed speech and relate our findings to other cross-linguistic studies in speech perception.
Learning and adaptation in speech production without a vocal tract
How is the complex audiomotor skill of speaking learned? To what extent does it depend on the specific characteristics of the vocal tract? Here, we developed a touchscreen-based speech synthesizer to examine learning of speech production independent of the vocal tract. Participants were trained to reproduce heard vowel targets by reaching to locations on the screen without visual feedback and receiving endpoint vowel sound auditory feedback that depended continuously on touch location. Participants demonstrated learning as evidenced by rapid increases in accuracy and consistency in the production of trained targets. This learning generalized to productions of novel vowel targets. Subsequent to learning, sensorimotor adaptation was observed in response to changes in the location-sound mapping. These findings suggest that participants learned adaptable sensorimotor maps allowing them to produce desired vowel sounds. These results have broad implications for understanding the acquisition of speech motor control.
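A continuous location-sound mapping of the kind described here can be sketched as a linear map from touch coordinates to vowel formant frequencies. The mapping, ranges, and perturbation below are hypothetical illustrations, not the mapping used in the experiment:

```python
# Hypothetical sketch: touch position on a 2-D screen maps continuously
# to the first two formant frequencies (F1, F2) driving a vowel
# synthesizer. Ranges and the linear form are assumptions.

F1_RANGE = (250.0, 850.0)    # Hz, roughly vowel height
F2_RANGE = (700.0, 2500.0)   # Hz, roughly vowel frontness/backness

def touch_to_formants(x, y):
    """Map normalized touch coordinates (x, y) in [0, 1]^2 to (F1, F2)."""
    f1 = F1_RANGE[0] + y * (F1_RANGE[1] - F1_RANGE[0])
    f2 = F2_RANGE[0] + x * (F2_RANGE[1] - F2_RANGE[0])
    return f1, f2

def perturbed(x, y, dx=0.1):
    """A change in the location-sound mapping, modeled as a shift of the
    effective touch location before lookup (hypothetical perturbation)."""
    return touch_to_formants(min(1.0, x + dx), y)
```

Under this kind of remapping, reproducing a heard vowel target requires reaching to a different screen location, which is the situation in which the abstract reports sensorimotor adaptation.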