35 research outputs found

    A Turing-Like Handshake Test for Motor Intelligence

    Abstract. In the Turing test, a computer model is deemed to “think intelligently” if it can generate answers that are indistinguishable from those of a human. That test, however, is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and human hand movement is a sophisticated demonstration of this function. We therefore propose a Turing-like handshake test for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator holds a robotic stylus and interacts with another party (human, artificial, or a linear combination of the two). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a forced-choice method and ask which of two systems is more humanlike. By comparing a given model with a weighted sum of human and artificial systems, we fit a psychometric curve to the interrogator's answers and extract a quantitative measure of the computer model's similarity to the human handshake.
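    As a rough illustration of the psychometric analysis described above, the Python sketch below fits a logistic curve to synthetic forced-choice answers and reads off the point of subjective equality. The data, variable names, and parameter values are hypothetical and are not taken from the study.

        # Fit a psychometric (logistic) curve to forced-choice answers.
        # All data are synthetic; only the analysis idea follows the abstract.
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(w, w50, slope):
            # Probability of judging the blended handshake "more humanlike"
            # as a function of the human weight w in the human/machine blend.
            return 1.0 / (1.0 + np.exp(-(w - w50) / slope))

        weights = np.linspace(0.0, 1.0, 9)  # 0 = pure machine, 1 = pure human
        p_human = np.array([0.05, 0.10, 0.20, 0.35, 0.50,  # hypothetical fraction
                            0.70, 0.80, 0.90, 0.95])       # judged "more humanlike"

        (w50, slope), _ = curve_fit(logistic, weights, p_human, p0=[0.5, 0.1])
        # w50 is the blend that feels as humanlike as the reference, i.e. a
        # human-likeness score for the tested model on a 0-1 scale.
        print(f"point of subjective equality: {w50:.2f}, slope: {slope:.2f}")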

    Non-monotonicity on a spatio-temporally defined cyclic task: evidence of two movement types?

    We tested 23 healthy participants who performed rhythmic horizontal movements of the elbow. The required amplitude and frequency ranges of the movements were specified to the participants using a closed shape on a phase-plane display, showing angular velocity versus angular position, such that participants had to continuously control both the speed and the displacement of their forearm. We found that the combined accuracy in velocity and position throughout the movement was not a monotonic function of movement speed. Our findings suggest that specific combinations of required movement frequency and amplitude give rise to two distinct types of movements: one of a more rhythmic nature, and the other of a more discrete nature.
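    One way to picture the combined position/velocity accuracy in this task: for sinusoidal motion, the required phase-plane shape is an ellipse, and deviation from it can be scored sample by sample. The sketch below is a hypothetical reconstruction under that assumption; the amplitude, frequency, and error metric are illustrative, not the paper's.

        # Hypothetical score for combined position/velocity accuracy on a
        # phase-plane task whose required shape is an ellipse (amplitude A,
        # frequency f). Not the study's actual analysis.
        import numpy as np

        def phase_plane_error(theta, omega, A, f):
            # For theta = A*sin(2*pi*f*t), peak velocity is 2*pi*f*A, so
            # dividing omega by 2*pi*f maps the required ellipse onto a
            # circle of radius A in normalized phase-plane coordinates.
            omega_n = omega / (2 * np.pi * f)
            radius = np.hypot(theta, omega_n)   # distance from origin per sample
            return np.mean(np.abs(radius - A))  # mean deviation from the shape

        # Synthetic trial: 2 s of 1 Hz elbow movement sampled at 100 Hz.
        t = np.arange(0, 2, 0.01)
        A, f = 0.4, 1.0
        theta = A * np.sin(2 * np.pi * f * t) + 0.01 * np.random.randn(t.size)
        omega = 2 * np.pi * f * A * np.cos(2 * np.pi * f * t)

        print(f"combined error: {phase_plane_error(theta, omega, A, f):.4f} rad")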

    Learning new sensorimotor contingencies: Effects of long-term use of sensory augmentation on the brain and conscious perception

    Theories of embodied cognition propose that perception is shaped by sensory stimuli and by the actions of the organism. Following sensorimotor contingency theory, the mastery of lawful relations between one's own behavior and the resulting changes in sensory signals, called sensorimotor contingencies, is constitutive of conscious perception. Sensorimotor contingency theory predicts that, after training, knowledge relating to new sensorimotor contingencies develops, leading to changes in the activation of sensorimotor systems and concomitant changes in perception. In the present study, we spell out this hypothesis in detail and investigate whether it is possible to learn new sensorimotor contingencies by sensory augmentation. Specifically, we designed an fMRI-compatible sensory augmentation device, the feelSpace belt, which gives orientation information about the direction of magnetic north via vibrotactile stimulation on the waist of participants. In a longitudinal study, participants trained with this belt for seven weeks in a natural environment. Our EEG results indicate that training with the belt leads to changes in sleep architecture early in the training phase, compatible with the consolidation of procedural learning as well as with increased sensorimotor processing and motor programming. The fMRI results suggest that training entails activity in sensory as well as higher motor centers and in brain areas known to be involved in navigation. These neural changes are accompanied by changes in how space and the belt signal are perceived, as well as by increased trust in navigational ability. Thus, our data on physiological processes and subjective experiences are compatible with the hypothesis that new sensorimotor contingencies can be acquired using sensory augmentation.
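    The belt's core idea, indicating magnetic north by vibration at the waist, can be sketched as a simple heading-to-motor mapping. Everything below (motor count, layout, function name) is a hypothetical illustration, not the published device's design.

        # Hypothetical core mapping for a feelSpace-style belt: given a compass
        # heading, activate the vibromotor closest to magnetic north.
        N_MOTORS = 16  # assumed: motors evenly spaced clockwise, motor 0 at the navel

        def motor_for_north(heading_deg: float) -> int:
            # heading_deg: wearer's heading in degrees, 0 = facing north, clockwise.
            # If the wearer faces east (90 deg), north lies 90 deg to their left,
            # i.e. at body angle 360 - 90 = 270 deg measured clockwise from the front.
            body_angle = (360.0 - heading_deg) % 360.0
            return round(body_angle / (360.0 / N_MOTORS)) % N_MOTORS

        assert motor_for_north(0.0) == 0    # facing north: front motor vibrates
        assert motor_for_north(90.0) == 12  # facing east: motor on the left side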

    Aging and Sensory Substitution in a Virtual Navigation Task

    Virtual environments are becoming ubiquitous and are used in a variety of contexts, from entertainment to training and rehabilitation. Recently, technology for making them more accessible to blind or visually impaired users has been developed by using sound to represent visual information. The ability of older individuals to interpret these cues has not yet been studied. In this experiment, we studied the effects of age and sensory modality (visual or auditory) on navigation through a virtual maze. We added a layer of complexity by conducting the experiment in a rotating room, in order to test the effect of the spatial bias induced by the rotation on performance. Results from 29 participants showed that with the auditory cues, participants took longer to complete the mazes, traversed longer paths, paused more, and collided with the walls more often than with the visual cues. The older group took longer to complete the mazes, paused more, and had more collisions with the walls than the younger group. There was no effect of room rotation on performance, nor were there any significant interactions among age, feedback modality, and room rotation. We conclude that there is a decline in performance with age and that, while navigation with auditory cues is possible even at an old age, it presents more challenges than visual navigation.
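    The performance measures in this abstract (completion time, path length, pauses, collisions) are straightforward to compute from a logged trajectory. The sketch below gives hypothetical implementations of two of them; the sampling interval and thresholds are illustrative assumptions, not values from the study.

        # Hypothetical navigation metrics from a logged 2-D trajectory.
        import numpy as np

        def path_length(xy: np.ndarray) -> float:
            # Total distance traveled along a trajectory of shape (n, 2).
            return float(np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1)))

        def count_pauses(xy: np.ndarray, dt: float,
                         speed_thresh: float = 0.05,
                         min_duration: float = 0.5) -> int:
            # Episodes where speed stays below speed_thresh (units/s)
            # for at least min_duration seconds.
            speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt
            needed = round(min_duration / dt)  # samples that must stay slow
            pauses, run = 0, 0
            for slow in speed < speed_thresh:
                run = run + 1 if slow else 0
                if run == needed:  # count each episode exactly once
                    pauses += 1
            return pauses

        # Toy trajectory sampled every 0.1 s: move, hold still, move again.
        xy = np.array([[0, 0], [0.1, 0], [0.1, 0], [0.1, 0], [0.1, 0],
                       [0.1, 0], [0.1, 0], [0.5, 0.3]])
        print(path_length(xy), count_pauses(xy, dt=0.1))  # 0.6, 1 pause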

    between movement types: indication of predictive control

    Path length.

    Top row: main effects; bottom row: interaction effects (ns). Significant effects are marked with an asterisk.