
    Motor simulation without motor expertise: enhanced corticospinal excitability in visually experienced dance spectators

    The human "mirror-system" is suggested to play a crucial role in action observation and execution, and is characterized by activity in the premotor and parietal cortices during the passive observation of movements. The previous motor experience of the observer has been shown to enhance activity in this network. Yet visual experience could also have a decisive influence when watching more complex actions, as in dance performances. Here we tested the impact of visual experience on motor simulation when watching dance, by measuring changes in corticospinal excitability. We also tested the effects of empathic abilities. To fully match the participants' long-term visual experience with the present experimental setting, we used three live solo dance performances: ballet, Indian dance, and non-dance. Participants were either frequent dance spectators of ballet or Indian dance, or "novices" who had never watched dance. None of the spectators had been physically trained in these dance styles. Transcranial magnetic stimulation was used to measure corticospinal excitability by means of motor-evoked potentials (MEPs) in both the hand and the arm, because the hand is specifically used in Indian dance and the arm is frequently engaged in ballet movements. We observed that frequent ballet spectators showed larger MEP amplitudes in the arm muscles when watching ballet than when they watched the other performances. We also found that the higher Indian dance spectators scored on the fantasy subscale of the Interpersonal Reactivity Index, the larger their MEPs were in the arms when watching Indian dance. Our results show that even without physical training, corticospinal excitability can be enhanced as a function of either visual experience or the tendency to imaginatively transpose oneself into fictional characters. We suggest that spectators covertly simulate the movements for which they have acquired visual experience, and that empathic abilities heighten motor resonance during dance observation.
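
    As a minimal illustration of how corticospinal excitability is typically quantified in such a design (the window bounds, sampling rate, and simulated trace below are assumptions for illustration, not parameters reported by the authors), an MEP can be summarized as the peak-to-peak EMG amplitude in a short window after the TMS pulse:

```python
import numpy as np

def mep_peak_to_peak(emg, fs, pulse_idx, win_ms=(15, 50)):
    """Peak-to-peak MEP amplitude in an assumed 15-50 ms window after the TMS pulse."""
    start = pulse_idx + int(win_ms[0] * fs / 1000)
    stop = pulse_idx + int(win_ms[1] * fs / 1000)
    segment = emg[start:stop]
    return segment.max() - segment.min()

# Illustrative use on a simulated 2 kHz EMG trace with a TMS pulse at sample 1000.
fs = 2000
emg = np.random.default_rng(0).normal(0.0, 0.01, 4000)
emg[1040:1060] += np.hanning(20) * 0.8            # fake MEP-like deflection ~20-30 ms post-pulse
print(mep_peak_to_peak(emg, fs, pulse_idx=1000))  # larger values = higher corticospinal excitability
```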

    A Psychophysical Investigation of Differences between Synchrony and Temporal Order Judgments.

    Synchrony judgments involve deciding whether cues to an event are in synch or out of synch, while temporal order judgments involve deciding which of the cues came first. When the cues come from different sensory modalities, these judgments can be used to investigate multisensory integration in the temporal domain. However, evidence indicates that these two tasks should not be used interchangeably, as it is unlikely that they measure the same perceptual mechanism. The current experiment further explores this issue across a variety of audiovisual stimulus types.
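
    To make the contrast between the two tasks concrete (a sketch of common analysis practice with made-up response proportions, not the authors' exact pipeline), synchrony judgments are typically fit with a bell-shaped function of stimulus onset asynchrony, whereas temporal order judgments are fit with a cumulative function whose midpoint estimates the point of subjective simultaneity:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Made-up response proportions at each audiovisual SOA (ms); negative = audio leads.
soa = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_sync = np.array([0.05, 0.30, 0.80, 0.95, 0.75, 0.25, 0.05])       # SJ: judged "synchronous"
p_vis_first = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.92, 0.98])  # TOJ: judged "visual first"

def sj_model(soa, centre, width):
    """Bell-shaped synchrony window, peaking at 1.0."""
    return np.exp(-0.5 * ((soa - centre) / width) ** 2)

def toj_model(soa, pss, jnd):
    """Cumulative Gaussian: PSS at the 50% point, JND from the slope."""
    return norm.cdf(soa, loc=pss, scale=jnd)

(sj_centre, sj_width), _ = curve_fit(sj_model, soa, p_sync, p0=[0.0, 100.0])
(pss, jnd), _ = curve_fit(toj_model, soa, p_vis_first, p0=[0.0, 100.0])
print(f"SJ centre {sj_centre:.0f} ms, width {sj_width:.0f} ms; TOJ PSS {pss:.0f} ms, JND {jnd:.0f} ms")
```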

    Using humanoid robots to study human behavior

    Our understanding of human behavior advances as our humanoid robotics work progresses, and vice versa. This team's work focuses on trajectory formation and planning, learning from demonstration, oculomotor control, and interactive behaviors. They are programming robotic behavior based on how we humans "program" behavior in, or train, each other.

    Perceiving animacy and arousal in transformed displays of human interaction

    When viewing a moving abstract stimulus, people tend to attribute social meaning and purpose to the movement. The classic work of Heider and Simmel [1] investigated how observers would describe the movement of simple geometric shapes (a circle, triangles, and a square) around a screen. A high proportion of participants reported seeing some form of purposeful interaction between the three abstract objects and described this interaction as a social encounter. Various papers have subsequently found similar results [2,3] and gone on to show that, as Heider and Simmel suggested, the phenomenon was due more to the relationship of the objects in space and time than to any particular object characteristic. The research of Tremoulet and Feldman [4] has shown that the percept of animacy may be elicited with a solitary moving object. They asked observers to rate the movement of a single dot or rectangle according to whether it was under the influence of an external force or in control of its own motion. At mid-trajectory the shape would change speed, direction, or both. They found that shapes that either changed direction by more than 25 degrees from the original trajectory, or changed speed, were judged to be "more alive" than others. Further discussion and evidence of animacy with one or two small dots can be found in Gelman, Durgin and Kaufman [5]. Our aim was to further study this phenomenon by using a different method of stimulus production. Previous methods for producing displays of animate objects have relied either on handcrafted stimuli or on parametric variations of simple motion patterns. We aim to work towards a new automatic approach by taking actual human movements, transforming them into basic shapes, and exploring what motion properties need to be preserved to obtain animacy. Although the phenomenon of animacy has been demonstrated for many years using various displays, few specific criteria have been established for the essential characteristics of those displays. Part of this research is to establish what movements result in percepts of animacy and, in turn, to further our understanding of the essential characteristics of human movement and social interaction. In this paper we discuss two experiments in which we examine how different transformations of an original video of a dance influence the perception of animacy. We also examine reports of arousal in Experiment 1 and of emotional engagement in Experiment 2.
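
    The Tremoulet and Feldman criterion cited above lends itself to a simple trajectory check. The sketch below (the function name, thresholds, and sample path are illustrative assumptions, not the stimulus code used in these experiments) measures the direction change and speed ratio across the midpoint of a 2-D path:

```python
import numpy as np

def midpoint_motion_change(xy, mid=None):
    """Direction change (degrees) and speed ratio across the midpoint of a 2-D path."""
    xy = np.asarray(xy, dtype=float)
    mid = len(xy) // 2 if mid is None else mid
    v_before = xy[mid] - xy[0]
    v_after = xy[-1] - xy[mid]
    cosang = np.dot(v_before, v_after) / (np.linalg.norm(v_before) * np.linalg.norm(v_after))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    speed_ratio = np.linalg.norm(v_after) / np.linalg.norm(v_before)
    return angle, speed_ratio

# Illustrative path: straight approach, then a turn of roughly 39 degrees at the midpoint.
path = [[0, 0], [1, 0], [2, 0], [3, 0.8], [4, 1.6]]
angle, ratio = midpoint_motion_change(path)
# Per Tremoulet and Feldman, direction changes above ~25 degrees (or clear speed changes)
# tend to be rated "more alive"; the 0.2 speed tolerance here is an arbitrary choice.
print(angle > 25 or abs(ratio - 1.0) > 0.2)
```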

    Thermal in-car interaction for navigation

    In this demonstration we show a thermal interaction design on the steering wheel for navigational cues in a car. Participants will be able to use a thermally enhanced steering wheel to follow instructions given in a turn-by-turn navigation task in a virtual city. The thermal cues will be provided on both sides of the steering wheel and will indicate the turning direction by warming the corresponding side while the opposite side is cooled.
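
    The warm/cool mapping described above reduces to a small dispatch. The sketch below is purely illustrative: the actuator call is a hypothetical stand-in, since the demonstration's actual hardware interface is not described here:

```python
# Hypothetical actuator call: prints instead of driving real Peltier elements.
def set_peltier(side: str, delta_c: float) -> None:
    print(f"{side} rim: {delta_c:+.1f} C relative to neutral")

def cue_turn(direction: str, intensity_c: float = 6.0) -> None:
    """Warm the steering-wheel side matching the upcoming turn, cool the other side."""
    if direction not in ("left", "right"):
        raise ValueError("direction must be 'left' or 'right'")
    other = "right" if direction == "left" else "left"
    set_peltier(direction, +intensity_c)
    set_peltier(other, -intensity_c)

cue_turn("left")   # next instruction in the turn-by-turn route
```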

    Event-related alpha suppression in response to facial motion

    While biological motion refers to both face and body movements, little is known about the visual perception of facial motion. We therefore examined alpha wave suppression, as a reduction in alpha power is thought to reflect visual activity in addition to attentional reorienting and memory processes. Nineteen neurologically healthy adults were tested on their ability to discriminate between successive facial motion captures. These animations exhibited both rigid and non-rigid facial motion, as well as speech expressions. The structural and surface appearance of these facial animations did not differ, thus participants' decisions were based solely on differences in facial movements. Upright, orientation-inverted and luminance-inverted facial stimuli were compared. At occipital and parieto-occipital regions, upright facial motion evoked a transient increase in alpha, which was then followed by a significant reduction. This finding is discussed in terms of neural efficiency, gating mechanisms and neural synchronization. Moreover, there was no difference in the amount of alpha suppression evoked by each facial stimulus at occipital regions, suggesting that early visual processing remains unaffected by these manipulations. However, upright facial motion evoked greater suppression at parieto-occipital sites, and did so in the shortest latency. Increased activity within this region may reflect higher attentional reorienting to natural facial motion but also involvement of areas associated with the visual control of body effectors.
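
    Alpha suppression here means a drop in 8-12 Hz power relative to a pre-stimulus baseline. A minimal sketch of that computation follows; the filter order, band edges, epoch layout, and simulated signal are assumptions for illustration, not the study's actual preprocessing:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_suppression(epoch, fs, t0_s, baseline_s=(-0.5, 0.0), band=(8.0, 12.0)):
    """Percent change in alpha-band power relative to a pre-stimulus baseline.

    `epoch` is a 1-D EEG trace whose stimulus onset occurs `t0_s` seconds in;
    negative output values indicate alpha suppression.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = np.abs(hilbert(filtfilt(b, a, epoch))) ** 2
    t = np.arange(len(epoch)) / fs - t0_s
    base = power[(t >= baseline_s[0]) & (t < baseline_s[1])].mean()
    return 100.0 * (power - base) / base

# Simulated epoch (fs = 250 Hz): 1 s of strong 10 Hz alpha, then 1 s of weaker alpha.
fs = 250
t = np.arange(2 * fs) / fs
epoch = np.where(t < 1.0, 1.0, 0.4) * np.sin(2 * np.pi * 10 * t)
curve = alpha_suppression(epoch, fs, t0_s=1.0)
print(round(curve[-100:].mean()))   # strongly negative: post-stimulus alpha power is suppressed
```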

    Perceived motion in structure from motion: Pointing responses to the axis of rotation

    We investigated the ability to match finger orientation to the direction of the axis of rotation in structure-from-motion displays. Preliminary experiments verified that subjects could accurately use the index finger to report direction. The remainder of the experiments studied the perception of the axis of rotation from full rotations of a group of discrete points, the profiles of a rotating ellipsoid, and two views of a group of discrete points. Subjects' responses were analyzed by decomposing the pointing responses into their slant and tilt components. Overall, the results indicated that subjects were sensitive to both slant and tilt. However, when the axis of rotation was near the viewing direction, subjects had difficulty reporting tilt with profiles and two views, and showed a large bias in their slant judgments with two views and full rotations. These results are not entirely consistent with theoretical predictions. The results, particularly for two views, suggest that additional constraints are used by humans in the recovery of structure from motion.
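
    For readers unfamiliar with the slant/tilt decomposition used above, a pointing response can be treated as a 3-D direction vector and split into two angles. The conventions in this sketch (viewing direction along +z, slant measured from the line of sight, tilt as the orientation of the image-plane projection) are assumptions chosen for illustration rather than the paper's exact definitions:

```python
import numpy as np

def slant_tilt(axis, eps=1e-9):
    """Slant and tilt (degrees) of a 3-D axis direction.

    Assumed conventions: the viewer looks along +z; slant is the angle between
    the axis and the line of sight; tilt is the orientation of the axis'
    projection in the image (x-y) plane, undefined when that projection vanishes.
    """
    v = np.asarray(axis, dtype=float)
    v = v / np.linalg.norm(v)
    slant = np.degrees(np.arccos(np.clip(abs(v[2]), 0.0, 1.0)))
    in_plane = np.hypot(v[0], v[1])
    tilt = np.degrees(np.arctan2(v[1], v[0])) if in_plane > eps else float("nan")
    return slant, tilt

print(slant_tilt([0.0, 0.0, 1.0]))  # axis along the line of sight: slant 0, tilt undefined (nan)
print(slant_tilt([1.0, 1.0, 0.0]))  # frontoparallel axis: slant 90, tilt 45
```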

    A Compact Representation of Drawing Movements with Sequences of Parabolic Primitives

    Some studies suggest that complex arm movements in humans and monkeys may optimize several objective functions, while others claim that arm movements satisfy geometric constraints and are composed of elementary components. However, the ability to unify different constraints has remained an open question. The criterion for a maximally smooth (jerk-minimizing) motion is satisfied for parabolic trajectories having constant equi-affine speed, which thus comply with the geometric constraint known as the two-thirds power law. Here we empirically test the hypothesis that parabolic segments provide a compact representation of spontaneous drawing movements. Monkey scribblings performed during a period of practice were recorded. Practiced hand paths could be approximated well by relatively long parabolic segments. Following practice, the orientations and spatial locations of the fitted parabolic segments could be drawn from only 2-4 clusters, and there was less discrepancy between the fitted parabolic segments and the executed paths. This enabled us to show that well-practiced spontaneous scribbling movements can be represented as sequences ("words") of a small number of elementary parabolic primitives ("letters"). A movement primitive can be defined as a movement entity that cannot be intentionally stopped before its completion. We found that in a well-trained monkey a movement was usually decelerated after receiving a reward, but it stopped only after the completion of a sequence composed of several parabolic segments. Piecewise parabolic segments can be generated by applying affine geometric transformations to a single parabolic template. Thus, complex movements might be constructed by applying sequences of suitable geometric transformations to a few templates. Our findings therefore suggest that the motor system aims at achieving more parsimonious internal representations through practice, that parabolas serve as geometric primitives, and that non-Euclidean variables are employed in internal movement representations (due to the special role of parabolas in equi-affine geometry).
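
    The two-thirds power law referred to above relates tangential speed v to path curvature kappa, v(t) = k * kappa(t)^(-1/3), which is equivalent to moving at constant equi-affine speed v * kappa^(1/3). The sketch below estimates these quantities numerically from a sampled planar path; the elliptical test trajectory is a textbook case that obeys the law exactly, not the recorded scribbling data:

```python
import numpy as np

def speed_curvature(x, y, dt):
    """Tangential speed, curvature, and equi-affine speed v * kappa**(1/3)
    for a sampled planar path; the last is constant when the two-thirds
    power law holds."""
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)
    v = np.hypot(dx, dy)
    kappa = np.abs(dx * ddy - dy * ddx) / np.maximum(v, 1e-12) ** 3
    return v, kappa, v * kappa ** (1.0 / 3.0)

# Textbook test case: an ellipse traversed at constant angular rate obeys the law exactly.
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
x, y = 3.0 * np.cos(t), 1.0 * np.sin(t)
v, kappa, ea = speed_curvature(x, y, dt=t[1] - t[0])
print(np.std(ea) / np.mean(ea))   # near-zero coefficient of variation: equi-affine speed is constant
```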

    The CNS Stochastically Selects Motor Plan Utilizing Extrinsic and Intrinsic Representations

    Traditionally, motor studies have assumed that motor tasks are executed according to a single plan characterized by regular patterns, which corresponds to the minimum of a cost function in extrinsic or intrinsic coordinates. However, the novel via-point task examined in this paper shows distinct planning and execution stages in motion production and demonstrates that subjects randomly select from several available motor plans to perform a task. Examination of the effect of pre-training and via-point orientation on subject behavior reveals that the selection of a plan depends on previous movements and is affected by constraints both intrinsic and extrinsic to the body. These results provide new insights into the hierarchical structure of motion planning in humans, which can only be explained if current models of motor control integrate an explicit plan selection stage.
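
    Purely as an illustration of what an explicit, history-dependent plan-selection stage could look like (a toy model, not the formulation used in this paper): candidate plans carry costs defined over intrinsic and extrinsic coordinates, and one plan is sampled stochastically with a bias toward the previously chosen plan:

```python
import numpy as np

def select_plan(costs, prev=None, beta=2.0, stickiness=0.5, rng=None):
    """Sample a plan index from softmax(-beta * cost), with an optional bias
    toward the previously selected plan. All parameters are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    utility = -beta * np.asarray(costs, dtype=float)
    if prev is not None:
        utility[prev] += stickiness          # history dependence: tendency to repeat
    p = np.exp(utility - utility.max())
    p /= p.sum()
    return int(rng.choice(len(costs), p=p)), p

# Two hypothetical via-point plans with nearly equal combined intrinsic/extrinsic cost.
costs = [1.00, 1.05]
idx, p = select_plan(costs, prev=0, rng=np.random.default_rng(1))
print(idx, np.round(p, 2))   # the choice is stochastic rather than a single fixed optimum
```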

    Automatic Recognition of Affective Body Movement in a Video Game Scenario

    This study aims at recognizing the affective states of players from non-acted, non-repeated body movements in the context of a video game scenario. A motion capture system was used to collect the movements of the participants while playing a Nintendo Wii tennis game. A combination of body movement features and a machine learning technique was then used to automatically recognize emotional states from body movements. Our system was then tested for its ability to generalize to new participants and to new body motion data using a sub-sampling validation technique. To train and evaluate our system, online evaluation surveys were created using the body movements collected from the motion capture system, and human observers were recruited to classify them into affective categories. The results showed that observer agreement levels were above chance level and that the automatic recognition system achieved recognition rates comparable to the observers' benchmark.
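
    A hedged sketch of the kind of pipeline described above: per-clip movement features, an off-the-shelf classifier, and validation that holds out whole participants so the system is scored on people it has never seen. The features, labels, and participant grouping below are random placeholders, not the study's data or recognition rates:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 120 clips x 12 movement features (e.g. joint-angle ranges,
# limb speeds), one of four affective labels per clip, and a participant id.
X = rng.normal(size=(120, 12))
y = rng.integers(0, 4, size=120)
groups = np.repeat(np.arange(10), 12)     # 10 participants, 12 clips each

# Hold out whole participants in each fold, mirroring the test of generalization
# to new participants described in the abstract.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=groups)
print(scores.mean())   # compare against chance (0.25 for four classes) and observer agreement
```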