
    Using humanoid robots to study human behavior

    Our understanding of human behavior advances as our humanoid robotics work progresses, and vice versa. This team's work focuses on trajectory formation and planning, learning from demonstration, oculomotor control and interactive behaviors. They are programming robotic behavior based on how we humans “program” behavior in, or train, each other.

    Perceiving animacy and arousal in transformed displays of human interaction

    When viewing a moving abstract stimulus, people tend to attribute social meaning and purpose to the movement. The classic work of Heider and Simmel [1] investigated how observers would describe the movement of simple geometric shapes (a circle, triangles, and a square) around a screen. A high proportion of participants reported seeing some form of purposeful interaction between the three abstract objects and described this interaction as a social encounter. Various papers have subsequently found similar results [2,3] and gone on to show that, as Heider and Simmel suggested, the phenomenon is due more to the relationship of the objects in space and time than to any particular object characteristic. The research of Tremoulet and Feldman [4] has shown that the percept of animacy may be elicited with a solitary moving object. They asked observers to rate the movement of a single dot or rectangle for whether it was under the influence of an external force or in control of its own motion. At mid-trajectory the shape would change speed, direction, or both. They found that shapes that either changed direction by more than 25 degrees from the original trajectory or changed speed were judged to be "more alive" than others. Further discussion and evidence of animacy with one or two small dots can be found in Gelman, Durgin and Kaufman [5]. Our aim was to study this phenomenon further using a different method of stimulus production. Previous methods for producing displays of animate objects have relied either on hand-crafted stimuli or on parametric variations of simple motion patterns. We aim to work towards a new automatic approach by taking actual human movements, transforming them into basic shapes, and exploring which motion properties need to be preserved to obtain animacy.
Though the phenomenon of animacy has been demonstrated for many years using a variety of displays, few specific criteria have been established for the essential characteristics of those displays. Part of this research is to establish which movements result in percepts of animacy and, in turn, to further the understanding of the essential characteristics of human movement and social interaction. In this paper we discuss two experiments in which we examine how different transformations of an original video of a dance influence the perception of animacy. We also examine reports of arousal (Experiment 1) and emotional engagement (Experiment 2).
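The direction-change criterion reported by Tremoulet and Feldman [4] can be sketched as a simple classifier. This is an illustrative sketch, not code from any of the studies; the function names and the decision rule (any speed change, or a heading change above roughly 25 degrees, predicts higher animacy ratings) are our paraphrase of the abstract.

```python
import math

def heading_change_deg(p0, p1, p2):
    """Angle in degrees between the segments p0->p1 and p1->p2."""
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p1[0], p2[1] - p1[1])
    d = math.degrees(math.atan2(v2[1], v2[0]) - math.atan2(v1[1], v1[0]))
    return abs((d + 180) % 360 - 180)  # wrap to [0, 180]

def judged_more_alive(path, speed_before, speed_after, threshold_deg=25.0):
    """Apply the two cues from the abstract: a mid-trajectory direction
    change greater than ~25 degrees, or any speed change, predicts that
    observers rate the shape as "more alive"."""
    return (heading_change_deg(*path) > threshold_deg
            or speed_before != speed_after)
```

A 45-degree turn at constant speed, or a straight path with a speed change, would both count as animate under this rule; a straight path at constant speed would not.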

    Enheduanna – A Manifesto of Falling: first demonstration of a live brain-computer cinema performance with multi-brain BCI interaction for one performer and two audience members

    The new commercial-grade Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs) have led to a phenomenal growth of applications across health, entertainment and the arts, while an increasing interest in multi-brain interaction has emerged. In the arts, there are already a number of works that involve the interaction of more than one participant through EEG-based BCIs. However, the field of live brain-computer cinema and mixed-media performances is rather new compared to installations and music performances that involve multi-brain BCIs. In this context, we present the particular challenges involved. We discuss Enheduanna – A Manifesto of Falling, the first demonstration of a live brain-computer cinema performance that enables real-time brain-activity interaction between one performer and two audience members, and we take a cognitive perspective on the implementation of a new passive multi-brain EEG-based BCI system to realise our creative concept. This article also presents preliminary results and future work.

    Perceived motion in structure from motion: Pointing responses to the axis of rotation

    We investigated the ability to match finger orientation to the direction of the axis of rotation in structure-from-motion displays. Preliminary experiments verified that subjects could accurately use the index finger to report direction. The remainder of the experiments studied the perception of the axis of rotation from full rotations of a group of discrete points, from the profiles of a rotating ellipsoid, and from two views of a group of discrete points. Subjects' responses were analyzed by decomposing the pointing responses into their slant and tilt components. Overall, the results indicated that subjects were sensitive to both slant and tilt. However, when the axis of rotation was near the viewing direction, subjects had difficulty reporting tilt with profiles and two views, and showed a large bias in their slant judgments with two views and full rotations. These results are not entirely consistent with theoretical predictions. The results, particularly for two views, suggest that humans use additional constraints in the recovery of structure from motion.
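The slant/tilt decomposition used to analyze the pointing responses can be sketched as follows. This is a minimal illustration under an assumed common convention (viewing direction along the z-axis; tilt taken modulo 180 degrees because a rotation axis is unsigned); the paper's exact conventions may differ.

```python
import math

def slant_tilt(axis):
    """Decompose a 3D axis direction into (slant, tilt) in degrees.
    Assumed convention: the viewing direction is the z-axis; slant is
    the angle between the axis and the viewing direction; tilt is the
    orientation of the axis's projection in the image plane."""
    x, y, z = axis
    n = math.sqrt(x * x + y * y + z * z)
    slant = math.degrees(math.acos(abs(z) / n))
    tilt = math.degrees(math.atan2(y, x)) % 180.0  # axes are unsigned
    return slant, tilt
```

Note that when the axis lies near the viewing direction (slant near zero), the in-plane projection shrinks and tilt becomes numerically ill-defined, which mirrors the difficulty subjects had reporting tilt in exactly that configuration.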

    A Compact Representation of Drawing Movements with Sequences of Parabolic Primitives

    Some studies suggest that complex arm movements in humans and monkeys may optimize several objective functions, while others claim that arm movements satisfy geometric constraints and are composed of elementary components. However, the ability to unify different constraints has remained an open question. The criterion for a maximally smooth (minimum-jerk) motion is satisfied by parabolic trajectories having constant equi-affine speed, which thus comply with the geometric constraint known as the two-thirds power law. Here we empirically test the hypothesis that parabolic segments provide a compact representation of spontaneous drawing movements. Monkey scribblings performed during a period of practice were recorded. Practiced hand paths could be approximated well by relatively long parabolic segments. Following practice, the orientations and spatial locations of the fitted parabolic segments could be drawn from only 2–4 clusters, and there was less discrepancy between the fitted parabolic segments and the executed paths. This enabled us to show that well-practiced spontaneous scribbling movements can be represented as sequences (“words”) of a small number of elementary parabolic primitives (“letters”). A movement primitive can be defined as a movement entity that cannot be intentionally stopped before its completion. We found that in a well-trained monkey a movement was usually decelerated after receiving a reward, but it stopped only after the completion of a sequence composed of several parabolic segments. Piecewise parabolic segments can be generated by applying affine geometric transformations to a single parabolic template. Thus, complex movements might be constructed by applying sequences of suitable geometric transformations to a few templates.
Our findings therefore suggest that the motor system aims at achieving more parsimonious internal representations through practice, that parabolas serve as geometric primitives, and that non-Euclidean variables are employed in internal movement representations (due to the special role of parabolas in equi-affine geometry).
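The link between parabolas, constant equi-affine speed, and the two-thirds power law can be verified numerically. For the parabola (t, t²), the parameter t is already an equi-affine arc-length parameter up to scale (x'y'' − y'x'' = 2 is constant), so the power-law quantity v·κ^(1/3) should be constant along the curve. A minimal sketch, our own illustration rather than the paper's code:

```python
import math

def power_law_constant(t):
    """For the parabola (t, t^2) traversed with its natural parameter t
    (constant equi-affine speed), return v * kappa^(1/3). The two-thirds
    power law predicts this product is the same at every t."""
    x_d, y_d = 1.0, 2.0 * t      # first derivatives of (t, t^2)
    x_dd, y_dd = 0.0, 2.0        # second derivatives
    v = math.hypot(x_d, y_d)                      # Euclidean speed
    kappa = abs(x_d * y_dd - y_d * x_dd) / v**3   # planar curvature
    return v * kappa ** (1.0 / 3.0)
```

At every point of the parabola the product evaluates to 2^(1/3), illustrating why constant equi-affine speed implies the two-thirds power law on this curve.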

    The CNS Stochastically Selects Motor Plan Utilizing Extrinsic and Intrinsic Representations

    Traditionally, motor studies have assumed that motor tasks are executed according to a single plan characterized by regular patterns, which corresponds to the minimum of a cost function in extrinsic or intrinsic coordinates. However, the novel via-point task examined in this paper shows distinct planning and execution stages in motion production and demonstrates that subjects randomly select from several available motor plans to perform a task. Examination of the effects of pre-training and via-point orientation on subject behavior reveals that the selection of a plan depends on previous movements and is affected by constraints both intrinsic and extrinsic to the body. These results provide new insights into the hierarchical structure of motion planning in humans, which can only be explained if current models of motor control integrate an explicit plan-selection stage.

    Four-Day-Old Human Neonates Look Longer at Non-Biological Motions of a Single Point-of-Light

    BACKGROUND: Biological motions, that is, the movements of humans and other vertebrates, are characterized by dynamic regularities that reflect the structure and the control schemes of the musculo-skeletal system. Early studies on the development of the visual perception of biological motion showed that infants after three months of age distinguished between biological and non-biological locomotion. METHODOLOGY/PRINCIPAL FINDINGS: Using single point-light motions that varied with respect to the “two-thirds power law” of motion generation and perception, we observed that four-day-old human neonates looked longer at non-biological motions than at biological motions when these were simultaneously presented in a standard preferential looking paradigm. CONCLUSION/SIGNIFICANCE: This result can be interpreted within the “violation of expectation” framework and indicates that neonates' motion perception, like adults', is attuned to biological kinematics.
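Stimuli of the kind described, a single point moving either in accordance with or in violation of the two-thirds power law, can be sketched by assigning speeds along a fixed path. The sketch below is illustrative only (the elliptical path and parameter names are our assumptions, not the study's stimulus code): the "biological" profile sets speed proportional to κ^(−1/3), while the non-biological control moves at constant speed.

```python
import math

def ellipse_speeds(a=2.0, b=1.0, n=200, biological=True):
    """Speed profile for a point moving along the ellipse
    (a*cos u, b*sin u). The biological profile follows the two-thirds
    power law v = kappa^(-1/3) (gain factor omitted); the control is
    constant-speed. Illustrative sketch, not the paper's code."""
    speeds = []
    for i in range(n):
        u = 2 * math.pi * i / n
        # closed-form curvature of an ellipse at parameter u
        kappa = (a * b) / (a * a * math.sin(u) ** 2
                           + b * b * math.cos(u) ** 2) ** 1.5
        speeds.append(kappa ** (-1.0 / 3.0) if biological else 1.0)
    return speeds
```

The biological profile slows down in the high-curvature regions near the ends of the major axis, which is the kinematic regularity the law captures; the control profile violates it by moving uniformly.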

    How Bodies and Voices Interact in Early Emotion Perception

    Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously. Yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration, following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli. This was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. While larger differences in suppression between audiovisual and audio stimuli under high compared to low noise levels were found for emotional stimuli, no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests a modulation of integration by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
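The inverse effectiveness principle states that multisensory gain is largest where the unisensory response is weakest, for example under high acoustic noise. A toy check of this prediction on summary response values (illustrative only; the function and data format are our assumptions, not the study's analysis pipeline):

```python
def inverse_effectiveness(responses):
    """responses: dict mapping condition name -> (unisensory, multisensory)
    response magnitudes. Returns True if relative multisensory gain
    (multi - uni) / uni shrinks as the unisensory response grows, as the
    inverse effectiveness principle predicts."""
    gains = sorted((uni, (multi - uni) / uni)
                   for uni, multi in responses.values())
    # gains are ordered by unisensory strength; check the gain decreases
    return all(g1 >= g2 for (_, g1), (_, g2) in zip(gains, gains[1:]))
```

For example, a weak audio-only response of 1.0 boosted to 1.6 audiovisually (60% gain) versus a strong response of 4.0 boosted to 4.8 (20% gain) would satisfy the principle.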

    Variation in the Meaning of Alarm Calls in Verreaux’s and Coquerel’s Sifakas (Propithecus verreauxi, P. coquereli)

    The comprehension and usage of primate alarm calls appear to be influenced by social learning. Thus, alarm calls provide flexible behavioral mechanisms that may allow animals to develop appropriate responses to locally present predators. To study this potential flexibility, we compared the usage and function of 3 alarm calls common to 2 closely related sifaka species (Propithecus verreauxi and P. coquereli), in each of 2 different populations with different sets of predators. Playback studies revealed that both species in both of their respective populations emitted roaring barks in response to raptors, and playbacks of this call elicited a specific anti-raptor response (look up and climb down). However, in Verreaux’s sifakas, tchi-faks elicited anti-terrestrial predator responses (look down, climb up) in the population with a higher potential predation threat by terrestrial predators, whereas tchi-faks in the other population were associated with nonspecific flight responses. In both populations of Coquerel’s sifakas, tchi-fak playbacks elicited anti-terrestrial predator responses. More strikingly, Verreaux’s sifakas exhibited anti-terrestrial predator responses after playbacks of growls in the population with a higher threat of predation by terrestrial predators, whereas Coquerel’s sifakas in the raptor-dominated habitat seemed to associate growls with a threat by raptors; the 2 other populations of each species associated a mild disturbance with growls. We interpret this differential comprehension and usage of alarm calls as the result of social learning processes that caused changes in signal content in response to changes in the set of predators to which these populations have been exposed since they last shared a common ancestor.

    Movement Timing and Invariance Arise from Several Geometries

    Human movements show several prominent features: movement duration is nearly independent of movement size (the isochrony principle); instantaneous speed depends on movement curvature (captured by the 2/3 power law); and complex movements are composed of simpler elements (movement compositionality). No existing theory can successfully account for all of these features, and the nature of the underlying motion primitives is still unknown. Also unknown is how the brain selects movement duration. Here we present a new theory of movement timing based on geometrical invariance. We propose that movement duration and compositionality arise from cooperation among Euclidean, equi-affine and full affine geometries. Each geometry possesses a canonical measure of distance along curves, an invariant arc-length parameter. We suggest that for continuous movements, the actual movement duration reflects a particular tensorial mixture of these canonical parameters. Near geometrical singularities, specific combinations are selected to compensate for time expansion or compression in individual parameters. The theory was mathematically formulated using Cartan's moving frame method. Its predictions were tested on three data sets: drawings of elliptical curves, locomotion, and drawing trajectories of complex figural forms (cloverleaves, lemniscates and limaçons with varying ratios between the sizes of the large and small loops). Our theory accounted well for the kinematic and temporal features of these movements, in most cases better than the constrained minimum-jerk model, even when taking into account the number of estimated free parameters. During both drawing and locomotion, equi-affine geometry was the most dominant; affine geometry was second most important during drawing, and Euclidean geometry was second most important during locomotion.
We further discuss the implications of this theory: the origin of the dominance of equi-affine geometry, the possibility that the brain uses different mixtures of these geometries to encode movement duration and speed, and the ontogeny of such representations.
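Two of the canonical arc-length parameters can be computed numerically for a sampled planar curve. A rough finite-difference sketch (our own illustration, not the paper's method) of the Euclidean arc length ∫ |r'| dt and the equi-affine arc length ∫ |x'y'' − y'x''|^(1/3) dt:

```python
import math

def arc_lengths(points):
    """Euclidean and equi-affine arc length of a planar sampled curve,
    approximated with central finite differences over the sample index.
    Endpoints are skipped, so the result is slightly short for open or
    coarsely sampled curves."""
    euclid = sigma = 0.0
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        xd, yd = (x2 - x0) / 2.0, (y2 - y0) / 2.0    # first derivatives
        xdd, ydd = x2 - 2 * x1 + x0, y2 - 2 * y1 + y0  # second derivatives
        euclid += math.hypot(xd, yd)                     # d(Euclidean length)
        sigma += abs(xd * ydd - yd * xdd) ** (1.0 / 3.0)  # d(equi-affine length)
    return euclid, sigma
```

For a densely sampled unit circle both lengths come out near 2π, since on a unit circle the Euclidean and equi-affine parametrizations coincide; on less symmetric curves, such as the figural forms above, the two measures diverge, which is what makes their mixture informative about timing.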