    Implicit social associations for geometric-shape agents more strongly influenced by visual form than by explicitly identified social actions

    Studies of infants' and adults' social cognition frequently use geometric-shape agents such as coloured squares and circles, but the influence of the agents' visual form on social cognition has received little investigation. Here, although adults gave accurate explicit descriptions of interactions between geometric-shape aggressors and victims, implicit association tests for dominance and valence did not detect tendencies to encode the shapes' social attributes at an implicit level. With regard to valence, the absence of any systematic implicit associations precludes conclusive interpretation. With regard to dominance, participants implicitly associated a yellow square with greater dominance than a blue circle, even when the true relationship was the reverse and participants described it correctly in their explicit reports. Therefore, although explicit dominance judgements were strongly influenced by observed behaviour, implicit dominance associations were more clearly influenced by preconceived associations between visual form and social characteristics. This study represents a cautionary tale for those conducting experiments with geometric-shape agents.
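
    A methodological aside: implicit association tests of the kind used above are conventionally scored with a D measure, i.e. the difference in mean response latency between incongruent and congruent pairing blocks divided by the pooled standard deviation of the retained trials. A minimal sketch in Python (the trimming bounds and example latencies are illustrative assumptions, not values from this study):

        import statistics

        def iat_d_score(congruent_rts, incongruent_rts, min_rt=0.3, max_rt=10.0):
            """Greenwald-style D score: (mean incongruent RT - mean congruent RT)
            divided by the pooled SD. Trials outside [min_rt, max_rt] seconds are
            dropped -- an illustrative trimming rule, not the study's own."""
            con = [rt for rt in congruent_rts if min_rt <= rt <= max_rt]
            inc = [rt for rt in incongruent_rts if min_rt <= rt <= max_rt]
            pooled_sd = statistics.stdev(con + inc)
            return (statistics.mean(inc) - statistics.mean(con)) / pooled_sd

        # Positive D: faster when, e.g., the yellow square is paired with
        # "dominant", i.e. an implicit square-dominance association.
        d = iat_d_score([0.62, 0.58, 0.71, 0.66], [0.84, 0.79, 0.91, 0.88])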

    Effect of Pictorial Depth Cues, Binocular Disparity Cues and Motion Parallax Depth Cues on Lightness Perception in Three-Dimensional Virtual Scenes

    Surface lightness perception is affected by scene interpretation. There is some experimental evidence that perceived lightness under bi-ocular viewing conditions differs from perceived lightness in actual scenes, but there are also reports that viewing conditions have little or no effect on perceived color. We investigated how mixes of depth cues affect the perception of lightness in three-dimensional rendered scenes containing strong gradients of illumination in depth. Observers viewed a virtual room (4 m width x 5 m height x 17.5 m depth) with checkerboard walls and floor. In four conditions, the room was presented with or without binocular disparity (BD) depth cues and with or without motion parallax (MP) depth cues. In all conditions, observers were asked to adjust the luminance of a comparison surface to match the lightness of test surfaces placed at seven different depths (8.5-17.5 m) in the scene. We estimated lightness-versus-depth profiles in all four depth-cue conditions. Even when observers had only pictorial depth cues (no MP, no BD), they partially but significantly discounted the illumination gradient in judging lightness. Adding either MP or BD led to significantly greater discounting, and both cues together produced the greatest discounting. The effects of MP and BD were approximately additive, and BD had greater influence at near distances than at far ones. These results suggest that surface lightness perception is modulated by three-dimensional perception/interpretation using pictorial, binocular-disparity, and motion-parallax cues additively. We propose a two-stage (2D and 3D) processing model for lightness perception.
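
    The "discounting" reported above can be quantified by locating each observer's comparison setting between two reference points: a pure luminance match (no discounting of the illumination gradient) and a full lightness-constancy match. A hedged sketch of such a normalized index (the function and all example values are illustrative, not the paper's exact metric):

        def discounting_index(setting, luminance_match, constancy_match):
            """0 = observer matched raw luminance (no discounting);
            1 = observer fully discounted the illumination gradient.
            Hypothetical normalization, for illustration only."""
            return (setting - luminance_match) / (constancy_match - luminance_match)

        # Example: a far test surface sends 5 cd/m^2 to the eye under the dim
        # far illumination, but a full-constancy match would be 20 cd/m^2.
        # An observer setting of 14 cd/m^2 implies partial discounting:
        idx = discounting_index(14.0, 5.0, 20.0)  # 0.6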

    Effects of self-avatar cast shadow and foot vibration on telepresence, virtual walking experience, and cybersickness from omnidirectional movie

    Human locomotion is most naturally achieved through walking, which benefits both mental and physical health. To provide a virtual walking experience to seated users, a system utilizing foot vibrations and simulated optical flow was developed. The current study sought to augment this system and examine the effects of an avatar's cast shadow and of foot vibrations on the virtual walking experience and cybersickness. The omnidirectional movie and the avatar's walking animation were synchronized, with the cast shadow reflecting the avatar's movement on the ground. Twenty participants experienced the virtual walking in six conditions (with/without foot vibrations and no/short/long shadow) and were asked to rate their sense of telepresence, their walking experience, and occurrences of cybersickness. Our findings indicate that synchronized foot vibrations enhanced telepresence as well as self-motion, walking, and leg-action sensations, while also reducing instances of nausea and disorientation sickness. The avatar's cast shadow improved telepresence and leg-action sensation but had no impact on self-motion or walking sensation. These results suggest that observing the self-body's cast shadow does not directly improve walking sensation but is effective in enhancing telepresence and leg-action sensation, whereas foot vibrations are effective in improving telepresence and the walking experience and in reducing cybersickness.

    Perceiving Direction of a Walker: Effect of Body Appearance

    Humans can perceive others' walking direction accurately even with a 117 ms observation (Sato et al., ECVP 2008). We aimed to determine whether the appearance of a walker's body affects the accuracy of perceiving the walker's direction. We therefore employed three different appearances: a realistic human computer-graphics body (CG-human), a nonrealistic cylinder-assembled body (Cylinders), and a point-light walker (Points). We made a three-dimensional model of an adult-size walker who walked in place. CG-human stimuli were generated by rendering the model with smooth shading. We made Cylinders stimuli by replacing body parts such as the arms, legs, head, and hands with cylinders. Points stimuli were made by tracking 18 positions (mostly joints) of the body, as in biological-motion displays. One of the walkers was presented for 117, 250, 500, or 1000 ms while its direction varied randomly in 3 deg steps up to 21 deg left or right. Observers judged whether the walker was walking toward them (hit) or not (miss), and the self-range was measured as the standard deviation of the hit distribution. The perceived self-range narrowed with longer durations and with the CG-human stimulus. This suggests that the accuracy of perceiving a walker's direction depends on body appearance and is higher for a human-like body than for a nonhuman body.
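
    The "self-range" measure described above can be recovered by fitting a Gaussian to the proportion of "walking toward me" (hit) responses across walker directions and taking the fitted standard deviation. A minimal sketch (the response proportions below are made-up numbers for illustration, not data from the study):

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(direction, amplitude, mean, sd):
            return amplitude * np.exp(-0.5 * ((direction - mean) / sd) ** 2)

        # Walker directions in degrees (negative = left) in 3-deg steps, and
        # the hit rate at each direction (toy values).
        directions = np.arange(-21.0, 22.0, 3.0)
        hit_rate = np.array([0.02, 0.05, 0.10, 0.30, 0.55, 0.80, 0.95,
                             1.00, 0.93, 0.78, 0.50, 0.28, 0.12, 0.04, 0.02])

        params, _ = curve_fit(gaussian, directions, hit_rate, p0=[1.0, 0.0, 8.0])
        self_range_sd = params[2]  # SD of the fitted hit distribution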

    Effect of connection induced upper body movements on embodiment towards a limb controlled by another during virtual co-embodiment.

    Even if we cannot control them, or when we receive no tactile or proprioceptive feedback from them, limbs attached to our bodies can still provide indirect proprioceptive and haptic stimulation to the body parts they are attached to, simply due to the physical connection. In this study we investigated whether such indirect movement and haptic feedback from a limb contributes to a feeling of embodiment towards it. To investigate this issue, we developed a 'Joint Avatar' setup in which two individuals were each given full control over the limbs on one side (left or right) of an avatar during a reaching task. The backs of the two individuals were connected with a pair of solid braces through which they could exchange forces and match their upper-body postures with one another. Coupled with the first-person view, this simulated an experience of the upper body being synchronously dragged by the partner-controlled virtual arm when it moved. We observed that this passive synchronized upper-body movement significantly reduced the feeling that the partner-controlled limb was owned or controlled by another. In summary, our results suggest that even in the total absence of control, connection-induced upper-body movements synchronized with the visible limb movements can positively affect the sense of embodiment towards partner-controlled or autonomous limbs.

    Great apes’ understanding of biomechanics: eye-tracking experiments using three-dimensional computer-generated animations

    Visual processing of the body movements of other animals is important for adaptive animal behaviors. It is widely known that animals can distinguish articulated animal movements even when these are represented only by points of light, such that solely biological-motion information is retained. However, the extent to which nonhuman great apes comprehend the underlying structural and physiological constraints on each moving body part, i.e., biomechanics, is still unclear. To address this, we examined the understanding of biomechanics in bonobos (Pan paniscus) and chimpanzees (Pan troglodytes), following a previous study on humans (Homo sapiens). Apes underwent eye tracking while viewing three-dimensional computer-generated (CG) animations of biomechanically possible or impossible elbow movements performed by a human, a robot, or a nonhuman ape. Overall, apes did not differentiate their gaze between possible and impossible elbow movements. However, some apes looked at the elbows for longer when viewing impossible versus possible robot movements, which indicates that they may have had knowledge of biomechanics and that this knowledge could be extended to a novel agent. These mixed results make it difficult to draw a firm conclusion about the extent to which apes understand biomechanics. We discuss methodological features that may be responsible for the results, as well as implications for future nonhuman animal studies involving the presentation of CG animations or the measurement of gaze behavior.
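
    In standard eye-tracking terms, the gaze measure above is dwell time within an area of interest (AOI) around the elbow, compared between possible and impossible movements. A minimal sketch of that comparison (the circular AOI, sample interval, paired test, and all numbers are illustrative assumptions, not the paper's pipeline):

        import numpy as np
        from scipy.stats import wilcoxon

        def dwell_time(gaze_xy, aoi_center, aoi_radius, sample_dt):
            """Total time (s) that gaze samples fall inside a circular AOI,
            e.g. a region around the elbow. gaze_xy: (n_samples, 2) coords."""
            dist = np.linalg.norm(gaze_xy - np.asarray(aoi_center), axis=1)
            return np.count_nonzero(dist <= aoi_radius) * sample_dt

        # Per-ape elbow dwell times (s) in each condition -- toy numbers.
        possible = np.array([1.2, 0.9, 1.4, 1.1, 0.8])
        impossible = np.array([1.5, 1.0, 1.9, 1.3, 0.9])
        stat, p = wilcoxon(impossible, possible)  # paired, nonparametric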

    Measuring empathy for human and robot hand pain using electroencephalography

    This study provides the first physiological evidence of humans' ability to empathize with robot pain and highlights the difference in empathy for humans and robots. We performed electroencephalography in 15 healthy adults who observed pictures of either human or robot hands in painful or non-painful situations, such as a finger being cut by a knife. We found that the descending phase of the P3 component was larger for painful stimuli than for non-painful stimuli, regardless of whether the hand belonged to a human or a robot. In contrast, the ascending phase of the P3 component at the frontal-central electrodes was increased by painful human stimuli but not by painful robot stimuli, although this ANOVA interaction was marginal rather than significant. These results suggest that we empathize with humanoid robots in late top-down processing similarly to how we empathize with other humans. However, the beginning of the top-down process of empathy is weaker for robots than for humans.
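
    ERP effects like the P3 phases above are typically quantified as mean amplitudes within latency windows of baseline-corrected epochs. A minimal sketch (the window bounds, sampling, and random placeholder data are illustrative assumptions, not the paper's exact parameters):

        import numpy as np

        def window_mean(epochs, times, t_start, t_end):
            """Mean amplitude over a latency window.
            epochs: (n_trials, n_samples) baseline-corrected data for one
            electrode; times: (n_samples,) latencies in seconds."""
            mask = (times >= t_start) & (times < t_end)
            return epochs[:, mask].mean()

        times = np.linspace(-0.2, 0.8, 501)       # demo time axis
        epochs = np.random.randn(40, times.size)  # placeholder ERP epochs
        p3_ascending = window_mean(epochs, times, 0.30, 0.40)   # assumed window
        p3_descending = window_mean(epochs, times, 0.40, 0.60)  # assumed window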

    Two-stage model for material property perception

    The first stage (2D) makes use of pictorial cues, including simple statistics such as image moments, to estimate material properties including lightness. The second stage (3D) makes use of available depth cues to estimate the scene layout and the light field, and further corrects the material-property estimates based on those scene-layout and light-field estimates. See text.
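
    Read as a computation, the caption describes a pipeline of roughly the following shape; this sketch is only a schematic reading of the figure, and every function and value in it is a hypothetical stand-in:

        import numpy as np

        def stage1_lightness(image):
            """2D stage: estimate lightness from simple pictorial statistics
            (here just mean luminance, a crude stand-in for image moments)."""
            return float(np.mean(image))

        def stage2_correct(estimate, illumination):
            """3D stage: discount the illumination inferred from depth cues
            and the estimated light field (hypothetical divisive correction)."""
            return estimate / illumination

        patch = np.full((16, 16), 5.0)  # luminance image of a far test patch
        lightness_2d = stage1_lightness(patch)
        lightness_3d = stage2_correct(lightness_2d, illumination=0.25)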

    Depth Cue Effects without Pictorial Cues.

    Motion parallax and binocular disparity. The effect of motion parallax alone, binocular disparity alone, and both motion parallax and binocular disparity together on luminance settings, with the effect of pictorial cues subtracted.