
    Interactive Gaming Reduces Experimental Pain With or Without a Head Mounted Display

    While virtual reality environments have been shown to reduce pain, the precise mechanism that produces the pain-attenuating effect has not been established. It has been suggested that the effect may stem from the ability of head-mounted displays (HMDs) to command attentional resources, or from the interactivity of the environment. Two experiments compared participants’ pain ratings for high and low levels of electrical stimulation while they engaged in interactive gaming with an HMD. In the first experiment, gaming with the HMD was compared to a positive emotion induction condition; in the second, the HMD was compared to a condition in which the game was projected onto a wall. Interactive gaming significantly reduced numerical ratings of painful stimuli compared to the baseline and affect conditions. However, when the two gaming conditions were directly compared, they reduced participants’ pain ratings equally. These data are consistent with past research showing that interactive gaming can attenuate experimentally induced pain, and that its effects are comparable whether the game is presented in a head-mounted display or projected onto a wall.

    Usability and acceptability assessment of an empathic virtual agent to prevent major depression

    In human-computer interaction, adapting both the content and the way that content is communicated to users in interactive sessions is critical to promoting the acceptability and usability of any computational system. We present a user-adapted interactive platform to identify and provide an early intervention for symptoms of depression and suicide. In particular, we describe the work performed to assess the system's acceptability and usability among users. An empathic Virtual Agent is the main interface with the user, and it has been designed to generate appropriate dialogues and emotions during interactions according to the user's detected needs. This personalization is based on a dynamic user model nurtured with clinical, demographic and behavioural information. The evaluation was performed with 60 participants from the university community. The results were promising, allowing the execution of a further clinical trial. The system's usability score was 75.7%, and the score for the user-adapted content and the emotional responses of the Virtual Agent was 70.9%.

    The work presented in this manuscript has been partially funded by the Conselleria de Sanidad of Generalitat Valenciana, through the research project 'Sistema computacional de ayuda a la prevencion de episodios de depresion y suicidio - PREVENDEP' (a computational system to support the prevention of episodes of depression and suicide). We thank the company Faceshift (www.faceshift.com) for providing their software to perform facial motion capture in order to develop the talking head that represents our empathic virtual agent.

    Bresó Guardado, A.; Martinez-Miranda, J.; Botella Arbona, C.; Baños Rivera, R.M.; García Gómez, J.M. (2016). Usability and acceptability assessment of an empathic virtual agent to prevent major depression. Expert Systems, 33(4), 297-312. doi:10.1111/exsy.12151
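    The abstract above describes a dynamic user model that feeds clinical, demographic, and behavioural information into the agent's choice of dialogue and emotion. The following is a rough illustration only, a hypothetical sketch and not the PREVENDEP implementation: the PHQ-9 score, the engagement measure, and the response labels are all assumptions.

    ```python
    # Hypothetical sketch (not the PREVENDEP implementation): a dynamic user model
    # that combines clinical, demographic, and behavioural signals to pick the
    # virtual agent's dialogue strategy and displayed emotion.
    from dataclasses import dataclass, field

    @dataclass
    class UserModel:
        phq9_score: int            # clinical: depression screening score (0-27), assumed instrument
        age: int                   # demographic
        session_engagement: float  # behavioural: 0.0 (disengaged) .. 1.0 (engaged)
        history: list = field(default_factory=list)

        def update(self, phq9_score=None, engagement=None):
            """Refresh the model after each interaction."""
            if phq9_score is not None:
                self.phq9_score = phq9_score
            if engagement is not None:
                self.session_engagement = engagement
            self.history.append((self.phq9_score, self.session_engagement))

    def choose_agent_response(model: UserModel) -> dict:
        """Map the current user state to a dialogue style and an agent emotion."""
        if model.phq9_score >= 15:
            return {"dialogue": "supportive_referral", "emotion": "concern"}
        if model.session_engagement < 0.3:
            return {"dialogue": "short_prompts", "emotion": "warm_encouragement"}
        return {"dialogue": "reflective_listening", "emotion": "empathic_neutral"}

    if __name__ == "__main__":
        user = UserModel(phq9_score=9, age=21, session_engagement=0.8)
        user.update(engagement=0.2)
        print(choose_agent_response(user))  # low engagement -> short prompts, warm encouragement
    ```

    Keeping the stored user state separate from the rule that maps it to an agent response makes it straightforward to refine the dialogue policy as further clinical or behavioural signals become available.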

    Talk to the Virtual Hands: Self-Animated Avatars Improve Communication in Head-Mounted Display Virtual Environments

    Background: When we talk to one another face-to-face, body gestures accompany our speech. Motion tracking technology enables us to include body gestures in avatar-mediated communication by mapping one's movements onto one's own 3D avatar in real time, so that the avatar is self-animated. We conducted two experiments to investigate (a) whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication; and (b) whether body gestures are used to help communicate the meaning of a word. Participants worked in pairs and played a communication game in which one person had to describe the meanings of words to the other.

    Principal Findings: In experiment 1, participants used significantly more hand gestures and successfully described significantly more words when nonverbal communication was available to both participants (i.e. both describing and guessing avatars were self-animated, compared with both avatars in a static neutral pose). Participants ‘passed’ (gave up describing) significantly more words when they were talking to a static avatar (no nonverbal feedback available). In experiment 2, participants' performance was significantly worse when they were talking to an avatar with a prerecorded listening animation, compared with an avatar animated by their partners' real movements. In both experiments participants used significantly more hand gestures when they played the game in the real world.

    Conclusions: Taken together, the studies show how (a) virtual reality can be used to systematically study the influence of body gestures; (b) it is important that nonverbal communication is bidirectional (real nonverbal feedback in addition to nonverbal communication from the describing participant); and (c) there are differences in the amount of body gestures that participants use with and without the head-mounted display; we discuss possible explanations for this and ideas for future investigation.
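    The self-animated avatars described above rest on a simple real-time retargeting loop: poll the motion tracker each frame and copy the tracked joint rotations onto the avatar's skeleton. The following is an illustrative sketch only, not the authors' pipeline; the joint names, quaternion format, and frame rate are assumptions.

    ```python
    # Illustrative sketch only (not the authors' pipeline): the core loop of a
    # self-animated avatar maps each tracked joint's rotation onto the matching
    # avatar bone every frame, so the avatar mirrors the user's gestures live.
    import time

    class TrackedSkeleton:
        """Stand-in for a motion-capture source; returns joint rotations as quaternions."""
        JOINTS = ["head", "left_hand", "right_hand", "spine"]

        def poll(self) -> dict:
            # A real tracker would return measured rotations; identity quaternions here.
            return {joint: (0.0, 0.0, 0.0, 1.0) for joint in self.JOINTS}

    class Avatar:
        def __init__(self):
            self.bones = {joint: (0.0, 0.0, 0.0, 1.0) for joint in TrackedSkeleton.JOINTS}

        def apply_pose(self, rotations: dict):
            # Retarget the tracked rotations onto the avatar's bones.
            for joint, rotation in rotations.items():
                self.bones[joint] = rotation

    def run_self_animation(tracker: TrackedSkeleton, avatar: Avatar, frames: int = 3, fps: int = 60):
        """Per-frame retargeting loop: poll the tracker, apply the pose, wait for the next frame."""
        for _ in range(frames):
            avatar.apply_pose(tracker.poll())
            time.sleep(1.0 / fps)

    if __name__ == "__main__":
        run_self_animation(TrackedSkeleton(), Avatar())
    ```

    A networked version of this loop would additionally stream each pose to the partner's machine, so that nonverbal feedback flows in both directions as in the experiments.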

    Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social

    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is through a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in such a way that they are perceived as intentional agents that activate areas of the human brain involved in social-cognitive processing. We first review literature on the social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents in activating these social brain areas. We then discuss how the attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and (b) enhancing performance on joint human–robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.

    Can a Virtual Cat Persuade You? The Role of Gender and Realism in Speaker Persuasiveness

    This study examines the roles of gender and visual realism in the persuasiveness of speakers. Participants were presented with a persuasive passage delivered by a male or female person, virtual human, or virtual character. They were then assessed on attitude change and on their ratings of the argument, message, and speaker. The results indicated that the virtual speakers were as effective at changing attitudes as real people. Male participants were more persuaded when the speaker was female than when the speaker was male, whereas female participants were more persuaded when the speaker was male than when the speaker was female. These cross-gender interactions occurred across all conditions, suggesting that some of the gender stereotypes that occur with people may carry over to interaction with virtual characters. Ratings of the speaker were more favorable for virtual speakers than for human speakers. We discuss the application of these findings to the design of persuasive human-computer interfaces.