
    Motion Rail: A Virtual Reality Level Crossing Training Application

    This paper presents the development and usability testing of a Virtual Reality (VR) based system named 'Motion Rail' for training children on railway crossing safety. Children use a VR head-mounted device and a controller to navigate the VR environment and perform a level crossing task, receiving instant pass-or-fail feedback on a display within the VR environment. Five participants, two males and three females, took part in the usability test. The outcomes of the test were promising: the children were highly engaged and would like to adopt this training approach in future safety training.

    The Rocketbox Library and the Utility of Freely Available Rigged Avatars

    As part of the open sourcing of the Microsoft Rocketbox avatar library for research and academic purposes, we discuss the importance of rigged avatars for the Virtual and Augmented Reality (VR, AR) research community. Avatars, virtual representations of humans, are widely used in VR applications. Furthermore, many research areas ranging from crowd simulation to neuroscience, psychology, and sociology have used avatars to investigate new theories or to demonstrate how they influence human performance and interactions. We divide this paper into two main parts: the first gives an overview of the different methods available to create and animate avatars, covering the current main alternatives for face and body animation and introducing upcoming capture methods. The second part presents the scientific evidence for the utility of rigged avatars, both for embodiment and for applications such as crowd simulation and entertainment. All in all, this paper attempts to convey why rigged avatars will be key to the future of VR and its wide adoption.
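
    To make "rigged" concrete, the sketch below shows a minimal data structure for a skeleton-bound mesh with linear blend skinning. The class and field names are illustrative assumptions, not part of the Rocketbox library's actual format.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class Bone:
    """One joint in an avatar skeleton (names here are illustrative)."""
    name: str
    parent: int             # index of the parent bone, -1 for the root
    bind_local: np.ndarray  # 4x4 bind-pose transform relative to the parent

@dataclass
class RiggedAvatar:
    """A mesh bound to a skeleton through per-vertex skinning weights."""
    vertices: np.ndarray    # (V, 3) rest-pose vertex positions
    bones: list[Bone]
    weights: np.ndarray     # (V, B) skinning weights; each row sums to 1

    def skin(self, skinning_matrices: np.ndarray) -> np.ndarray:
        """Linear blend skinning. `skinning_matrices` is (B, 4, 4): each
        bone's current global transform composed with its inverse bind
        pose. Each vertex becomes a weight-blended mix of its position
        under every bone's transform."""
        homo = np.hstack([self.vertices, np.ones((len(self.vertices), 1))])
        # (B, V, 3): every vertex transformed by every bone
        per_bone = np.einsum('bij,vj->bvi', skinning_matrices, homo)[..., :3]
        # blend with per-vertex weights -> (V, 3) posed positions
        return np.einsum('vb,bvi->vi', self.weights, per_bone)
```

    Animating such an avatar then amounts to updating the bone transforms each frame, whether from keyframes or from motion capture, and re-skinning the mesh.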

    Talk to the Virtual Hands: Self-Animated Avatars Improve Communication in Head-Mounted Display Virtual Environments

    Background: When we talk to one another face-to-face, body gestures accompany our speech. Motion tracking technology enables us to include body gestures in avatar-mediated communication, by mapping one's movements onto one's own 3D avatar in real time, so the avatar is self-animated. We conducted two experiments to investigate (a) whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication; and (b) whether body gestures are used to help in communicating the meaning of a word. Participants worked in pairs and played a communication game, where one person had to describe the meanings of words to the other.

    Principal Findings: In experiment 1, participants used significantly more hand gestures and successfully described significantly more words when nonverbal communication was available to both participants (i.e. both describing and guessing avatars were self-animated, compared with both avatars in a static neutral pose). Participants ‘passed’ (gave up describing) significantly more words when they were talking to a static avatar (no nonverbal feedback available). In experiment 2, participants' performance was significantly worse when they were talking to an avatar with a prerecorded listening animation, compared with an avatar animated by their partner's real movements. In both experiments participants used significantly more hand gestures when they played the game in the real world.

    Conclusions: Taken together, the studies show how (a) virtual reality can be used to systematically study the influence of body gestures; (b) it is important that nonverbal communication is bidirectional (real nonverbal feedback in addition to nonverbal communication from the describing participant); and (c) there are differences in the amount of body gestures that participants use with and without the head-mounted display; we discuss possible explanations for this and ideas for future investigation.

    Social presence and dishonesty in retail

    Self-service checkouts (SCOs) in retail can benefit consumers and retailers, providing control and autonomy to shoppers independent of staff, together with reduced queuing times. Recent research indicates that the absence of staff may provide the opportunity for consumers to behave dishonestly, consistent with a perceived lack of social presence. This study examined whether a social presence, in the form of various instantiations of embodied, visual, humanlike SCO interface agents, had an effect on opportunistic behaviour. Using a simulated SCO scenario, participants experienced various dilemmas in which they could financially benefit themselves undeservedly. We hypothesised that a humanlike social presence integrated within the checkout screen would receive more attention and result in fewer instances of dishonesty than a less humanlike agent. This was partially supported by the results. The findings contribute to the theoretical framework in social presence research. We conclude that companies adopting self-service technology may consider implementing social presence in technology applications to support ethical consumer behaviour, but that more research is required to explore the mixed findings of the current study.

    An integration of enhanced social force and crowd control models for high-density crowd simulation

    The social force model is one of the well-known approaches that can simulate pedestrian movement realistically. However, it is not suitable for simulating high-density crowd movement realistically, because the model has only three basic crowd characteristics: goal, attraction, and repulsion. It therefore does not satisfy the high-density crowd condition, which is complex yet unique owing to its capacity, density, and the varied demographic backgrounds of the agents. This research proposes a model that improves the social force model by introducing four new characteristics, namely gender, walking speed, intention outlook, and grouping, to make simulations more realistic. In addition, a high-density crowd introduces irregular behaviours into the crowd flow, namely stopping motion within the crowd. To handle these scenarios, another model is proposed that controls each agent with two different states, walking and stopping, with the stopping behaviour further categorized into slow stops and sudden stops. Both proposed models were integrated to form a high-density crowd simulation framework, which was validated using the comparison method and the fundamental diagram method. Based on a simulation of 45,000 agents, the proposed framework yields a more accurate average walking speed (0.36 m/s) than the conventional social force model (0.61 m/s), with both results compared against real-world data (0.3267 m/s). The findings of this research will contribute to the simulation of pedestrians in highly dense populations.
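
    To make the baseline concrete, here is a minimal sketch of the classic social force update, a goal-directed driving force plus pairwise exponential repulsion, extended with a per-agent walking/stopping state in the spirit of the proposed crowd control model. All parameter values and names are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not the paper's values)
TAU = 0.5        # relaxation time toward the desired velocity (s)
A, B = 2.0, 0.3  # repulsion strength and fall-off distance
DT = 0.1         # simulation time step (s)

def social_force_step(pos, vel, goals, desired_speed, walking):
    """One Euler step of a basic social force model.

    pos, vel, goals : (N, 2) arrays
    desired_speed   : (N,) array of preferred speeds (m/s)
    walking         : (N,) array, 1.0 = walking state, 0.0 = stopping state
    """
    # Driving force: relax toward the goal direction at the desired speed.
    direction = goals - pos
    direction /= np.linalg.norm(direction, axis=1, keepdims=True) + 1e-9
    drive = (desired_speed[:, None] * direction - vel) / TAU

    # Pairwise repulsion with exponential fall-off in distance.
    diff = pos[:, None, :] - pos[None, :, :]    # (N, N, 2), points j -> i
    dist = np.linalg.norm(diff, axis=2) + 1e-9  # (N, N)
    push = (A * np.exp(-dist / B) / dist)[:, :, None] * diff
    repulsion = push.sum(axis=1)                # self-term is zero

    # Agents in the stopping state exert repulsion on others but do not
    # move, mimicking the two-state (walking/stopping) control idea.
    vel = (vel + (drive + repulsion) * DT) * walking[:, None]
    return pos + vel * DT, vel
```

    A full framework would add obstacle forces, grouping, and per-agent attributes such as gender and walking speed, as the abstract describes; the sketch only shows where such extensions would hook into the force update.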

    A Turing-Like Handshake Test for Motor Intelligence

    Abstract. In the Turing test, a computer model is deemed to “think intelligently” if it can generate answers that are not distinguishable from those of a human. This test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, with human hand movement being a sophisticated demonstration of this function. Therefore, we propose a Turing-like handshake test for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator holds a robotic stylus and interacts with another party (human, artificial, or a linear combination of the two). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a forced-choice method and ask which of two systems is more humanlike. By comparing a given model with a weighted sum of human and artificial systems, we fit a psychometric curve to the answers of the interrogator and extract a quantitative measure of the computer model's similarity to the human handshake.
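
    As an illustration of the analysis step, the sketch below fits a logistic psychometric function to hypothetical forced-choice data and reads off the point of subjective equality. The data values and the exact functional form are assumptions for illustration; the paper's actual human-likeness measure may be defined differently.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(human_weight, pse, slope):
    """Logistic curve: probability that a handshake with the given human
    weight is judged 'more humanlike' in a forced-choice comparison."""
    return 1.0 / (1.0 + np.exp(-(human_weight - pse) / slope))

# Hypothetical forced-choice data: the human weight of each stimulus and
# the fraction of trials on which it was chosen as more humanlike.
human_weight = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
p_more_human = np.array([0.08, 0.22, 0.44, 0.63, 0.86, 0.95])

(pse, slope), _ = curve_fit(psychometric, human_weight, p_more_human,
                            p0=[0.5, 0.1])

# The point of subjective equality (PSE) is one way to summarize the
# curve: the human weight at which the stimulus is chosen half the time.
print(f"PSE = {pse:.2f}, slope = {slope:.2f}")
```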

    Judgment of the Humanness of an Interlocutor Is in the Eye of the Beholder

    Despite tremendous advances in artificial language synthesis, no machine has so far succeeded in deceiving a human. Most research has focused on analyzing the behavior of “good” machines. We here choose the opposite strategy, analyzing the behavior of “bad” humans, i.e., humans perceived as machines. The Loebner Prize in Artificial Intelligence features humans and artificial agents trying to convince judges of their humanness via computer-mediated communication. Using this setting as a model, we investigated whether the linguistic behavior of human subjects perceived as non-human would enable us to identify some of the core parameters involved in the judgment of an agent's humanness. We analyzed descriptive and semantic aspects of dialogues in which subjects succeeded or failed to convince judges of their humanness. Using cognitive and emotional dimensions in a global behavioral characterization, we demonstrate important differences in the patterns of behavioral expressiveness of the judges depending on whether they perceived their interlocutor as human or machine. Furthermore, the indicators of interest displayed by the judges were predictive of the final judgment of humanness. Thus, we show that the judgment of an interlocutor's humanness during a social interaction depends not only on the interlocutor's behavior, but also on the judge. Our results demonstrate that the judgment of humanness is in the eye of the beholder.