16 research outputs found

    Towards Learning Affective Body Gesture

    Robots are assuming an increasingly important role in our society. They now become pets and help support children's healing. In other words, they are now trying to engage in active and affective communication with human agents. However, up to now, such systems have primarily relied on the human agents' ability to empathize with the system. Changes in the behavior of the system could therefore result in changes of mood or behavior in the human partner. This paper describes experiments we carried out to study the importance of body language in affective communication. The results of the experiments led us to develop a system that can incrementally learn to recognize affective messages conveyed by body postures.
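    The incremental-learning idea the abstract describes can be illustrated with a minimal sketch: a nearest-centroid classifier whose per-class mean posture is updated one example at a time. The feature vectors and labels below are invented for illustration and are not the authors' actual features or algorithm.

    ```python
    # Minimal sketch of incrementally learning affect labels from posture
    # features (hypothetical features/labels, not the paper's method).
    import math

    class IncrementalPostureRecognizer:
        """Nearest-centroid classifier updated one example at a time."""
        def __init__(self):
            self.centroids = {}   # label -> running mean feature vector
            self.counts = {}      # label -> number of examples seen

        def learn(self, features, label):
            n = self.counts.get(label, 0)
            c = self.centroids.get(label, [0.0] * len(features))
            # Incremental mean update: c_new = c + (x - c) / (n + 1)
            self.centroids[label] = [ci + (xi - ci) / (n + 1)
                                     for ci, xi in zip(c, features)]
            self.counts[label] = n + 1

        def predict(self, features):
            def dist(c):
                return math.sqrt(sum((xi - ci) ** 2
                                     for xi, ci in zip(features, c)))
            return min(self.centroids, key=lambda lbl: dist(self.centroids[lbl]))

    rec = IncrementalPostureRecognizer()
    rec.learn([0.9, 0.1], "open/expansive")     # e.g. raised arms
    rec.learn([0.1, 0.8], "closed/contracted")  # e.g. slumped shoulders
    print(rec.predict([0.85, 0.2]))  # -> open/expansive
    ```

    The incremental mean update lets the system absorb each new labelled posture without retraining from scratch, which matches the "incrementally learn" claim in the abstract.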

    Non-representational Interaction Design


    Customizing by Doing for Responsive Video Game Characters

    This paper presents a game in which players can customize the behavior of their characters using their own movements while playing the game. Players’ movements are recorded with a motion capture system. The player then labels the movements and uses them as input to a machine learning algorithm that generates a responsive behavior model. This interface supports a more embodied approach to character design that we call “Customizing by Doing”. We present a user study showing that using their own movements made the users feel more engaged with the game and the design process, due in large part to a feeling of personal ownership of the movement.
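    The record-label-learn loop can be sketched as follows: each recorded clip is summarized by simple features, and a new in-game movement is matched to the nearest labelled example. The clip representation and 1-NN matching are illustrative assumptions, not the paper's actual model.

    ```python
    # Sketch of "Customizing by Doing": the player records clips, labels
    # them, and a simple learner maps new movements to those labels.
    # Features here are invented summary statistics, not the paper's model.
    import math

    def clip_features(frames):
        """Summarize a clip (list of (x, y) positions) as mean speed and extent."""
        speeds = [math.dist(a, b) for a, b in zip(frames, frames[1:])]
        xs = [p[0] for p in frames]
        ys = [p[1] for p in frames]
        return (sum(speeds) / max(len(speeds), 1),
                (max(xs) - min(xs)) + (max(ys) - min(ys)))

    def nearest_label(examples, frames):
        """1-nearest-neighbour over labelled example clips."""
        f = clip_features(frames)
        return min(examples, key=lambda ex: math.dist(f, clip_features(ex[1])))[0]

    # Player records and labels two movements...
    examples = [
        ("wave",   [(0, 0), (1, 2), (0, 4), (1, 6)]),     # fast, large
        ("crouch", [(0, 0), (0.1, -0.2), (0.1, -0.3)]),   # slow, small
    ]
    # ...then a new in-game movement is matched to the nearest labelled one.
    print(nearest_label(examples, [(0, 0), (1, 2.1), (0, 4.2)]))  # -> wave
    ```

    A character controller could then trigger the behavior associated with the matched label, so the character responds in the player's own movement vocabulary.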

    Bodily Expression of Social Initiation Behaviors in ASC and non-ASC children: Mixed Reality vs. LEGO Game Play

    This study is part of a larger project that showed the potential of our mixed reality (MR) system in fostering social initiation behaviors in children with Autism Spectrum Condition (ASC). We compared it to a typical social intervention strategy based on construction tools, where both mediated a face-to-face dyadic play session between an ASC child and a non-ASC child. In this study, our first goal is to show that an MR platform can be utilized to alter the nonverbal body behavior between ASC and non-ASC children during social interaction as much as a traditional therapy setting (LEGO). A second goal is to show how these body cues differ between ASC and non-ASC children during social initiation in these two platforms. We present our first analysis of the body cues generated under two conditions in a repeated-measures design. Body cue measurements were obtained through skeleton information and characterized in the form of spatio-temporal features from both subjects individually (e.g. distances between joints and velocities of joints), and interpersonally (e.g. proximity and visual focus of attention). We used machine learning techniques to analyze the visual data of eighteen trials of ASC and non-ASC dyads. Our experiments showed that: (i) there were differences between ASC and non-ASC bodily expressions, both at the individual and interpersonal level, in LEGO and in the MR system during social initiation; (ii) the number of features indicating differences between ASC and non-ASC children in terms of nonverbal behavior during initiation was higher in the MR system as compared to LEGO; and (iii) computational models evaluated with combinations of these features enabled the recognition of social initiation type (ASC or non-ASC) from body features in LEGO and in MR settings. We did not observe significant differences between the evaluated models in terms of performance for the LEGO and MR environments. 
    This might be interpreted as the MR system encouraging similar nonverbal behaviors in the children, perhaps more so than the LEGO environment, as the performance scores in the MR setting are lower as compared to the LEGO setting. These results demonstrate the potential benefits of full body interaction and MR settings for children with ASC.
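    The kinds of spatio-temporal body cues the abstract names can be sketched directly from skeleton joint positions. The joint names, coordinates, and frame rate below are illustrative assumptions, not the study's actual data format.

    ```python
    # Sketch of individual and interpersonal body-cue features computed
    # from skeleton joint positions (joint names/values are illustrative).
    import math

    def joint_distance(skel, a, b):
        """Individual feature: distance between two joints in one frame."""
        return math.dist(skel[a], skel[b])

    def joint_velocity(prev, curr, joint, dt):
        """Individual feature: speed of one joint between two frames."""
        return math.dist(prev[joint], curr[joint]) / dt

    def interpersonal_proximity(skel1, skel2, anchor="torso"):
        """Interpersonal feature: distance between the two children's torsos."""
        return math.dist(skel1[anchor], skel2[anchor])

    t0 = {"torso": (0.0, 1.0), "hand_r": (0.3, 1.2)}   # child A, frame t
    t1 = {"torso": (0.1, 1.0), "hand_r": (0.6, 1.5)}   # child A, frame t+1
    child_b = {"torso": (1.2, 1.0)}                    # child B, frame t+1

    print(joint_distance(t1, "torso", "hand_r"))        # individual
    print(joint_velocity(t0, t1, "hand_r", dt=1 / 30))  # individual, 30 fps
    print(interpersonal_proximity(t1, child_b))         # interpersonal
    ```

    Feature vectors assembled per trial from such measurements are what the classifiers in goal (iii) would consume to recognize the social initiation type.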

    Embodied Design of Dance Visualisations

    This paper presents the design and implementation of a software platform for creating interactive visualisations that respond to the free-form movements of a non-professional dancer. The visualisations can be trained to respond to the idiosyncratic movements of an individual dancer. This adaptive process is controlled by Interactive Machine Learning (IML). Our approach is novel because the behaviour of the interactive visualisations is trained by a dancer dancing, rather than a computer scientist explicitly programming rules. In this way IML enables an `embodied' form of design, where a dancer can design an interactive system by moving, rather than by analysing movement. This embodied design process taps into and supports our natural and embodied human understanding of movement. We hope the process of designing an interactive experience for free-form dance will help us to understand more about how to create embodied interfaces and allow us to build a general framework for embodied interaction. We would also like to create a compelling, embodied and enjoyable experience with more satisfying interactions than previous dance computer games, which use pre-scripted routines where a player must repeat a sequence of moves. The system was developed using a participatory methodology, with a software developer and an interaction designer working in partnership with users to test and refine two prototypes of the system. A third prototype has been built but not yet tested.
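    The training-by-demonstration loop at the heart of IML can be sketched as a dancer pairing poses with visual settings, with new poses then blended between those demonstrations. The pose features, visual parameters, and interpolation scheme are illustrative assumptions, not the platform's API.

    ```python
    # Sketch of embodied IML: a dancer demonstrates poses paired with
    # visual settings, and new poses are mapped to visuals by
    # inverse-distance-weighted interpolation (scheme is an assumption).
    import math

    class PoseToVisualMapper:
        def __init__(self):
            self.examples = []  # list of (pose_features, visual_params)

        def demonstrate(self, pose, visual):
            """Dancer strikes a pose and chooses the visual it should produce."""
            self.examples.append((pose, visual))

        def map(self, pose):
            """Blend demonstrated visuals, weighted by closeness of pose."""
            weights = []
            for p, v in self.examples:
                d = math.dist(pose, p)
                if d == 0:
                    return v  # exact match: return that demonstration
                weights.append((1.0 / d, v))
            total = sum(w for w, _ in weights)
            n = len(self.examples[0][1])
            return tuple(sum(w * v[i] for w, v in weights) / total
                         for i in range(n))

    m = PoseToVisualMapper()
    m.demonstrate((0.0, 0.0), (255, 0))   # still pose -> warm colour, no blur
    m.demonstrate((1.0, 1.0), (0, 255))   # energetic pose -> cool, heavy blur
    print(m.map((0.5, 0.5)))  # blends the two demonstrations
    ```

    Because the dancer only ever provides pose-visual pairs by moving, the mapping is designed through the body rather than through explicit rules, which is the sense of "embodied design" above.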

    A portal between real and unreal

    This thesis explores the field of light, space and human perception. Human perception is an active, information-seeking process, but when this information is not clear our mind tries to fool us by filling in the emptiness. This unclear structure is also called the Ganzfeld effect. When exposed to the Ganzfeld effect, hallucinations may occur. The experiment set up for this thesis was to see if a stimulus of light could be used to prevent the mind from drifting off into hallucinations and to keep a clear understanding of the space. Based on previous literature research, two experiments were set up. In the first experiment I explored myself how the Ganzfeld in nature affected me. In the second experiment six participants, including myself, experienced the Ganzfeld effect combined with a low light stimulus. This thesis concludes that exposure to the Ganzfeld effect in combination with a low light stimulus prevents complete hallucinations. When using a light stimulus an in-between world is created. A clear description of this reality in spatial terms was attempted. Without the light stimulus the mind was able to drift off and went into an unreal world. When a light stimulus was given, the mind went back to the real and the space could be clearly understood.

    Recognizing Affective Dimensions from Body Posture

    Abstract. The recognition of affective human communication may be used to provide developers with a rich source of information for creating systems that are capable of interacting well with humans. Posture has been acknowledged as an important modality of affective communication in many fields. Behavioral studies have shown that posture can communicate discrete emotion categories as well as affective dimensions. In the affective computing field, while models for the automatic recognition of discrete emotion categories from posture have been proposed, to our knowledge, there are no models for the automatic recognition of affective dimensions from static posture. As a continuation of our previous study, the two main goals of this study are: i) to build automatic recognition models to discriminate between levels of affective dimensions based on low-level postural features; and ii) to investigate both the discriminative power and the limitations of the postural features proposed. The models were built on the basis of human observers' ratings of posture according to affective dimensions directly (instead of emotion categories), in conjunction with our posture features.
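    Discriminating between levels of an affective dimension from low-level postural features can be sketched as a simple linear classifier trained on observer ratings. The features (head inclination, arm openness), the ratings, and the perceptron learner below are all invented for illustration and are not the paper's actual features or models.

    ```python
    # Sketch of discriminating high vs. low levels of an affective dimension
    # (e.g. arousal) from low-level postural features; features, ratings and
    # learner are invented for illustration.

    def train_perceptron(data, epochs=20, lr=0.1):
        """data: list of (features, label) pairs with label in {-1, +1}."""
        w = [0.0] * len(data[0][0])
        b = 0.0
        for _ in range(epochs):
            for x, y in data:
                # Update weights only when the sample is misclassified.
                if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
        return w, b

    def predict(w, b, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

    # Hypothetical features: (head inclination, arm openness), each posture
    # rated high (+1) or low (-1) on the dimension by human observers.
    ratings = [((0.9, 0.8), 1), ((0.8, 0.9), 1),
               ((0.1, 0.2), -1), ((0.2, 0.1), -1)]
    w, b = train_perceptron(ratings)
    print(predict(w, b, (0.85, 0.7)))   # -> 1  (high level)
    print(predict(w, b, (0.15, 0.1)))   # -> -1 (low level)
    ```

    Inspecting the learned weights per feature is one way to probe the "discriminative power and limitations" of individual postural features mentioned in goal ii).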