9,799 research outputs found

    A Trip to the Moon: Personalized Animated Movies for Self-reflection

    Self-tracking physiological and psychological data poses the challenge of presentation and interpretation. Insightful narratives for self-tracking data can motivate users towards constructive self-reflection. One powerful form of narrative that engages audiences across cultures and age groups is the animated movie. We collected a week of self-reported mood and behavior data from each user and created, in Unity, a personalized animation based on their data. We evaluated the impact of each video in a randomized controlled trial with a non-personalized animated video as control. We found that personalized videos tend to be more emotionally engaging, encouraging longer writing indicative of self-reflection about moods and behaviors, compared to non-personalized control videos.

    Affective interactions between expressive characters

    When people meet in virtual worlds they are represented by computer-animated characters that lack a variety of expression and can seem stiff and robotic. By comparison, human bodies are highly expressive; a casual observation of a group of people will reveal a large diversity of behavior: different postures, gestures, and complex patterns of eye gaze. In order to make computer-mediated communication between people more like real face-to-face communication, it is necessary to add an affective dimension. This paper presents Demeanour, an affective semi-autonomous system for the generation of realistic body language in avatars. Users control their avatars, which in turn interact autonomously with other avatars to produce expressive behaviour. This allows people to have affectively rich interactions via their avatars.

    Meanings in motion and faces: Developmental associations between the processing of intention from geometrical animations and gaze detection accuracy

    Aspects of face processing, on the one hand, and theory of mind (ToM) tasks, on the other hand, show specific impairment in autism. We aimed to discover whether a correlation between tasks tapping these abilities was evident in typically developing children at two developmental stages. One hundred fifty-four typically developing children (6-8 years and 16-18 years) and 13 high-IQ autistic children (11-17 years) were tested on a range of face-processing and IQ tasks, and a ToM test based on the attribution of intentional movement to abstract shapes in a cartoon. By midchildhood, the ability to accurately and spontaneously infer the locus of attention of a face with direct or averted gaze was specifically associated with the ability to describe geometrical animations using mental state terms. Other face-processing tasks and animation descriptions failed to show the association. Autistic adolescents were impaired at both gaze processing and ToM descriptions using these tests. Mentalizing and gaze perception accuracy are associated in typically developing children and adolescents. The findings are congruent with the possibility that common neural circuitry underlies, at least in part, the processing implicated in these tasks. They are also congruent with the possibility that autism may lie at one end of a developmental continuum with respect to these skills, and to the factor(s) underpinning them.

    Group emotion modelling and the use of middleware for virtual crowds in video-games

    In this paper we discuss the use of crowd simulation in video-games to augment their realism. Building on previous work on emotion modelling and virtual crowds, we define a game world in an urban context. To achieve that, we explore a biologically inspired human emotion model, investigate the formation of groups in crowds, and examine the use of physics middleware for crowds. Furthermore, we assess the realism and computational performance of the proposed approach. Our system runs at an interactive frame rate and can generate large crowds that demonstrate complex behaviour.

    Presenting in Virtual Worlds: An Architecture for a 3D Anthropomorphic Presenter

    Multiparty-interaction technology is changing entertainment, education, and training. Deployed examples of such technology include embodied agents and robots that act as a museum guide, a news presenter, a teacher, a receptionist, or someone trying to sell you insurance, homes, or tickets. In all these cases, the embodied agent needs to explain and describe. This article describes the design of a 3D virtual presenter that uses different output channels (including speech and animation of posture, pointing, and involuntary movements) to present and explain. The behavior is scripted and synchronized with a 2D display containing associated text and regions (slides, drawings, and paintings) at which the presenter can point. This article is part of a special issue on interactive entertainment.

    Integrating internal behavioural models with external expression

    Users will believe in a virtual character more if they can empathise with it and understand what ‘makes it tick’. This is helped by making clear to the user the motivations of the character, and the other processes that go towards creating its behaviour. This paper proposes that this can be achieved by linking the behavioural or cognitive system of the character to expressive behaviour. This idea is discussed in general and then demonstrated with an implementation that links a simulation of perception to the animation of a character’s eyes.
