
    How can I produce a digital video artefact to facilitate greater understanding among youth workers of their own learning-to-learn competence?

    In Ireland, youth work is delivered largely in marginalised communities and through non-formal and informal learning methods. Youth workers operate in small, isolated organisations without many of the resources and structures for improving practice that are afforded to larger formal educational establishments. Fundamental to youth work practice is the ability to identify and construct learning experiences for young people in non-traditional learning environments. It is therefore necessary for youth workers to develop a clear understanding of their own learning capacity in order to facilitate learning experiences for young people. In the course of this research, I attempted to use technology to enhance and support awareness among youth workers of their own learning capacity by creating a digital video artefact that explores the concept of learning-to-learn. This study presents my understanding of the learning-to-learn competence as I sought to improve my practice as a youth service manager and youth work trainer. The study was conducted using an action research approach. I designed and evaluated the digital media artefact "Lenny's Quest" in collaboration with staff and trainer colleagues over two cycles of action research, and my research was critiqued and validated throughout this process.

    A Mimetic Strategy to Engage Voluntary Physical Activity in Interactive Entertainment

    We describe the design and implementation of a vision-based interactive entertainment system that makes use of both involuntary and voluntary control paradigms. Unintentional input to the system from a potential viewer is used to drive attention-getting output and encourage the transition to voluntary interactive behaviour. The iMime system consists of a character animation engine based on the interaction metaphor of a mime performer, which simulates non-verbal communication strategies, without spoken dialogue, to capture and hold the attention of a viewer. The system was developed in the context of a project studying care of dementia sufferers. Caring for a dementia sufferer can place unreasonable demands on the time and attentional resources of caregivers or family members. Our study contributes to the eventual development of a system aimed at providing relief to dementia caregivers, while at the same time serving as a source of pleasant interactive entertainment for viewers. The work reported here is also aimed at a more general study of the design of interactive entertainment systems involving a mixture of voluntary and involuntary control.
    Comment: 6 pages, 7 figures, ECAG08 workshop
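
    The abstract gives only a high-level description of the control scheme, so the following is a minimal sketch, assuming a two-stage policy in which involuntary cues (viewer presence and an estimated engagement level) drive attention-getting output until the viewer crosses a threshold into voluntary interaction. All names (Mode, sense_viewer, step) and thresholds are hypothetical, and the vision input is stubbed with random values; this is not the iMime implementation.

    import random
    from enum import Enum, auto

    class Mode(Enum):
        IDLE = auto()      # no viewer detected
        ATTRACT = auto()   # involuntary input drives attention-getting output
        INTERACT = auto()  # viewer has transitioned to voluntary interaction

    def sense_viewer():
        """Stand-in for a vision pipeline: returns (presence, engagement in [0, 1])."""
        presence = random.random() > 0.3
        engagement = random.random() if presence else 0.0
        return presence, engagement

    def step(mode, presence, engagement):
        """One tick of the control policy: escalate from attract to interact once
        involuntary cues suggest deliberate engagement, with hysteresis so brief
        dips do not drop an engaged viewer back to attract mode."""
        if not presence:
            return Mode.IDLE
        if mode is Mode.INTERACT and engagement > 0.2:
            return Mode.INTERACT   # keep interacting while still engaged
        if engagement > 0.7:
            return Mode.INTERACT   # treat strong engagement as voluntary input
        return Mode.ATTRACT        # mime-like attention-getting gestures

    mode = Mode.IDLE
    for _ in range(10):
        presence, engagement = sense_viewer()
        mode = step(mode, presence, engagement)
        print(mode.name, round(engagement, 2))

    The hysteresis in step() reflects the paper's emphasis on holding, not just capturing, attention: a viewer who has begun interacting voluntarily should not be bounced back to the attract state by a momentary lapse in measured engagement.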

    Capture, Learning, and Synthesis of 3D Speaking Styles

    Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance remains an open problem, owing to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation), takes any speech signal as input, even speech in languages other than English, and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks such as in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de
    Comment: To appear in CVPR 2019
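
    As a rough illustration of the idea the abstract describes, the sketch below (PyTorch) maps a window of audio features plus a one-hot subject label to per-vertex offsets that are added to a neutral template mesh, which is one simple way to realise "conditioning on subject labels". The dimensions, layer choices, and names here are illustrative assumptions, not the published VOCA architecture.

    import torch
    import torch.nn as nn

    NUM_SUBJECTS = 12      # speakers in the dataset (from the abstract)
    NUM_VERTICES = 5023    # assumption: a FLAME-style face mesh topology
    AUDIO_DIM = 29         # assumption: per-frame speech feature size
    WINDOW = 16            # assumption: audio frames per animation frame

    class SpeechToMesh(nn.Module):
        def __init__(self):
            super().__init__()
            # Encode the audio window; concatenating the subject one-hot
            # conditions the motion on a particular speaking style.
            self.encoder = nn.Sequential(
                nn.Linear(WINDOW * AUDIO_DIM + NUM_SUBJECTS, 256), nn.ReLU(),
                nn.Linear(256, 64), nn.ReLU(),
            )
            # Decode to 3D offsets for every vertex of the template mesh.
            self.decoder = nn.Linear(64, NUM_VERTICES * 3)

        def forward(self, audio_window, subject_onehot, template):
            x = torch.cat([audio_window.flatten(1), subject_onehot], dim=1)
            offsets = self.decoder(self.encoder(x)).view(-1, NUM_VERTICES, 3)
            return template + offsets  # animated mesh for this frame

    model = SpeechToMesh()
    audio = torch.randn(1, WINDOW, AUDIO_DIM)                # dummy audio features
    style = torch.zeros(1, NUM_SUBJECTS); style[0, 3] = 1.0  # pick a speaking style
    template = torch.zeros(1, NUM_VERTICES, 3)               # neutral face stand-in
    mesh = model(audio, style, template)
    print(mesh.shape)  # torch.Size([1, 5023, 3])

    Predicting offsets from a fixed template, rather than absolute vertex positions, is what lets identity (the template) be factored from facial motion (the offsets), so the same speech-driven motion can animate an unseen face.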