
    Communicating and accentuating the aesthetic and expressive dimension in choral conducting

    This article considers the issues involved in effective choral conducting from an aesthetic dimension. Drawing upon research, theories and practice, it provides some insight into the nature of communication and the significance of gesture for vocal outcome, as well as the qualities of leadership concomitant with such musical activity. The article also reports on a research study that investigated the professional development of students and teachers in the area of choral conducting, focusing on their attitudes, skill acquisition and the importance attached to reflection on practice. The findings reveal that consideration of what counts as effective conducting gesture and communication skill can promote better conducting and, consequently, better, more expressive singing. The positive impact of self and peer reflection on progress (both face-to-face and within a virtual learning environment) was also acknowledged. Suggestions for promoting effective musical leadership in the area of choral conducting are provided, in order to ground theoretical perspectives in practice.

    Building artificial personalities: expressive communication channels based on an interlingua for a human-robot dance

    The development of artificial personalities requires a further understanding of how personality is communicated, which can be developed through human-robot interaction (HRI). In this paper we report on the development of the SpiderCrab robot, which uses an interlingua based on Laban Movement Analysis (LMA) to mediate a human-robot dance. Specifically, we developed measurements to analyse data in real time from a simple vision system and implemented a simple stochastic dancing algorithm on a custom-built robot. This shows how, through some simple rules, a personality can emerge by biasing random behaviour. The system was tested with professional dancers and members of the public, and the results (formal and anecdotal) are presented herein.
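    As a rough illustration of the "biasing random behaviour" idea, the sketch below (Python, not the authors' implementation) draws dance moves from a weighted random choice, where the weights play the role of a personality; the move names and weights are assumptions made for illustration, not SpiderCrab's actual LMA-based interlingua.

```python
import random

# Hypothetical move vocabulary loosely inspired by Laban effort qualities;
# these names and weights are illustrative assumptions, not the SpiderCrab interlingua.
MOVES = ["sudden_reach", "sustained_sway", "light_flick", "strong_press"]

# A "personality" is modelled here simply as a bias over move probabilities.
PERSONALITIES = {
    "timid": {"sudden_reach": 0.1, "sustained_sway": 0.4, "light_flick": 0.4, "strong_press": 0.1},
    "bold":  {"sudden_reach": 0.4, "sustained_sway": 0.1, "light_flick": 0.1, "strong_press": 0.4},
}

def choose_move(personality: str, rng=random) -> str:
    """Pick the next dance move at random, biased by the personality's weights."""
    weights = PERSONALITIES[personality]
    return rng.choices(MOVES, weights=[weights[m] for m in MOVES], k=1)[0]

if __name__ == "__main__":
    # Over many steps, the biased randomness lets a recognisable "character" emerge.
    print([choose_move("bold") for _ in range(10)])
```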

    What does touch tell us about emotions in touchscreen-based gameplay?

    Nowadays, more and more people play games on touch-screen mobile phones. This phenomenon raises a very interesting question: does touch behaviour reflect the player’s emotional state? If so, this would be valuable not only as an evaluation indicator for game designers, but also for real-time personalization of the game experience. Psychology studies on acted touch behaviour show the existence of discriminative affective profiles. In this paper, finger-stroke features during gameplay on an iPod were extracted and their discriminative power analysed. Based on touch behaviour, machine learning algorithms were used to build systems for automatically discriminating between four emotional states (Excited, Relaxed, Frustrated, Bored), two levels of arousal and two levels of valence. The results were promising, reaching between 69% and 77% correct discrimination between the four emotional states. Higher results (~89%) were obtained for discriminating between two levels of arousal and between two levels of valence.
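    As a hedged sketch of the kind of pipeline described (feature extraction from finger strokes followed by a classifier), the Python below uses an assumed feature set (stroke length, duration, speed, mean pressure, touch area) and a generic scikit-learn classifier; the paper's actual features and learning algorithms may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def stroke_features(stroke: np.ndarray) -> np.ndarray:
    """Summarise one finger stroke given as rows of (x, y, t, pressure, area) samples."""
    xy, t = stroke[:, :2], stroke[:, 2]
    dists = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    length = dists.sum()
    duration = t[-1] - t[0]
    speed = length / duration if duration > 0 else 0.0
    return np.array([length, duration, speed, stroke[:, 3].mean(), stroke[:, 4].mean()])

# Target labels as in the paper; the classifier choice below is an assumption.
LABELS = ["Excited", "Relaxed", "Frustrated", "Bored"]

def estimate_accuracy(features: np.ndarray, labels: np.ndarray) -> float:
    """Cross-validated accuracy for discriminating the four emotional states."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, features, labels, cv=5).mean()
```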

    Human Nonverbal Behaviour Understanding in the Wild for New Media Art

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-02714-2_

    Foundation phase: framework for children's learning


    Design and semantics of form and movement (DeSForM 2006)

    Design and Semantics of Form and Movement (DeSForM) grew from applied research exploring emerging design methods and practices to support new-generation product and interface design. The products and interfaces are concerned with the context of ubiquitous computing and ambient technologies, and the need for greater empathy in the pre-programmed behaviour of the ‘machines’ that populate our lives. Such explorative research in the CfDR has been led by Young, supported by Kyffin, Visiting Professor from Philips Design, and sponsored by Philips Design over a period of four years (research funding £87k). DeSForM1 was the first of a series of three conferences that enable the presentation and debate of international work within this field:
    • 1st European conference on Design and Semantics of Form and Movement (DeSForM1), Baltic, Gateshead, 2005, Feijs L., Kyffin S. & Young R.A. eds.
    • 2nd European conference on Design and Semantics of Form and Movement (DeSForM2), Evoluon, Eindhoven, 2006, Feijs L., Kyffin S. & Young R.A. eds.
    • 3rd European conference on Design and Semantics of Form and Movement (DeSForM3), New Design School Building, Newcastle, 2007, Feijs L., Kyffin S. & Young R.A. eds.
    Philips sponsorship of practice-based enquiry led to research by three teams of research students over three years and on-going sponsorship of research through the Northumbria University Design and Innovation Laboratory (nuDIL). Young has been invited onto the steering panel of the UK Thinking Digital Conference concerning the latest developments in digital and media technologies. Informed by this research is the work of PhD student Yukie Nakano, who examines new technologies in relation to eco-design textiles.

    A transparent framework towards the context-sensitive recognition of conversational engagement

    Modelling and recognising affective and mental user states is an urgent topic in multiple research fields. This work suggests an approach towards adequate recognition of such states by combining state-of-the-art behaviour recognition classifiers in a transparent and explainable modelling framework that also allows contextual aspects to be considered in the inference process. More precisely, in this paper we exemplify the idea of our framework with the recognition of conversational engagement in bi-directional conversations. We introduce a multi-modal annotation scheme for conversational engagement. We further introduce our hybrid approach, which combines the accuracy of state-of-the-art machine learning techniques, such as deep learning, with the capabilities of Bayesian Networks, which are inherently interpretable and offer something that modern approaches lack: causal inference. In an evaluation on a large multi-modal corpus of bi-directional conversations, we show that this hybrid approach can even outperform state-of-the-art black-box approaches by considering context information and causal relations.
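    A minimal sketch of the hybrid idea, assuming a toy network Context → Engagement → {cues} in which the cues are thresholded outputs of behaviour classifiers, treated as conditionally independent given engagement; all probability values, node names and context labels here are invented for illustration and are not the paper's model.

```python
# Assumed prior of engagement given conversational context (e.g. who holds the turn).
P_ENGAGED_GIVEN_CONTEXT = {"user_turn": 0.7, "agent_turn": 0.4}

# Assumed likelihoods of each binary cue being detected, given the engagement state.
P_CUE_GIVEN_ENGAGED = {"gaze_at_partner": 0.8, "verbal_backchannel": 0.6}
P_CUE_GIVEN_NOT_ENGAGED = {"gaze_at_partner": 0.3, "verbal_backchannel": 0.2}

def p_engaged(context: str, cues: dict) -> float:
    """Posterior probability of engagement given the context and detected cues.

    `cues` maps a cue name to a boolean, e.g. the thresholded output of a
    deep-learning behaviour classifier for that cue.
    """
    prior = P_ENGAGED_GIVEN_CONTEXT[context]
    p_e, p_ne = prior, 1.0 - prior
    for cue, present in cues.items():
        like_e, like_ne = P_CUE_GIVEN_ENGAGED[cue], P_CUE_GIVEN_NOT_ENGAGED[cue]
        p_e *= like_e if present else (1.0 - like_e)
        p_ne *= like_ne if present else (1.0 - like_ne)
    return p_e / (p_e + p_ne)

if __name__ == "__main__":
    print(p_engaged("user_turn", {"gaze_at_partner": True, "verbal_backchannel": False}))
```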

    Towards responsive Sensitive Artificial Listeners

    This paper describes work in the recently started project SEMAINE, which aims to build a set of Sensitive Artificial Listeners – conversational agents designed to sustain an interaction with a human user despite limited verbal skills, through robust recognition and generation of non-verbal behaviour in real time, both when the agent is speaking and when it is listening. We report on data collection and on the design of a system architecture in view of real-time responsiveness.
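    As a non-authoritative illustration of the real-time responsiveness being aimed for (not the SEMAINE architecture itself), the toy loop below monitors made-up non-verbal cues while the user speaks and emits listener backchannels; the cue detectors and response repertoire are placeholders.

```python
import random
import time

def detect_cues() -> dict:
    """Placeholder for real-time audio-visual analysis of the user's behaviour."""
    return {"pause": random.random() < 0.3, "head_nod": random.random() < 0.2}

def choose_backchannel(cues: dict):
    """Pick a listener response for the current cues, or None to stay quiet."""
    if cues["pause"]:
        return random.choice(["mm-hm", "nod", "raise eyebrows"])
    if cues["head_nod"]:
        return "smile"
    return None

if __name__ == "__main__":
    for _ in range(10):              # a short listening episode, frame by frame
        response = choose_backchannel(detect_cues())
        if response:
            print("agent backchannel:", response)
        time.sleep(0.1)              # a real system would run at sensor frame rate
```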
