
    Affective gaming using adaptive speed controlled by biofeedback

    This work is part of a larger project exploring how affective computing can support the design of player-adaptive video games. We investigate how controlling some of the game mechanics with biofeedback affects the player's physiological reactions, performance, and experience. More specifically, we assess how different game speeds affect player physiological responses and game performance. We developed a game prototype in Unity that includes a biofeedback loop based on the level of physiological activation, measured as skin resistance (SKR) with a smart wristband. In two conditions, the player's movement speed was driven by SKR, increasing (respectively decreasing) speed when the player is less activated (SKR decreases); in a control condition, player speed was unaffected by SKR. We collected and synchronized biosignals (heart rate [HR], skin temperature [SKT], and SKR) and game information, such as the total time to complete a level, the number of enemy collisions, and their timestamps. Additionally, emotional profiling (TIPI, I-PANAS-SF), measured on Likert scales in a post-task questionnaire, and semi-open questions about the game experience were used. Thirteen participants (10 males, 3 females) aged 18 to 50 (M = 24.30, SD = 9.00) took part. The results show that SKR was significantly higher in the speed-down condition and that game performance improved in the speed-up condition. Most participants felt engaged with the game (M = 6.46, SD = 0.96), and their level of immersion was not affected by wearing the prototype smartband. Thematic analysis (TA) revealed that game speed affected participants' stress levels: high speed was more stressful than hypothesized, and many participants described level-specific effects in which they felt that their movement speed reflected their level of stress or relaxation. Slowing participants down did increase their stress levels but, counter-intuitively, more stress was detected in high-speed situations.
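The biofeedback loop described above can be sketched as a simple mapping from skin resistance to player speed. This is an illustrative sketch only, not the authors' Unity implementation; the function name, gain, and clamping range are assumptions.

```python
# Hypothetical sketch of a biofeedback speed mapping like the one described
# above. All names, thresholds, and the gain value are illustrative assumptions.

def update_speed(base_speed, skr_baseline, skr_current, gain=0.5,
                 min_speed=0.5, max_speed=2.0, invert=False):
    """Scale player speed by the deviation of skin resistance (SKR) from baseline.

    With invert=False, higher SKR raises speed; invert=True flips the
    direction, standing in for the study's two opposite conditions
    (speed up vs. speed down).
    """
    deviation = (skr_current - skr_baseline) / skr_baseline  # relative change
    if invert:
        deviation = -deviation
    speed = base_speed * (1.0 + gain * deviation)
    return max(min_speed, min(max_speed, speed))  # clamp to a playable range
```

Clamping keeps extreme physiological readings from making the game unplayable, which matters in a live biofeedback loop where sensor noise is common.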

    Novel Methods in Facilitating Audience and Performer Interaction Using the Mood Conductor Framework

    While listeners’ emotional response to music is the subject of numerous studies, less attention is paid to the dynamic emotion variations due to the interaction between artists and audiences in live improvised music performances. By opening a direct communication channel from audience members to performers, the Mood Conductor system provides an experimental framework to study this phenomenon. Mood Conductor facilitates interactive performances and thus also has inherent entertainment value. The framework allows audience members to send emotional directions using their mobile devices in order to “conduct” improvised performances. Emotion coordinates indicated by the audience in the arousal-valence space are aggregated and clustered to create a video projection. This is used by the musicians as guidance, and provides visual feedback to the audience. Three different systems have been developed and tested within our framework so far. These systems were trialled in several public performances with different ensembles. Qualitative and quantitative evaluations demonstrated that musicians and audiences were highly engaged with the system, and yielded new insights enabling future improvements of the framework.
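The aggregation step described above can be illustrated with a minimal sketch: collect recent (valence, arousal) votes from the audience and reduce them to a single target point. This is an assumption-laden illustration, not the Mood Conductor code; the class name, sliding-window design, and centroid reduction are all hypothetical.

```python
# Illustrative sketch (not the authors' implementation) of aggregating
# audience emotion votes in the arousal-valence plane into one target point.
from collections import deque
import time


class MoodAggregator:
    """Keep recent (valence, arousal) votes and expose their centroid."""

    def __init__(self, window_seconds=10.0):
        self.window = window_seconds
        self.votes = deque()  # entries: (timestamp, valence, arousal)

    def add_vote(self, valence, arousal, now=None):
        now = time.monotonic() if now is None else now
        self.votes.append((now, valence, arousal))

    def centroid(self, now=None):
        """Mean of votes newer than the window; None if no recent votes."""
        now = time.monotonic() if now is None else now
        while self.votes and now - self.votes[0][0] > self.window:
            self.votes.popleft()  # drop stale votes
        if not self.votes:
            return None
        n = len(self.votes)
        v = sum(vote[1] for vote in self.votes) / n
        a = sum(vote[2] for vote in self.votes) / n
        return (v, a)
```

A real system would likely cluster votes rather than average them (averaging opposing votes cancels them out), but a windowed centroid is the simplest way to show the aggregation idea.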

    Moodplay: an interactive mood-based musical experience


    Designing Computationally Creative Musical Performance Systems

    This is work in progress in which we outline a design process for a computationally creative musical performance system using the Creative Systems Framework (CSF). The proposed system is intended to produce virtuosic interpretations, and subsequent synthesized renderings of these interpretations with a physical model of a bass guitar, using case-based reasoning and reflection. We introduce our interpretations of virtuosity and musical performance, outline the suitability of case-based reasoning for computationally creative systems, and introduce notions of computational creativity and the CSF. We design our system by formalising the components of the CSF and briefly outline a potential implementation. In doing so, we demonstrate how the CSF can be used as a tool to aid in designing computationally creative musical performance systems.

    Audio Features Affected by Music Expressiveness

    Within a Music Information Retrieval perspective, the goal of the study presented here is to investigate the impact of the musician's affective intention on sound features, namely when trying to intentionally convey emotional content via expressiveness. A preliminary experiment was performed involving 10 tuba players. The recordings were analysed by extracting a variety of features, which were subsequently evaluated by combining both classical and machine-learning statistical techniques. Results are reported and discussed. Comment: submitted to the ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2016), Pisa, Italy, July 17-21, 2016.

    Crossroads: Interactive Music Systems Transforming Performance, Production and Listening

    We discuss several state-of-the-art systems that propose new paradigms and user workflows for music composition, production, performance, and listening. We focus on a selection of systems that exploit recent advances in semantic and affective computing, music information retrieval (MIR) and the semantic web, as well as insights from fields such as mobile computing and information visualisation. These systems offer the potential to provide transformative experiences for users, manifested in creativity, engagement, efficiency, discovery, and affect.

    Sketching sounds: an exploratory study on sound-shape associations

    Sound synthesiser controls typically correspond to technical parameters of signal processing algorithms rather than intuitive sound descriptors that relate to human perception of sound. This makes it difficult to realise sound ideas in a straightforward way. Cross-modal mappings, for example between gestures and sound, have been suggested as a more intuitive control mechanism. A large body of research shows consistency in human associations between sounds and shapes. However, the use of drawings to drive sound synthesis has not been explored to its full extent. This paper presents an exploratory study that asked participants to sketch visual imagery of sounds with a monochromatic digital drawing interface, with the aim of identifying different representational approaches and determining whether timbral sound characteristics can be communicated reliably through visual sketches. Results imply that the development of a synthesiser exploiting sound-shape associations is feasible, but a larger and more focused dataset is needed in follow-up studies.
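A synthesiser exploiting sound-shape associations, as motivated above, would need to map features of a sketch to synthesis parameters. The sketch below is purely hypothetical: the jaggedness feature, the mapping to a filter cutoff, and the parameter ranges are all assumptions, not findings of the study.

```python
# Hypothetical illustration of a sound-shape mapping of the kind the study
# motivates: a jagged sketched contour maps to a brighter (higher-cutoff)
# timbre. Feature, mapping, and ranges are assumptions for illustration.

def jaggedness(contour):
    """Mean absolute second difference of a sketched contour (list of y values)."""
    if len(contour) < 3:
        return 0.0
    second_diffs = [abs(contour[i + 1] - 2 * contour[i] + contour[i - 1])
                    for i in range(1, len(contour) - 1)]
    return sum(second_diffs) / len(second_diffs)


def cutoff_from_sketch(contour, lo_hz=200.0, hi_hz=8000.0, scale=1.0):
    """Map contour jaggedness to a low-pass filter cutoff frequency in Hz."""
    j = min(1.0, jaggedness(contour) * scale)  # normalise to [0, 1]
    return lo_hz + j * (hi_hz - lo_hz)
```

This mirrors the crossmodal correspondence literature's common finding that spiky shapes associate with bright or rough timbres, but any production mapping would need to be learned from a dataset such as the one the authors call for.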

    Easy clip to treat anal fistula tracts: a word of caution

    Background and aims: Closing the internal opening with an over-the-scope clip (Ovesco) has recently been proposed for healing the fistula tract but, to date, data on its benefit are poorly analyzed. The aim was to report a preliminary multicenter experience. Materials and methods: A retrospective study was undertaken in six different French centers; the surgical procedure, immediate complications, and follow-up were collected. Results: Nineteen clips were inserted in 17 patients (M/F, 4/13; median age, 42 years [29-54]) who had an anal fistula: 12 (71%) high fistulas (including 4 rectovaginal fistulas), 5 (29%) lower fistulas (with 3 rectovaginal fistulas), and 6 (35%) Crohn's fistulas. Fifteen of the 17 patients had seton drainage beforehand. The procedure was easy in 8 (47%) patients, and the median operative time was 27.5 min (20-36.5). The postoperative period was painful for 11 (65%) patients. Clip migration was noted in 11 patients (65%) after a median follow-up of 10 days (5.5-49.8). Eleven patients (65%) in whom treatment failed underwent reoperation, including 10 new drainages within the first month (0.5-5). After a mean follow-up of 4 months (2-7), closure of the tract was observed in 2 patients (12%) following the first clip insertion and in one more after a second insertion. Conclusion: Treatment of anal fistula by placing a clip over the internal opening is disappointing and deleterious for some patients; better assessment before dissemination is recommended.