
    Affective gaming using adaptive speed controlled by biofeedback

    This work is part of a larger project exploring how affective computing can support the design of player-adaptive video games. We investigate how controlling some of the game mechanics using biofeedback affects the player's physiological reactions, performance, and experience. More specifically, we assess how different game speeds affect player physiological responses and game performance. We developed a game prototype in Unity which includes a biofeedback loop based on the level of physiological activation, measured through skin resistance (SKR) with a smart wristband. In two conditions, the player's movement speed was driven by SKR, increasing (respectively decreasing) speed as the player's activation decreases. A control condition was also used in which player speed is not affected by SKR. We collected and synchronized biosignals (heart rate [HR], skin temperature [SKT], and SKR) and game information such as the total time to complete a level, the number of enemy collisions, and their timestamps. Additionally, emotional profiling (TIPI, I-PANAS-SF) was measured using Likert scales in a post-task questionnaire, together with semi-open questions about the game experience. The study involved 13 participants (10 males, 3 females) aged 18 to 50 (M = 24.30, SD = 9.00). The results show that SKR was significantly higher in the speed-down condition and that game performance improved in the speed-up condition. Most participants felt engaged with the game (M = 6.46, SD = 0.96), and their level of immersion was not affected by wearing the prototype smartband. Thematic analysis (TA) revealed that game speed impacted participants' stress levels; high speed was more stressful than hypothesized. Many participants described level-specific effects in which they felt that their movement speed reflected their level of stress or relaxation. Slowing the player down indeed increased stress levels but, counterintuitively, more stress was detected in high-speed situations.
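    As a rough illustration of the biofeedback loop described above, the sketch below maps a wristband SKR reading to a movement speed under the three conditions. It is a minimal Python sketch, not the Unity implementation; every name and constant is an assumption for illustration.

```python
# Minimal sketch of the SKR-driven speed loop described above.
# The actual prototype was built in Unity; all names and constants here
# are assumptions. Treating higher SKR as lower physiological activation
# is also an assumption about the paper's convention.

BASELINE_SKR = 1.0   # per-player resting value (assumed calibration)
BASE_SPEED = 5.0     # nominal movement speed in game units
GAIN = 2.0           # how strongly SKR deviations scale speed

def player_speed(skr: float, condition: str) -> float:
    """Map the current skin-resistance reading to a movement speed."""
    deviation = (skr - BASELINE_SKR) / BASELINE_SKR
    if condition == "speed_up":      # lower activation -> faster
        return BASE_SPEED * (1.0 + GAIN * deviation)
    if condition == "speed_down":    # lower activation -> slower
        return BASE_SPEED * (1.0 - GAIN * deviation)
    return BASE_SPEED                # control: speed independent of SKR

# A relaxed player (SKR 20% above baseline) under each condition:
print(player_speed(1.2, "speed_up"))    # 7.0
print(player_speed(1.2, "speed_down"))  # 3.0
```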

    Open Symphony: Creative Participation for Audiences of Live Music Performances

    This work is partly supported by the FAST-IMPACt EPSRC project (EP/L019981/1), the Centre for Digital Music EPSRC Platform Grant (EP/E045235/1), the EU H2020 Audio Commons project (688382), QMUL's Centre for Public Engagement, the China Scholarship Council, and Arts Council England (Sound and Music Organisation Audience Labs)

    Influence of Music on Perceived Emotions in Film

    Film music plays a core role in film production and reception as it not only contributes to the film's aesthetics and creativity, but also affects viewers' experience and enjoyment. Film music composers often aim to serve the film narrative, immerse viewers in the setting and story, convey clues, and, importantly, act on their emotions. Yet how film music influences viewers remains poorly understood. We conducted a perceptual study to analyse the impact of music on the perception of emotions in film. We developed an online interface for time-based emotion annotation of audio/video media clips based on the Valence/Arousal (VA) two-dimensional model. Participants reported their perceived emotions over time in the VA space for three media conditions: film scene presented without sound (video only), film music presented without video (audio only), and film scene with accompanying music and sound effects (both video and audio modalities). 16 film clips were selected, four for each of four genres (action & drama, romance, comedy, and horror). 38 participants completed the study (12 females and 26 males from many countries, average age: 28.9). Density scatter plots are used to visualise the spread of emotion ratings in the VA space and differences across media conditions and film clips. Results from linear mixed-effects models show significant effects of the audiovisual media condition and film genre on VA ratings, in line with previous results by Parke et al. [1]. Perceived VA ratings across media conditions follow an almost linear relationship, increasing in strength in the following order: film alone, film with music/sound, music alone. We illustrate this effect by plotting the centre of mass of VA ratings across conditions. VA ratings for the film-alone condition are closer to the origin of the space than in the two other media conditions, indicating that the inclusion of music yields stronger emotions (higher VA ratings). Certain individual factors (musical ability, familiarity, preference) also seem to impact the perception of arousal and valence while viewing films. Our online emotion annotation interface was overall well received, and suggestions to improve the display of reference emotion tags are discussed.
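    For a concrete picture of the analysis reported above, the sketch below fits a linear mixed-effects model of the kind described. The paper does not name its software; statsmodels is used here as a stand-in, and the column names are assumed.

```python
# Illustrative linear mixed-effects analysis sketch; the file and
# column names below are assumptions, not the study's actual data.
import pandas as pd
import statsmodels.formula.api as smf

# One row per time-sampled rating: participant, media condition
# (video_only / audio_only / audio_video), film genre, arousal, valence.
ratings = pd.read_csv("va_ratings.csv")  # hypothetical file

# Fixed effects for media condition and genre, with a random intercept
# per participant to absorb individual rating tendencies.
model = smf.mixedlm(
    "arousal ~ C(condition) * C(genre)",
    data=ratings,
    groups=ratings["participant"],
)
result = model.fit()
print(result.summary())
```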

    Audio Features Affected by Music Expressiveness

    Within a Music Information Retrieval perspective, the goal of the study presented here is to investigate the impact of the musician's affective intention on sound features, namely when intentionally trying to convey emotional content via expressiveness. A preliminary experiment has been performed involving 10 tuba players. The recordings have been analysed by extracting a variety of features, which have been subsequently evaluated by combining both classic and machine learning statistical techniques. Results are reported and discussed. Comment: Submitted to the ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2016), Pisa, Italy, July 17-21, 2016.
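    The abstract does not list the extracted features or the toolchain; as a hedged illustration, the snippet below computes a few common audio features with librosa and summarises them into a per-recording vector suitable for the kind of statistical evaluation described.

```python
# Stand-in feature extraction sketch; librosa and the summary scheme
# are assumptions, not the paper's actual pipeline.
import librosa
import numpy as np

def extract_features(path):
    """Return a simple summary feature vector for one recording."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    rms = librosa.feature.rms(y=y)
    # Summarise each time-varying feature by its mean and variance.
    feats = [mfcc, centroid, rms]
    return np.concatenate([np.r_[f.mean(axis=1), f.var(axis=1)] for f in feats])

features = extract_features("tuba_take_01.wav")  # hypothetical file
```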

    A Web Application for Audience Participation in Live Music Performance: The Open Symphony Use Case

    This paper presents a web-based application enabling audiences to collaboratively contribute to the creative process during live music performances. The system aims at enhancing audience engagement and creating new forms of live music experiences. Interaction between audience and performers is made possible through a client/server architecture enabling bidirectional communication of creative data. Audience members can vote for pre-determined musical attributes using a smartphone-friendly and cross-platform web application. The system gathers audience members' votes and provides feedback through visualisations that can be tailored for specific needs. In order to support multiple performers and large audiences, automatic audience-to-performer groupings are handled by the application. The framework was applied to support live interactive musical improvisations where creative roles are shared amongst audience and performers (Open Symphony). Qualitative analyses of user surveys highlighted very positive feedback related to themes such as engagement and creativity and also identified further design challenges around audience sense of control and latency.
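    As a minimal sketch of two server-side pieces described above, the Python below tallies audience votes for pre-determined musical attributes and assigns audience members to performers round-robin. The data structures and attribute names are invented for illustration, not the actual Open Symphony implementation.

```python
# Toy vote aggregation and audience-to-performer grouping; all names
# here are assumptions, not the Open Symphony codebase.
from collections import Counter
from itertools import cycle

MUSICAL_ATTRIBUTES = ["drone", "melody", "rhythm", "silence"]  # example set

def tally_votes(votes):
    """votes: mapping of audience_member_id -> chosen attribute.
    The winning attribute would be forwarded to the visualisation."""
    counts = Counter(votes.values())
    return counts.most_common(1)[0][0] if counts else None

def assign_groups(audience_ids, performer_ids):
    """Round-robin grouping so each performer receives votes from a
    similar-sized slice of the audience."""
    groups = {p: [] for p in performer_ids}
    for member, performer in zip(audience_ids, cycle(performer_ids)):
        groups[performer].append(member)
    return groups

groups = assign_groups(range(100), ["perf_a", "perf_b", "perf_c"])
print(tally_votes({1: "drone", 2: "melody", 3: "drone"}))  # -> "drone"
```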

    A Participatory Live Music Performance with the Open Symphony System

    Our Open Symphony system reimagines the music experience for a digital age, fostering alliances between performers, audiences, and our digital selves. Open Symphony enables live participatory music performance where the audience actively engages in the music creation process. This is made possible by using state-of-the-art web technologies and data visualisation techniques. Through collaborations with local performers, we will conduct a series of interactive music performances, revolutionizing the performance experience for both performers and audiences. The system throws open music-creating possibilities to every participant and is a genuinely novel way to demonstrate the field of Human-Computer Interaction through computer-supported cooperative creation and multimodal music and visual perception.

    DadaGP: A Dataset of Tokenized GuitarPro Songs for Sequence Models

    Originating in the Renaissance and burgeoning in the digital era, tablatures are a commonly used music notation system which provides explicit representations of instrument fingerings rather than pitches. GuitarPro has established itself as a widely used tablature format and software enabling musicians to edit and share songs for musical practice, learning, and composition. In this work, we present DadaGP, a new symbolic music dataset comprising 26,181 song scores in the GuitarPro format covering 739 musical genres, along with an accompanying tokenized format well-suited for generative sequence models such as the Transformer. The tokenized format is inspired by event-based MIDI encodings, often used in symbolic music generation models. The dataset is released with an encoder/decoder which converts GuitarPro files to tokens and back. We present results of a use case in which DadaGP is used to train a Transformer-based model to generate new songs in GuitarPro format. We discuss other relevant use cases for the dataset (guitar-bass transcription, music style transfer, and artist/genre classification) as well as ethical implications. DadaGP opens up the possibility to train GuitarPro score generators, fine-tune models on custom data, create new styles of music, build AI-powered songwriting apps, and support human-AI improvisation.
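    To make the tokenized format concrete, the sketch below shows an invented event-based token stream and a toy decoder in the spirit described above. The token vocabulary here is illustrative only; the real format is defined by the dataset's released encoder/decoder.

```python
# Invented event-based token stream in the spirit of the DadaGP
# encoding; these token names are assumptions, not the real vocabulary.
example_tokens = [
    "tempo:120",
    "new_measure",
    "guitar:note:s5:f3",   # string 5, fret 3
    "wait:480",            # advance time (ticks)
    "guitar:note:s4:f5",
    "wait:480",
    "end",
]

def decode(tokens):
    """Toy decoder: turn the token stream back into (time, event) pairs."""
    time, events = 0, []
    for tok in tokens:
        if tok.startswith("wait:"):
            time += int(tok.split(":")[1])
        else:
            events.append((time, tok))
    return events

print(decode(example_tokens))
```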

    Learning Combinations of Multiple Feature Representations for Music Emotion Prediction

    Music consists of several structures and patterns evolving through time which greatly influence the human decoding of higher-level cognitive aspects of music, such as the emotions it expresses. For tasks such as genre, tag, and emotion recognition, these structures have often been identified and used as individual, non-temporal features and representations. In this work, we examine the hypothesis that using multiple temporal and non-temporal representations of different features is beneficial for modeling musical structure with the aim of predicting the emotions expressed in music. We test this hypothesis by representing temporal and non-temporal structures using generative models of multiple audio features. The representations are used in a discriminative setting via the Product Probability Kernel and the Gaussian Process model, enabling Multiple Kernel Learning to find optimized combinations of both features and temporal/non-temporal representations. We show increased predictive performance using the combination of different features and representations, along with the strong interpretive prospects of this approach.
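    A minimal numpy sketch of the multiple-kernel combination idea follows: one Gram matrix per (feature, representation) pair, combined with non-negative weights. In the paper the weights are optimized within the Gaussian Process model; here they are fixed by hand purely for illustration, and the base kernels are placeholders rather than Product Probability Kernels.

```python
# Weighted combination of precomputed kernel (Gram) matrices, as used
# in Multiple Kernel Learning; base kernels and weights are stand-ins.
import numpy as np

def combine_kernels(kernel_matrices, weights):
    """Weighted sum of N x N Gram matrices, one per feature/representation."""
    K = np.zeros_like(kernel_matrices[0])
    for K_i, w in zip(kernel_matrices, weights):
        K += w * K_i
    return K

# In the paper, each base matrix would come from the Product Probability
# Kernel between per-track generative feature models; here we fake two
# base kernels (linear and RBF) over random 3-d features for 20 tracks.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
linear = X @ X.T
rbf = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
K = combine_kernels([linear, rbf], weights=[0.3, 0.7])
```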

    DRAM-1 is required for mTORC1 activation by facilitating lysosomal amino acid efflux

    Sensing nutrient availability is essential for appropriate cellular growth, and mTORC1 is a major regulator of this process. Mechanisms causing mTORC1 activation are, however, complex and diverse. We report here an additional important step in the activation of mTORC1, which regulates the efflux of amino acids from lysosomes into the cytoplasm. This process requires DRAM-1, which binds the membrane carrier protein SCAMP3 and the amino acid transporters SLC1A5 and LAT1, directing them to lysosomes and permitting efficient mTORC1 activation. Consequently, we show that loss of DRAM-1 also impacts pathways regulated by mTORC1, including insulin signaling, glycemic balance, and adipocyte differentiation. Interestingly, although DRAM-1 can promote autophagy, this effect on mTORC1 is autophagy independent, and autophagy only becomes important for mTORC1 activation when DRAM-1 is deleted. These findings provide important insights into mTORC1 activation and highlight the importance of DRAM-1 in growth control, metabolic homeostasis, and differentiation

    The reg4 Gene, Amplified in the Early Stages of Pancreatic Cancer Development, Is a Promising Therapeutic Target

    BACKGROUND: The aim of our work was to identify genes specifically altered in pancreatic adenocarcinoma, especially those altered early in cancer development. METHODOLOGY/PRINCIPAL FINDINGS: Gene copy number was systematically assessed with an ultra-high-resolution CGH oligonucleotide microarray in DNA from samples of pancreatic cancer. Several new cancer-associated variations were observed. In this work we focused on one of them, involving the reg4 gene. Copy number gain of the reg4 gene was confirmed by qPCR in 14 cancer samples. Increased copy number was also found in most PanIN3 samples. The relationship between a gain in reg4 gene copy number and cancer development was investigated in the human pancreatic cancer cell line Mia-PaCa2 xenografted under the skin of nude mice. When cells were transfected with a vector allowing reg4 expression, they generated tumors almost twice as large. In addition, these tumors were more resistant to gemcitabine treatment than control tumors. Interestingly, weekly intraperitoneal administration of a monoclonal antibody to reg4 halved the size of tumors generated by Mia-PaCa2 cells, suggesting that the antibody interfered with a paracrine/autocrine mechanism involving reg4 and stimulating cancer progression. The addition of gemcitabine resulted in further reduction, with tumors becoming five times smaller than controls. Exposure to reg4 antibody resulted in a significant decrease in intra-tumor levels of pAkt, Bcl-xL, Bcl-2, survivin and cyclin D1. CONCLUSIONS/SIGNIFICANCE: It was concluded that adjuvant therapies targeting reg4 could improve the standard treatment of pancreatic cancer with gemcitabine.