
    A wireless, real-time, social music performance system for mobile phones

    The paper reports on the Cellmusic system: a real-time, wireless, distributed composition and performance system designed for domestic mobile devices. During a performance, each mobile device communicates with the others and may create sonic events in a passive (non-interactive) mode or may influence the output of other devices. Cellmusic distinguishes itself from other mobile phone performance environments in that it is intended for performance in ad hoc locations, with services and performances automatically and dynamically adapting to the number of devices within a given proximity. It is designed to run on a range of mobile phone platforms to allow as wide a distribution as possible, again distinguishing it from other mobile performance systems, which primarily run on a single device. Rather than performances being orchestrated or managed, the intention is that users will access the system and create a performance in the same way that they use mobile phones to interact socially at different times throughout the day. However, this does not preclude the system being used in a more traditional performance environment. This accessibility and portability make it an ideal platform for sonic artists who choose to explore a variety of physical environments, such as parks and other public spaces.
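    As a rough illustration of the adaptation described above, the sketch below shows how a device might scale its sonic-event output to the number of peers currently in proximity and, in an interactive mode, direct an "influence" message at another device. This is a hypothetical Python sketch, not the Cellmusic implementation; all names, thresholds, and roles are invented for illustration.

```python
# Hypothetical sketch (not the Cellmusic implementation): adapting a device's
# behaviour to the number of peers in proximity, as described in the abstract.
import random
from dataclasses import dataclass


@dataclass
class PerformanceConfig:
    mode: str                 # "passive" (non-interactive) or "interactive"
    events_per_minute: int    # how densely this device emits sonic events


def adapt_to_peers(peer_count: int) -> PerformanceConfig:
    """Scale back each device's output density as more peers join,
    so the combined texture stays balanced (assumed policy)."""
    if peer_count == 0:
        return PerformanceConfig(mode="passive", events_per_minute=30)
    if peer_count < 4:
        return PerformanceConfig(mode="interactive", events_per_minute=20)
    return PerformanceConfig(mode="interactive", events_per_minute=max(5, 60 // peer_count))


def perform_step(config: PerformanceConfig, peers: list[str]) -> str:
    """Emit one sonic event; in interactive mode it may also target a peer,
    mirroring the abstract's 'influence the output of other devices'."""
    pitch = random.choice(["C4", "E4", "G4", "B4"])
    if config.mode == "interactive" and peers:
        target = random.choice(peers)
        return f"play {pitch}, send influence to {target}"
    return f"play {pitch}"


if __name__ == "__main__":
    nearby = ["phone-a", "phone-b", "phone-c"]   # peers discovered in proximity
    cfg = adapt_to_peers(len(nearby))
    for _ in range(3):
        print(perform_step(cfg, nearby))
```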

    Music Generation by Deep Learning - Challenges and Directions

    In addition to traditional tasks such as prediction, classification, and translation, deep learning is receiving growing attention as an approach for music generation, as witnessed by recent research groups such as Magenta at Google and CTRL (Creator Technology Research Lab) at Spotify. The motivation lies in using the capacity of deep learning architectures and training techniques to automatically learn musical styles from arbitrary musical corpora and then to generate samples from the estimated distribution. However, a direct application of deep learning to generate content quickly reaches limits, as the generated content tends to mimic the training set without exhibiting true creativity. Moreover, deep learning architectures do not offer direct ways of controlling generation (e.g., imposing some tonality or other arbitrary constraints). Furthermore, deep learning architectures alone are autistic automata which generate music autonomously without human user interaction, far from the objective of interactively assisting musicians to compose and refine music. Issues such as control, structure, creativity, and interactivity are the focus of our analysis. In this paper, we select some limitations of a direct application of deep learning to music generation, analyze why these requirements are not met, and discuss possible approaches for addressing them. Various recent systems are cited as examples of promising directions.
    Comment: 17 pages. arXiv admin note: substantial text overlap with arXiv:1709.01620. Accepted for publication in Special Issue on Deep learning for music and audio, Neural Computing & Applications, Springer Nature, 201
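    The control limitation mentioned in the abstract (e.g., imposing a tonality on generated output) is often worked around by constraining the model's output distribution at sampling time rather than retraining the model. The sketch below is a hedged illustration of that general idea and is not taken from the paper: a stand-in next-pitch distribution is masked to a chosen key before sampling, and the model, key, and function names are assumptions.

```python
# Hedged illustration (not from the paper): masking a model's next-pitch
# distribution so that only pitches in a chosen key can be sampled.
import numpy as np

PITCHES = np.arange(60, 72)           # one octave of MIDI pitches (C4..B4)
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}      # pitch classes allowed in C major


def fake_model_distribution(rng: np.random.Generator) -> np.ndarray:
    """Stand-in for a trained model's next-pitch distribution (softmax over random logits)."""
    logits = rng.normal(size=PITCHES.shape)
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs


def sample_with_tonality_mask(probs: np.ndarray, rng: np.random.Generator) -> int:
    """Zero out pitches outside the key, renormalize, then sample one pitch."""
    mask = np.array([int(p) % 12 in C_MAJOR for p in PITCHES], dtype=float)
    constrained = probs * mask
    constrained /= constrained.sum()
    return int(rng.choice(PITCHES, p=constrained))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    melody = [sample_with_tonality_mask(fake_model_distribution(rng), rng) for _ in range(8)]
    print("generated MIDI pitches:", melody)
```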