
    Surfing the Waves: Live Audio Mosaicing of an Electric Bass Performance as a Corpus Browsing Interface

    In this paper, the authors describe how they use an electric bass as a subtle, expressive and intuitive interface for browsing the rich sample bank available to most laptop owners. This is achieved by audio mosaicing of the live bass performance audio through corpus-based concatenative synthesis (CBCS) techniques, allowing the multi-dimensional expressivity of the performance to be mapped onto foreign audio material, thus recycling the virtuosity acquired on the electric instrument with a trivial learning curve. This design hypothesis is contextualised and assessed within the Sandbox#n series of bass+laptop meta-instruments, and the authors describe the technical means of implementation, using the open-source CataRT CBCS system adapted for live mosaicing. They also discuss their encouraging early results and provide a list of further explorations to be made with this rich new interface.
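
    To make the matching step concrete, here is a minimal sketch (in Python with numpy; not CataRT itself, which is a Max/MSP system) of the descriptor lookup at the heart of CBCS: descriptors computed on each live analysis frame select the nearest unit in a pre-analysed corpus. The descriptor set, array shapes and values below are illustrative assumptions.

        # Illustrative only: CataRT's real-time analysis and selection are far richer.
        import numpy as np

        # Pre-analysed corpus: one row per sound unit, columns are descriptors
        # (e.g. pitch, loudness, spectral centroid) -- placeholder values here.
        corpus_descriptors = np.random.rand(10_000, 3)

        def select_unit(live_descriptors: np.ndarray) -> int:
            """Pick the corpus unit closest to the live frame in descriptor space."""
            distances = np.linalg.norm(corpus_descriptors - live_descriptors, axis=1)
            return int(np.argmin(distances))

        # Each analysis frame of the bass performance triggers one unit for playback:
        unit_to_play = select_unit(np.array([0.5, 0.8, 0.3]))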

    ‘In the game’? Embodied subjectivity in gaming environments

    Human-computer interactions increasingly use more (or all) of the body as a control device. We identify a convergence between everyday bodily actions and activity within digital environments, and a trend towards incorporating natural or mimetic forms of movement into gaming devices. We go on to reflect on the nature of player ‘embodiment’ in digital gaming environments by applying insights from the phenomenology of Maurice Merleau-Ponty. Three conditions for digital embodiment are proposed, and their implications for Calleja’s (2011) Player Involvement Model (PIM) of gaming are discussed.

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    Objective - What musical content is to be generated? Examples are: melody, polyphony, accompaniment or counterpoint. - For what destination and for what use? To be performed by a human (in the case of a musical score) or by a machine (in the case of an audio file).
    Representation - What are the concepts to be manipulated? Examples are: waveform, spectrogram, note, chord, meter and beat. - What format is to be used? Examples are: MIDI, piano roll or text. - How will the representation be encoded? Examples are: scalar, one-hot or many-hot.
    Architecture - What type(s) of deep neural network is (are) to be used? Examples are: feedforward network, recurrent network, autoencoder or generative adversarial networks.
    Challenge - What are the limitations and open challenges? Examples are: variability, interactivity and creativity.
    Strategy - How do we model and control the process of generation? Examples are: single-step feedforward, iterative feedforward, sampling or input manipulation.
    For each dimension, we conduct a comparative analysis of various models and techniques and propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and some prospects.
    Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
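
    As a concrete illustration of the Representation dimension (my example, not the survey's): a short melody encoded as a one-hot piano roll, with a many-hot row showing how a chord differs. The note values are arbitrary.

        import numpy as np

        melody = [60, 62, 64, 65, 67]  # MIDI note numbers: C4 D4 E4 F4 G4
        roll = np.zeros((len(melody), 128), dtype=np.float32)  # time steps x MIDI pitches
        roll[np.arange(len(melody)), melody] = 1.0  # one active pitch per step: one-hot
        roll[0, [60, 64, 67]] = 1.0  # a chord is many-hot: C major triad at step 0
        print(roll.shape)  # (5, 128)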

    Descriptor driven concatenative synthesis tool for Python

    A command-line tool and Python framework are proposed for the exploration of a form of audio synthesis known as ‘concatenative synthesis’, which uses perceptual audio analyses to arrange small segments of audio based on their characteristics. The tool is designed to synthesise representations of an input target sound using a source database of sounds. This involves the segmentation and analysis of both the input sound and the database, the matching of each input segment to its closest segment from the database, and the resynthesis of the closest matches to produce the final result. The project aims to provide a tool capable of generating high-quality sonic representations of an input, to present a variety of examples demonstrating the breadth of possibilities that this style of synthesis has to offer, and to provide a robust framework on which concatenative synthesis projects can be developed easily. The purpose of this project was primarily to highlight the potential for further development in the area of concatenative synthesis, and to provide a simple and intuitive tool that composers could use for sound design and experimentation. The breadth of possibilities for creating new sounds offered by this method of synthesis makes it ideal for digital sound design and electroacoustic composition. Results demonstrate the wide variety of sounds that can be produced using this method of synthesis. A number of technical issues that impeded the overall quality of results and the efficiency of the software are outlined. However, the project clearly demonstrates the strong potential for this type of synthesis to be used for creative purposes.
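
    The pipeline the abstract describes (segmentation, analysis, matching, resynthesis) can be sketched in a few lines of Python; this is not the proposed tool itself, and the file names, grain size and descriptor choice below are assumptions.

        import numpy as np
        import librosa
        import soundfile as sf
        from scipy.spatial import cKDTree

        GRAIN = 2048  # samples per segment; a real tool would expose this as a parameter

        def grains(y):
            # Simplest possible segmentation: fixed-length, non-overlapping windows.
            return [y[i:i + GRAIN] for i in range(0, len(y) - GRAIN, GRAIN)]

        def describe(grain, sr):
            # Two perceptual descriptors per grain: energy (RMS) and brightness (centroid).
            return [librosa.feature.rms(y=grain).mean(),
                    librosa.feature.spectral_centroid(y=grain, sr=sr).mean()]

        target, sr = librosa.load("target.wav", sr=None)  # hypothetical input sound
        source, _ = librosa.load("source.wav", sr=sr)     # hypothetical source database
        src = grains(source)
        tree = cKDTree([describe(g, sr) for g in src])

        # Match every target grain to its closest source grain, then concatenate.
        _, idx = tree.query([describe(g, sr) for g in grains(target)])
        sf.write("mosaic.wav", np.concatenate([src[i] for i in idx]), sr)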

    16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)

    The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, on 28-31 May 2019 and was organized by the Application of Information and Communication Technologies (ATIC) research group of the University of Malaga (UMA). The associated SMC 2019 Summer School took place on 25-28 May 2019, and the First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. Topics of interest covered a wide selection of areas related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, etc.

    Memorization of Named Entities in Fine-tuned BERT Models

    Privacy-preserving deep learning is an emerging field in machine learning that aims to mitigate the privacy risks of deep neural networks. One such risk is training data extraction from language models that have been trained on datasets containing personal and privacy-sensitive information. In our study, we investigate the extent of named entity memorization in fine-tuned BERT models. We use single-label text classification as a representative downstream task and employ three different fine-tuning setups in our experiments, including one with Differential Privacy (DP). We create a large number of text samples from the fine-tuned BERT models using a custom sequential sampling strategy with two prompting strategies. We search these samples for named entities and check whether they are also present in the fine-tuning datasets. We experiment with two benchmark datasets in the domains of emails and blogs. We show that the application of DP has a detrimental effect on the text generation capabilities of BERT. Furthermore, we show that a fine-tuned BERT does not generate more named entities specific to the fine-tuning dataset than a BERT model that is only pre-trained. This suggests that BERT is unlikely to emit personal or privacy-sensitive named entities. Overall, our results are important for understanding to what extent BERT-based services are prone to training data extraction attacks.
    Comment: accepted at CD-MAKE 202
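
    A hedged sketch of the kind of extraction test described: sample text sequentially from a BERT masked language model, run off-the-shelf NER over the samples, and intersect the found entities with those known to be in the fine-tuning set. The paper's custom sampling strategy, prompts and DP setup are not reproduced; the model name, prompt and entity set below are placeholders.

        import torch
        from transformers import BertTokenizer, BertForMaskedLM, pipeline

        tok = BertTokenizer.from_pretrained("bert-base-cased")
        mlm = BertForMaskedLM.from_pretrained("bert-base-cased")  # stand-in for a fine-tuned model
        mlm.eval()

        def sample_sequentially(prompt, n_tokens=20, temperature=1.0):
            # Repeatedly append a [MASK], sample a token for it, and extend the text.
            ids = tok(prompt, return_tensors="pt")["input_ids"][0][:-1]  # drop [SEP]
            for _ in range(n_tokens):
                inp = torch.cat([ids, torch.tensor([tok.mask_token_id, tok.sep_token_id])])
                with torch.no_grad():
                    logits = mlm(inp.unsqueeze(0)).logits[0, -2]  # logits at the [MASK] slot
                probs = torch.softmax(logits / temperature, dim=-1)
                ids = torch.cat([ids, torch.multinomial(probs, 1)])
            return tok.decode(ids[1:], skip_special_tokens=True)

        ner = pipeline("ner", aggregation_strategy="simple")
        fine_tuning_entities = {"Alice Example", "Acme Corp"}  # hypothetical training-set entities
        sample = sample_sequentially("The email was sent by")
        leaked = {e["word"] for e in ner(sample)} & fine_tuning_entities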

    My Physical Approach to Musique Concrete Composition Portfolio of Studio Works

    My recent practice-based research explores the creative potential of the physical manipulation of sound in the composition of sound-based electronic music. Focusing on the poietic aspect of my music making, this commentary discusses the composition process of three musical works: Comme si la foudre pouvait durer, Igaluk - To Scare the Moon with its own Shadow and desert. It also examines the development of a software instrument, fXfD, along with its resulting musical production. Finally, it discusses the recent musical production of an improvisation duet in which I take part, Tout Croche. In the creative process of this portfolio, the appreciation for sound is the catalyst of the musical decisions. In other words, the term "musique concrète" applies to my practice, as sound is the central concern that triggers the act of composition. In addition to anecdotal, typo-morphological and functional concerns, the presence of a "trace of physicality" in a sound is, more than ever, what convinces me of its musical potential. In order to compose such sounds, a back-and-forth process between theoretical knowledge and sound manipulation is defined and developed under the concept of "sonic empiricism". In a desire to break with the cumbersome nature of studio-based composition work, approaches to playing sound-based electronic music were researched. Through the different musical projects, various digital instruments were conceived. In a case study, the text reviews them through their sound generation, gestural control and mapping components. I also state personal preferences in the ways sound manipulations are performed. In light of the observations made, the studio emerges as the central instrument upon which my research focuses. The variety of resources it provides for the production and control of sound confers on the studio the status of a polymorphic instrument. The text concludes by reflecting on the possibilities of improvisation and performance that the studio offers when it is considered as an embodied polymorphic instrument. A concluding statement on the specific ear training needed for such a studio practice bridges the concepts of sound selection and digital instruments exposed herein.

    A Functional Taxonomy of Music Generation Systems

    Digital advances have transformed the face of automatic music generation since its beginnings at the dawn of computing. Despite the many breakthroughs, issues such as the musical tasks targeted by different machines and the degree to which they succeed remain open questions. We present a functional taxonomy for music generation systems with reference to existing systems. The taxonomy organizes systems according to the purposes for which they were designed. It also reveals the inter-relatedness amongst the systems. This design-centered approach contrasts with predominant methods-based surveys and facilitates the identification of grand challenges to set the stage for new breakthroughs.
    Comment: survey, music generation, taxonomy, functional survey, automatic composition, algorithmic composition