
    16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)

    The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, 28–31 May 2019, and was organized by the Application of Information and Communication Technologies (ATIC) research group of the University of Malaga (UMA). The associated SMC 2019 Summer School took place 25–28 May 2019, and the First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. The SMC 2019 topics of interest covered a wide selection of areas related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, and more.

    A Review of Intelligent Music Generation Systems

    With the introduction of ChatGPT, the public's perception of AI-generated content (AIGC) has begun to shift. Artificial intelligence has significantly lowered the barrier to entry for non-professionals in creative endeavors, enhancing the efficiency of content creation. Recent advancements have brought significant improvements in the quality of symbolic music generation, enabled by modern generative algorithms that extract the patterns implicit in a piece of music from rule constraints or a musical corpus. Nevertheless, existing literature reviews tend to present a conventional and conservative perspective on future development trajectories, with a notable absence of thorough benchmarking of generative models. This paper provides a survey and analysis of recent intelligent music generation techniques, outlining their respective characteristics and discussing existing methods of evaluation. Additionally, the paper compares the characteristics of music generation techniques in the East and the West, and analyses the field's development prospects.

    Toward Interactive Music Generation: A Position Paper

    Music generation using deep learning has received considerable attention in recent years. Researchers have developed various generative models capable of imitating musical conventions, comprehending musical corpora, and generating new samples based on the learning outcome. Although the samples generated by these models are persuasive, they often lack musical structure and creativity. For instance, a vanilla end-to-end approach, which deals with all levels of music representation at once, does not offer human-level control and interaction during the learning process, leading to constrained results. Indeed, music creation is a recurrent process in which a musician follows certain principles, reusing or adapting various musical features. Moreover, a musical piece adheres to a musical style, which breaks down into the precise concepts of timbre style, performance style, and composition style, together with the coherency between these aspects. Here, we study and analyze the current advances in music generation using deep learning models against different criteria. We discuss the shortcomings and limitations of these models regarding interactivity and adaptability. Finally, we outline potential future research directions in multi-agent systems and reinforcement learning algorithms to alleviate these shortcomings and limitations.

    Brain Computer Interfaces for inclusion


    A multimodal framework for interactive sonification and sound-based communication


    Interaction Design for Digital Musical Instruments

    The thesis aims to elucidate the process of designing interactive systems for musical performance that combine software and hardware in an intuitive and elegant fashion. The original contribution to knowledge consists of: (1) a critical assessment of recent trends in digital musical instrument design, (2) a descriptive model of interaction design for the digital musician, and (3) a highly customisable multi-touch performance system that was designed in accordance with the model. Digital musical instruments are composed of a separate control interface and a sound generation system that exchange information. When designing the way in which a digital musical instrument responds to the actions of a performer, we are creating a layer of interactive behaviour that is abstracted from the physical controls. Often, the structure of this layer depends heavily upon:

    1. the accepted design conventions of the hardware in use;
    2. established musical systems, acoustic or digital;
    3. the physical configuration of the hardware devices and the grouping of controls that such a configuration suggests.

    This thesis proposes an alternative way to approach the design of digital musical instrument behaviour: examining the implicit characteristics of its composite devices. When we separate the conversational ability of a particular sensor type from its hardware body, we can look in a new way at the actual communication tools at the heart of the device. We can subsequently combine these separate pieces using a series of generic interaction strategies in order to create rich interactive experiences that are not immediately obvious or directly inspired by the physical properties of the hardware. This research ultimately aims to enhance and clarify the existing toolkit of interaction design for the digital musician.
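    The separation the abstract describes, a control interface feeding an abstracted mapping layer that drives a sound generator, can be sketched minimally as follows. Every name here (Sensor, MappingLayer, the "pressure" input, the "amplitude" and "brightness" parameters) is invented for illustration and is not code from the thesis.

```python
class Sensor:
    """A generic control input, considered apart from its hardware body."""
    def __init__(self, name, lo=0.0, hi=1.0):
        self.name, self.lo, self.hi = name, lo, hi

    def normalize(self, raw):
        # Map a raw reading into the 0..1 range the mapping layer expects.
        return max(0.0, min(1.0, (raw - self.lo) / (self.hi - self.lo)))


class MappingLayer:
    """The layer of interactive behaviour abstracted from the controls."""
    def __init__(self):
        self.routes = []  # (sensor, synth parameter, transfer function)

    def route(self, sensor, param, transfer=lambda x: x):
        self.routes.append((sensor, param, transfer))

    def process(self, readings):
        # readings: {sensor name: raw value} -> {synth parameter: value}
        out = {}
        for sensor, param, transfer in self.routes:
            if sensor.name in readings:
                out[param] = transfer(sensor.normalize(readings[sensor.name]))
        return out


pressure = Sensor("pressure", lo=0.0, hi=127.0)
mapping = MappingLayer()
# One direct route plus a shaped (squared) route from the same sensor:
mapping.route(pressure, "amplitude")
mapping.route(pressure, "brightness", transfer=lambda x: x * x)

params = mapping.process({"pressure": 63.5})
print(params)  # amplitude 0.5, brightness 0.25
```

    Because the mapping layer holds only normalized values and transfer functions, the same interaction strategy can be reused across physically different devices, which is the kind of decoupling the thesis argues for.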

    A Core Reference Hierarchical Primitive Ontology for Electronic Medical Records Semantics Interoperability

    Currently, electronic medical records (EMR) cannot be exchanged among hospitals, clinics, laboratories, pharmacies, and insurance providers, or made available to patients outside of local networks. Hospital, laboratory, pharmacy, and insurance provider legacy databases can share medical data within a respective network and only limited data with patients. The lack of interoperability has its roots in the historical development of electronic medical records. Two issues contribute to interoperability failure. The first is that legacy medical record databases and expert systems were designed with semantics that support only internal information exchange. The second is ontological commitment to the semantics of a particular knowledge representation language formalism. This research seeks to address these interoperability failures by demonstrating that a core reference, hierarchical primitive ontological architecture, with concept primitive attribute definitions, can integrate and resolve non-interoperable semantics among existing clinical, drug, and hospital ontologies and terminologies, and extend coverage across them.

    Planar Refrains

    My practice explores phenomenal poetic truths that exist in fissures between the sensual and physical qualities of material constructs. Magnifying this confounding interspace, my work activates specific instruments within mutable, relational systems of installation, movement, and documentation. The tools I fabricate function within variable orientations and are implemented as both physical barriers and thresholds into alternate, virtual domains. Intersecting fragments of sound and moving image build a nexus of superimposed spatialities, while material constructions are enveloped in ephemeral intensities. Within this compounded environment, both mind and body are charged as active sites through which durational, contemplative experiences can pass. Reverberation, the ghostly refrain of a sound calling back to our ears from a distant plane, can intensify our emotional experience of place. My project Planar Refrains utilizes four electro-mechanical reverb plates, analog audio filters designed to simulate expansive acoustic arenas. Historically these devices have provided emotive voicings to popular studio recordings, dislocating the performer from the commercial studio and into a simulated reverberant territory of mythic proportions. The material resonance of steel is used to filter a recorded signal, shaping the sound of a human performance into something more transformative, a sound embodying otherworldly dynamics. In subverting the designed utility of reverb plates, I am exploring their value as active surfaces extending across different spatial realities. The background of ephemeral sonic residue is collapsed into the foreground, a filter becomes sculpture, and this sculpture becomes an instrument in an evolving soundscape.

    MMixte: a software architecture for Live Electronics with acoustic instruments: exemplary application cases

    MMixte is a middleware based on Max for mixed music with live electronics. It enables the programming of a “patcher concerto”, that is, a platform for the management of live electronics, in just a few minutes and with extreme simplicity. Aimed at intermediate and expert users, MMixte enables true programming of live electronics in very little time, while also making it easy to adapt previously developed modules, depending on the case and its needs. The architecture behind MMixte is based on a variation of the so-called “pipeline architecture”; analysis of the most widely used software architectures on the market, and of the design patterns used to program graphic interfaces, has informed the way communication between the various modules is organized, the way they are used, and their graphic appearance. Analysis of other, state-of-the-art module collections and software programs dedicated to mixed music shows the absence of other work on software architecture for mixed music. The application of MMixte to some of my personal works demonstrates its flexibility and ease of adaptation. Computer programming for a piece of mixed music requires much that goes beyond the programming of audio signal processing alone. The present work seeks to provide an example of a solution to such needs.
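    The pipeline architecture the abstract refers to chains independent processing modules so that each module's output feeds the next in order. MMixte itself is built in Max, so the following Python sketch, with invented Gain and Clip stages, only illustrates the general pattern, not MMixte's actual modules.

```python
class Module:
    """A processing stage; subclasses override process()."""
    def process(self, block):
        return block


class Gain(Module):
    """Scales every sample in the block by a fixed factor."""
    def __init__(self, factor):
        self.factor = factor

    def process(self, block):
        return [s * self.factor for s in block]


class Clip(Module):
    """Hard-limits samples to the range [-limit, +limit]."""
    def __init__(self, limit):
        self.limit = limit

    def process(self, block):
        return [max(-self.limit, min(self.limit, s)) for s in block]


class Pipeline:
    """Chains modules: each stage's output becomes the next stage's input."""
    def __init__(self, *modules):
        self.modules = list(modules)

    def process(self, block):
        for m in self.modules:
            block = m.process(block)
        return block


chain = Pipeline(Gain(2.0), Clip(1.0))
print(chain.process([0.25, 0.75]))  # [0.5, 1.0]
```

    The appeal of the pattern for live electronics is that stages can be added, removed, or reordered without touching their neighbours, since every module speaks the same block-in, block-out protocol.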