17,759 research outputs found
Musical instrument mapping design with Echo State Networks
Echo State Networks (ESNs), a form of recurrent neural network developed in the field of Reservoir Computing, show significant potential for use as a tool in the design of mappings for digital musical instruments. They have, however, seldom been used in this area, so this paper explores their possible applications. This project contributes a new open source library, which was developed to allow ESNs to run in the Pure Data dataflow environment. Several use cases were explored, focusing on addressing current issues in mapping research. ESNs were found to work successfully in scenarios of pattern classification, multiparametric control, explorative mapping, and the design of nonlinearities and uncontrol. 'Un-trained' behaviours are proposed as augmentations to the conventional reservoir system that allow the player to introduce potentially interesting nonlinearities and uncontrol into the reservoir. Interactive-evolution-style controls are proposed as strategies to help design these behaviours, which are otherwise dependent on arbitrary values and coarse global controls. A study on sound classification showed that ESNs could reliably differentiate between two drum sounds and also generalise to other similar input. Following evaluation of the use cases, heuristics are proposed to aid the use of ESNs in computer music scenarios.
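For readers unfamiliar with reservoir computing, the abstract's core mechanism can be illustrated with a minimal echo state network sketch: a fixed random reservoir is driven by the input, and only a linear readout is trained. The Python sketch below assumes the standard ESN update and a ridge-regression readout; it is not the paper's Pure Data library, and all names and sizes are illustrative assumptions.

    import numpy as np

    class ESN:
        def __init__(self, n_in, n_res, n_out, spectral_radius=0.9, seed=0):
            rng = np.random.default_rng(seed)
            self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
            W = rng.uniform(-0.5, 0.5, (n_res, n_res))
            # Rescale recurrent weights so the echo state property is likely to hold.
            W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
            self.W = W
            self.W_out = np.zeros((n_out, n_res))
            self.x = np.zeros(n_res)

        def step(self, u):
            # Reservoir update: x(t+1) = tanh(W_in u(t) + W x(t)); readout is linear.
            self.x = np.tanh(self.W_in @ u + self.W @ self.x)
            return self.W_out @ self.x

        def fit_readout(self, states, targets, ridge=1e-6):
            # Ridge regression over collected reservoir states; only this layer is trained.
            X, Y = np.asarray(states), np.asarray(targets)
            self.W_out = (Y.T @ X) @ np.linalg.inv(X.T @ X + ridge * np.eye(X.shape[1]))

Because the recurrent weights stay fixed, such a network can run cheaply at control rate, which is what makes it attractive for instrument mapping.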
ProSpect: Expanded Conditioning for the Personalization of Attribute-aware Image Generation
Personalizing generative models offers a way to guide image generation with user-provided references. Current personalization methods can invert an object or concept into the textual conditioning space and compose new natural sentences for text-to-image diffusion models. However, representing and editing specific visual attributes like material, style, layout, etc. remains a challenge, leading to a lack of disentanglement and editability. To address this, we propose a novel approach that leverages the step-by-step generation process of diffusion models, which generate images from low- to high-frequency information, providing a new perspective on representing, generating, and editing images. We develop Prompt Spectrum Space P*, an expanded textual conditioning space, and a new image representation method called ProSpect. ProSpect represents an image as a collection of inverted textual token embeddings encoded from per-stage prompts, where each prompt corresponds to a specific generation stage (i.e., a group of consecutive steps) of the diffusion model. Experimental results demonstrate that P* and ProSpect offer stronger disentanglement and controllability compared to existing methods. We apply ProSpect in various personalized attribute-aware image generation applications, such as image/text-guided material/style/layout transfer/editing, achieving previously unattainable results with a single image input without fine-tuning the diffusion models.
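The key mechanism described in the abstract is that each inverted token embedding is tied to one stage of the denoising schedule, so the conditioning swaps embeddings as generation proceeds from low- to high-frequency content. A minimal sketch of that per-stage lookup, with hypothetical names rather than the authors' released code, might look like:

    def embedding_for_step(step, total_steps, stage_embeddings):
        """Pick the inverted prompt embedding for the stage containing this denoising step."""
        n_stages = len(stage_embeddings)
        stage = min(step * n_stages // total_steps, n_stages - 1)
        return stage_embeddings[stage]

    # Illustrative usage: with 10 per-stage embeddings over 50 denoising steps,
    # steps 0-4 use stage_embeddings[0], steps 5-9 use stage_embeddings[1], and so on.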
DMRN+16: Digital Music Research Network One-day Workshop 2021
DMRN+16: Digital Music Research Network One-day Workshop 2021, Queen Mary University of London, Tuesday 21st December 2021. Keynote 1: Prof. Sophie Scott, Director, Institute of Cognitive Neuroscience, UCL. Title: "Sound on the brain - insights from functional neuroimaging and neuroanatomy". Abstract: In this talk I will use functional imaging and models of primate neuroanatomy to explore how sound is processed in the human brain. I will demonstrate that sound is represented cortically in different parallel streams. I will expand this to show how this can impact on the concept of auditory perception, which arguably incorporates multiple kinds of distinct perceptual processes. I will address the roles that subcortical processes play in this, and also the contributions from hemispheric asymmetries. Keynote 2: Prof. Gus Xia, Assistant Professor at NYU Shanghai. Title: "Learning interpretable music representations: from human stupidity to artificial intelligence". Abstract: Gus has been leading the Music X Lab in developing intelligent systems that help people better compose and learn music. In this talk, he will show us the importance of music representation for both humans and machines, and how to learn better music representations via the design of inductive bias. Once we have interpretable music representations, the potential applications are limitless.
StyleAdapter: A Single-Pass LoRA-Free Model for Stylized Image Generation
This paper presents a LoRA-free method for stylized image generation that takes a text prompt and style reference images as inputs and produces an output image in a single pass. Unlike existing methods that rely on training a separate LoRA for each style, our method can adapt to various styles with a unified model. However, this poses two challenges: 1) the prompt loses controllability over the generated content, and 2) the output image inherits both the semantic and style features of the style reference image, compromising its content fidelity. To address these challenges, we introduce StyleAdapter, a model that comprises two components: a two-path cross-attention module (TPCA) and three decoupling strategies. These components enable our model to process the prompt and style reference features separately and reduce the strong coupling between the semantic and style information in the style references. StyleAdapter can generate high-quality images that match the content of the prompts and adopt the style of the references (even for unseen styles) in a single pass, which is more flexible and efficient than previous methods. Experiments have been conducted to demonstrate the superiority of our method over previous works.
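The two-path cross-attention idea described above can be pictured as two parallel cross-attention calls whose outputs are blended: the image latent attends to the text prompt along one path and to the style reference features along the other. The PyTorch sketch below is a rough illustration under that reading; the gating scheme, dimensions, and all names are assumptions, not the released StyleAdapter implementation.

    import torch
    import torch.nn as nn

    class TwoPathCrossAttention(nn.Module):
        def __init__(self, dim, n_heads=8):
            super().__init__()
            self.text_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
            self.style_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
            self.gate = nn.Parameter(torch.zeros(1))  # style path starts switched off

        def forward(self, latent, text_feats, style_feats):
            # Content path: latent queries the text prompt features.
            content, _ = self.text_attn(latent, text_feats, text_feats)
            # Style path: latent queries the style reference features.
            style, _ = self.style_attn(latent, style_feats, style_feats)
            # Blend the two paths; a learnable gate keeps the prompt dominant early in training.
            return content + torch.tanh(self.gate) * style

Keeping the paths separate is one way to stop the style reference's semantics from overriding the prompt, which is the coupling problem the abstract highlights.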
- …