
    A Symbolic Sonification of L-Systems

    This paper describes a simple technique for the sonification of branching structures in plants. The example is intended to illustrate a qualitative definition of best practices for sonification aimed at the production of musical material. Visually manifest results of tree growth are modelled and subsequently mapped to pitch, time, and amplitude. Sample results are provided in symbolic music notation.
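    The paper's exact grammar and mapping rules are not reproduced here; as a minimal sketch of the approach it describes, an L-system string can be expanded and its branching depth mapped to pitch and amplitude (the rule, base pitch, and step sizes below are hypothetical choices, not the paper's):

```python
def expand(axiom, rules, iterations):
    """Rewrite the axiom string with the L-system production rules."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def sonify(lsystem_string, base_pitch=60):
    """Map each 'F' segment to a (MIDI pitch, amplitude) pair.

    '[' opens a branch and ']' closes it; deeper branches are
    played higher and quieter, echoing the visual hierarchy.
    """
    notes, depth = [], 0
    for ch in lsystem_string:
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth -= 1
        elif ch == "F":
            notes.append((base_pitch + 2 * depth,
                          max(0.2, 1.0 - 0.2 * depth)))
    return notes

tree = expand("F", {"F": "F[F]F"}, 2)
print(tree)  # F[F]F[F[F]F]F[F]F
print(sonify(tree))
```

    Time is implicit here (note order); a fuller mapping would also derive durations from the grammar.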



    Ars Informatica -- Ars Electronica: Improving Sonification Aesthetics

    In this paper we discuss æsthetic issues of sonifications. We posit that many sonifications have suffered from poor acoustic ecology, which makes listening more difficult and thereby results in poorer data extraction and inference on the part of the listener. Lessons are drawn from the electroacoustic music community as we argue that it is not instructive to distinguish between sonifications and music/sound art. Edgar Varèse defined music as organised sound, and sonifications organise sound to reflect some aspect of the thing being sonified. Therefore, we propose that sonification designers can improve the communicative ability of their auditory displays by paying attention to the æsthetic issues that are well known to composers, orchestrators, sound designers and artists, and recording engineers.

    The Ambient Horn: Designing a novel audio-based learning experience

    The Ambient Horn is a novel handheld device designed to support children learning about habitat distributions and interdependencies in an outdoor woodland environment. The horn was designed to emit non-speech audio sounds representing ecological processes. Both symbolic and arbitrary mappings were used to represent the processes. The sounds are triggered in response to the children’s location in certain parts of the woodland. A main objective was to provoke children into interpreting and reflecting upon the significance of the sounds in the context in which they occur. Our study of the horn in use showed the sounds to be provocative, generating much discussion about what they signified in relation to what the children saw in the woodland. In addition, the children appropriated the horn in creative ways, trying to ‘scoop’ up new sounds as they walked in different parts of the woodland.

    Sonification, Musification, and Synthesis of Absolute Program Music

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016). When understood as a communication system, a musical work can be interpreted as data existing within three domains. In this interpretation, an absolute domain is interposed as a communication channel between two programmatic domains that act respectively as source and receiver. As a source, a programmatic domain creates, evolves, organizes, and represents a musical work. When acting as a receiver, it reconstitutes acoustic signals into a unique auditory experience. The absolute domain transmits physical vibrations ranging from the stochastic structures of noise to the periodic waveforms of organized sound. Analysis of acoustic signals suggests that recognition as a musical work requires signal periodicity to exceed some minimum. A methodological framework that satisfies recent definitions of sonification is outlined. This framework is proposed to extend to musification through the incorporation of data features that represent more traditional elements of a musical work, such as melody, harmony, and rhythm.
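    The claim that recognition as a musical work requires periodicity to exceed some minimum invites a concrete measurement. The paper's actual metric is not given here; as an illustrative stand-in, the peak of the lag-normalized autocorrelation cleanly separates a periodic tone from broadband noise:

```python
import math
import random

def periodicity(signal, min_lag=2):
    """Peak normalized autocorrelation over nonzero lags:
    near 1 for strongly periodic signals, near 0 for noise.
    (Illustrative measure, not the paper's definition.)"""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    energy = sum(v * v for v in x)
    best = 0.0
    for lag in range(min_lag, n // 2):
        c = sum(x[i] * x[i + lag] for i in range(n - lag)) / energy
        best = max(best, c)
    return best

# A 440 Hz tone sampled at 8 kHz versus uniform noise.
tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(800)]
random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(800)]
print(periodicity(tone), periodicity(noise))  # tone scores far higher
```

    Under such a measure, the proposed "minimum periodicity" for musical recognition becomes a simple threshold on this score.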

    Brain Music: A Generative System for Creating Symbolic Music from Affective Neural Responses

    This master's thesis presents an innovative multimodal deep learning methodology that combines an emotion classification model with a music generator, aimed at creating music from electroencephalography (EEG) signals, thus delving into the interplay between emotions and music. The results achieve three specific objectives. First, since the performance of brain-computer interface systems varies significantly among subjects, an approach based on knowledge transfer between subjects is introduced to enhance the performance of individuals facing challenges in motor-imagery-based brain-computer interface systems. This approach combines labeled EEG data with structured information, such as psychological questionnaires, through a "Kernel Matching CKA" method. We employ a deep neural network (Deep&Wide) for motor imagery classification. The results underscore its potential to enhance motor skills in brain-computer interfaces. Second, we propose an innovative technique called "Labeled Correlation Alignment" (LCA) to sonify neural responses to stimuli represented in unstructured data, such as affective music. This generates musical features based on emotion-induced brain activity. LCA addresses between-subject and within-subject variability through correlation analysis, enabling the creation of acoustic envelopes and the distinction of different sound information. This makes LCA a promising tool for interpreting neural activity and its response to auditory stimuli. Finally, we develop an end-to-end deep learning methodology for generating MIDI music content (symbolic data) from EEG signals induced by affectively labeled music. This methodology encompasses data preprocessing, feature-extraction model training, and a feature matching process using Deep Centered Kernel Alignment, enabling music generation from EEG signals. Together, these achievements represent significant advances in understanding the relationship between emotions and music, as well as in the application of artificial intelligence to musical generation from brain signals. They offer new perspectives and tools for musical creation and for research in emotional neuroscience. Our experiments use the public GigaScience, Affective Music Listening, and DEAP datasets. (Master's thesis, Magíster en Ingeniería - Automatización Industrial, Sede Manizales.)
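    The thesis's feature-matching step builds on Centered Kernel Alignment (CKA). As a rough, illustrative sketch (linear rather than "deep", and not the thesis's implementation), CKA can be computed from the Gram matrices of two feature sets; it is 1 for representations that differ only by rotation, permutation, or scaling, and near 0 for unrelated ones:

```python
import random

def centered(X):
    """Subtract the per-feature mean from a samples-x-features matrix."""
    cols = list(zip(*X))
    means = [sum(c) / len(c) for c in cols]
    return [[v - m for v, m in zip(row, means)] for row in X]

def gram(X):
    """Linear Gram matrix X X^T (samples x samples)."""
    return [[sum(a * b for a, b in zip(r1, r2)) for r2 in X] for r1 in X]

def frob_inner(A, B):
    """Frobenius inner product <A, B> = sum of elementwise products."""
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def linear_cka(X, Y):
    """Linear CKA: <K, L>_F / (||K||_F ||L||_F) on centered features."""
    K, L = gram(centered(X)), gram(centered(Y))
    return frob_inner(K, L) / (frob_inner(K, K) ** 0.5
                               * frob_inner(L, L) ** 0.5)

random.seed(0)
eeg = [[random.gauss(0, 1) for _ in range(16)] for _ in range(100)]
# Permuting and sign-flipping features is an orthogonal transform,
# so CKA treats it as the same representation.
rotated = [[row[(j + 3) % 16] * (-1) ** j for j in range(16)] for row in eeg]
other = [[random.gauss(0, 1) for _ in range(8)] for _ in range(100)]
print(linear_cka(eeg, rotated))  # ~1.0
print(linear_cka(eeg, other))    # near 0
```

    The deep variant used in the thesis optimizes network features so that such an alignment score between EEG embeddings and musical embeddings is high.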

    Human Pattern Recognition in Data Sonification

    Computational music analysis investigates the relevant features required for the detection and classification of musical content, features which do not always directly overlap with musical composition concepts. Human perception of music is also an active area of research, with existing work considering the role of perceptual schema in musical pattern recognition. Data sonification investigates the use of non-speech audio to convey information, and it is in this context that some potential guidelines for human pattern recognition are presented for discussion in this paper. Previous research into the role of musical contour (shape) in data sonification shows that it has a significant impact on pattern recognition performance, whilst investigation in the area of rhythmic parsing made a significant difference in performance when used to build structures in data sonifications. The paper presents these previous experimental results as the basis for a discussion around the potential for inclusion of schema-based classifiers in computational music analysis, considering where shape and rhythm classification may be employed at both the segmental and supra-segmental levels to better mimic the human process of perception.
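    A common coarse encoding of melodic contour, in the spirit of the shape experiments discussed above, is the Parsons code (the specific features used in the cited experiments are not given here, so this is only an illustrative example of a shape classifier input):

```python
def contour(pitches):
    """Parsons-style melodic contour: one of U(p), D(own), R(epeat)
    for each step between successive pitches -- a transposition-
    invariant 'shape' feature of a melody."""
    steps = {1: "U", -1: "D", 0: "R"}
    return "".join(
        steps[(b > a) - (b < a)] for a, b in zip(pitches, pitches[1:])
    )

# Two renditions of the same rise-and-fall pattern share a contour
# even though their absolute pitches differ.
print(contour([60, 62, 64, 62, 60]))  # UUDD
print(contour([57, 59, 61, 59, 57]))  # UUDD
```

    Because the encoding discards interval size, it captures exactly the kind of schema-level shape information the discussion above proposes feeding into classifiers.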