91 research outputs found
The Sound of the Smell (and taste) of my Shoes too: Mapping the Senses using Emotion as a Medium
This work discusses the basic human senses: sight, sound, touch, taste, and smell, and the ways in which it may be possible to compensate for the lack of one or more of these by explicitly representing stimuli using the remaining senses. There may be many situations or scenarios where not all five of these base senses are being stimulated, either because of an optional restriction or deficit, or because of a physical or sensory impairment such as loss of sight or touch sensation. Relatedly, there are other scenarios where sensory matching problems may occur: for example, a user immersed in a virtual environment may receive smells from the real world that are unconnected to the virtual world. In particular, this paper is concerned with how sound can be used to compensate for the lack of other sensory stimulation and vice versa. As a link is already well established between the visual, touch, and auditory systems, more attention is given to taste and smell and their relationship with sound. This work presents theoretical concepts, largely oriented around mapping other sensory qualities to sound, based upon existing work in the literature and emerging technologies, to discuss where particular gaps currently exist, how emotion could serve as a medium for cross-modal representations, and how these might be addressed in future research. It is postulated that descriptive qualities, such as timbre or emotion, are currently the most viable routes for further study, and that this work may later be integrated with the wider body of research into sensory augmentation.
Worship the penguin: adventures with sprites, chiptunes, and lasers
This paper provides a review of recent projects developed through the author's creative practice and activities across multiple computing and games technology platforms. These include: a 2D game project made in Unity; an Arduino-based laser puzzle; chiptune breakbeat music made on a Commodore 64; the archival of a collection of Amiga demoscene disks; PETSCII graphics; a controller adapter for the Amiga; and a DJ/VJ performance. While playfully exploring new trajectories, these projects broadly reflect ongoing themes present in the author's previous work, such as explorations of the aesthetic paradigms presented by vintage computers, 1990s rave culture, and synaesthesia. The paper will address the various challenges and methodologies used to realise these projects; pedagogical considerations; and the pandemic context in which they have been created and presented.
Quake Delirium revisited: system for video game ASC simulations
This paper reviews the conceptual model devised for a previous project, where Max/MSP was used to modify the game Quake to create a more psychedelic experience for the player. In this original proof of concept, various available game parameters were animated in order to imitate perceptual distortions of the type produced by hallucinogenic drugs. A MIDI mixing console was used to 'remix' these perceptual distortions in real time, and to devise pre-defined sequences of hallucination. The control parameters were also used to manipulate a corresponding soundtrack, which was intended to reflect the hallucinatory game experience through electroacoustic sound. This paper outlines the existing proof of concept, and considers the development of this model to create a more sophisticated system for use in game engines such as Unity. Through the use of Hobson's 'Activation, Input, Modulation' (AIM) model of consciousness, I will propose a cohesive system for creating ASC simulations in video games.
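The control scheme described above can be sketched as a simple mapping from MIDI continuous-controller values onto distortion parameters of the game state. The parameter names and ranges below are illustrative assumptions, not the original Max/MSP patch:

```python
# Hypothetical sketch of the Quake Delirium-style controller: each MIDI
# CC channel 'remixes' one perceptual-distortion parameter in real time.
# Parameter names and ranges are assumptions for illustration only.

def scale(cc_value, lo, hi):
    """Map a 7-bit MIDI CC value (0-127) linearly onto [lo, hi]."""
    return lo + (cc_value / 127.0) * (hi - lo)

# Each CC number drives one distortion parameter of the game state.
CC_MAP = {
    1: ("field_of_view", 60.0, 160.0),  # exaggerated FOV warping
    2: ("fog_density",   0.0,  1.0),    # visual haze
    3: ("time_scale",    0.5,  2.0),    # perceived time dilation
}

def apply_cc(game_state, cc_number, cc_value):
    """Update one game parameter from an incoming CC message."""
    if cc_number in CC_MAP:
        name, lo, hi = CC_MAP[cc_number]
        game_state[name] = scale(cc_value, lo, hi)
    return game_state

state = {"field_of_view": 90.0, "fog_density": 0.0, "time_scale": 1.0}
apply_cc(state, 1, 127)  # push FOV warping to its maximum
apply_cc(state, 3, 0)    # slow time to half speed
```

Pre-defined hallucination sequences, as described in the paper, could then be timed lists of such (cc_number, cc_value) events replayed instead of live fader input.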
Nausea: an approach to sonic arts composition based on ASC
This paper concerns research in the field of compositional methods for electroacoustic music. I discuss the compositional approach used for creating 'Nausea': a large-scale work of electroacoustic music presented in surround sound. The piece is part of a larger body of creative work in sonic arts carried out as part of the author's PhD research. These works explore the use of altered states of consciousness (ASC) as a basis for the design of sonic materials and structure. Sounds are created to reflect aspects of a hypothetical psychedelic experience, such as visual patterning effects or hallucinated entities. These sounds are then arranged in a manner that suitably reflects the progression of a typical psychedelic experience. Through discussion of the compositional methodology used, it is intended to demonstrate how ASC can be used to inform the design of sonic artworks. It is anticipated that this research will also contribute more generally to knowledge of possible approaches for the design of digital artworks that represent ASC. The emphasis of this paper is on the compositional process; it does not attempt to measure audience response to the music. Similarly, the process described should be seen as appropriate, but not absolute; implementations of this method involving slightly different subjective artistic judgements would be possible within the general framework discussed.
Bass drum, saxophone and laptop: real-time psychedelic performance software
Taking a performance by Z'EV and John Zorn as an inspirational starting point, Bass Drum, Sax & Laptop is a piece of software designed with Max/MSP which facilitates improvisational real-time performance for live instruments and electronics. This software and the music produced with it are a continuation of my research regarding compositional techniques that elicit altered states of consciousness. DSP effects are incorporated which process the live instruments, while a sampling module, the "atomizer", produces sound which is mimetic of visual patterns of hallucination. An integral feature of the software is the ability to automate control parameters temporally so that they respond to the live performance. This facilitates a system of interactivity in which the performers respond to the software and vice versa. The resulting spontaneous interactions and temporally shifting effects are intended to create an analogy between the sounds produced and the complex biological processes which produce dreams and hallucinations. In this article I will discuss the development of the software and its realisation in performance with Sol Nte on saxophone and myself on bass drum.
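The "atomizer" idea, a sampler that fragments incoming sound into pattern-like textures, can be illustrated with a minimal granular sketch. This is an assumption about the general technique, not a reconstruction of the original Max/MSP module:

```python
import numpy as np

# Illustrative 'atomizer'-style granular resampler: chop a buffer into
# short grains and re-emit them in shuffled order, producing a
# fragmented, pattern-like texture from the source material.
# This is a sketch of the general technique, not the original module.

def atomize(buffer, grain_len, seed=0):
    """Split `buffer` into grains of `grain_len` samples and shuffle them."""
    n_grains = len(buffer) // grain_len
    grains = buffer[:n_grains * grain_len].reshape(n_grains, grain_len)
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_grains)
    return grains[order].reshape(-1)

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)   # one second of A4 as source material
out = atomize(tone, grain_len=512)   # ~11.6 ms grains, reordered
```

In a live setting, the grain length and reorder rate would themselves be the automated control parameters that respond to the performance.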
Synaesthetic audio-visual sound toys in virtual reality
This paper discusses the design of audio-visual sound toys in Cyberdream, a virtual reality music visualization. While an earlier version of this project for Oculus GearVR provided a journey through audio-visual environments related to 1990s rave culture, the most recent iteration for Oculus Quest adds three audio-visual sound toys, the discussion of which is the main focus of this paper. In the latest version, the user flies through synaesthetic environments, while using the interactive controllers to manipulate the audio-visual sound toys and 'paint with sound'. These toys allow the user to playfully manipulate sound and image in a way that is complementary to, and interfaces with, the audio-visual backdrop provided by the VR music visualization. Through the discussion of novel approaches to design, the project informs new strategies in the field of VR music visualizations.
Representing altered states of consciousness in computer arts
It has been proposed that among the earliest known artworks produced by humans may have been representations of altered states of consciousness (ASCs). With the advent of modern computer technology that enables the creation of almost any sound or image imaginable, the possibility of representing the subjective visual and aural components of hallucinatory experiences with increased realism emerges. In order to consider how these representations could be created, this paper provides a discussion of existing work that represents ASCs. I commence by providing an overview of ASCs and a brief history of their use in culture. This provides the necessary background through which we may then consider the variety of art and music that represents ASCs, including: shamanic art and music, modern visual art, popular music, film and video games. Through discussion of the ways in which these examples represent ASC, a concept of 'ASC Simulation' is proposed, which emphasises realistic representations of ASCs. The paper concludes with a brief summary of several creative projects in computer music and arts that explore this area.
High-Level Analysis of Audio Features for Identifying Emotional Valence in Human Singing
Emotional analysis continues to be a topic that receives much attention in the audio and music community. The potential to link together human affective state and the emotional content or intention of musical audio has a variety of application areas in fields such as improving user experience of digital music libraries and music therapy. Less work has been directed into the emotional analysis of human a cappella singing. Recently, the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) was released, which includes emotionally validated human singing samples. In this work, we apply established audio analysis features to determine if these can be used to detect underlying emotional valence in human singing. Results indicate that the short-term audio features of: energy; spectral centroid (mean); spectral centroid (spread); spectral entropy; spectral flux; spectral rolloff; and fundamental frequency can be useful predictors of emotion, although their efficacy is not consistent across positive and negative emotions.
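Several of the short-term features named above can be computed directly from a framed signal. The sketch below shows plausible numpy implementations of energy, spectral centroid, spectral entropy, and spectral rolloff; frame sizes and normalisations are illustrative choices, not the exact settings used in the study:

```python
import numpy as np

# Minimal sketch of short-term audio features of the kind listed above.
# Windowing, frame length, and the 85% rolloff threshold are assumed
# values for illustration, not the study's exact configuration.

def frame_features(frame, sr):
    """Compute a few short-term features for one frame of audio."""
    energy = np.sum(frame ** 2) / len(frame)            # mean power
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    mag = spectrum / (np.sum(spectrum) + 1e-12)         # normalised magnitude
    centroid = np.sum(freqs * mag)                      # spectral 'centre of mass'
    entropy = -np.sum(mag * np.log2(mag + 1e-12))       # flatness of the spectrum
    cumulative = np.cumsum(mag)
    rolloff = freqs[np.searchsorted(cumulative, 0.85)]  # 85% energy rolloff point
    return {"energy": energy, "centroid": centroid,
            "entropy": entropy, "rolloff": rolloff}

sr = 16000
t = np.arange(1024) / sr
feats = frame_features(np.sin(2 * np.pi * 440 * t), sr)  # one frame of a 440 Hz tone
```

Spectral flux would additionally compare successive frames' magnitude spectra, so it needs two frames rather than one.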
Supervised machine learning for audio emotion recognition: Enhancing film sound design using audio features, regression models and artificial neural networks
This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature's AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: https://doi.org/10.1007/s00779-020-01389-0
The field of Music Emotion Recognition has become an established research sub-domain of Music Information Retrieval. Less attention has been directed towards the counterpart domain of Audio Emotion Recognition, which focuses upon detection of emotional stimuli resulting from non-musical sound. By better understanding how sounds provoke emotional responses in an audience, it may be possible to enhance the work of sound designers. The work in this paper uses the International Affective Digitized Sounds (IADS) set. A total of 76 features are extracted from the sounds, spanning the time and frequency domains. The features are then subjected to an initial analysis to determine what level of similarity exists between pairs of features, measured using Pearson's r correlation coefficient, before being used as inputs to a multiple regression model to determine their weighting and relative importance. The features are then used as the input to two machine learning approaches, regression modelling and artificial neural networks, in order to determine their ability to predict the emotional dimensions of arousal and valence. It was found that a small number of strong correlations exist between the features and that a greater number of features contribute significantly to the predictive power of emotional valence, rather than arousal. Shallow neural networks perform significantly better than a range of regression models, and the best performing networks were able to account for 64.4% of the variance in prediction of arousal and 65.4% in the case of valence. These findings are a major improvement over those encountered in the literature.
Several extensions of this research are discussed, including work related to improving data sets as well as the modelling processes.
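The modelling pipeline described, feature vectors feeding a shallow neural network regressor for one emotional dimension, can be sketched as follows. The synthetic data stands in for the IADS feature set, which is not reproduced here, and the network size is an assumption:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hedged sketch of the study's approach: a shallow neural network
# predicting a single emotional dimension (e.g. arousal) from 76 audio
# features. Data, hidden-layer size, and iteration count are assumed
# stand-ins, not the paper's configuration.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 76))     # 200 sounds x 76 extracted features
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
r2 = model.score(X_test, y_test)   # R^2: proportion of variance accounted for
```

The paper's headline figures (64.4% for arousal, 65.4% for valence) correspond to this R-squared style measure of variance accounted for; a second, separately trained model would handle the other emotional dimension.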
Enhancing film sound design using audio features, regression models and artificial neural networks
This is an Accepted Manuscript of an article published by Taylor & Francis in Journal of New Music Research on 21/09/2021, available online: https://doi.org/10.1080/09298215.2021.1977336
Making the link between human emotion and music is challenging. Our aim was to produce an efficient system that emotionally rates songs from multiple genres. To achieve this, we employed a series of online self-report studies, utilising Russell's circumplex model. The first study (n = 44) identified audio features that map to arousal and valence for 20 songs. From this, we constructed a set of linear regressors. The second study (n = 158) measured the efficacy of our system, utilising 40 new songs to create a ground truth. Results show our approach may be effective at emotionally rating music, particularly in the prediction of valence.
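The first-study step, fitting per-dimension linear regressors from audio features to mean self-reported ratings on Russell's circumplex axes, can be sketched as below. The feature values, weights, and ratings are synthetic stand-ins, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative sketch: one linear regressor per circumplex dimension
# (valence, arousal), fitted on per-song audio features. All numbers
# here are synthetic assumptions standing in for the study's data.
rng = np.random.default_rng(7)
features = rng.uniform(size=(20, 5))   # 20 songs x 5 audio features
valence = features @ np.array([0.6, -0.2, 0.1, 0.0, 0.3]) \
    + rng.normal(scale=0.05, size=20)  # mean self-report per song
arousal = features @ np.array([0.1, 0.5, -0.3, 0.2, 0.0]) \
    + rng.normal(scale=0.05, size=20)

valence_model = LinearRegression().fit(features, valence)
arousal_model = LinearRegression().fit(features, arousal)

# Rating an unseen song places it as a (valence, arousal) point
# in the circumplex plane.
new_song = rng.uniform(size=(1, 5))
predicted = (valence_model.predict(new_song)[0],
             arousal_model.predict(new_song)[0])
```

The second study's role would then be to compare such predictions against the ground-truth ratings gathered for the 40 new songs.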