Basic gestures as spatiotemporal reference frames for repetitive dance/music patterns in samba and charleston
The goal of the present study is to gain better insight into how dancers establish, through dancing, a spatiotemporal reference frame in synchrony with musical cues. To this end, repetitive dance patterns of samba and Charleston were recorded using a three-dimensional motion capture system. Geometric patterns were then extracted from each joint of the dancer's body. The method uses a body-centered reference frame and decomposes the movement into non-orthogonal periodicities that match periods of the musical meter. Musical cues (such as meter and loudness) as well as action-based cues (such as velocity) can be projected onto the patterns, thus providing spatiotemporal reference frames, or 'basic gestures,' for action-perception couplings. Conceptually speaking, the spatiotemporal reference frames control minimum-effort points in action-perception couplings. They reside as memory patterns in the mental and/or motor domains, ready to be dynamically transformed in dance movements. The present study raises a number of hypotheses related to spatial cognition that may serve as guiding principles for future dance/music studies.
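The idea of decomposing a joint trajectory into periodicities matched to the musical meter can be illustrated with a least-squares fit of sine/cosine pairs at meter-related periods. This is only a minimal stand-in for the paper's method: the function name, the sampling rate, and the beat/bar periods below are all illustrative assumptions, not the study's actual procedure.

```python
import numpy as np

def fit_meter_periodicities(trajectory, fs, periods):
    """Least-squares fit of one sine/cosine pair per given period (seconds)
    to a 1-D joint trajectory sampled at fs Hz. Returns the fitted
    component waveform for each period."""
    t = np.arange(len(trajectory)) / fs
    # Design matrix: one sine and one cosine column per meter period.
    cols = []
    for p in periods:
        cols.append(np.sin(2 * np.pi * t / p))
        cols.append(np.cos(2 * np.pi * t / p))
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, trajectory, rcond=None)
    # Reassemble one fitted waveform per period from its sin/cos pair.
    return [X[:, 2*i:2*i+2] @ coef[2*i:2*i+2] for i in range(len(periods))]

# Toy 'hip' trajectory with a 0.5 s (beat) cycle and a weaker 2 s (bar) cycle.
fs = 100
t = np.arange(0, 4, 1 / fs)
traj = 1.0 * np.sin(2 * np.pi * t / 0.5) + 0.3 * np.sin(2 * np.pi * t / 2.0)
beat, bar = fit_meter_periodicities(traj, fs, [0.5, 2.0])
```

On this synthetic trajectory the fit recovers the two periodic components and their relative amplitudes, which is the kind of separation the paper's meter-matched decomposition provides.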
The sound motion controller: a distributed system for interactive music performance
We developed an interactive system for music performance that can control sound parameters responsively with respect to the user's movements. The system is conceived as a mobile application, provided with beat tracking and expressive parameter modulation, that interacts with motion sensors and effector units connected to a music output such as synthesizers or sound effects. We describe the various ways in which our system can be used and our achievements, aimed at increasing the expressiveness of music performance and providing an aid to music interaction. The results obtained outline a first level of integration and point to future cognitive and technological research.
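A motion-to-sound-parameter mapping of the kind this system performs can be sketched as a simple function from accelerometer readings to a synthesis parameter. Everything here is a hypothetical illustration (the function name, the cutoff range, and the choice of a filter-cutoff target are not taken from the paper):

```python
import math

def motion_to_cutoff(accel_xyz, base_hz=400.0, span_hz=4000.0, g=9.81):
    """Map an accelerometer reading (m/s^2) to a filter cutoff in Hz:
    a resting sensor (magnitude ~1 g) gives the base cutoff, and
    vigorous movement raises it toward base + span. A hypothetical
    mapping, not the paper's actual modulation law."""
    mag = math.sqrt(sum(a * a for a in accel_xyz))
    excess = max(0.0, mag - g) / g  # 0 at rest, ~1 at 2 g
    return base_hz + span_hz * min(1.0, excess)
```

In a real system a mapping like this would be evaluated per sensor frame and smoothed before being sent to the synthesizer or effect unit.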
Seeing with sound? Exploring different characteristics of a visual-to-auditory sensory substitution device
Sensory substitution devices convert live visual images into auditory signals, for example with a web camera (to record the images), a computer (to perform the conversion), and headphones (to listen to the sounds). In a series of three experiments, the performance of one such device ('The vOICe') was assessed under various conditions with blindfolded sighted participants. The main task involved identifying and locating objects placed on a table while holding a webcam (like a flashlight) or wearing it on the head (like a miner's light). Identifying objects on a table was easier with the hand-held device, but locating the objects was easier with the head-mounted device. Converting brightness into loudness was less effective than the reverse contrast (dark being loud), suggesting that performance under these conditions (natural indoor lighting, novice users) is related more to the properties of the auditory signal (i.e., the amount of noise in it) than to the cross-modal association between loudness and brightness. Individual differences in musical memory (detecting pitch changes between two sequences of notes) were related to the time taken to identify or recognise objects, but individual differences in self-reported vividness of visual imagery did not reliably predict performance across the experiments. In general, the results suggest that the auditory characteristics of the device may be more important for initial learning than visual associations.
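The visual-to-auditory conversion can be sketched as a left-to-right column scan in which vertical position maps to pitch and pixel brightness maps to loudness ('bright is loud'). The scan scheme follows the general vOICe principle, but the sample rate, frequency range, and column duration below are illustrative choices, not the device's actual settings:

```python
import numpy as np

def image_to_sound(img, fs=8000, col_dur=0.05, f_lo=500.0, f_hi=4000.0):
    """Sonification sketch: scan a grayscale image (rows x cols, values
    0..1) left to right; each pixel row drives a sinusoid whose frequency
    rises with image height and whose amplitude follows pixel brightness."""
    rows, cols = img.shape
    n = int(fs * col_dur)          # samples per image column
    t = np.arange(n) / fs
    freqs = np.linspace(f_hi, f_lo, rows)  # top row -> highest pitch
    out = []
    for c in range(cols):
        tone = sum(img[r, c] * np.sin(2 * np.pi * freqs[r] * t)
                   for r in range(rows))
        out.append(tone)
    sig = np.concatenate(out)
    return sig / max(1e-9, np.max(np.abs(sig)))  # normalise to [-1, 1]

# A single bright pixel in the top-left corner becomes a short high tone
# at the very start of the scan; the rest of the signal is silence.
img = np.zeros((8, 4))
img[0, 0] = 1.0
audio = image_to_sound(img)
```

Mappings like this make concrete why the signal's auditory properties (noise, register, contrast polarity) can dominate early learning: they determine how distinguishable two scenes sound before any visual association is formed.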
Modelling Methods for the Highly Dispersive Slinky Spring: A Novel Musical Toy
The 'Slinky' spring is a popular and beloved toy for many children. Like its smaller relatives, used in spring reverberation units, it can produce interesting sonic behaviors. We explore the behavior of the 'Slinky' spring via measurement, and discover that its sonic characteristics are notably different from those of smaller springs. We discuss methods of modeling the behavior of a Slinky via the use of finite-difference techniques and digital waveguides. We then apply these models in different structures to build a number of interesting tools for computer-based music production.
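A common way to approximate a dispersive spring with a digital waveguide is a feedback delay line with a cascade of allpass filters in the loop: the allpasses delay high and low frequencies by different amounts, producing the characteristic frequency-dependent 'chirp' of springs. The sketch below follows that general scheme; the delay length, allpass coefficient, and feedback gain are illustrative values, not parameters fitted to a real Slinky or taken from the paper.

```python
import numpy as np

def dispersive_waveguide(excitation, delay, n_allpass=8, ap_coef=0.5,
                         feedback=0.995):
    """Toy digital-waveguide spring: a feedback delay line whose loop
    contains a cascade of first-order allpass sections, giving a
    frequency-dependent (dispersive) loop delay."""
    buf = np.zeros(delay)           # circular delay line
    ap_state = np.zeros(n_allpass)  # one state per allpass section
    out = np.zeros(len(excitation))
    idx = 0
    for n in range(len(excitation)):
        x = excitation[n] + feedback * buf[idx]
        # First-order allpass cascade: y = -a*x + s;  s' = x + a*y
        for k in range(n_allpass):
            y = -ap_coef * x + ap_state[k]
            ap_state[k] = x + ap_coef * y
            x = y
        out[n] = x
        buf[idx] = x
        idx = (idx + 1) % delay
    return out

# Excite with a single impulse and render half a second at 8 kHz.
exc = np.zeros(4000)
exc[0] = 1.0
y = dispersive_waveguide(exc, delay=200)
```

A finite-difference model would instead discretise the spring's equations of motion directly; the waveguide form above trades physical accuracy for a very cheap loop.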
A Feature Learning Siamese Model for Intelligent Control of the Dynamic Range Compressor
In this paper, a Siamese DNN model is proposed to learn the characteristics of the audio dynamic range compressor (DRC). This facilitates an intelligent control system that uses audio examples to configure the DRC, a widely used non-linear audio signal conditioning technique in the areas of music production, speech communication, and broadcasting. Several alternative Siamese DNN architectures are proposed to learn feature embeddings that can characterise subtle effects due to dynamic range compression. These models are compared with each other as well as with handcrafted features proposed in previous work. An evaluation of the relations between the DNN hyperparameters and the DRC parameters is also provided. The best model is able to produce a universal feature embedding capable of predicting multiple DRC parameters simultaneously, a significant improvement over our previous research. The feature embedding shows better performance than handcrafted audio features when predicting DRC parameters for both mono-instrument audio loops and polyphonic music pieces.
Comment: 8 pages, accepted in IJCNN 201
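The effect whose parameters the Siamese model predicts can be sketched as a minimal feed-forward hard-knee compressor: a smoothed level detector followed by gain reduction above a threshold at a given ratio. The threshold, ratio, and attack/release coefficients below are illustrative defaults, not values from the paper.

```python
import numpy as np

def compress(x, threshold_db=-20.0, ratio=4.0, attack=0.9, release=0.9995):
    """Minimal hard-knee dynamic range compressor: one-pole attack/release
    envelope follower, then gain reduction of (1 - 1/ratio) dB per dB
    above the threshold. Signals below threshold pass unchanged."""
    env = 0.0
    out = np.empty_like(x)
    for n, s in enumerate(x):
        level = abs(s)
        # Smaller coefficient -> faster tracking: fast attack, slow release.
        coef = attack if level > env else release
        env = coef * env + (1 - coef) * level
        level_db = 20 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = -over * (1 - 1 / ratio) if over > 0 else 0.0
        out[n] = s * 10 ** (gain_db / 20)
    return out
```

Even this simple form shows why the effect is hard to characterise from audio alone: the threshold, ratio, and time constants interact non-linearly with the signal's own dynamics, which is what the learned embedding has to disentangle.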