Is Vivaldi smooth and takete? Non-verbal sensory scales for describing music qualities
Studies on the perception of music qualities (such as induced or perceived emotions, performance styles, or timbre nuances) make extensive use of verbal descriptors. Although many authors have noted that particular music qualities can hardly be described by means of verbal labels, few studies have tried alternatives. This paper explores the use of non-verbal sensory scales to represent different perceived qualities in Western classical music. Musically trained and untrained listeners were asked to listen to six musical excerpts in major keys and to evaluate them from a sensorial and semantic point of view (Experiment 1). The same design (Experiment 2) was conducted with musically trained and untrained listeners who were asked to listen to six musical excerpts in minor keys. The overall findings indicate that subjects' ratings on non-verbal sensory scales are consistent throughout, and the results support the hypothesis that sensory scales can convey specific sensations that cannot be described verbally, offering interesting insights that deepen our knowledge of the relationship between music and other sensorial experiences. Such research can foster applications in music information retrieval and the exploration of timbre spaces, together with experiments applied to different musical cultures and contexts.
Synthesis of variable dancing styles based on a compact spatiotemporal representation of dance
Dance, as a complex expressive form of motion, can convey emotion, meaning and social idiosyncrasies, opening channels for non-verbal communication and promoting rich cross-modal interactions with music and the environment. As such, realistic dancing characters may incorporate cross-modal information and the variability of dance forms through compact representations that describe the movement structure in terms of its spatial and temporal organization. In this paper, we propose a novel method for synthesizing beat-synchronous dancing motions based on a compact topological model of dance styles, previously captured with a motion capture system. The model is based on Topological Gesture Analysis (TGA), which conveys a discrete three-dimensional point-cloud representation of the dance by describing the spatiotemporal variability of its gestural trajectories as uniform spherical distributions, according to classes of the musical meter. The methodology for synthesizing the modeled dance traces the topological representations, constrained by definable metrical and spatial parameters, back into complete dance instances whose variability is controlled by stochastic processes that consider both the TGA distributions and the kinematic constraints of the body morphology. To assess the relevance and flexibility of each parameter in feasibly reproducing the style of the captured dance, we correlated captured and synthesized trajectories of samba dancing sequences in relation to the compression level of the model used, and report on a subjective evaluation over a set of six tests. The results validate our approach, suggesting that a periodic dancing style, and its musical synchrony, can be feasibly reproduced from a suitably parametrized discrete spatiotemporal representation of the gestural motion trajectories, with a notable degree of compression.
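The core idea above, spherical distributions per metrical class that are later sampled to regenerate motion, can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's TGA implementation: the sphere centers, radii and the four metrical classes are invented values.

```python
import math
import random

# Hypothetical sketch of the TGA-style idea: each class of the musical
# meter gets a spherical distribution (center + radius) summarizing where
# a gesture trajectory tends to lie; synthesis draws one point per beat
# from the sphere of the current metrical class. All values are toy
# assumptions, not data from the paper.
random.seed(0)

# Toy model: 4 metrical classes for a 4-beat bar.
SPHERES = {
    1: ((0.0, 1.0, 0.0), 0.10),
    2: ((0.2, 1.1, 0.1), 0.15),
    3: ((0.0, 0.9, 0.2), 0.10),
    4: ((-0.2, 1.0, 0.0), 0.12),
}

def sample_point(meter_class):
    """Draw one 3-D point uniformly from the sphere of a metrical class."""
    (cx, cy, cz), radius = SPHERES[meter_class]
    # Uniform direction via a normalized Gaussian vector.
    gx, gy, gz = (random.gauss(0, 1) for _ in range(3))
    norm = math.sqrt(gx * gx + gy * gy + gz * gz)
    # Cube-root scaling gives uniform density over the sphere's volume.
    r = radius * random.random() ** (1.0 / 3.0)
    return (cx + r * gx / norm, cy + r * gy / norm, cz + r * gz / norm)

# Two bars of beat-synchronous points (meter classes 1..4 repeating).
trajectory = [sample_point(beat % 4 + 1) for beat in range(8)]
```

In the paper's setting the sampled points would additionally be filtered by the kinematic constraints of the body model; here each beat simply yields an independent draw from its class's sphere.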
Using fuzzy logic to handle the semantic descriptions of music in a content-based retrieval system
This paper explores the potential use of fuzzy logic for semantic music recommendation. We show that a set of affective/emotive, structural and kinaesthetic descriptors can be used to formulate a query that allows the retrieval of intended music. A semantic music recommendation system was built, based on an elaborate study of potential users and an analysis of the semantic descriptors that best characterize users' understanding of music. Significant relationships between expressive and structural semantic descriptions of music were found. Fuzzy logic was then applied to handle the quality ratings associated with the semantic descriptions. A working semantic music recommendation system was tested and evaluated; real-world testing revealed high user satisfaction.
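A minimal sketch of how fuzzy logic can handle such quality ratings, assuming (since the abstract gives no specifics) a 0-10 rating scale, a triangular membership function, and a hypothetical descriptor library; none of these names come from the paper:

```python
# Hedged sketch, not the authors' system: a fuzzy membership function lets
# a query level for a descriptor (e.g. "rousing") match songs whose
# annotated rating is close but not identical to the query. The songs,
# ratings and thresholds below are illustrative assumptions.

def triangular(x, a, b, c):
    """Triangular membership function: peaks at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Annotated ratings (0-10 scale) for one descriptor, e.g. "rousing".
library = {"song_a": 8.5, "song_b": 4.0, "song_c": 6.5}

def retrieve(query_level, library, threshold=0.3):
    """Rank songs by fuzzy match between their rating and the query level."""
    scores = {
        song: triangular(rating, query_level - 3, query_level, query_level + 3)
        for song, rating in library.items()
    }
    return sorted(
        ((s, m) for s, m in scores.items() if m >= threshold),
        key=lambda kv: -kv[1],
    )

# A query for "quite rousing" (level 7) returns graded, not binary, matches.
matches = retrieve(7.0, library)
```

The point of the fuzzy treatment is the graded ranking: a song rated 6.5 matches a query for 7 more strongly than one rated 8.5, while a song rated 4 falls below the membership threshold and is excluded.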
Methodological considerations concerning manual annotation of musical audio in function of algorithm development
In research on musical audio-mining, annotated music databases are needed to allow the development of computational tools that extract from the musical audio stream the kind of high-level content that users deal with in Music Information Retrieval (MIR) contexts. The notion of musical content, and therefore the notion of annotation, is ill-defined, however, in both the syntactic and the semantic sense. As a consequence, annotation has been approached from a variety of perspectives (mainly linguistic-symbolic oriented), and a general methodology is lacking. This paper is a step towards the definition of a general framework for manual annotation of musical audio, in function of a computational approach to musical audio-mining based on algorithms that learn from annotated data.
Using fuzzy logic to handle the users' semantic descriptions in a music retrieval system
This paper investigates the potential application of fuzzy logic to semantic music recommendation. We show that a set of affective/emotive, structural and kinaesthetic descriptors can be used to formulate a query that allows the retrieval of intended music. A semantic music recommendation system was built, based on an elaborate study of potential users of music information retrieval systems, which analyzed the descriptors that best characterize the user's understanding of music. Significant relationships between expressive and structural descriptions of music were found. A straightforward fuzzy logic methodology was then applied to handle the quality ratings associated with the descriptions. Rigorous real-world testing of the semantic music recommendation system revealed high user satisfaction.
From expressive gesture to sound: the development of an embodied mapping trajectory inside a musical interface
This paper contributes to the development of a multimodal musical tool that extends the natural action range of the human body to communicate expressiveness in the virtual music domain. The core of this musical tool is a low-cost, highly functional computational model developed on the Max/MSP platform that (1) captures real-time movement of the human body in a 3D coordinate system on the basis of the orientation output of any OSC-compatible inertial sensor system, (2) extracts low-level movement features that specify the amount of contraction/expansion as a measure of how a subject uses the surrounding space, (3) recognizes these movement features as expressive gestures, and (4) creates a mapping trajectory between these expressive gestures and a sound synthesis process that adds harmonically related voices to an originally monophonic voice. A user-oriented and intuitive mapping strategy was of central importance; this was achieved by conducting an empirical experiment based on theoretical concepts from the embodied music cognition paradigm. Based on empirical evidence, this paper proposes a mapping trajectory that facilitates the interaction between musicians and their instruments, artistic collaboration between (multimedia) artists, and the communication of expressiveness in a social, musical context.
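The contraction/expansion feature mentioned in step (2) can be illustrated with a simple sketch. This is an assumption on our part, not the paper's Max/MSP model: one common way to quantify how much of the surrounding space a body occupies is the mean distance of tracked joint positions from their centroid.

```python
import math

# Illustrative sketch (an assumed feature definition, not the paper's):
# mean Euclidean distance of 3-D joint positions from their centroid.
# Larger values mean the body is more expanded in space.

def contraction_index(joints):
    """Mean distance of (x, y, z) joint positions from their centroid."""
    n = len(joints)
    cx = sum(p[0] for p in joints) / n
    cy = sum(p[1] for p in joints) / n
    cz = sum(p[2] for p in joints) / n
    return sum(
        math.sqrt((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)
        for x, y, z in joints
    ) / n

# Toy frames with made-up coordinates: arms near the torso vs. spread wide.
contracted = [(0.0, 1.5, 0.0), (0.1, 1.0, 0.0), (-0.1, 1.0, 0.0)]
expanded = [(0.0, 1.5, 0.0), (0.8, 1.2, 0.0), (-0.8, 1.2, 0.0)]
```

A per-frame scalar like this is what a mapping layer could then translate into a synthesis parameter, for example the number of added harmonic voices.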
Design Strategies for Adaptive Social Composition: Collaborative Sound Environments
To develop successful collaborative music systems, a variety of subtle interactions need to be identified and integrated. Gesture capture, motion tracking, real-time synthesis, environmental parameters and ubiquitous technologies can each be used effectively to develop innovative approaches to instrument design, sound installations, interactive music and generative systems. Current solutions tend to prioritise one or more of these approaches, refining a particular interface technology, software design or compositional approach developed for a specific composition, performer or installation environment. Within this diverse field a group of novel controllers, described as ‘Tangible Interfaces’, has been developed. These are intended for use by novices and in many cases follow a simple model of interaction, controlling synthesis parameters through simple user actions. Other approaches offer sophisticated compositional frameworks, but many of these are idiosyncratic and highly personalised; as such they are difficult to engage with and ineffective for groups of novices. The objective of this research is to develop effective design strategies for implementing collaborative sound environments, using key terms and vocabulary drawn from the available literature. This is articulated by combining an empathic design process with controlled sound perception and interaction experiments. The identified design strategies have been applied to the development of a new collaborative digital instrument. A range of technical and compositional approaches was considered to define this process, which can be described as Adaptive Social Composition.
Dan Livingston
The gesture's narrative : contemporary music for percussion
Musical performance gestures are recognized by most theoreticians as a critical factor in musical performance. The aim of a musical performance may consist not only in communicating the musical signs that form a piece, but also in conveying a meaningful succession of gestures, facial expressions and body movements. This meaningful succession, or “gesture's narrative”, is assumed to be quite important in directing the audience towards the intended interpretation.
Recorded music allowed audiences to listen to music without having to attend a musical event. On the one hand, this made the listening experience more intense, allowing listeners to concentrate exclusively on the aural information; on the other hand, it also imposed restrictions on perception, as the syncretic experience of listening and seeing became separated into its constituents.
Gestures can be considered operating features of a person's perception-action system, which presupposes a meaning that involves more than just physical movement. Movements can be subdivided into specific patterns and conceptualized. Conceptualized gestures are kept in people's minds as single units, and the subdivision operations are carried out both by performers and by the audience. Musical communication through gestures is therefore not about movement only; it should be viewed as structured interaction.
For this research, contemporary solo percussion performance is analyzed. Percussive performance is extremely wide-ranging and is accompanied by vivid visual imagery provided by the musicians themselves. From this perspective, observing percussionists' playing manner and their audience gives the researcher an opportunity to understand the narrative capacity of music through musicians' gestures. A quantitative research design divided into three experiments was chosen for this study: a description of objective reality using numbers in order to construct meaningful models reflecting various relationships between objects or phenomena. These numerical entities are not reality itself, but a way of representing it.
Moreover, the chosen experimental design makes it possible not only to establish the existence of certain effects of one variable on another, but also to study the magnitude of these effects, addressing the two major research questions:
Is it possible to detect a percussive gesture's narrative?
How does the percussive gesture influence the perception of musical narrative?
Sound-producing gestures are recognized by most theoreticians as a determining factor in musical performance and its perception. The connection between gestures, sounds and the perception of a given musical discourse has already been addressed by a large number of researchers, although there is no clear consensus on the extent to which this connection is fundamental, or on the cognitive operations underlying the perception of a musical piece. The aim of interpretation in music consists not only in communicating the musical signs that form a work, but also in conveying the meaningful succession of gestures, expressions and body movements. This meaningful succession, or, put another way, the “gesture's narrative”, is considered very important for the process of leading an audience towards the intended interpretation. Technological progress at this stage of society's development has created ample opportunities for separating the auditory and visual activities of music. The recording and subsequent diffusion of music allowed the public to consume music without having to attend a musical event for that purpose. On the one hand, this phenomenon made the listening experience more frequent and arguably more focused, allowing the listener to concentrate exclusively on auditory information. On the other hand, it also imposed restrictions on a syncretic musical experience, with hearing and seeing separated into constituents within the musical phenomenon.
Gestures can be considered operating features of a human being's perception-action system. This presupposes the attribution of expression to a meaning that involves more than just a physical movement. Movements can be subdivided into specific patterns and conceptualized. These conceptualized gestures are kept as single units, and the meaningful subdivision operations are carried out both by performers and by their audience. Musical communication through gestures should therefore not be regarded only in terms of movement, but as structured and musically contextualized interaction. The processes described above result largely from the environment surrounding the individual listener and depend strongly on his or her singularity and cultural context. Not all movements can be called performative gestures, beyond those whose action is intentionally expressive in nature or inherent to sound production. In this research, the performance of contemporary music for solo percussion is analyzed.
In general, percussionists' performance is, from a visual point of view, extremely rich in the formation of gestures. From that perspective, observing an audience exposed to their presence, with and without visual contact with their action, provides an opportunity to approach the study of gesture and its narrative from the standpoint of the perception of musical discourse.
A quantitative research design divided into three experiments was the path chosen for the present study. A description of objective reality was produced using numbers, so as to construct meaningful models that could reflect the various relationships between objects or phenomena. These numerical entities are thus not a reality in themselves, but a possible way of representing it. The three-part experimental process gives us the opportunity to perceive not only the existence of the effects of one variable on the other (visual and auditory), but also allows a reflection on the magnitude of these effects, thus attempting to answer the questions that drive this investigation:
Is it possible to detect a narrative in the percussive gesture?
How does the percussive gesture influence the perception of musical discourse?
- …