Is music conscious? The argument from motion, and other considerations
Music is often described in anthropomorphic terms. This paper suggests that if we think about music in certain ways, we could think of it as conscious. Motional characteristics give music the impression of being alive, but musical motion is conventionally taken as metaphorical. The first part of this paper argues that metaphor may not be the exclusive means of understanding musical motion – there could also be literal ways. Discussing kinds of consciousness, particularly “access consciousness” (Block 1995), the second part proposes ways in which music could (hypothetically) be conscious. The conclusion states that a greater understanding of the interactions of “phenomenal consciousness” and “access consciousness” is important in conceptualizing non-human consciousnesses, such as music might be conceived to be.
A Conceptual Framework for Motion Based Music Applications
Imaginary projections are the core of the framework for motion-based music applications presented in this paper. Their design depends on the space covered by the motion-tracking device, but also on the musical feature involved in the application. They are a powerful tool because they allow not only the image of a traditional acoustic instrument to be projected into the virtual environment, but also any spatially defined abstract concept to be expressed. The system pipeline starts from the musical content and, through a geometrical interpretation, arrives at its projection in physical space. Three case studies involving different motion-tracking devices and different musical concepts are analyzed. The three examined applications have been implemented and tested by the authors. They aim, respectively, at expressive musical interaction (Disembodied Voices), tonal music knowledge (Harmonic Walk) and 20th-century music composition (Hand Composer).
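As an illustration of the idea only (not the authors' implementation), the sketch below projects a tracked hand position onto an imaginary keyboard laid out in physical space; the coordinate range, sensor frame and seven-note scale are all assumptions.

```python
# Minimal sketch of an "imaginary projection": map a tracked hand position
# onto a virtual keyboard spanning a region of physical space.
# Coordinates, ranges and the scale are illustrative assumptions.

C_MAJOR_MIDI = [60, 62, 64, 65, 67, 69, 71]  # C4..B4

def project_to_pitch(x: float, x_min: float = -0.5, x_max: float = 0.5) -> int:
    """Project the hand's x coordinate (metres, sensor frame) onto a note
    of an imaginary keyboard covering [x_min, x_max]."""
    t = min(max((x - x_min) / (x_max - x_min), 0.0), 1.0)  # normalise to [0, 1]
    index = min(int(t * len(C_MAJOR_MIDI)), len(C_MAJOR_MIDI) - 1)
    return C_MAJOR_MIDI[index]

if __name__ == "__main__":
    for x in (-0.5, -0.2, 0.0, 0.3, 0.5):
        print(f"hand at x={x:+.1f} m -> MIDI note {project_to_pitch(x)}")
```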
Multimodal music information processing and retrieval: survey and future challenges
To improve performance in various music information processing tasks, recent studies exploit different modalities able to capture diverse aspects of music. Such modalities include audio recordings, symbolic music scores, mid-level representations, motion and gestural data, video recordings, editorial or cultural tags, lyrics and album cover art. This paper critically reviews the approaches adopted in Music Information Processing and Retrieval and highlights how multimodal algorithms can help Music Computing applications. First, we categorize the related literature based on the application it addresses. Subsequently, we analyze existing information fusion approaches, and we conclude with the set of challenges that the Music Information Retrieval and Sound and Music Computing research communities should focus on in the coming years.
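As a concrete, hedged example of one fusion family such a survey covers, the sketch below implements weighted late fusion of per-modality class probabilities; the modalities (audio and lyrics), weights and data are placeholders, not taken from the paper.

```python
# Late fusion: per-modality classifiers are trained separately and their
# predicted class probabilities are combined by a weighted average.

import numpy as np

def late_fusion(prob_audio: np.ndarray, prob_lyrics: np.ndarray,
                weights=(0.6, 0.4)) -> np.ndarray:
    """Weighted average of class-probability vectors from two modalities."""
    w_a, w_l = weights
    return w_a * prob_audio + w_l * prob_lyrics

# Toy genre probabilities for [rock, jazz, classical] from each modality.
p_audio = np.array([0.7, 0.2, 0.1])
p_lyrics = np.array([0.4, 0.5, 0.1])
fused = late_fusion(p_audio, p_lyrics)
print(fused, "-> predicted class:", int(fused.argmax()))
```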
The sound motion controller: a distributed system for interactive music performance
We developed an interactive system for music performance that controls sound parameters responsively with respect to the user’s movements. The system is conceived as a mobile application, provided with beat tracking and expressive parameter modulation, interacting with motion sensors and effector units connected to a music output such as synthesizers or sound effects. We describe the various ways our system can be used and our achievements, aimed at increasing the expressiveness of music performance and providing an aid to music interaction. The results obtained outline a first level of integration and point to future cognitive and technological research.
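A minimal sketch of responsive parameter modulation of this general kind, assuming an accelerometer stream mapped to a synthesizer filter cutoff; the smoothing constant, scaling and sensor values are invented, and this is not the authors' code.

```python
# Smooth motion energy from an accelerometer and map it to a sound
# parameter (here, a filter cutoff in Hz). All constants are assumptions.

def modulation_from_accel(ax: float, ay: float, az: float,
                          state: dict, alpha: float = 0.2) -> float:
    """Exponentially smoothed motion energy, normalised to [0, 1]."""
    energy = (ax * ax + ay * ay + az * az) ** 0.5
    state["smooth"] = alpha * energy + (1 - alpha) * state.get("smooth", 0.0)
    return min(state["smooth"] / 20.0, 1.0)  # ~20 m/s^2 treated as vigorous motion

state = {}
for sample in [(0.1, 0.2, 9.8), (3.0, 1.0, 12.0), (8.0, 5.0, 15.0)]:
    cutoff = 200 + 4800 * modulation_from_accel(*sample, state)
    print(f"filter cutoff: {cutoff:.0f} Hz")
```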
Emotion resonance and divergence: a semiotic analysis of music and sound in 'The Lost Thing', an animated short film and 'Elizabeth' a film trailer
Music and sound may contribute interpersonal meanings to film narratives that differ from or resemble the meanings made by language and image, and dynamic interactions between several modalities may generate new story messages. Such interpretive potentials of music and voice sound in motion pictures are rarely considered in social semiotic investigations of intermodality. This paper therefore shares two semiotic studies of distinct and combined music, English speech and image systems in an animated short film and a promotional film trailer. The paper considers the impact of music and voice sound on interpretations of film narrative meanings. A music system relevant to the analysis of filmic emotion is proposed. Examples show how music and intonation contribute meaning to lexical, visual and gestural elements of the cinematic spaces. Also described are relations of divergence and resonance between emotion types in various couplings of music, intonation, words and images across story phases. The research is relevant to educational knowledge about sound and to semiotic studies of multimodality.
The impact of music and stretched time on pupillary responses and eye movements in slow-motion film scenes
This study investigated the effects of music and playback speed on arousal and visual perception in slow-motion scenes taken from commercial films. Slow motion is a ubiquitous and highly popular film technique, yet the psychological effects of mediated time-stretching compared to real-time motion have not been empirically investigated. We hypothesised that music affects arousal and attentional processes. Furthermore, we assumed that playback speed influences viewers’ visual perception, resulting in a higher number of eye movements and larger gaze dispersion. Thirty-nine participants watched three film excerpts in a repeated-measures design, in conditions with or without music and in slow motion vs. adapted real-time motion (both visual-only). Results show that music in slow-motion film scenes leads to higher arousal than no music, as indicated by larger pupil diameters. There was no systematic effect of music on visual perception in terms of eye movements. Playback speed influenced eye movement parameters such that slow motion resulted in more and shorter fixations as well as more saccades compared to adapted real-time motion. Furthermore, slow motion produced higher gaze dispersion and a smaller centre bias, indicating that individuals attended to more detail in slow-motion scenes.
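Gaze dispersion as used above can be operationalized, for example, as the mean distance of fixation points from their centroid; the sketch below uses invented fixation coordinates and is not the study's actual analysis pipeline.

```python
# Gaze dispersion: mean Euclidean distance of fixations from their centroid.
# Fixation data and screen coordinates are toy values for illustration.

import numpy as np

def gaze_dispersion(fixations: np.ndarray) -> float:
    """fixations: (n, 2) array of x/y fixation coordinates in pixels."""
    centroid = fixations.mean(axis=0)
    return float(np.linalg.norm(fixations - centroid, axis=1).mean())

slow_motion = np.array([[300, 200], [900, 650], [1500, 300], [700, 900]])
real_time = np.array([[930, 520], [960, 545], [945, 530], [955, 540]])
print(gaze_dispersion(slow_motion))  # larger: attention spread over the frame
print(gaze_dispersion(real_time))    # smaller: strong centre bias
```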
MDSC: Towards Evaluating the Style Consistency Between Music and Dance
We propose MDSC (Music-Dance-Style Consistency), the first evaluation metric that assesses the degree to which dance moves and music match. Existing metrics can only evaluate the fidelity and diversity of motion and the degree of rhythmic matching between music and motion. MDSC measures how stylistically correlated the generated dance motion sequences and the conditioning music sequences are. We found that directly measuring the embedding distance between motion and music is not an optimal solution. We instead tackle this by modelling it as a clustering problem. Specifically, 1) we pre-train a music encoder and a motion encoder, then 2) we learn to map and align the motion and music embeddings in a joint space by jointly minimizing the intra-cluster distance and maximizing the inter-cluster distance, and 3) for evaluation purposes, we encode the dance moves into embeddings and measure the intra-cluster and inter-cluster distances, as well as the ratio between them. We evaluate our metric on the results of several music-conditioned motion generation methods and, combined with a user study, find that our proposed metric is robust in measuring the music-dance style correlation. The code is available at: https://github.com/zixiangzhou916/MDSC. (Comment: 17 pages, 17 figures)
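A hedged sketch of the evaluation step (3) described above: given paired music and motion embeddings in the joint space, compute the intra-cluster distance over matched pairs, the inter-cluster distance over mismatched pairs, and their ratio. Embedding dimensions and data are placeholders; the linked repository holds the actual implementation.

```python
# Intra-/inter-cluster distances and their ratio for paired embeddings.

import numpy as np

def mdsc_style_distances(music_emb: np.ndarray, motion_emb: np.ndarray):
    """music_emb, motion_emb: (n, d) L2-normalised embeddings, row i paired."""
    # Intra-cluster: distance between each motion and its conditioning music.
    intra = np.linalg.norm(music_emb - motion_emb, axis=1).mean()
    # Inter-cluster: distance between each motion and every *other* music clip.
    diffs = music_emb[None, :, :] - motion_emb[:, None, :]  # (n, n, d)
    dists = np.linalg.norm(diffs, axis=-1)
    n = len(music_emb)
    inter = dists[~np.eye(n, dtype=bool)].mean()
    return intra, inter, intra / inter  # lower ratio -> stronger style match

rng = np.random.default_rng(0)
m = rng.normal(size=(8, 16)); m /= np.linalg.norm(m, axis=1, keepdims=True)
d = m + 0.1 * rng.normal(size=(8, 16)); d /= np.linalg.norm(d, axis=1, keepdims=True)
print(mdsc_style_distances(m, d))
```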
Automatic Dance Generation System Considering Sign Language Information
In recent years, thanks to the development of 3DCG animation editing tools (e.g. MikuMikuDance), many 3D character dance animation movies have been created by amateur users. However, it is very difficult to create choreography from scratch without technical knowledge. Shiratori et al. [2006] produced an automatic dance generation system considering the rhythm and intensity of dance motions. However, each segment is selected randomly from a database, so the generated dance motion has no linguistic or emotional meaning. Takano et al. [2010] produced a human motion generation system considering motion labels. However, they use simple motion labels like “running” or “jump”, so they cannot generate motions that express emotions. In reality, professional dancers create choreography based on music features or lyrics, and express emotion or how they feel about the music. In our work, we aim to generate more emotional dance motion easily. Therefore, we use the linguistic information in lyrics to generate dance motion.
In this paper, we propose a system that generates sign dance motion from continuous sign language motion based on the lyrics of the music. This system could help deaf people experience music as a visualized-music application.
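A minimal sketch of the core idea under stated assumptions: lyric words index a dictionary of sign-motion segments, which are concatenated into a dance sequence. The dictionary, pose representation and fallback behaviour are invented, not the authors' data or system.

```python
# Map lyric words to sign-language motion segments and concatenate them.
# SIGN_DICT entries and pose identifiers are toy placeholders.

SIGN_DICT = {                      # word -> list of pose frames
    "love": ["pose_L1", "pose_L2"],
    "sky":  ["pose_S1", "pose_S2", "pose_S3"],
}
NEUTRAL = ["pose_rest"]            # fallback for words without a sign entry

def lyrics_to_sign_dance(lyrics: str) -> list:
    """Concatenate sign motion segments for known lyric words,
    inserting a rest pose for unknown words."""
    frames = []
    for word in lyrics.lower().split():
        frames.extend(SIGN_DICT.get(word, NEUTRAL))
    return frames

print(lyrics_to_sign_dance("Love the sky"))
```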
Dance-the-music : an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates
In this article, a computational platform is presented, entitled “Dance-the-Music”, that can be used in a dance-educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teachers’ models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps correctly. Moreover, recognition algorithms based on a template-matching method can determine the quality of a student’s performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.
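Template matching over motion-capture trajectories can be sketched, for instance, with dynamic time warping between a teacher's step template and a student's performance, the warped distance serving as a quality score; the paper's exact matching method may differ, and the one-dimensional joint trajectory below is toy data.

```python
# Classic O(n*m) dynamic time warping between two 1-D motion trajectories.

def dtw_distance(template, performance):
    """Return the DTW alignment cost between two sequences of floats."""
    n, m = len(template), len(performance)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(template[i - 1] - performance[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

teacher = [0.0, 0.5, 1.0, 0.5, 0.0]        # e.g. hip height over one step
student = [0.0, 0.4, 0.9, 1.0, 0.6, 0.1]   # slightly slower execution
print(f"step quality (lower is better): {dtw_distance(teacher, student):.2f}")
```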