Analyzing Visual Mappings of Traditional and Alternative Music Notation
In this paper, we postulate that combining the domains of information
visualization and music studies paves the ground for a more structured analysis
of the design space of music notation, enabling the creation of alternative
music notations that are tailored to different users and their tasks. Hence, we
discuss the instantiation of a design and visualization pipeline for music
notation that follows a structured approach, based on the fundamental concepts
of information and data visualization. This enables practitioners and
researchers of digital humanities and information visualization, alike, to
conceptualize, create, and analyze novel music notation methods. Based on the
analysis of relevant stakeholders and their usage of music notation as a means
of communication, we identify a set of relevant features typically encoded in
different annotations and encodings, as used by interpreters, performers, and
readers of music. We analyze the visual mappings of musical dimensions for
varying notation methods to highlight gaps and frequent usages of encodings,
visual channels, and Gestalt laws. This detailed analysis leads us to the
conclusion that such an under-researched area in information visualization
holds the potential for fundamental research. This paper discusses possible
research opportunities, open challenges, and arguments that can be pursued in
the process of analyzing, improving, or rethinking existing music notation
systems and techniques.

Comment: 5 pages including references, 3rd Workshop on Visualization for the
Digital Humanities, Vis4DH, IEEE Vis 201
Music Visualization Using Source Separated Stereophonic Music
This thesis introduces a music visualization system for stereophonic source separated music. Music visualization systems are a popular way to represent information from audio signals through computer graphics. Visualization can help people better understand music and its complex and interacting elements. This music visualization system extracts pitch, panning, and loudness features from source separated audio files to create the visual. Most state-of-the-art visualization systems develop their visual representation of the music from either the fully mixed final song recording, where all of the instruments and vocals are combined into one file, or from the digital audio workstation (DAW) data containing multiple independent recordings of individual audio sources. Original source recordings are not always readily available to the public, so music source separation (MSS) can be used to obtain estimated versions of the audio source files. This thesis surveys different approaches to MSS and music visualization, and introduces a new music visualization system specifically for source separated music.
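The panning and loudness extraction described in the abstract can be sketched roughly as follows. This is an illustrative fragment, not the thesis's actual implementation: the frame size, hop size, and the energy-based panning index are all assumptions.

```python
import numpy as np

def frame_features(stereo, frame_len=2048, hop=1024):
    """Compute per-frame loudness (RMS) and panning (-1 = hard left,
    +1 = hard right) for a stereo signal of shape (n_samples, 2)."""
    feats = []
    for start in range(0, len(stereo) - frame_len + 1, hop):
        frame = stereo[start:start + frame_len]
        left, right = frame[:, 0], frame[:, 1]
        e_l, e_r = np.sum(left ** 2), np.sum(right ** 2)
        # RMS loudness over both channels
        loudness = np.sqrt((e_l + e_r) / (2 * frame_len))
        # Energy-based panning index; epsilon avoids division by zero
        pan = (e_r - e_l) / (e_l + e_r + 1e-12)
        feats.append((loudness, pan))
    return feats

# A separated source panned hard right: the right channel carries
# a 440 Hz tone, the left channel is silent.
sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
stereo = np.stack([np.zeros_like(tone), tone], axis=1)
feats = frame_features(stereo)
```

Running this per separated stem (rather than on the full mix) is what lets the visualization attribute panning and loudness to individual sources.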
Generating Music from Literature
We present a system, TransProse, that automatically generates musical pieces
from text. TransProse uses known relations between elements of music such as
tempo and scale, and the emotions they evoke. Further, it uses a novel
mechanism to determine sequences of notes that capture the emotional activity
in the text. The work has applications in information visualization, in
creating audio-visual e-books, and in developing music apps.
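A mapping from text emotions to musical parameters, in the spirit of the abstract, might look like the sketch below. The thresholds, tempo range, and emotion categories are illustrative assumptions, not TransProse's published parameters.

```python
def music_params(emotion_counts, n_words):
    """Derive a tempo (BPM) and scale from emotion word densities.

    emotion_counts: dict of emotion label -> word count in the text
    n_words: total word count of the text
    """
    joy = emotion_counts.get("joy", 0) / n_words
    sadness = emotion_counts.get("sadness", 0) / n_words
    anger = emotion_counts.get("anger", 0) / n_words
    # "Active" emotions speed the piece up; clamp to a 70-150 BPM range.
    activity = joy + anger
    tempo = int(70 + 80 * min(activity * 10, 1.0))
    # Predominantly joyful texts map to a major scale, sad ones to minor.
    scale = "major" if joy >= sadness else "minor"
    return tempo, scale

# A text of 400 words with 12 joy words and 3 sadness words.
tempo, scale = music_params({"joy": 12, "sadness": 3}, n_words=400)
```

Note-sequence generation (the system's novel mechanism for tracking emotional activity through the text) would then be layered on top of parameters like these.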
LaunchpadGPT: Language Model as Music Visualization Designer on Launchpad
Launchpad is a musical instrument that allows users to create and perform
music by pressing illuminated buttons. To assist and inspire the design of the
Launchpad light effect, and provide a more accessible approach for beginners to
create music visualization with this instrument, we proposed the LaunchpadGPT
model to generate music visualization designs on Launchpad automatically. Based
on the language model with excellent generation ability, our proposed
LaunchpadGPT takes an audio piece of music as input and outputs the lighting
effects of Launchpad-playing in the form of a video (Launchpad-playing video).
We collect Launchpad-playing videos and process them into music excerpts and
corresponding Launchpad-playing video frames, which serve as prompt-completion
pairs to train the language model. Experimental results show that the proposed
method creates better music visualizations than random generation and holds
the potential for a broader range of music visualization applications. Our
code is available at https://github.com/yunlong10/LaunchpadGPT/.

Comment: Accepted by International Computer Music Conference (ICMC) 202
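One way to serialize such a prompt-completion pair is sketched below: audio features of a short segment become the prompt, and the 8x8 Launchpad lamp grid of the matching video frame becomes the completion. The field names, feature choices, and grid encoding are assumptions for illustration, not the paper's actual data format.

```python
import json

def make_pair(onset_strength, chroma, grid):
    """Build one hypothetical training example.

    onset_strength: scalar onset strength of the audio segment
    chroma: 12-element pitch-class energy vector
    grid: 8x8 list of lists of colour indices for the Launchpad pads
    """
    prompt = json.dumps({
        "onset": round(onset_strength, 3),
        "chroma": [round(c, 2) for c in chroma],
    })
    # Flatten the 8x8 grid of colour indices into a token string
    # the language model can learn to emit.
    completion = " ".join(str(c) for row in grid for c in row)
    return {"prompt": prompt, "completion": completion}

# One frame with a single lit pad (colour index 5) in the top-left corner.
grid = [[0] * 8 for _ in range(8)]
grid[0][0] = 5
pair = make_pair(0.8, [0.1] * 12, grid)
```

At inference time the model would receive only the audio-feature prompt and generate the 64-token grid, one frame per video time step.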
The Visualization and Representation of Electroacoustic Music
In Chapters 1 and 2 there are definitions and a review of electroacoustic music, and then of visualization generally and as applied to music. Chapter 3 is a review of specific and relevant literature regarding the visualization of electroacoustic music. Chapter 4 introduces the concepts of imagining as opposed to discovering new sound, explains why these terms matter to this research, and clarifies what is meant and indicated by them. Chapter 5 deals with the responses that currently working composers have made to the enquiry concerning visualization; these responses are treated as case studies. In a similar way, Chapter 6 looks at some examples of historical work in electroacoustic music, again as case studies. In Chapter 7 a taxonomical structure for the use of visualization in electroacoustic composition is established, derived from the case study results. Chapter 8 looks at relevant examples of software and how they offer visualization case studies. Chapter 9 looks at the place of the archive in various stages of the compositional process. Chapter 10 investigates the problems of visualizing musical timbre as possible evidence for future strategies. Chapter 11 offers some conclusions and implications as to the main research questions, as well as more specific outlines of potential strategies for the visualization of electroacoustic music.
Audio-Based Visualization of Expressive Body Movements in Music Performance: An Evaluation of Methodology in Three Electroacoustic Compositions
An increase in collaboration amongst visual artists, performance artists, musicians, and programmers has given rise to the exploration of multimedia performance arts. A methodology for audio-based visualization has been created that integrates the information of sound with the visualization of physical expressions, with the goal of magnifying the expressiveness of the performance. The emphasis is placed on exalting the music by using the audio to affect and enhance the video processing, while the video does not affect the audio at all. In this sense the music is considered to be autonomous of the video. The audio-based visualization can provide the audience with a deeper appreciation of the music. Unique implementations of the methodology have been created for three compositions. A qualitative analysis of each implementation is employed to evaluate both the technological and aesthetic merits of each composition.
Leech: BitTorrent and Music Piracy Sonification
This paper provides an overview of a multimedia composition, Leech, which aurally and visually renders BitTorrent traffic. The nature and usage of BitTorrent networking is discussed, including the implications of widespread music piracy. The traditional usage of borrowed musical material as a compositional resource is discussed and expanded upon by including the actual procurement of the musical material as part of the performance of the piece.
The technology and tools required to produce this work, and the roles that they serve, are presented. Eight distinct streams of data are targeted for visualization and sonification: torrent progress, download/upload rate, file name/size, number of peers, peer download progress, peer location, packet transfer detection, and the music being pirated. An overview of the methods used for sonifying and visualizing this data in an artistic manner is presented.
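A parameter mapping over data streams like those listed above could be sketched as follows. The specific mappings (progress to pitch, rate to amplitude, peers to voice count) are illustrative assumptions, not the piece's actual sonification design.

```python
def sonify(stats):
    """Map a snapshot of torrent statistics to synthesis parameters.

    stats: dict with 'progress' (0-1), 'download_rate' (KB/s),
    and 'num_peers' (int).
    """
    # Pitch rises roughly three octaves as the download completes.
    pitch_hz = 110 * 2 ** (stats["progress"] * 3)
    # Faster transfer -> louder, clamped at full scale (1 MB/s).
    amplitude = min(stats["download_rate"] / 1000.0, 1.0)
    # One synthesis voice per connected peer, capped at 16.
    voices = max(1, min(stats["num_peers"], 16))
    return {"pitch_hz": pitch_hz, "amplitude": amplitude, "voices": voices}

# A download at 50% completion, 500 KB/s, with four peers.
params = sonify({"progress": 0.5, "download_rate": 500, "num_peers": 4})
```

Each of the eight data streams would drive its own layer of the audio and video rendering in this fashion.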