The Original Beat: An Electronic Music Production System and Its Design
The barrier to entry in electronic music production is high: it requires expensive, complicated software, extensive knowledge of music theory, and experience with sound generation. Digital Audio Workstations (DAWs) are the main tools used to piece together digital sounds and produce a complete song. While these DAWs are great for music professionals, they have a steep learning curve for beginners and must run natively on a user's computer. For a novice, beginning to create music takes much more time, effort, and money than it should. We believe anyone who is interested in creating electronic music deserves a simple way to digitize their ideas and hear results. With this idea in mind, we created a web-based, co-creative system that allows beginners and professionals alike to easily create electronic digital music. We outline the requirements for such a system and detail its design and architecture. We go through the specifics of the system we implemented, covering the front-end, back-end, server, and generation algorithms. Finally, we review our development timeline, examine the challenges and risks that arose when building the system, and present future improvements.
Deep Learning Techniques for Music Generation -- A Survey
This paper is a survey and an analysis of different ways of using deep
learning (deep artificial neural networks) to generate musical content. We
propose a methodology based on five dimensions for our analysis:
- Objective: What musical content is to be generated? Examples are: melody,
polyphony, accompaniment or counterpoint. For what destination and for what
use? To be performed by a human(s) (in the case of a musical score), or by a
machine (in the case of an audio file).
- Representation: What are the concepts to be manipulated? Examples are:
waveform, spectrogram, note, chord, meter and beat. What format is to be
used? Examples are: MIDI, piano roll or text. How will the representation be
encoded? Examples are: scalar, one-hot or many-hot (a small encoding sketch
follows this list).
- Architecture: What type(s) of deep neural network is (are) to be used?
Examples are: feedforward network, recurrent network, autoencoder or
generative adversarial network.
- Challenge: What are the limitations and open challenges? Examples are:
variability, interactivity and creativity.
- Strategy: How do we model and control the process of generation? Examples
are: single-step feedforward, iterative feedforward, sampling or input
manipulation.
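To make the encoding options concrete, the following minimal sketch (ours, not taken from the survey) contrasts a one-hot encoding of a single melody note with a many-hot encoding of a chord; stacking many-hot vectors over time yields a piano-roll matrix:

```python
import numpy as np

NUM_PITCHES = 128  # MIDI pitch range 0-127

def one_hot(pitch: int) -> np.ndarray:
    """Encode one melody note as a one-hot vector."""
    v = np.zeros(NUM_PITCHES)
    v[pitch] = 1.0
    return v

def many_hot(pitches: list) -> np.ndarray:
    """Encode several simultaneous notes (a chord) as a many-hot vector."""
    v = np.zeros(NUM_PITCHES)
    v[pitches] = 1.0
    return v

melody_step = one_hot(60)            # middle C in a monophonic melody
chord_step = many_hot([60, 64, 67])  # C major triad at one time step

# Stacking time steps gives a piano roll (time x pitch).
piano_roll = np.stack([chord_step, many_hot([62, 65, 69])])
print(piano_roll.shape)  # (2, 128)
```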
For each dimension, we conduct a comparative analysis of various models and
techniques and we propose some tentative multidimensional typology. This
typology is bottom-up, based on the analysis of many existing deep-learning
based systems for music generation selected from the relevant literature. These
systems are described and are used to exemplify the various choices of
objective, representation, architecture, challenge and strategy. The last
section includes some discussion and some prospects.

Comment: 209 pages. This paper is a simplified version of the book: J.-P.
Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music
Generation, Computational Synthesis and Creative Systems, Springer, 201
Evaluating musical software using conceptual metaphors
An open challenge for interaction designers is to find ways of designing software to enhance the ability of novices to perform tasks that normally require specialized domain expertise. This challenge is particularly demanding in areas such as music analysis, where complex, abstract, domain-specific concepts and notations occur. One promising theoretical foundation for this work involves the identification of conceptual metaphors and image schemas, found by analyzing discourse. This kind of analysis has already been applied, with some success, both to musical concepts and, separately, to user interface design. The present work appears to be the first to combine these hitherto distinct bodies of research, with the aim of devising a general method for improving user interfaces for music. Some areas where this may require extensions to the existing methods are noted.
This paper presents the results of an exploratory evaluation of Harmony Space, a tool for playing, analysing and learning about harmony. The evaluation uses conceptual metaphors and image schemas elicited from the dialogues of experienced musicians discussing the harmonic progressions in a piece of music. Examples of where the user interface supports the conceptual metaphors, and where support could be improved, are discussed. The potential use of audio output to support conceptual metaphors and image schemas is considered.
Estimation of the direction of strokes and arpeggios
Whenever a chord is played on a musical instrument, the notes are not commonly played at the same time. Indeed, on some instruments it is impossible to trigger multiple notes simultaneously; on others, the player can consciously select the order of the sequence of notes played to create the chord. In either case, the notes in the chord can be played very fast, either from the lowest to the highest pitch note (upstroke) or from the highest to the lowest pitch note (downstroke).
In this paper, we describe a system to automatically estimate the direction of strokes and arpeggios from audio recordings. The proposed system is based on the analysis of the spectrogram to identify meaningful changes. In addition to the estimation of the up or down stroke direction, the proposed method provides information about the number of notes that constitute the chord, as well as the chord playing speed. The system has been tested with four different instruments: guitar, piano, autoharp and organ.

Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. This work has been funded by the Ministerio de Economía y Competitividad of the Spanish Government under Project No. TIN2013-47276-C6-2-R, by the Junta de Andalucía under Project No. P11-TIC-7154, and by the Ministerio de Educación, Cultura y Deporte through the Programa Nacional de Movilidad de Recursos Humanos del Plan Nacional de I+D+i 2008-2011.
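As a rough illustration of the general idea only (the sketch below is our own, not the authors' algorithm; every function and parameter choice in it is an assumption), one can locate note onsets in the recording and read off the dominant frequency of the spectral energy that newly appears at each onset; an ascending frequency sequence suggests an upstroke, a descending one a downstroke:

```python
import numpy as np
import librosa  # assumed available; any STFT/onset library would do

def stroke_direction(path: str) -> str:
    y, sr = librosa.load(path)
    S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))
    onset_frames = librosa.onset.onset_detect(y=y, sr=sr, hop_length=512)
    freqs = librosa.fft_frequencies(sr=sr, n_fft=2048)
    new_note_freqs = []
    for f in onset_frames:
        if f == 0:
            continue
        # Spectral energy present at this onset but not just before it:
        # its dominant bin approximates the newly triggered note.
        novelty = np.maximum(S[:, f] - S[:, f - 1], 0.0)
        new_note_freqs.append(freqs[np.argmax(novelty)])
    # len(onset_frames) approximates the number of notes in the chord, and
    # the span between the first and last onset its playing speed.
    if len(new_note_freqs) < 2:
        return "undetermined"
    return "upstroke" if new_note_freqs[-1] > new_note_freqs[0] else "downstroke"
```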
Towards the automated analysis of simple polyphonic music: a knowledge-based approach
Music understanding is a process closely related to the knowledge and experience
of the listener. The amount of knowledge required is relative to the
complexity of the task at hand.
This dissertation is concerned with the problem of automatically decomposing
musical signals into a score-like representation. It proposes that, as
with humans, an automatic system requires knowledge about the signal and
its expected behaviour to correctly analyse music.
The proposed system uses the blackboard architecture to combine the
use of knowledge with data provided by the bottom-up processing of the
signal's information. Methods are proposed for the estimation of pitches,
onset times and durations of notes in simple polyphonic music.
A method for onset detection is presented. It provides an alternative to
conventional energy-based algorithms by using phase information. Statistical
analysis is used to create a detection function that evaluates the expected
behaviour of the signal regarding onsets.
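The core of such a phase-based detection function, the mean absolute second difference of the STFT phase across bins, can be sketched in a few lines; this shows only the raw idea, without the statistical modelling the thesis builds on top of it:

```python
import numpy as np
import librosa  # assumed; only the STFT is needed

def phase_deviation(y: np.ndarray, n_fft: int = 1024,
                    hop_length: int = 512) -> np.ndarray:
    """Onset detection function built from STFT phase rather than energy."""
    phase = np.angle(librosa.stft(y, n_fft=n_fft, hop_length=hop_length))
    # Second difference of phase along time: near zero for a steady
    # sinusoid, large when a new event disturbs the phase progression.
    dev = phase[:, 2:] - 2.0 * phase[:, 1:-1] + phase[:, :-2]
    # Wrap to the principal value in (-pi, pi].
    dev = np.mod(dev + np.pi, 2.0 * np.pi) - np.pi
    # Averaging |deviation| over bins gives one value per frame; peaks in
    # this curve are onset candidates.
    return np.abs(dev).mean(axis=0)
```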
Two methods for multi-pitch estimation are introduced. The first concentrates
on the grouping of harmonic information in the frequency domain.
Its performance and limitations emphasise the case for the use of high-level
knowledge.
This knowledge, in the form of the individual waveforms of a single
instrument, is used in the second proposed approach. The method is based
on a time-domain linear additive model and it presents an alternative to
common frequency-domain approaches.
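As a rough sketch of what such a time-domain additive model looks like (our toy illustration, not the method as implemented in the thesis), the observed signal can be fit as a non-negative combination of known note waveforms:

```python
import numpy as np
from scipy.optimize import nnls

def estimate_activations(frame: np.ndarray,
                         note_templates: np.ndarray) -> np.ndarray:
    """frame: (n_samples,); note_templates: (n_samples, n_notes).
    Returns non-negative per-note weights that best explain the frame."""
    weights, _ = nnls(note_templates, frame)
    return weights

# Toy usage with synthetic "instrument waveforms" (names and values ours).
t = np.linspace(0, 0.05, 2205)  # 50 ms at 44.1 kHz
templates = np.stack([np.sin(2 * np.pi * f * t)
                      for f in (220.0, 277.2, 329.6)], axis=1)
mixture = 0.8 * templates[:, 0] + 0.5 * templates[:, 2]
print(estimate_activations(mixture, templates).round(2))  # approx. [0.8, 0.0, 0.5]
```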
Results are presented and discussed for all methods, showing that, if
reliably generated, the use of knowledge can significantly improve the quality
of the analysis.

Joint Information Systems Committee (JISC) in the UK; National Science Foundation (N.S.F.) in the United States; Fundación Gran Mariscal Ayacucho in Venezuela.
Harmonic Change Detection from Musical Audio
In this dissertation, we advance an enhanced method for computing Harte et al.'s [31] Harmonic Change Detection Function (HCDF). HCDF aims to detect harmonic transitions in musical audio signals, and it is crucial both for chord recognition in Music Information Retrieval (MIR) and for a wide range of creative applications. In light of recent advances in harmonic description and transformation, we depart from the original architecture of Harte et al.'s HCDF to revisit each of its component blocks, which are evaluated using an exhaustive grid search aimed at identifying optimal parameters across four large style-specific musical datasets. Our results show that the newly proposed methods and parameter optimization improve the detection of harmonic changes by 5.57% (f-score) with respect to previous methods. Furthermore, while guaranteeing recall above 99%, our method improves precision by 6.28%. Aiming to leverage novel strategies for real-time harmonic-content audio processing, the optimized HCDF is made available for JavaScript and the Max and Pure Data multimedia programming environments. Moreover, all the data, as well as the Python code used to generate them, are made available.
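For orientation, the classic HCDF pipeline (chroma features projected to a six-dimensional tonal centroid, followed by a frame-to-frame distance whose peaks mark harmonic changes) can be sketched with librosa; this is a baseline illustration under our own parameter choices, not the optimized implementation described in the dissertation:

```python
import numpy as np
import librosa

def hcdf(y: np.ndarray, sr: int) -> np.ndarray:
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
    # Six-dimensional tonal centroid (Tonnetz) per frame.
    centroids = librosa.feature.tonnetz(chroma=chroma, sr=sr)
    # Distance between neighbouring frames; the original pipeline also
    # smooths the centroids before differencing.
    return np.linalg.norm(np.diff(centroids, axis=1), axis=0)

y, sr = librosa.load("recording.wav")  # placeholder path
curve = hcdf(y, sr)
peaks = librosa.util.peak_pick(curve, pre_max=3, post_max=3, pre_avg=3,
                               post_avg=5, delta=0.1, wait=10)
```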
On the Modeling of Musical Solos as Complex Networks
Notes in a musical piece are building blocks employed in non-random ways to
create melodies. It is the "interaction" among a limited number of notes that
allows constructing the variety of musical compositions that have been written
over the centuries and within different cultures. Networks are a modeling tool that
is commonly employed to represent a set of entities interacting in some way.
Thus, notes composing a melody can be seen as nodes of a network that are
connected whenever they are played in sequence. The outcome of such a process
results in a directed graph. By using complex network theory, some main metrics
of musical graphs can be measured, which characterize the related musical
pieces. In this paper, we define a framework to represent melodies as networks.
Then, we provide an analysis of a set of guitar solos performed by prominent
musicians. Results of this study indicate that the presented model can have an
impact on audio and multimedia applications such as music classification,
identification, e-learning, automatic music generation, and multimedia
entertainment.

Comment: to appear in Information Sciences, Elsevier. Please cite the paper
including such information. arXiv admin note: text overlap with
arXiv:1603.0497
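The construction described above is compact enough to sketch directly: notes become nodes, and consecutive notes become directed, weighted edges (our illustration, assuming networkx; the note sequence is made up):

```python
import networkx as nx

def melody_to_network(notes: list) -> nx.DiGraph:
    g = nx.DiGraph()
    for a, b in zip(notes, notes[1:]):
        if g.has_edge(a, b):
            g[a][b]["weight"] += 1  # repeated transitions grow heavier
        else:
            g.add_edge(a, b, weight=1)
    return g

solo = ["E4", "G4", "A4", "G4", "E4", "D4", "E4"]
g = melody_to_network(solo)
print(g.number_of_nodes(), g.number_of_edges())
print(nx.degree_centrality(g))  # one of the network metrics one might measure
```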
Biomechanical Modelling of Musical Performance: A Case Study of the Guitar
Computer-generated musical performances are often criticised for being unable
to match the expressivity found in performances by humans. Much research
has been conducted in the past two decades in order to create computer
technology able to perform a given piece of music as expressively as humans,
largely without success. Two approaches have often been adopted in research
into modelling expressive music performance on computers. The first focuses
on sound; that is, on modelling patterns of deviations between a recorded
human performance and the music score. The second focuses on modelling the
cognitive processes involved in a musical performance. Both approaches are
valid and can complement each other. In this thesis we propose a third
complementary approach, focusing on the guitar, which concerns the physical
manipulation of the instrument by the performer: a biomechanical approach.
The essence of this thesis is a study on capturing, analysing and modelling
information about motor and biomechanical processes of guitar performance.
The focus is on speed, precision, and force of a guitarist's left hand. The
overarching questions behind our study are:
1) Do unintentional actions originating from motor and biomechanical
functions during musical performance contribute a material "human feel"
to the performance?
2) Would it be possible to determine and quantify such unintentional actions?
3) Would it be possible to model and embed such information in a computer
system?
The contributions to knowledge pursued in this thesis include:
a) An unprecedented study of guitar mechanics, ergonomics, and
playability;
b) A detailed study of how the human body performs actions when playing
the guitar;
c) A methodologyt o formally record quantifiable data about such actionsin
performance;
d) An approach to model such information, and
e) A demonstration of how the above knowledge can be embedded in a
system for music performance.