56 research outputs found
Towards the automated analysis of simple polyphonic music : a knowledge-based approach
PhD thesis. Music understanding is a process closely related to the knowledge and experience
of the listener. The amount of knowledge required is relative to the
complexity of the task in hand.
This dissertation is concerned with the problem of automatically decomposing
musical signals into a score-like representation. It proposes that, as
with humans, an automatic system requires knowledge about the signal and
its expected behaviour to correctly analyse music.
The proposed system uses the blackboard architecture to combine the
use of knowledge with data provided by the bottom-up processing of the
signal's information. Methods are proposed for the estimation of pitches,
onset times and durations of notes in simple polyphonic music.
A method for onset detection is presented. It provides an alternative to
conventional energy-based algorithms by using phase information. Statistical
analysis is used to create a detection function that evaluates the expected
behaviour of the signal regarding onsets.
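As a rough illustration of this idea, the sketch below computes a magnitude-weighted
phase-deviation detection function and picks peaks with a simple median-based
threshold. It is a minimal sketch only; the weighting, frame sizes and threshold are
illustrative assumptions, not the thesis's statistical model.

```python
# Minimal sketch: phase-based onset detection function (illustrative only).
import numpy as np
from scipy.signal import stft, find_peaks

def phase_deviation_odf(x, fs, n_fft=1024, hop=512):
    """Onset detection function from phase deviation, weighted by magnitude."""
    _, _, Z = stft(x, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    phase = np.unwrap(np.angle(Z), axis=1)        # unwrap phase along time
    dev = np.abs(np.diff(phase, n=2, axis=1))     # ~0 for a steady partial
    mag = np.abs(Z)[:, 2:]                        # align with the 2nd difference
    return (mag * dev).sum(axis=0) / (mag.sum(axis=0) + 1e-12)

def pick_onsets(odf, delta=1.5):
    """Peaks above a median-based adaptive threshold (illustrative only)."""
    peaks, _ = find_peaks(odf, height=np.median(odf) * delta)
    return peaks                                  # candidate onset frame indices

if __name__ == "__main__":
    fs = 22050
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 440 * t)
    x[fs // 2:] += 0.5 * np.sin(2 * np.pi * 660 * t[fs // 2:])  # a second note enters
    print(pick_onsets(phase_deviation_odf(x, fs)))
```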
Two methods for multi-pitch estimation are introduced. The first concentrates
on the grouping of harmonic information in the frequency-domain.
Its performance and limitations emphasise the case for the use of high-level
knowledge.
This knowledge, in the form of the individual waveforms of a single
instrument, is used in the second proposed approach. The method is based
on a time-domain linear additive model and it presents an alternative to
common frequency-domain approaches.
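The sketch below illustrates the general idea of such a time-domain linear additive
decomposition, assuming a dictionary of pre-recorded single-note waveforms and using
non-negative least squares for the amplitudes; it is not the thesis's estimation
procedure, and the synthetic sinusoids merely stand in for real instrument recordings.

```python
# Minimal sketch: explain a frame as a non-negative sum of stored note waveforms.
import numpy as np
from scipy.optimize import nnls

def active_notes(frame, waveforms, threshold=0.1):
    """frame: 1-D mixture frame; waveforms: {note: 1-D array of equal length}."""
    names = list(waveforms)
    W = np.column_stack([waveforms[n] for n in names])   # dictionary matrix
    amps, _ = nnls(W, frame)                             # non-negative amplitudes
    return {n: round(a, 2) for n, a in zip(names, amps) if a > threshold}

if __name__ == "__main__":
    n, fs = 2048, 22050
    t = np.arange(n) / fs
    dictionary = {"A4": np.sin(2 * np.pi * 440.0 * t),
                  "E5": np.sin(2 * np.pi * 659.3 * t)}
    mix = 0.8 * dictionary["A4"] + 0.4 * dictionary["E5"]
    print(active_notes(mix, dictionary))   # both notes should be reported
```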
Results are presented and discussed for all methods, showing that, if
reliably generated, the use of knowledge can significantly improve the quality
of the analysis.
Joint Information Systems Committee (JISC) in the UK; National Science Foundation
(NSF) in the United States; Fundacion Gran Mariscal Ayacucho in Venezuela
An Outlook into the Future of Egocentric Vision
What will the future be? We wonder! In this survey, we explore the gap
between current research in egocentric vision and the ever-anticipated future,
where wearable computing, with outward facing cameras and digital overlays, is
expected to be integrated into our everyday lives. To understand this gap, the
article starts by envisaging the future through character-based stories,
showcasing through examples the limitations of current technology. We then
provide a mapping between this future and previously defined research tasks.
For each task, we survey its seminal works, current state-of-the-art
methodologies and available datasets, then reflect on shortcomings that limit
its applicability to future research. Note that this survey focuses on software
models for egocentric vision, independent of any specific hardware. The paper
concludes with recommendations for areas of immediate exploration so as to
unlock our path to the future always-on, personalised and life-enhancing
egocentric vision.
Comment: We invite comments, suggestions and corrections here:
https://openreview.net/forum?id=V3974SUk1
MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training
Self-supervised learning (SSL) has recently emerged as a promising paradigm
for training generalisable models on large-scale data in the fields of vision,
text, and speech. Although SSL has been proven effective in speech and audio,
its application to music audio has yet to be thoroughly explored. This is
primarily due to the distinctive challenges associated with modelling musical
knowledge, particularly the tonal and pitched characteristics of music. To
address this research gap, we propose an acoustic Music undERstanding model
with large-scale self-supervised Training (MERT), which incorporates teacher
models to provide pseudo labels for masked language modelling (MLM)-style
acoustic pre-training. In our exploration, we identified a combination of teacher
models that outperforms conventional speech and audio approaches. This
combination includes an acoustic teacher based on
Residual Vector Quantization - Variational AutoEncoder (RVQ-VAE) and a musical
teacher based on the Constant-Q Transform (CQT). These teachers effectively
guide our student model, a BERT-style transformer encoder, to better model
music audio. In addition, we introduce an in-batch noise mixture augmentation
to enhance the representation robustness. Furthermore, we explore a wide range
of settings to overcome the instability in acoustic language model
pre-training, which allows our designed paradigm to scale from 95M to 330M
parameters. Experimental results indicate that our model can generalise and
perform well on 14 music understanding tasks and attains state-of-the-art
(SOTA) overall scores. The code and models are online:
https://github.com/yizhilll/MERT
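As a rough, hedged sketch of the MLM-style pre-training described above, the snippet
below builds a CQT "musical teacher" target, masks a fraction of the input frames and
trains a stand-in student to predict the teacher at the masked positions. The real MERT
student is a BERT-style transformer and also uses an RVQ-VAE acoustic teacher; the
single linear layer, mel input and all hyper-parameters here are illustrative
assumptions only.

```python
# Minimal sketch: masked prediction against a CQT teacher target (illustrative).
import numpy as np
import librosa
import torch
import torch.nn as nn

sr = 22050
y = np.sin(2 * np.pi * 220 * np.arange(5 * sr) / sr).astype(np.float32)  # toy audio

cqt = np.abs(librosa.cqt(y, sr=sr, hop_length=512, n_bins=84))        # teacher target
mel = librosa.feature.melspectrogram(y=y, sr=sr, hop_length=512, n_mels=64)

X = torch.tensor(mel.T, dtype=torch.float32)        # (frames, 64) student input
T = torch.tensor(cqt.T, dtype=torch.float32)        # (frames, 84) teacher target
n = min(len(X), len(T)); X, T = X[:n], T[:n]        # align frame counts

mask = torch.rand(n) < 0.3                          # mask ~30% of the frames
X_masked = X.clone(); X_masked[mask] = 0.0

student = nn.Linear(64, 84)                         # stand-in for the transformer
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(student(X_masked)[mask], T[mask])
    loss.backward(); opt.step()
print(f"masked-prediction loss: {loss.item():.4f}")
```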
A Cross-Cultural Analysis of Music Structure
PhD thesis. Music signal analysis is a research field concerning the extraction of meaningful information
from musical audio signals. This thesis analyses the music signals from the note-level
to the song-level in a bottom-up manner and situates the research in two music information
retrieval (MIR) problems: audio onset detection (AOD) and music structural
segmentation (MSS).
Most MIR tools are developed for and evaluated on Western music with specific musical
knowledge encoded. This thesis approaches the investigated tasks from a cross-cultural
perspective by developing audio features and algorithms applicable for both Western and
non-Western genres. Two Chinese Jingju databases are collected to facilitate the
AOD and MSS tasks investigated, respectively.
New features and algorithms for AOD are presented, relying on fusion techniques. We
show that fusion can significantly improve the performance of the constituent baseline
AOD algorithms. A large-scale parameter analysis is carried out to identify the relations
between system configurations and the musical properties of different music types.
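A minimal sketch of detection-function-level fusion is given below: two onset detection
functions are normalised and averaged before peak-picking. The Gaussian bumps merely
stand in for real energy-based and phase-based detection functions, and the thesis's
fusion algorithms and configurations are not reproduced.

```python
# Minimal sketch: fuse two onset detection functions by normalised averaging.
import numpy as np

def normalise(odf):
    odf = np.asarray(odf, dtype=float)
    span = odf.max() - odf.min()
    return (odf - odf.min()) / span if span > 0 else np.zeros_like(odf)

def fuse_odfs(odfs, weights=None):
    """Weighted average of normalised onset detection functions."""
    odfs = [normalise(o) for o in odfs]
    weights = weights or [1.0 / len(odfs)] * len(odfs)
    return sum(w * o for w, o in zip(weights, odfs))

if __name__ == "__main__":
    frames = np.arange(200)
    energy_odf = np.exp(-0.5 * ((frames - 50) / 2.0) ** 2)   # stand-in baseline 1
    phase_odf = np.exp(-0.5 * ((frames - 51) / 3.0) ** 2)    # stand-in baseline 2
    print("fused peak at frame", int(fuse_odfs([energy_odf, phase_odf]).argmax()))
```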
Novel audio features are developed to summarise music timbre, harmony and rhythm for
its structural description. The new features serve as effective alternatives to commonly
used ones, showing comparable performance on existing datasets, and surpass them on
the Jingju dataset. A new segmentation algorithm is presented which effectively captures
the structural characteristics of Jingju. By evaluating the presented audio features and
different segmentation algorithms incorporating different structural principles for the
investigated music types, this thesis also identifies the underlying relations between audio
features, segmentation methods and music genres in the scenario of music structural
analysis.
China Scholarship Council,
EPSRC C4DM Travel Funding,
EPSRC Fusing Semantic and Audio Technologies for Intelligent Music Production and
Consumption (EP/L019981/1),
EPSRC Platform Grant on Digital Music (EP/K009559/1),
European Research Council project CompMusic,
International Society for Music Information Retrieval Student Grant,
QMUL Postgraduate Research Fund,
QMUL-BUPT Joint Programme Funding,
Women in Music Information Retrieval Grant
Revealing structure in vocalisations of parrots and social whales
This thesis proposes methods to investigate structure in bioacoustic signals. For this, two frameworks are proposed. The first concerns the automatic annotation of audio recordings using supervised machine learning methods. The second concerns a quantitative analysis of temporal and combinatorial patterns in vocal sequences of animals using non-parametric statistics. These methods are used to investigate the vocalisations of two little-studied, wild-living animals in their natural ecosystems: lilac-crowned parrots and pilot whales.
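As a rough illustration of the first framework only, the sketch below trains a
frame-level classifier to separate a synthetic call from background noise using
log-magnitude spectral features; the features, classifier and toy data are assumptions
for illustration, not the methods of the thesis.

```python
# Minimal sketch: frame-level supervised annotation (call vs. background).
import numpy as np
from scipy.signal import stft
from sklearn.ensemble import RandomForestClassifier

def frame_features(x, fs, n_fft=512, hop=256):
    _, _, Z = stft(x, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    return np.log1p(np.abs(Z)).T                    # (frames, bins) features

if __name__ == "__main__":
    fs = 22050
    rng = np.random.default_rng(0)
    background = rng.normal(size=fs)                              # 1 s of noise
    call = np.sin(2 * np.pi * 3000 * np.arange(fs) / fs)          # toy whistle
    X = np.vstack([frame_features(background, fs), frame_features(call, fs)])
    y = np.repeat([0, 1], len(X) // 2)                            # frame labels
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))
```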
Audio-coupled video content understanding of unconstrained video sequences
Unconstrained video understanding is a difficult task. The main aim of this thesis is to
recognise the nature of objects, activities and environment in a given video clip using
both audio and video information. Traditionally, audio and video information has not
been applied together for solving such a complex task, and for the first time we propose,
develop, implement and test a new framework of multi-modal (audio and video) data
analysis for context understanding and labelling of unconstrained videos.
The framework relies on feature selection techniques and introduces a novel algorithm
(PCFS) that is faster than the well-established SFFS algorithm; a sketch of this family
of wrapper methods follows this paragraph. We use the framework for
studying the benefits of combining audio and video information in a number of different
problems. We begin by developing two independent content recognition modules. The
first one is based on image sequence analysis alone, and uses a range of colour, shape,
texture and statistical features from image regions with a trained classifier to recognise
the identity of objects, activities and environment present. The second module uses audio
information only, and recognises activities and environment. Both of these approaches
are preceded by detailed pre-processing to ensure that correct video segments containing
both audio and video content are present, and that the developed system can be made
robust to changes in camera movement, illumination, random object behaviour etc. For
both audio and video analysis, we use a hierarchical approach of multi-stage
classification such that difficult classification tasks can be decomposed into simpler and
smaller tasks.
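For context on the wrapper-style feature selection mentioned at the start of this
paragraph, the sketch below runs plain sequential forward selection with scikit-learn.
SFFS adds floating (backward) steps and PCFS is the thesis's own, faster algorithm;
neither is implemented here, and the synthetic data stands in for the audio and video
features.

```python
# Minimal sketch: plain sequential forward feature selection (wrapper approach).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a pooled audio/video feature set.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
selector = SequentialFeatureSelector(KNeighborsClassifier(),
                                     n_features_to_select=5,
                                     direction="forward", cv=3)
selector.fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.get_support()))
```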
When combining both modalities, we compare fusion techniques at different levels of
integration and propose a novel algorithm that combines advantages of both feature and
decision-level fusion. The analysis is evaluated on a large amount of test data comprising
unconstrained videos collected for this work. Finally, we propose a decision
correction algorithm which shows that further steps towards effectively combining
multi-modal classification information with semantic knowledge generate the best
possible results.
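The contrast between the two fusion levels can be sketched as follows, with synthetic
stand-ins for the audio and video feature sets: feature-level fusion concatenates the
modalities before a single classifier, while decision-level fusion averages the
posteriors of per-modality classifiers. The thesis's hybrid algorithm and decision
correction step are not shown.

```python
# Minimal sketch: feature-level vs. decision-level fusion of two modalities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X_audio, y = make_classification(n_samples=400, n_features=10, random_state=1)
X_video, _ = make_classification(n_samples=400, n_features=15, random_state=2)
X_video[:, 0] += y                    # make the video features weakly informative

# Feature-level fusion: concatenate modalities, train one classifier.
early = LogisticRegression(max_iter=1000).fit(np.hstack([X_audio, X_video]), y)

# Decision-level fusion: one classifier per modality, average the posteriors.
clf_a = LogisticRegression(max_iter=1000).fit(X_audio, y)
clf_v = LogisticRegression(max_iter=1000).fit(X_video, y)
posterior = (clf_a.predict_proba(X_audio) + clf_v.predict_proba(X_video)) / 2

print("feature-level accuracy:", early.score(np.hstack([X_audio, X_video]), y))
print("decision-level accuracy:", (posterior.argmax(axis=1) == y).mean())
```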
Deep Neural Networks for Music Tagging
PhD thesis. In this thesis, I present my hypothesis, experimental results, and discussion related
to various aspects of deep neural networks for music tagging.
Music tagging is the task of automatically predicting suitable semantic labels for a
given piece of music. Generally speaking, the input of music tagging systems can be any entity that
constitutes music, e.g., audio content, lyrics, or metadata, but only the audio content
is considered in this thesis. My hypothesis is that we can find effective deep learning
practices for the task of music tagging that improve classification performance.
As a computational model to realise a music tagging system, I use deep neural networks.
Combined with the research problem, the scope of this thesis is the understanding,
interpretation, optimisation, and application of deep neural networks in the context of
music tagging systems.
The ultimate goal of this thesis is to provide insight that can help to improve deep
learning-based music tagging systems. There are many smaller goals in this regard.
Since using deep neural networks is a data-driven approach, it is crucial to understand the
dataset. Selecting and designing a better architecture is the next topic to discuss. Since
the tagging is done with audio input, preprocessing the audio signal becomes one of the
important research topics. After building (or training) a music tagging system, finding
a suitable way to re-use it for other music information retrieval tasks is a compelling
topic, in addition to interpreting the trained system.
The evidence presented in the thesis supports the conclusion that deep neural networks
are powerful and credible methods for building a music tagging system.
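As a rough illustration of the kind of system studied, the sketch below defines a small
convolutional tagger over a (log-)mel-spectrogram input that outputs independent tag
probabilities. The architecture, input shape and tag vocabulary are illustrative
assumptions, not the thesis's configurations.

```python
# Minimal sketch: a tiny convolutional music tagger over mel-spectrogram input.
import torch
import torch.nn as nn

TAGS = ["rock", "jazz", "vocal", "instrumental"]      # hypothetical tag set

class TinyTagger(nn.Module):
    def __init__(self, n_tags=len(TAGS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                   # pool over frequency and time
        )
        self.classifier = nn.Linear(32, n_tags)

    def forward(self, mel):                            # mel: (batch, 1, mels, frames)
        h = self.features(mel).flatten(1)
        return torch.sigmoid(self.classifier(h))      # independent tag probabilities

if __name__ == "__main__":
    model = TinyTagger()
    mel = torch.randn(2, 1, 96, 1360)                  # two fake ~30 s clips
    print(dict(zip(TAGS, model(mel)[0].tolist())))
```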
Designing virtual spaces: redefining radio art through digital control
Radio Art is a composition practice that is constantly evolving. Artists share a common drive to redefine, reinvent, and repurpose analogue radio. It is an art that often bends to the will of antiquated technology, celebrating a wide palette of found sounds. This research extends the boundaries of the art form by exploring Radio Art through a sonic-centric lens and establishing a consistent and reproducible compositional framework. By shifting radio from a found object to an instrument, I have deconstructed its sonic aesthetics into two parallel materials for composition: gestural noise and broadcast signal. When tuning an analogue radio to a signal, relationships between these materials unfold. Contrast is a term found throughout my research. Contrast is embodied throughout radio and its history: radio is used both as a scientific communication device and for artistic expression; it is a symbol of democracy and of oppression. Radio produces broadcast noise and signal, creating poetic receptions such as control and chaos, anxiety and ecstasy, distance and closeness. This research explores the characteristics of these forces and materials as a symbiotic relationship of unfolding radiophonic behaviours. A major focus of this research is the control of analogue radio through deconstruction and composition. I embarked on a twenty-four-month development period to build a Digital Audio Workstation called Radiophonic Environmental Designer (RED). RED enables composers to create virtual radiophonic environments that are navigated by rotating the dial: material is positioned along a horizon, and tuning behaviours are sculpted. There is also a physical interface embedded into an analogue radio shell to control the virtual tuning, namely the Broadcast Link-up Environment (BLUE). BLUE is an add-on program offering an online digital platform for the diffusion of Radio Art: using an internet connection and the gyroscope technology built into most smartphones, a radiophonic environment is interacted with through a purpose-built website. In my creative practice, analogue radio has been redesigned by adopting digital technological practices to control, edit and model its unique sound. In doing so, I reflect upon the relationships between analogue and digital design principles through an extensive study of virtual analogue software and interfaces.
Bag-of-words representations for computer audition
Computer audition is omnipresent in everyday life, in applications ranging from personalised virtual agents to health care. From a technical point of view, the goal is to robustly classify the content of an audio signal in terms of a defined set of labels, such as the acoustic scene, a medical diagnosis, or, in the case of speech, what is said or how it is said. Typical approaches employ machine learning (ML), which means that task-specific models are trained by means of examples. Despite recent successes in neural network-based end-to-end learning, taking the raw audio signal as input, models relying on hand-crafted acoustic features are still superior in some domains, especially for tasks where data is scarce. One major issue is nevertheless that a sequence of acoustic low-level descriptors (LLDs) cannot be fed directly into many ML algorithms, as they require a static and fixed-length input. Moreover, even for dynamic classifiers, it can be beneficial to compress the information in the LLDs by summarising them over a temporal block. However, the type of instance-level representation has a fundamental impact on the performance of the model. In this thesis, the so-called bag-of-audio-words (BoAW) representation is investigated as an alternative to the standard approach of statistical functionals. BoAW is an unsupervised method of representation learning, inspired by the bag-of-words method in natural language processing, where a document is represented as a histogram of the terms it contains. The toolkit openXBOW is introduced, enabling systematic learning and optimisation of these feature representations, unified across arbitrary modalities of numeric or symbolic descriptors. A number of experiments on BoAW are presented and discussed, focussing on a large number of potential applications and corresponding databases, ranging from emotion recognition in speech to medical diagnosis. The evaluations include a comparison of different acoustic LLD sets and configurations of the BoAW generation process. The key findings are that BoAW features are a meaningful alternative to statistical functionals, offering certain benefits while preserving the advantages of functionals, such as data-independence. Furthermore, it is shown that both representations are complementary and that their fusion improves the performance of a machine listening system.
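The BoAW idea described above can be sketched in a few lines: learn a codebook over
pooled low-level descriptors, then represent each clip as a normalised histogram of
codeword assignments. The random "LLD" frames and the k-means quantiser below are
illustrative stand-ins and far simpler than what openXBOW provides.

```python
# Minimal sketch: bag-of-audio-words from frame-level descriptors.
import numpy as np
from sklearn.cluster import KMeans

def learn_codebook(lld_frames, n_words=16, seed=0):
    """Cluster pooled low-level-descriptor frames into audio 'words'."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(lld_frames)

def boaw_histogram(clip_frames, codebook):
    """Fixed-length clip representation: normalised codeword histogram."""
    words = codebook.predict(clip_frames)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clips = [rng.normal(size=(200, 13)) for _ in range(5)]   # 5 clips of fake LLDs
    codebook = learn_codebook(np.vstack(clips))
    X = np.stack([boaw_histogram(c, codebook) for c in clips])
    print(X.shape)   # (5, 16): one fixed-length vector per clip
```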
- …