Computer-aided Melody Note Transcription Using the Tony Software: Accuracy and Efficiency
We present Tony, a software tool for the interactive annotation of melodies from monophonic audio recordings, and evaluate its usability and the accuracy of its note extraction method. The scientific study of acoustic performances of melodies, whether sung or played, requires the accurate transcription of notes and pitches. To achieve the desired transcription accuracy for a particular application, researchers manually correct results obtained by automatic methods. Tony is an interactive tool aimed directly at making this correction task efficient. It provides (a) state-of-the-art algorithms for pitch and note estimation, (b) visual and auditory feedback for easy error-spotting, (c) an intelligent graphical user interface through which the user can rapidly correct estimation errors, and (d) extensive export functions enabling further processing in other applications. We show that Tony's built-in automatic note transcription method compares favourably with existing tools. We report annotation times for a set of 96 solo vocal recordings and study the effects of the piece, the number of edits made, and the annotator's increasing mastery of the software. Tony is open-source software, with source code and compiled binaries for Windows, Mac OS X and Linux available from https://code.soundsoftware.ac.uk/projects/tony/
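To illustrate the kind of note transcription the abstract refers to, here is a minimal sketch of turning a frame-wise pitch estimate into note events: consecutive voiced frames are grouped into a note whose pitch is the median frame pitch. This is an assumption-laden toy, not Tony's actual (pYIN-based) method; `segment_notes` and its parameters are hypothetical names introduced for illustration only.

```python
import math

def hz_to_midi(f):
    # Convert frequency in Hz to a (fractional) MIDI note number.
    return 69 + 12 * math.log2(f / 440.0)

def segment_notes(f0_track, frame_dur=0.01):
    """Toy note segmentation: group consecutive voiced frames (f0 > 0)
    into note events (onset_sec, duration_sec, midi_pitch), where the
    pitch is the rounded median of the frame pitches."""
    notes, start, frames = [], None, []
    for i, f0 in enumerate(f0_track + [0.0]):  # trailing 0 closes the last note
        if f0 > 0:
            if start is None:
                start = i
            frames.append(hz_to_midi(f0))
        elif start is not None:
            frames.sort()
            median = frames[len(frames) // 2]
            notes.append((start * frame_dur, len(frames) * frame_dur, round(median)))
            start, frames = None, []
    return notes
```

A real tool additionally smooths octave errors and merges spuriously split notes, which is exactly the kind of correction Tony's interface is designed to make fast.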
A computational study on outliers in world music
The comparative analysis of world music cultures has been the focus of several ethnomusicological studies in the last century. With the advances of Music Information Retrieval and the increased accessibility of sound archives, large-scale analysis of world music with computational tools is today feasible. We investigate music similarity in a corpus of 8200 recordings of folk and traditional music from 137 countries around the world. In particular, we aim to identify music recordings that are most distinct compared to the rest of our corpus. We refer to these recordings as 'outliers'. We use signal processing tools to extract music information from audio recordings, data mining to quantify similarity and detect outliers, and spatial statistics to account for geographical correlation. Our findings suggest that Botswana is the country with the most distinct recordings in the corpus, and China is the country with the most distinct recordings when considering spatial correlation. Our analysis includes a comparison of musical attributes and styles that contribute to the 'uniqueness' of the music of each country.
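One simple way to quantify "distinctness" of the kind described above is to standardize each recording's feature vector and score it by its distance from the corpus centroid. This is a minimal sketch under that assumption, not the study's actual method (which uses richer audio features and spatial statistics); `outlier_scores` is a hypothetical name.

```python
def standardize(features):
    """Z-score each dimension of a list of equal-length feature vectors
    (one vector per recording)."""
    n, d = len(features), len(features[0])
    means = [sum(v[j] for v in features) / n for j in range(d)]
    stds = [
        (sum((v[j] - means[j]) ** 2 for v in features) / n) ** 0.5 or 1.0
        for j in range(d)
    ]
    return [[(v[j] - means[j]) / stds[j] for j in range(d)] for v in features]

def outlier_scores(features):
    """Score each recording by the Euclidean distance of its standardized
    feature vector from the corpus centroid (the origin after z-scoring);
    larger scores mean more 'distinct' recordings."""
    z = standardize(features)
    return [sum(x * x for x in v) ** 0.5 for v in z]
```

Recordings with the highest scores would then be inspected as candidate outliers; a per-country aggregate of such scores is one plausible route to country-level statements like those in the abstract.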
An Analysis/synthesis framework for automatic F0 annotation of multitrack datasets
Paper presented at ISMIR 2017, held in Suzhou, China, 23-27 October 2017.

Generating continuous f0 annotations for tasks such as melody extraction and multiple f0 estimation typically involves running a monophonic pitch tracker on each track of a multitrack recording and manually correcting any estimation errors. This process is labor-intensive and time-consuming, and consequently existing annotated datasets are very limited in size. In this paper we propose a framework for automatically generating continuous f0 annotations without requiring manual refinement: the estimate of a pitch tracker is used to drive an analysis/synthesis pipeline which produces a synthesized version of the track. Any estimation errors are now reflected in the synthesized audio, meaning the tracker's output represents an accurate annotation. Analysis is performed using a wide-band harmonic sinusoidal modeling algorithm which estimates the frequency, amplitude and phase of every harmonic, meaning the synthesized track closely resembles the original in terms of timbre and dynamics. Finally, the synthesized track is automatically mixed back into the multitrack. The framework can be used to annotate multitrack datasets for training learning-based algorithms. Furthermore, we show that algorithms evaluated on the automatically generated/annotated mixes produce results that are statistically indistinguishable from those they produce on the original, manually annotated, mixes. We release a software library implementing the proposed framework, along with new datasets for melody, bass and multiple f0 estimation.
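The core idea of the pipeline, that the tracker's f0 estimate drives the resynthesis so any estimation error ends up in the audio itself, can be sketched with a bank of harmonic oscillators. This is a deliberately minimal toy: the paper's analysis/synthesis additionally estimates per-harmonic amplitudes and phases from the original recording, whereas here the harmonic amplitudes are assumed fixed, and `synthesize_harmonics` is a hypothetical name.

```python
import math

def synthesize_harmonics(f0_track, amps, sr=44100, frame_dur=0.01):
    """Additive resynthesis driven by a frame-wise f0 estimate.
    Each frame's f0 sets the fundamental of a bank of harmonic
    oscillators with fixed amplitudes `amps` (harmonic 1, 2, ...).
    Unvoiced frames (f0 <= 0) produce silence. Phases accumulate
    across frames so the sinusoids stay continuous."""
    samples_per_frame = int(sr * frame_dur)
    phases = [0.0] * len(amps)
    out = []
    for f0 in f0_track:
        for _ in range(samples_per_frame):
            s = 0.0
            if f0 > 0:
                for k, a in enumerate(amps, start=1):
                    phases[k - 1] += 2 * math.pi * k * f0 / sr
                    s += a * math.sin(phases[k - 1])
            out.append(s)
    return out
```

Because the output is built from the estimate rather than the original signal, the estimate is by construction a perfect annotation of the synthesized track, which is exactly the property the framework exploits.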
The role of CXCR3/LRP1 cross-talk in the invasion of primary brain tumors
CXCR3 plays important roles in angiogenesis, inflammation, and cancer. However, the precise mechanism of its regulation and activity in tumors is not well known. We focused on CXCR3-A conformation and on the mechanisms controlling its activity and trafficking, and investigated the role of CXCR3/LRP1 cross-talk in tumor cell invasion. Here we report that agonist stimulation induces an anisotropic response with conformational changes of CXCR3-A along its longitudinal axis. CXCR3-A is internalized via clathrin-coated vesicles and recycled by retrograde trafficking. We demonstrate that CXCR3-A interacts with LRP1. Silencing of LRP1 leads to an increase in the magnitude of ligand-induced conformational change, with CXCR3-A focalized at the cell membrane, leading to sustained receptor activity and an increase in tumor cell migration. This was validated in patient-derived glioma cells and patient samples. Our study defines LRP1 as a regulator of CXCR3, which may have important consequences for tumor biology.