Mobile Sound Recognition for the Deaf and Hard of Hearing
Human perception of surrounding events is strongly dependent on audio cues.
Thus, acoustic insulation can seriously impact situational awareness. We
present an exploratory study in the domain of assistive computing, eliciting
requirements and presenting solutions to problems found in the development of
an environmental sound recognition system, which aims to assist deaf and hard
of hearing people in the perception of sounds. To take advantage of smartphones'
computational ubiquity, we propose a system that executes all processing on the
device itself, from audio feature extraction to recognition and visual
presentation of results. Our application also presents the confidence level of
the classification to the user. A test of the system conducted with deaf users
provided important and inspiring feedback from participants. Comment: 25 pages, 8 figures
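The on-device pipeline described above (feature extraction, recognition, and a confidence value shown to the user) can be illustrated with a minimal sketch. This is not the paper's actual system: the band-energy features, the nearest-centroid classifier, and the softmax-style confidence are all simplifying assumptions chosen to keep the example dependency-free.

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Split a 1-D signal into overlapping frames."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def log_energy_features(x, n_bands=8, frame_len=1024, hop=512):
    """Log band-energy features averaged over all frames of a clip."""
    frames = frame_signal(x, frame_len, hop) * np.hanning(frame_len)
    spec = np.abs(np.fft.rfft(frames, axis=1))
    bands = np.array_split(spec, n_bands, axis=1)
    feats = np.stack([b.sum(axis=1) for b in bands], axis=1)
    return np.log1p(feats).mean(axis=0)  # one feature vector per clip

def classify(feat, centroids):
    """Nearest-centroid classification with a softmax-style confidence."""
    d = np.array([np.linalg.norm(feat - c) for c in centroids.values()])
    p = np.exp(-d) / np.exp(-d).sum()
    labels = list(centroids)
    return labels[int(np.argmin(d))], float(p.max())

# Toy example: distinguish a low tone from white noise (hypothetical classes).
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 200 * t)
noise = np.random.default_rng(0).normal(size=sr)
centroids = {"doorbell": log_energy_features(tone),
             "appliance": log_energy_features(noise)}
label, conf = classify(log_energy_features(0.9 * tone), centroids)
```

Everything here runs on the device's CPU with no network round-trip, which is the property the abstract emphasizes; a real system would use richer features and a trained classifier.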
Audio-to-score alignment for music performed on the transverse flute
This work addresses the problem of audio-to-score alignment for music performed on the transverse flute. To that end, we survey the state of the art in the area and describe the nature of flute signals in relation to the instrument's playing techniques. We propose a solution for flute signals performed with traditional techniques and evaluate its performance on a database developed for that purpose. In addition, we discuss the challenges posed by the contemporary repertoire performed with extended techniques. The database is made available for future academic work in the area.
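A standard building block for audio-to-score alignment is dynamic time warping (DTW), which finds a monotonic mapping between audio frames and score events. The sketch below is a generic illustration of that idea, not the solution proposed in the work: the toy chroma-like vectors and the cosine-style cost are assumptions for the example.

```python
import numpy as np

def dtw_path(cost):
    """Dynamic time warping over a pairwise cost matrix; returns the optimal path."""
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy sequences: score has one frame per note; audio holds notes for varying durations.
score = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float)                      # notes C, E, G
audio = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1]], float)
cost = 1.0 - audio @ score.T   # cosine-like distance (rows are unit vectors)
path = dtw_path(cost)
```

The recovered path maps each audio frame to a score note while preserving temporal order, which is exactly what alignment requires; extending this to flute recordings would mean replacing the toy vectors with real spectral features.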
Tracking beats and microtiming in Afro-Latin American music using conditional random fields and deep learning
Work presented at ISMIR 2019: 20th Conference of the International Society for Music Information Retrieval, Delft, Netherlands, 4-8 Nov 2019 (postprint). Events in music frequently exhibit small-scale temporal deviations (microtiming) with respect to the underlying regular metrical grid. In some cases, as in music from the Afro-Latin American tradition, such deviations appear systematically, disclosing their structural importance in rhythmic and stylistic configuration. In this work we explore the idea of automatically and jointly tracking beats and microtiming in timekeeper instruments of Afro-Latin American music, in particular Brazilian samba and Uruguayan candombe. To that end, we propose a language model based on conditional random fields that integrates beat and onset likelihoods as observations. We derive those activations using deep neural networks and evaluate the model's performance on manually annotated data using a scheme adapted to this task. We assess our approach in controlled conditions suitable for these timekeeper instruments, and study the microtiming profiles' dependency on genre and performer, illustrating promising aspects of this technique towards a more comprehensive understanding of these music traditions.
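The core idea of decoding a beat grid from network activations can be sketched with a much simpler stand-in for the paper's conditional random field: Viterbi decoding over a cyclic bar-position state space, with a beat-activation function as the observation. The state layout, transition weights, and toy activations below are all assumptions for illustration, not the authors' model.

```python
import numpy as np

def viterbi(obs_log, trans_log, init_log):
    """Standard Viterbi decoding: most likely state path under log-probabilities."""
    T, S = obs_log.shape
    delta = init_log + obs_log[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + trans_log        # (prev state, next state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + obs_log[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# 4 states = positions within a bar; a beat is expected at position 0.
S = 4
trans = np.full((S, S), 1e-3)
for s in range(S):
    trans[s, (s + 1) % S] = 1.0                    # mostly advance one position per frame
trans_log = np.log(trans / trans.sum(axis=1, keepdims=True))

# Toy beat-activation function (standing in for a neural network output):
# peaks every 4 frames where a beat occurs.
beat_act = np.tile([0.9, 0.1, 0.1, 0.1], 3)
obs = np.where(np.arange(S)[None, :] == 0,
               beat_act[:, None], (1 - beat_act[:, None]) / (S - 1))
path = viterbi(np.log(obs), trans_log, np.log(np.ones(S) / S))
```

Decoding snaps the state sequence onto the periodic activation peaks, recovering the beat positions; the CRF in the paper plays an analogous role while additionally modeling microtiming through onset observations.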