24 research outputs found

    Automatic annotation of musical audio for interactive applications

    PhD thesis. As machines become increasingly portable and part of our everyday life, it becomes apparent that developing interactive and ubiquitous systems is an important aspect of new music applications created by the research community. We are interested in developing a robust layer for the automatic annotation of audio signals, to be used in various applications, from music search engines to interactive installations, and in various contexts, from embedded devices to audio content servers. We propose adaptations of existing signal processing techniques to a real-time context. Among these annotation techniques, we concentrate on low- and mid-level tasks such as onset detection, pitch tracking, tempo extraction and note modelling. We present a framework to extract these annotations and evaluate the performance of different algorithms. The first task is to detect onsets and offsets in audio streams within short latencies. The segmentation of audio streams into temporal objects enables various manipulations and analyses of metrical structure. Evaluations of different algorithms and their adaptation to real time are described. We then tackle the problem of fundamental frequency estimation, again trying to reduce both the delay and the computational cost. Different algorithms are implemented for real time and tested on monophonic recordings and complex signals. Spectral analysis can be used to label the temporal segments; the estimation of higher-level descriptions is also approached. Techniques for modelling note objects and localising beats are implemented and discussed. Applications of our framework include live and interactive music installations and, more generally, tools for composers and sound engineers. Speed optimisations may bring a significant improvement to various automated tasks, such as automatic classification and recommendation systems.
    We describe the design of our software solution, for our research purposes and in view of its integration within other systems. EU-FP6-IST-507142 project SIMAC (Semantic Interaction with Music Audio Contents); EPSRC grants GR/R54620; GR/S75802/01
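    The onset detection task mentioned in the abstract can be illustrated with a minimal spectral-flux detector with simple peak picking. This is a generic sketch, not the thesis's method: the frame size, hop size and threshold values below are arbitrary assumptions chosen for clarity.

```python
import numpy as np

def spectral_flux_onsets(x, sr, frame=1024, hop=512, delta=0.1):
    """Detect onsets via positive spectral flux with a simple
    adaptive threshold (illustrative parameters only)."""
    window = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    # Magnitude spectrum of each windowed frame
    mags = np.array([
        np.abs(np.fft.rfft(window * x[i * hop:i * hop + frame]))
        for i in range(n_frames)
    ])
    # Half-wave rectified difference between consecutive magnitude spectra
    flux = np.maximum(mags[1:] - mags[:-1], 0.0).sum(axis=1)
    flux /= flux.max() + 1e-12
    # Peak picking: local maxima above a moving median plus a fixed offset
    onsets = []
    for n in range(1, len(flux) - 1):
        local_med = np.median(flux[max(0, n - 8):n + 8])
        if flux[n] > flux[n - 1] and flux[n] >= flux[n + 1] \
                and flux[n] > local_med + delta:
            onsets.append((n + 1) * hop / sr)  # frame index to seconds
    return onsets
```

    Real-time variants of such detectors process one frame at a time and apply causal (or short-lookahead) peak picking, which is what keeps the latency short.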

    Mustang Daily, September 12, 1988

    Student newspaper of California Polytechnic State University, San Luis Obispo, CA. https://digitalcommons.calpoly.edu/studentnewspaper/4818/thumbnail.jp

    Proceedings of the 7th Sound and Music Computing Conference

    Proceedings of the SMC2010 - 7th Sound and Music Computing Conference, July 21st - July 24th 2010

    Characterisation of Comfortable and Stressful sounds through machine learning

    In this research, we sought a dichotomous characterisation of sounds as either "Comfortable" or "Stressful" through the use of machine learning. Besides shedding light on the underlying features of simple sounds that create a subjective valence for the average listener, we envision that the results of this type of classification can contribute to an advisory system for sound design in user interfaces for products or applications. For the development of the system it was necessary to create a thematic dataset. Low-level audio descriptors were then extracted from each example in the dataset. Finally, we used these data to feed machine learning algorithms. The results were evaluated in light of the common strategies in Music Information Retrieval (MIR) systems and indicated the viability of building an automatic sound characterisation system.
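    The pipeline described in the abstract (low-level descriptors feeding a machine-learning classifier) can be sketched with a toy example. The two descriptors here (RMS energy and zero-crossing rate) and the nearest-centroid classifier are arbitrary stand-ins for illustration, not the feature set or models used in the work:

```python
import numpy as np

def low_level_features(x):
    """Two illustrative low-level descriptors: RMS energy and
    zero-crossing rate (a hypothetical choice, not the thesis's set)."""
    rms = np.sqrt(np.mean(x ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)
    return np.array([rms, zcr])

class NearestCentroid:
    """Minimal nearest-centroid classifier standing in for the
    machine-learning stage of the pipeline."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        # Distance from every sample to every class centroid
        d = np.linalg.norm(
            X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

    With descriptors like these, steady tonal sounds (low zero-crossing rate) and noisy sounds (high zero-crossing rate) separate cleanly, which is the kind of signal-level distinction such a classification system relies on.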

    Contributions for the automatic description of multimodal scenes

    Doctoral thesis. Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 200

    The Daily Egyptian, March 02, 1983


    Newman v. Google

    3rd amended complaint

    Across frequency processes involved in auditory detection of coloration
