7 research outputs found

    Empreintes audio et stratégies d'indexation associées pour l'identification audio à grande échelle

    In this work, we give a precise definition of large-scale audio identification. In particular, we distinguish between exact matching, where the goal is to match two signals coming from one and the same recording under different post-processings, and approximate matching, where the goal is to match two signals that are musically similar. In light of these definitions, we design several audio-fingerprint models and evaluate their performance in both exact and approximate identification.
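    The thesis's fingerprint models are not reproduced here, but the idea of exact matching can be illustrated with a minimal sketch of the classic approach: hash pairs of spectrogram peaks into a lookup table, then declare a match when many query/reference hash hits share the same time offset. Everything below (STFT size, peak-picking rule, fan-out, the use of Python's built-in hash) is an illustrative assumption, not one of the models evaluated in the thesis.

        # Minimal sketch of exact-match audio fingerprinting via spectrogram
        # peak pairs. Illustrative only; parameters are arbitrary assumptions.
        import numpy as np
        from scipy.signal import stft

        def peak_fingerprints(audio, sr=8000, fan_out=5):
            """Hash pairs of local spectrogram peaks into (hash, time) fingerprints."""
            _, _, Z = stft(audio, fs=sr, nperseg=512)
            mag = np.abs(Z)
            peaks = []
            # Crude peak picking: bins louder than their 3x3 neighbourhood.
            for f in range(1, mag.shape[0] - 1):
                for t in range(1, mag.shape[1] - 1):
                    patch = mag[f - 1:f + 2, t - 1:t + 2]
                    if mag[f, t] == patch.max() and mag[f, t] > 3 * patch.mean():
                        peaks.append((f, t))
            peaks.sort(key=lambda p: p[1])
            # Pair each peak with the next few peaks; the hash encodes
            # both frequencies and the time gap between them.
            prints = []
            for i, (f1, t1) in enumerate(peaks):
                for f2, t2 in peaks[i + 1:i + 1 + fan_out]:
                    prints.append((hash((f1, f2, t2 - t1)), t1))
            return prints

        def match_score(query, reference):
            """Exact match = many shared hashes with one consistent time offset."""
            index = {}
            for h, t in reference:
                index.setdefault(h, []).append(t)
            offsets = [rt - qt for h, qt in query for rt in index.get(h, [])]
            if not offsets:
                return 0
            _, counts = np.unique(offsets, return_counts=True)
            return int(counts.max())  # height of the offset-histogram peak

    Approximate (musical-similarity) matching requires far more invariance than a hash scheme like this offers, which is part of why the two problems are treated separately.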

    Deteção de Publicidade em Conteúdos de Difusão

    This dissertation aims to develop a set of modules and algorithms for the automatic analysis, detection, and identification of advertising content in a television broadcast. Some of the advertisements are known in advance; video summarisation tools are used to collect data about them, which is then stored in a database. This database must be optimised so that search time is fast enough to allow the different contents to be identified in real time.
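    To make the real-time search requirement concrete, here is a minimal sketch of one common way to organise such a database: an inverted index mapping fingerprint hashes of known adverts to (ad_id, offset) pairs, so that each hash from the live stream is resolved in amortised constant time and detections emerge from offset voting. The class and parameter names are assumptions for illustration, not the dissertation's actual modules.

        # Illustrative inverted index for real-time advert identification.
        # Names and thresholds are assumptions, not the dissertation's design.
        from collections import defaultdict

        class AdIndex:
            def __init__(self):
                self.table = defaultdict(list)  # hash -> [(ad_id, ref_time), ...]

            def add_ad(self, ad_id, fingerprints):
                """fingerprints: iterable of (hash, time) pairs from a known advert."""
                for h, t in fingerprints:
                    self.table[h].append((ad_id, t))

            def query(self, stream_fingerprints, min_votes=20):
                """Vote on (ad_id, offset); a tall bin signals a detection."""
                votes = defaultdict(int)
                for h, t in stream_fingerprints:
                    for ad_id, ref_t in self.table.get(h, ()):
                        votes[(ad_id, t - ref_t)] += 1
                hits = [(ad, off, n) for (ad, off), n in votes.items() if n >= min_votes]
                return sorted(hits, key=lambda x: -x[2])

    Because lookup cost grows with the number of stream hashes rather than with the number of stored adverts, an index of this shape keeps search time compatible with real-time identification.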

    Overlapped speech and music segmentation using singular spectrum analysis and random forests

    Recent years have seen ever-increasing volumes of digital media archives and an enormous amount of user-contributed content. As demand for indexing and searching these resources has grown, and as new technologies such as multimedia content management systems, enhanced digital broadcasting, and the semantic web have emerged, audio information mining and automated metadata generation have received much attention. Manual indexing and metadata tagging are time-consuming and subject to the biases of individual annotators. An automated architecture able to extract information from audio signals, generate content-related text descriptors or metadata, and enable further mining and searching would be a tangible and valuable solution.

    In audio classification, signals are commonly divided into speech or music. Most studies, however, neglect the fact that real soundtracks may contain speech, music, or a combination of the two. This is the major hurdle to achieving high performance in automatic audio classification, since overlap can contaminate relevant characteristics and features, causing misclassification or information loss.

    This research undertakes an extensive review of the state of the art, outlining the well-established audio features and machine learning techniques that have been applied across a broad range of audio segmentation and recognition tasks. Audio classification systems and the proposed solutions to the mixed-soundtrack problem are presented: augmented and modified features for recognising audio classes even in the presence of overlap; robust segmentation of an overlapped soundtrack stream based on audio decomposition with Singular Spectrum Analysis (SSA), a time series decomposition method with many applications that has been studied extensively and has received increasing attention over the past two decades; the adoption and development of data-driven classification methods; and, finally, a technique for continuous time series tasks.

    In this study, SSA is found to be an efficient way to discriminate speech from music in mixed soundtracks by two different methods, each developed and validated in this research. The first method mitigates the overlap between speech and music by generating two new soundtracks with a lower level of overlap. A feature space is then computed for the output audio streams, which are classified into speech or music using random forests. A distinct characteristic of this method is the separation of the key speech/music features, which improves classification performance. It nevertheless suffers a few drawbacks, including excessively long processing time and increased storage requirements (each frame is represented by two outputs), all of which increases the computational load. The second method employs SSA to decompose a given audio signal into a series of Principal Components (PCs), each corresponding to a particular pattern of oscillation. A transformed well-established feature is then measured for each PC in order to classify it into speech or music with the baseline classification system, using a random forest (RF) machine learning technique.

    The classification performance on real-world soundtracks is effectively improved, as demonstrated by comparing speech/music recognition using conventional classification methods against the proposed SSA method. The second method can detect pure speech, pure music, and mixtures with a much lower complexity level.
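    For concreteness, the SSA decomposition step described above can be sketched in a few lines of NumPy: embed the signal in a Hankel trajectory matrix, take its SVD, and reconstruct each elementary component by diagonal averaging. The window length L and the number of retained components below are illustrative assumptions, not the thesis's settings.

        # Minimal Singular Spectrum Analysis (SSA) decomposition sketch.
        # L and n_components are illustrative, not the thesis's choices.
        import numpy as np

        def ssa_decompose(x, L=40, n_components=5):
            """Return the first n_components SSA series, each the length of x."""
            x = np.asarray(x, dtype=float)
            N = len(x)
            K = N - L + 1
            # Trajectory (Hankel) matrix: column k holds x[k : k + L].
            X = np.column_stack([x[k:k + L] for k in range(K)])
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            comps = []
            for i in range(min(n_components, len(s))):
                Xi = s[i] * np.outer(U[:, i], Vt[i])  # rank-1 elementary matrix
                # Diagonal averaging (Hankelisation) maps Xi back to a series:
                # entry Xi[l, k] contributes to time index l + k.
                y = np.zeros(N)
                counts = np.zeros(N)
                for k in range(K):
                    y[k:k + L] += Xi[:, k]
                    counts[k:k + L] += 1
                comps.append(y / counts)
            return comps

    Each reconstructed component can then be summarised by conventional features (for example zero-crossing rate or spectral centroid) and classified into speech or music with a random forest, e.g. sklearn.ensemble.RandomForestClassifier, in the spirit of the baseline RF system described above.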

    Automatic detection of known advertisements in radio broadcast with data-driven ALISP transcriptions

    This paper describes an audio indexing system that searches for known advertisements in radio broadcast streams using automatically acquired segmental units. These segmental units, called ALISP units, are acquired automatically using temporal decomposition and vector quantization, and are modeled by Hidden Markov Models (HMMs). To detect commercials, ALISP transcriptions of reference advertisements are compared to those of the radio stream using the Levenshtein distance. The system is described and evaluated on broadcast streams provided by YACAST. On a set of 802 advertisements we achieve a mean precision of 95% with a corresponding recall of 97%. The results show that the system is robust when the advertisement to detect is stretched or suffers from time distortions. Moreover, the system allowed us to detect some annotation errors.
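    The matching step lends itself to a short sketch: a standard dynamic-programming Levenshtein distance over sequences of ALISP unit labels, slid along the stream transcription. The windowing and the detection threshold below are assumptions for illustration, not the paper's exact configuration.

        # Levenshtein matching of ALISP transcriptions (illustrative sketch;
        # the threshold and windowing are assumptions, not the paper's setup).
        def levenshtein(a, b):
            """Edit distance between two sequences of ALISP unit labels."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                  # deletion
                                   cur[j - 1] + 1,               # insertion
                                   prev[j - 1] + (ca != cb)))    # substitution
                prev = cur
            return prev[-1]

        def detect(stream_units, ad_units, threshold=0.3):
            """Flag stream positions whose normalised distance to the ad is low."""
            n = len(ad_units)
            hits = []
            for start in range(len(stream_units) - n + 1):
                d = levenshtein(stream_units[start:start + n], ad_units)
                if d / n <= threshold:
                    hits.append((start, d / n))
            return hits

    Accepting a low but nonzero normalised distance tolerates a few substituted or inserted units, which is what makes this style of matching robust to the stretching and time distortions mentioned above.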