221 research outputs found

    EnCodecMAE: Leveraging neural codecs for universal audio representation learning

    The goal of universal audio representation learning is to obtain foundational models that can be used for a variety of downstream tasks involving speech, music or environmental sounds. To approach this problem, methods inspired by self-supervised models from NLP, like BERT, are often used and adapted to audio. These models rely on the discrete nature of text, so adopting this type of approach for audio processing requires either a change in the learning objective or mapping the audio signal to a set of discrete classes. In this work, we explore the use of EnCodec, a neural audio codec, to generate discrete targets for learning a universal audio model based on a masked autoencoder (MAE). We evaluate this approach, which we call EncodecMAE, on a wide range of audio tasks spanning speech, music and environmental sounds, achieving performance comparable to or better than that of leading audio representation models.
    Comment: Submitted to ICASSP 202
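
    A minimal sketch of the objective this abstract describes (mask input frames, predict the discrete codec token at each masked position) is given below, assuming PyTorch. The model sizes, the 50% masking ratio, and the precomputed `codec_tokens` tensor (standing in for EnCodec codebook indices, of which the real codec produces several parallel streams) are all illustrative placeholders, not the authors' configuration.

    ```python
    import torch
    import torch.nn as nn

    class MaskedCodecPredictor(nn.Module):
        """Toy MAE-style model: predict discrete codec tokens at masked frames."""
        def __init__(self, n_mels=80, d_model=256, n_layers=4, vocab_size=1024):
            super().__init__()
            self.proj = nn.Linear(n_mels, d_model)
            self.mask_emb = nn.Parameter(torch.zeros(d_model))
            enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
            self.head = nn.Linear(d_model, vocab_size)  # one softmax per frame

        def forward(self, feats, mask):
            # feats: (B, T, n_mels); mask: (B, T) bool, True = masked frame
            x = self.proj(feats)
            x[mask] = self.mask_emb            # replace masked frames with a learned embedding
            return self.head(self.encoder(x))  # (B, T, vocab_size)

    # Training step: cross-entropy only at the masked positions.
    model = MaskedCodecPredictor()
    feats = torch.randn(2, 100, 80)                  # e.g. log-mel frames (toy data)
    codec_tokens = torch.randint(0, 1024, (2, 100))  # assumed: precomputed EnCodec indices
    mask = torch.rand(2, 100) < 0.5                  # placeholder masking ratio
    logits = model(feats, mask)
    loss = nn.functional.cross_entropy(logits[mask], codec_tokens[mask])
    loss.backward()
    ```

    Computing the loss only over masked frames is what makes the objective BERT-like rather than purely reconstructive.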

    A Speaker Verification Backend with Robust Performance across Conditions

    In this paper, we address the problem of speaker verification in conditions unseen or unknown during development. A standard method for speaker verification consists of extracting speaker embeddings with a deep neural network and processing them through a backend composed of probabilistic linear discriminant analysis (PLDA) and global logistic regression score calibration. This method is known to result in systems that work poorly on conditions different from those used to train the calibration model. We propose to modify the standard backend, introducing an adaptive calibrator that uses duration and other automatically extracted side-information to adapt to the conditions of the inputs. The backend is trained discriminatively to optimize binary cross-entropy. When trained on a number of diverse datasets that are labeled only with respect to speaker, the proposed backend consistently and, in some cases, dramatically improves calibration, compared to the standard PLDA approach, on a number of held-out datasets, some of which are markedly different from the training data. Discrimination performance is also consistently improved. We show that joint training of the PLDA and the adaptive calibrator is essential -- the same benefits cannot be achieved when freezing PLDA and fine-tuning the calibrator. To our knowledge, the results in this paper are the first evidence in the literature that it is possible to develop a speaker verification system with robust out-of-the-box performance on a large variety of conditions.
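
    The adaptive-calibrator idea can be sketched as a small network that maps side-information (e.g. durations) to the scale and offset of the usual affine score calibration, trained with binary cross-entropy. This is a hedged illustration under assumed shapes and an equal-prior assumption, not the paper's exact architecture.

    ```python
    import torch
    import torch.nn as nn

    class AdaptiveCalibrator(nn.Module):
        """Toy adaptive calibrator: the usual affine calibration s' = a*s + b,
        but with a and b predicted from side-information per trial."""
        def __init__(self, n_side=2, hidden=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_side, hidden), nn.Tanh(),
                nn.Linear(hidden, 2),      # outputs [a, b] for each trial
            )

        def forward(self, scores, side_info):
            a, b = self.net(side_info).unbind(-1)
            return a * scores + b          # calibrated log-likelihood ratio

    calib = AdaptiveCalibrator()
    scores = torch.randn(512)                      # raw backend scores (toy data)
    side = torch.randn(512, 2)                     # e.g. log durations of both trial sides
    labels = torch.randint(0, 2, (512,)).float()   # 1 = same-speaker trial
    llr = calib(scores, side)
    # Binary cross-entropy on the calibrated scores; with equal priors the
    # posterior logit equals the LLR, so the logits-based loss applies directly.
    loss = nn.functional.binary_cross_entropy_with_logits(llr, labels)
    loss.backward()
    ```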

    A systematic study of genome context methods: calibration, normalization and combination

    Background: Genome context methods have been introduced in the last decade as automatic methods to predict functional relatedness between genes in a target genome using the patterns of existence and relative locations of the homologs of those genes in a set of reference genomes. Much work has been done in the application of these methods to different bioinformatics tasks, but few papers present a systematic study of the methods and of the combination procedures necessary for their optimal use.
    Results: We present a thorough study of the four main families of genome context methods found in the literature: phylogenetic profile, gene fusion, gene cluster, and gene neighbor. We find that for most organisms the gene neighbor method outperforms the phylogenetic profile method by as much as 40% in sensitivity, being competitive with the gene cluster method at low sensitivities. Gene fusion is generally the worst performing of the four methods. A thorough exploration of the parameter space for each method is performed and results across different target organisms are presented.
    We propose the use of normalization procedures, such as those used on microarray data, for the genome context scores. We show that substantial gains can be achieved from the use of a simple normalization technique. In particular, the sensitivity of the phylogenetic profile method is improved by around 25% after normalization, resulting, to our knowledge, in the best-performing phylogenetic profile system in the literature.
    Finally, we show results from combining the various genome context methods into a single score. When using a cross-validation procedure to train the combiners, with both original and normalized scores as input, a decision tree combiner results in gains of up to 20% with respect to the gene neighbor method. Overall, this represents a gain of around 15% over what can be considered the state of the art in this area: the four original genome context methods combined using a procedure like that used in the STRING database. Unfortunately, we find that these gains disappear when the combiner is trained only with organisms that are phylogenetically distant from the target organism.
    Conclusions: Our experiments indicate that gene neighbor is the best individual genome context method and that gains from the combination of individual methods are very sensitive to the training data used to obtain the combiner's parameters. If adequate training data is not available, using the gene neighbor score by itself instead of a combined score might be the best choice.
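
    As a rough illustration of two ingredients discussed above (a phylogenetic profile score and a microarray-style normalization), here is a numpy sketch. Jaccard similarity and quantile ranking are stand-ins chosen for brevity; the paper evaluates several scoring variants and normalization procedures.

    ```python
    import numpy as np

    def phylogenetic_profile_score(profiles):
        """profiles: (n_genes, n_genomes) 0/1 matrix of homolog presence.
        Returns an (n_genes, n_genes) similarity matrix (Jaccard here, one
        common choice among several possible profile similarities)."""
        inter = profiles @ profiles.T
        counts = profiles.sum(axis=1)
        union = counts[:, None] + counts[None, :] - inter
        return inter / np.maximum(union, 1)

    def rank_normalize(scores):
        """Map raw scores to empirical quantiles, a simple microarray-style
        normalization that makes scores comparable across methods."""
        flat = scores.ravel()
        ranks = flat.argsort().argsort()
        return (ranks / (len(flat) - 1)).reshape(scores.shape)

    profiles = (np.random.rand(50, 30) < 0.3).astype(float)  # toy presence/absence data
    norm_scores = rank_normalize(phylogenetic_profile_score(profiles))
    ```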

    Toward Fail-Safe Speaker Recognition: Trial-Based Calibration with a Reject Option

    The output scores of most speaker recognition systems are not directly interpretable as stand-alone values. For this reason, a calibration step is usually performed on the scores to convert them into proper likelihood ratios, which have a clear probabilistic interpretation. The standard calibration approach transforms the system scores using a linear function trained using data selected to closely match the evaluation conditions. This selection, though, is not feasible when the evaluation conditions are unknown. In previous work, we proposed a calibration approach for this scenario called trial-based calibration (TBC). TBC trains a separate calibration model for each test trial using data that is dynamically selected from a candidate training set to match the conditions of the trial. In this work, we extend the TBC method, proposing: 1) a new similarity metric for selecting training data that results in significant gains over the one proposed in the original work; 2) a new option that enables the system to reject a trial when not enough matched data are available for training the calibration model; and 3) the use of regularization to improve the robustness of the calibration models trained for each trial. We test the proposed algorithms on a development set composed of several conditions and on the Federal Bureau of Investigation multi-condition speaker recognition dataset, and we demonstrate that the proposed approach reduces calibration loss to values close to 0 for most of the conditions when matched calibration data are available for selection, and that it can reject most of the trials for which relevant calibration data are unavailable.
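
    The TBC loop can be sketched as: select candidate calibration trials whose side-information is close to the test trial's, reject if too few match, otherwise fit a regularized linear calibration on the selected scores. The Euclidean distance, radius, and minimum-trial threshold below are placeholders, not the metric or settings proposed in the paper.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def tbc_score(test_score, test_side, cand_scores, cand_sides, cand_labels,
                  radius=1.0, min_trials=50, C=1.0):
        """Toy trial-based calibration with a reject option: fit a separate
        linear calibration per test trial from candidate trials with similar
        side-information; return None when too few matched trials exist."""
        dist = np.linalg.norm(cand_sides - test_side, axis=1)
        sel = dist < radius
        if sel.sum() < min_trials:
            return None                      # reject: no matched calibration data
        # Regularized logistic regression on the raw score = affine calibration.
        lr = LogisticRegression(C=C).fit(cand_scores[sel, None], cand_labels[sel])
        a, b = lr.coef_[0, 0], lr.intercept_[0]
        return a * test_score + b            # calibrated log-likelihood ratio

    cands = np.random.randn(5000)                      # candidate raw scores (toy)
    sides = np.random.randn(5000, 2)                   # side-info, e.g. durations, SNR
    labels = (np.random.rand(5000) < 0.5).astype(int)  # target / non-target
    print(tbc_score(0.3, np.zeros(2), cands, sides, labels))
    ```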

Keyword spotting in languages without training data

    We study the problem of keyword spotting for languages that lack corpora with recordings and phonetic transcriptions. This problem is of central importance for searching databases of speech recordings. Using the Boston University Radio Speech Corpus as a reference corpus, we analyze various topologies and parameterizations of hidden Markov models for word detection over continuous speech. The models are based on the use of "fillers" for non-target words, and we use phonemes as the minimal detection units. For the experiments, we use a set of 20 keywords trained with 14 minutes of transcribed data and fillers trained with 7 hours of untranscribed data. The results show that the best model achieves performance above an average FOM of 0.47, a correct-detection rate of 72.1%, and 3.95 false alarms per hour per keyword.
    XI Workshop Bases de Datos y Minería de Datos. Red de Universidades con Carreras de Informática (RedUNCI)
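
    For reference, the figure of merit (FOM) quoted above is commonly computed as the detection rate averaged over operating points allowing 1 through 10 false alarms per keyword per hour. A simplified per-keyword sketch, assuming lists of detector scores for true hits and false alarms:

    ```python
    import numpy as np

    def figure_of_merit(hit_scores, fa_scores, n_true, hours, max_fa_per_hour=10):
        """Toy FOM for one keyword: average detection rate at 1..10 false
        alarms per hour. hit_scores/fa_scores are detector scores of true hits
        and false alarms; n_true is the number of keyword occurrences; hours
        is the amount of audio searched. A simplified reading of the metric."""
        fa_sorted = np.sort(fa_scores)[::-1]
        rates = []
        for k in range(1, max_fa_per_hour + 1):
            n_fa = int(round(k * hours))      # allow k false alarms per hour
            if n_fa >= len(fa_sorted):
                thr = -np.inf                 # every detection passes
            else:
                thr = fa_sorted[n_fa]         # threshold admitting n_fa FAs
            rates.append((np.asarray(hit_scores) > thr).sum() / n_true)
        return float(np.mean(rates))

    # Toy example: 20 true occurrences in 7 hours of audio.
    print(figure_of_merit(np.random.randn(15) + 1.0, np.random.randn(200), 20, 7.0))
    ```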
