
    Scoring heterogeneous speaker vectors using nonlinear transformations and tied PLDA models

    Most current state-of-the-art text-independent speaker recognition systems are based on i-vectors and on probabilistic linear discriminant analysis (PLDA). PLDA assumes that the i-vectors of a trial are homogeneous, i.e., that they have been extracted by the same system. In other words, the enrollment and test i-vectors belong to the same class. However, it is sometimes important to score trials including “heterogeneous” i-vectors, for instance, enrollment i-vectors extracted by an old system and test i-vectors extracted by a newer, more accurate system. In this paper, we introduce a PLDA model that is able to score heterogeneous i-vectors independently of their extraction approach, dimensions, and any other characteristics that make a set of i-vectors of the same speaker belong to different classes. The new model, referred to as nonlinear tied-PLDA (NL-Tied-PLDA), is obtained by a generalization of our recently proposed nonlinear PLDA approach, which jointly estimates the PLDA parameters and the parameters of a nonlinear transformation of the i-vectors. The generalization consists of estimating a class-dependent nonlinear transformation of the i-vectors, with the constraint that the transformed i-vectors of the same speaker share the same speaker factor. The resulting model is flexible and accurate, as assessed by a set of experiments performed on the extended core NIST SRE 2012 evaluation. In particular, NL-Tied-PLDA provides better results on heterogeneous trials than on the corresponding homogeneous trials scored by the old system, and, in some configurations, it also reaches the accuracy of the new system. Similar results were obtained on the female-extended core NIST SRE 2010 telephone condition.
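
    NL-Tied-PLDA itself is not reproduced here; as a point of reference for the scoring problem described above, the following NumPy sketch computes the standard two-covariance (Gaussian) PLDA log-likelihood ratio for a homogeneous enrollment/test pair, assuming the global mean mu, between-speaker covariance B, and within-speaker covariance W have already been estimated (the values below are random toys, not trained parameters).

        import numpy as np
        from scipy.stats import multivariate_normal

        def plda_llr(phi_enroll, phi_test, mu, B, W):
            """Same-speaker vs. different-speaker log-likelihood ratio under a
            two-covariance Gaussian PLDA model (B: between-, W: within-speaker)."""
            d = mu.size
            stacked = np.concatenate([phi_enroll, phi_test])
            mean = np.concatenate([mu, mu])
            tot = B + W
            # Same speaker: the shared speaker factor correlates the two i-vectors.
            cov_same = np.block([[tot, B], [B, tot]])
            # Different speakers: the two i-vectors are independent.
            cov_diff = np.block([[tot, np.zeros((d, d))], [np.zeros((d, d)), tot]])
            return (multivariate_normal.logpdf(stacked, mean, cov_same)
                    - multivariate_normal.logpdf(stacked, mean, cov_diff))

        # Toy usage with random "model" parameters, for illustration only.
        rng = np.random.default_rng(0)
        d = 4
        A = rng.standard_normal((d, d)); B = A @ A.T              # between-speaker covariance
        C = rng.standard_normal((d, d)); W = C @ C.T + np.eye(d)  # within-speaker covariance
        print(plda_llr(rng.standard_normal(d), rng.standard_normal(d), np.zeros(d), B, W))

    The tied model described above generalizes this setting by passing each i-vector through a class-dependent nonlinear transformation, so that heterogeneous i-vectors of the same speaker share a single speaker factor.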

    Homomorphic Encryption for Speaker Recognition: Protection of Biometric Templates and Vendor Model Parameters

    Data privacy is crucial when dealing with biometric data. In view of the latest European data privacy regulation and payment service directive, biometric template protection is essential for any commercial application. By ensuring unlinkability across biometric service operators, irreversibility of leaked encrypted templates, and renewability of, e.g., voice models following the i-vector paradigm, biometric voice-based systems are prepared for the latest EU data privacy legislation. Employing Paillier cryptosystems, Euclidean and cosine comparators are known to meet data privacy demands without loss of discrimination or calibration performance. Bridging the gap from template protection to speaker recognition, two architectures are proposed for the two-covariance comparator, which serves as a generative model in this study. The first architecture preserves the privacy of biometric data capture subjects. In the second architecture, the model parameters of the comparator are encrypted as well, such that biometric service providers can supply the same comparison modules, employing different key pairs, to multiple biometric service operators. An experimental proof-of-concept and a complexity analysis are carried out on data from the 2013-2014 NIST i-vector machine learning challenge.
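
    As a concrete, hedged illustration of the kind of homomorphic comparison discussed above (not the paper's two-covariance architecture), the sketch below uses the open-source python-paillier package (phe) to score an encrypted template against a plaintext probe vector: Paillier supports ciphertext addition and multiplication by plaintext scalars, which is enough for an inner product, and hence for Euclidean or cosine scores on pre-normalised vectors. All names and sizes are illustrative.

        import numpy as np
        from functools import reduce
        from phe import paillier  # python-paillier: additively homomorphic Paillier cryptosystem

        # The data capture subject (or a trusted authority) holds the key pair.
        public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

        # Enrollment: the biometric template (e.g., an i-vector) is encrypted element-wise.
        rng = np.random.default_rng(0)
        template = rng.standard_normal(8)
        enc_template = [public_key.encrypt(float(x)) for x in template]

        # The service operator computes an encrypted inner product with a plaintext probe,
        # never seeing the template in the clear.
        probe = rng.standard_normal(8)
        enc_score = reduce(lambda a, b: a + b,
                           (c * float(w) for c, w in zip(enc_template, probe)))

        # Only the key holder can recover the comparison score.
        print(private_key.decrypt(enc_score), float(template @ probe))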

    Advances in Hyperspectral Image Classification: Earth monitoring with statistical learning methods

    Hyperspectral images show similar statistical properties to natural grayscale or color photographic images. However, the classification of hyperspectral images is more challenging because of the very high dimensionality of the pixels and the small number of labeled examples typically available for learning. These peculiarities lead to particular signal processing problems, mainly characterized by indetermination and complex manifolds. The framework of statistical learning has gained popularity in the last decade. New methods have been presented to account for the spatial homogeneity of images, to include user interaction via active learning, to take advantage of the manifold structure with semisupervised learning, to extract and encode invariances, or to adapt classifiers and image representations to unseen yet similar scenes. This tutorial reviews the main advances for hyperspectral remote sensing image classification through illustrative examples. Comment: IEEE Signal Processing Magazine, 201
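
    A minimal scikit-learn sketch of the basic setting the tutorial addresses, i.e., pixel-wise classification with hundreds of spectral bands and only a handful of labeled samples per class, is given below; the data are a synthetic stand-in for a real hyperspectral cube.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in: 2000 pixels x 200 spectral bands, 5 land-cover classes.
        rng = np.random.default_rng(0)
        n_pixels, n_bands, n_classes = 2000, 200, 5
        y = rng.integers(0, n_classes, n_pixels)
        class_means = rng.standard_normal((n_classes, n_bands))
        X = class_means[y] + 0.5 * rng.standard_normal((n_pixels, n_bands))

        # Only a few labeled pixels per class are available for training,
        # which is the regime that makes hyperspectral classification hard.
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, train_size=50, stratify=y, random_state=0)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
        clf.fit(X_train, y_train)
        print("overall accuracy:", clf.score(X_test, y_test))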

    Which Theory of Language for Deep Neural Networks? Speech and Cognition in Humans and Machines

    The paper explores the relationship between technology and semiosis from the perspective of natural language processing, i.e., the automated learning of sign systems by deep neural networks. Two theoretical approaches to the artificial intelligence problem are compared: the internalist paradigm, which conceives the link between cognition and language as extrinsic, and the externalist paradigm, which understands cognitive human activity as constitutively linguistic. The basic assumptions of internalism are discussed extensively. After showing its incompatibility with neural network implementations of verbal thinking, the paper goes on to explore the externalist paradigm and its consistency with neural network language modeling. After a thorough illustration of the Saussurean conception of the mechanism of language systems, and some insights into the functioning of verbal thinking according to Vygotsky, the externalist paradigm is established as the best representation of verbal thinking to be implemented on deep neural networks. Afterwards, the functioning of deep neural networks for language modeling is illustrated. First, a basic explanation of the multilayer perceptron is provided; then the Word2Vec model is introduced; and finally the Transformer model, the current state-of-the-art architecture for natural language processing, is illustrated. The consistency between the externalist representation of language systems and the vector representation employed by the Transformer model proves that only the externalist approach can provide an answer to the problem of modeling and replicating human cognition.
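
    As a concrete anchor for the vector representations discussed above, here is a minimal NumPy sketch of single-head scaled dot-product self-attention, the core operation of the Transformer, in which each token's representation is rebuilt from weighted relations to every other token. Shapes and weights are random and purely illustrative.

        import numpy as np

        def softmax(z, axis=-1):
            z = z - z.max(axis=axis, keepdims=True)
            e = np.exp(z)
            return e / e.sum(axis=axis, keepdims=True)

        def self_attention(X, Wq, Wk, Wv):
            """Single-head scaled dot-product self-attention.
            X: (sequence_length, d_model) token embeddings."""
            Q, K, V = X @ Wq, X @ Wk, X @ Wv
            scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise token-token relations
            weights = softmax(scores, axis=-1)       # each token attends to all others
            return weights @ V                       # contextualised representations

        rng = np.random.default_rng(0)
        seq_len, d_model, d_head = 6, 16, 8
        X = rng.standard_normal((seq_len, d_model))
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))
        print(self_attention(X, Wq, Wk, Wv).shape)   # (6, 8)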

    Automatic sleep staging of EEG signals: recent development, challenges, and future directions.

    Modern deep learning holds great potential to transform clinical studies of human sleep. Teaching a machine to carry out routine tasks would mean a tremendous reduction in workload for clinicians. Sleep staging, a fundamental step in sleep practice, is a suitable task for this and is the focus of this article. Recently, automatic sleep-staging systems have been trained to mimic manual scoring, reaching performance similar to that of human sleep experts, at least for the scoring of healthy subjects. Despite tremendous progress, automatic sleep scoring has not yet been widely adopted in clinical environments. This review aims to provide the authors' shared view of the most recent state-of-the-art developments in automatic sleep staging, the challenges that still need to be addressed, and the future directions needed for automatic sleep scoring to achieve clinical value.
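
    For readers unfamiliar with the task, the sketch below shows the standard framing of automatic sleep staging as epoch-wise classification: the overnight EEG is split into 30-second epochs and each epoch is assigned one of five stages. The signal, labels, features, and classifier are toy placeholders; state-of-the-art systems learn features end-to-end with deep networks.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        FS = 100                       # assumed sampling rate in Hz
        EPOCH_SAMPLES = 30 * FS        # one 30-second scoring epoch
        STAGES = ["W", "N1", "N2", "N3", "REM"]

        def epoch_features(epoch):
            # Toy hand-crafted features standing in for spectral band powers etc.
            return [epoch.mean(), epoch.std(), np.abs(np.diff(epoch)).mean()]

        # Synthetic stand-in for a manually scored recording.
        rng = np.random.default_rng(0)
        n_epochs = 800
        epochs = rng.standard_normal((n_epochs, EPOCH_SAMPLES))
        labels = rng.integers(0, len(STAGES), n_epochs)

        X = np.array([epoch_features(e) for e in epochs])
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[:600], labels[:600])
        print("agreement with manual labels:", clf.score(X[600:], labels[600:]))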

    Drug Reviews: Cross-condition and Cross-source Analysis by Review Quantification Using Regional CNN-LSTM Models

    Pharmaceutical drugs are usually rated by customers or patients (e.g., on a scale from 1 to 10). Often, they also write reviews or comments on the drug and its side effects. It is desirable to quantify these reviews so that drug favorability in the market can be analyzed even in the absence of ratings. Since the reviews are textual, lexical methods are needed for the analysis. The intent of this study was two-fold: first, to assess how well CNN-LSTM models, which are known to outperform conventional machine learning models on textual data sequences, predict ratings or sentiment from reviews; second, to assess how effectively such information extraction models can be migrated across different drug review datasets and across different disease conditions. Three experiments were therefore designed: an in-domain experiment, where training and test data come from the same dataset; a cross-data-source experiment, where training and test data come from different sources; and a cross-disease-condition experiment, where training and test data belong to different disease conditions within the same dataset. The experiments were evaluated using popular metrics such as RMSE, MAE, R2 and Pearson’s coefficient, and the results showed that the proposed deep learning regression model performs less successfully than the machine learning sentiment extraction models reported in the literature on the same datasets. Nevertheless, this study contributes to the existing literature in the scope of the experiments and the quality of the model, and it offers suggestions to future researchers on how to improve. This work also addresses shortcomings in the literature by introducing …
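
    A minimal Keras sketch of a CNN-LSTM rating regressor in the spirit described above is shown below: convolutions pick up local ("regional") n-gram patterns in a tokenised review, and an LSTM aggregates them into a sequence representation that is mapped to a numeric rating. The vocabulary size, sequence length, and layer sizes are illustrative, not the study's configuration.

        from tensorflow.keras import Sequential
        from tensorflow.keras.layers import Input, Embedding, Conv1D, MaxPooling1D, LSTM, Dense

        VOCAB_SIZE = 20000   # illustrative values only
        MAX_LEN = 200        # tokens per review after padding/truncation

        model = Sequential([
            Input(shape=(MAX_LEN,)),
            Embedding(VOCAB_SIZE, 128),
            Conv1D(64, 5, activation="relu"),   # local n-gram ("regional") features
            MaxPooling1D(4),
            LSTM(64),                           # sequence-level aggregation
            Dense(1),                           # predicted rating, e.g., on the 1-10 scale
        ])
        model.compile(optimizer="adam", loss="mse", metrics=["mae"])
        model.summary()

        # Training would use tokenised, padded reviews X of shape (n_reviews, MAX_LEN)
        # and numeric ratings y, e.g. model.fit(X, y, validation_split=0.1, epochs=5).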

    Knee-point-conscious battery aging trajectory prediction of lithium-ion based on physics-guided machine learning

    Early prediction of the aging trajectories of lithium-ion (Li-ion) batteries is critical for cycle life testing, quality control, and battery health management. Although data-driven machine learning (ML) approaches are well suited for this task, relying solely on data is exceedingly time-consuming and resource-intensive, even under accelerated aging. This challenge is rooted in the highly complex and time-varying degradation mechanisms of Li-ion battery cells. We propose a novel method based on physics-guided machine learning (PGML) to overcome this issue. First, electrode-level physical information is incorporated into the model training process to predict the knee point (KP) of the aging trajectory. The relationship between the identified KP and the accelerated aging behavior is then explored, and an aging trajectory prediction algorithm is developed. The prior knowledge of aging mechanisms enables a transfer of valuable physical insights that yields accurate KP predictions from small datasets with weakly correlated features. Based on a Li[NiCoMn]O2 cell dataset, we demonstrate that only 14 cells are needed to train a PGML model achieving a lifetime prediction error of 2.02% using the data of the first 50 cycles. In contrast, at least 100 cells are needed to reach this level of accuracy without the physical insights.
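
    The paper's physics-guided pipeline is not reproduced here; purely as an illustration of what a knee point (KP) is, the NumPy sketch below locates the knee of a capacity-fade curve as the cycle farthest from the straight line joining the first and last samples (a common "kneedle"-style construction). The fade curve is synthetic.

        import numpy as np

        def knee_point(cycles, capacity):
            # Normalise both axes to [0, 1], then take the point of maximum
            # perpendicular distance from the chord through the endpoints.
            x = (cycles - cycles[0]) / (cycles[-1] - cycles[0])
            y = (capacity - capacity[0]) / (capacity[-1] - capacity[0])
            dist = np.abs(y - x) / np.sqrt(2)
            return cycles[np.argmax(dist)]

        # Synthetic fade curve: slow linear fade followed by accelerated decay.
        cycles = np.arange(1, 1001, dtype=float)
        capacity = 1.0 - 1e-4 * cycles - 2e-10 * cycles**3
        print("estimated knee point at cycle", knee_point(cycles, capacity))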

    Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation

    This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them. Comment: Published in the Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table.
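
    As a toy illustration of the classic pipeline view of NLG core tasks that the survey covers (content determination through surface realisation), here is a minimal rule/template-based data-to-text sketch; the weather-style input record and templates are invented for illustration and are not taken from the survey.

        # Minimal data-to-text pipeline: content determination -> surface realisation.
        record = {"city": "Valencia", "temp_max": 31, "rain_prob": 0.10}  # invented input

        def select_content(rec):
            # Content determination: keep only the messages worth reporting.
            messages = [("temperature", rec["city"], rec["temp_max"])]
            if rec["rain_prob"] >= 0.3:
                messages.append(("rain", rec["city"], rec["rain_prob"]))
            return messages

        def realise(message):
            # Surface realisation with simple templates.
            kind, place, value = message
            if kind == "temperature":
                return f"In {place}, highs will reach {value} degrees."
            return f"There is a {int(value * 100)}% chance of rain in {place}."

        def generate(rec):
            return " ".join(realise(m) for m in select_content(rec))

        print(generate(record))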