564 research outputs found

    Speaker verification using sequence discriminant support vector machines

    This paper presents a text-independent speaker verification system using support vector machines (SVMs) with score-space kernels. Score-space kernels generalize Fisher kernels and are based on underlying generative models such as Gaussian mixture models (GMMs). This approach provides direct discrimination between whole sequences, in contrast with the frame-level approaches at the heart of most current systems. The resultant SVMs have a very high dimensionality, since the feature-space dimension is tied to the number of parameters in the underlying generative model. To address the problems that arise in the resulting optimization, we introduce a technique called spherical normalization that preconditions the Hessian matrix. We have performed speaker verification experiments using the PolyVar database. The SVM system presented here reduces the relative error rate by 34% compared to a GMM likelihood-ratio system.
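    The score-space idea can be sketched as follows: map each variable-length frame sequence to the gradient of the GMM log-likelihood with respect to the model's means, giving one fixed-length vector per sequence on which a linear SVM can operate. This is a minimal toy illustration with scikit-learn and synthetic data, not the paper's PolyVar setup, and it omits the spherical normalization step.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def fisher_scores(gmm, X):
    """Gradient of the per-frame log-likelihood w.r.t. the GMM means,
    averaged over the sequence -> one fixed-length vector per sequence."""
    resp = gmm.predict_proba(X)                       # (T, K) posteriors
    # d log p(x)/d mu_k = resp_k * (x - mu_k) / var_k  (diagonal covariances)
    diffs = X[:, None, :] - gmm.means_[None, :, :]    # (T, K, D)
    grads = resp[:, :, None] * diffs / gmm.covariances_[None, :, :]
    return grads.mean(axis=0).ravel()                 # (K * D,)

rng = np.random.default_rng(0)
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(rng.normal(size=(500, 3)))                    # stand-in "background" frames

# One score-space vector per utterance, then a linear SVM on top.
seqs = [rng.normal(loc=m, size=(50, 3)) for m in (0.0, 0.0, 1.0, 1.0)]
feats = np.stack([fisher_scores(gmm, s) for s in seqs])
svm = SVC(kernel="linear").fit(feats, [0, 0, 1, 1])
```

    Each sequence, whatever its length, becomes a vector of size K·D, which is why the dimensionality grows with the number of generative-model parameters.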

    Novel Detection and Analysis using Deep Variational Autoencoders

    This paper presents a Novel Identification System which uses generative modeling techniques and Gaussian Mixture Models (GMMs) to identify the main process variables involved in a novel event from multivariate data. Features are generated and subsequently dimensionally reduced by using a Variational Autoencoder (VAE) supplemented by a denoising criterion and a β-disentangling method. The GMM parameters are learned using the Expectation Maximization (EM) algorithm on features collected from only normal operating conditions. One-class classification is achieved by thresholding the likelihoods at a statistically derived value. The Novel Identification method is verified as a detection method on existing Radio Frequency (RF) generators and standard classification datasets. The RF dataset contains 2 different models of generators with almost 100 unique units tested. Novel detection on these generators achieved an average testing true positive rate of 97.31% with an overall target class accuracy of 98.16%. A second application has the network evaluate process variables of the RF generators when a novel event is detected. This is achieved by using the VAE decoding layers to map the GMM parameters back to a space equivalent to the original input, resulting in a way to directly estimate the process variables' fitness.
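    The one-class step can be sketched independently of the VAE: fit a GMM on features from normal operation only, then flag any sample whose log-likelihood falls below a threshold derived statistically from the normal data (here, a percentile). This is a minimal sketch with stand-in Gaussian vectors in place of VAE latent features, not the paper's RF-generator pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
normal_feats = rng.normal(size=(1000, 5))        # stand-in for VAE latent features
gmm = GaussianMixture(n_components=3, random_state=0).fit(normal_feats)

# Derive the threshold statistically from normal data: e.g. the 1st
# percentile of per-sample log-likelihoods under the fitted mixture.
ll_normal = gmm.score_samples(normal_feats)
threshold = np.percentile(ll_normal, 1.0)

def is_novel(x):
    """Flag samples whose likelihood under the 'normal' GMM is too low."""
    return gmm.score_samples(np.atleast_2d(x)) < threshold

novel_point = np.full(5, 8.0)                    # far outside the training cloud
```

    By construction roughly 1% of the normal training data sits below the threshold, which fixes the nominal false-positive rate before any novel data is seen.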

    Probabilistic Inference from Arbitrary Uncertainty using Mixtures of Factorized Generalized Gaussians

    This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be directly obtained without numerical integration. We have developed an extended version of the expectation maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is consistently handled in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard probabilistic principles and illustrative examples are provided in the fields of nonparametric pattern classification, nonlinear regression and pattern completion. Finally, experiments on a real application and comparative results over standard databases provide empirical evidence of the utility of the method in a wide range of applications.
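    The baseline that the paper's extended EM generalizes is standard EM on exact observations: alternate between computing component responsibilities (E-step) and re-estimating weights, means, and variances from them (M-step). A minimal 1-D sketch, assuming exact data points rather than the paper's uncertain (indirect) observations:

```python
import numpy as np

def em_gmm(X, K, iters=50):
    """Minimal EM for a 1-D Gaussian mixture on exact observations.
    The paper's extension replaces each exact point with a likelihood
    over possible values; here we show only the standard E/M steps."""
    mu = np.quantile(X, (np.arange(K) + 0.5) / K)   # spread initial means over the data
    var = np.full(K, X.var())
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (X[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities
        Nk = resp.sum(axis=0)
        pi = Nk / len(X)
        mu = (resp * X[:, None]).sum(axis=0) / Nk
        var = (resp * (X[:, None] - mu) ** 2).sum(axis=0) / Nk
    return pi, mu, var

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-3, 1, 400), rng.normal(3, 1, 400)])
pi, mu, var = em_gmm(X, K=2)
```

    In the uncertain-observation setting, the E-step posterior would additionally integrate over each observation's own likelihood function instead of conditioning on a single value.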

    Jet Diffusion versus JetGPT -- Modern Networks for the LHC

    We introduce two diffusion models and an autoregressive transformer for LHC physics simulations. Bayesian versions allow us to control the networks and capture training uncertainties. After illustrating their different density estimation methods for simple toy models, we discuss their advantages for Z plus jets event generation. While diffusion networks excel through their precision, the transformer scales best with the phase space dimensionality. Given the different training and evaluation speeds, we expect LHC physics to benefit from dedicated use cases for normalizing flows, diffusion models, and autoregressive transformers. Comment: 37 pages, 17 figures

    The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
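    The model-based reading of MID amounts to maximizing the Poisson log-likelihood of an LNP model over the stimulus filter. A minimal sketch with hypothetical white-noise data and an assumed exponential nonlinearity (the paper treats the nonlinearity non-parametrically):

```python
import numpy as np

def lnp_loglik(w, stim, spikes, dt=0.1):
    """Poisson log-likelihood (up to the n! constant) of an LNP model
    with an assumed exponential nonlinearity: rate = exp(stim @ w)."""
    rate = np.exp(stim @ w) * dt
    return np.sum(spikes * np.log(rate) - rate)

rng = np.random.default_rng(0)
stim = rng.normal(size=(5000, 8))          # white-noise stimulus frames
w_true = np.zeros(8)
w_true[0] = 1.0                            # one informative stimulus dimension
spikes = rng.poisson(np.exp(stim @ w_true) * 0.1)

w_null = np.zeros(8)                       # a filter carrying no information
```

    The informative filter attains a higher log-likelihood than the null filter; normalizing this log-likelihood by the spike count recovers the empirical single-spike information referred to in the abstract.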

    Dissecting the Gravitational Lens B1608+656. II. Precision Measurements of the Hubble Constant, Spatial Curvature, and the Dark Energy Equation of State

    Strong gravitational lens systems with measured time delays between the multiple images provide a method for measuring the "time-delay distance" to the lens, and thus the Hubble constant. We present a Bayesian analysis of the strong gravitational lens system B1608+656, incorporating (i) new, deep Hubble Space Telescope (HST) observations, (ii) a new velocity dispersion measurement of 260+/-15 km/s for the primary lens galaxy, and (iii) an updated study of the lens' environment. When modeling the stellar dynamics of the primary lens galaxy, the lensing effect, and the environment of the lens, we explicitly include the total mass distribution profile logarithmic slope gamma' and the external convergence kappa_ext; we marginalize over these parameters, assigning well-motivated priors for them, and so turn the major systematic errors into statistical ones. The HST images provide one such prior, constraining the lens mass density profile logarithmic slope to be gamma'=2.08+/-0.03; a combination of numerical simulations and photometric observations of the B1608+656 field provides an estimate of the prior for kappa_ext: 0.10 +0.08/-0.05. This latter distribution dominates the final uncertainty on H_0. Compared with previous work on this system, the new data provide an increase in precision of more than a factor of two. In combination with the WMAP 5-year data set, we find that the B1608+656 data set constrains the curvature parameter to be -0.031 < Omega_k < 0.009 (95% CL), a level of precision comparable to that afforded by the current Type Ia SNe sample. Asserting a flat spatial geometry, we find that, in combination with WMAP, H_0 = 69.7 +4.9/-5.0 km/s/Mpc and w=-0.94 +0.17/-0.19 (68% CL), suggesting that the observations of B1608+656 constrain w as tightly as do the current Baryon Acoustic Oscillation data. (abridged) Comment: 24 pages, 8 figures, revisions based on referee's comments, accepted for publication in Ap
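    The "turn systematic errors into statistical ones" step is marginalization: average the likelihood over prior draws of the nuisance parameter, so its uncertainty widens the posterior on the parameter of interest instead of biasing it. A toy numerical sketch with entirely hypothetical numbers (an observable assumed to scale as H·(1 − k), loosely mimicking the role of kappa_ext), not the B1608+656 analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: observable depends on the parameter of
# interest H and a nuisance parameter k with an assumed known prior.
H_true, k_true, sigma = 70.0, 0.10, 2.0
obs = H_true * (1.0 - k_true) + rng.normal(0.0, sigma)

H_grid = np.linspace(50.0, 90.0, 401)
dH = H_grid[1] - H_grid[0]
k_draws = rng.normal(0.10, 0.05, size=2000)    # samples from the k prior

# Marginal likelihood: average over prior draws of k, so the systematic
# uncertainty in k appears as statistical width in the posterior on H.
pred = H_grid[:, None] * (1.0 - k_draws[None, :])
like = np.exp(-0.5 * ((obs - pred) / sigma) ** 2).mean(axis=1)
post = like / (like.sum() * dH)                # normalized posterior on the grid
H_mean = (H_grid * post).sum() * dH
```

    Narrowing the prior on k directly narrows the posterior on H, which is why the kappa_ext prior dominates the final H_0 uncertainty in the abstract.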