
    Joint Bayesian Gaussian discriminant analysis for speaker verification

    State-of-the-art i-vector based speaker verification relies on variants of Probabilistic Linear Discriminant Analysis (PLDA) for discriminant analysis. We are mainly motivated by the recent joint Bayesian (JB) method, which was originally proposed for discriminant analysis in face verification. We apply JB to speaker verification and make three contributions beyond the original JB. 1) In contrast to the EM iterations with approximated statistics in the original JB, we employ EM iterations with exact statistics, which give better performance. 2) We propose simultaneous diagonalization (SD) of the within-class and between-class covariance matrices to achieve efficient testing, which has a broader application scope than the SVD-based efficient testing method in the original JB. 3) We scrutinize the similarities and differences between various Gaussian PLDAs and JB, complementing the previous analysis that compared JB only with Prince-Elder PLDA. Extensive experiments on NIST SRE10 core condition 5 empirically validate the superiority of JB, with a faster convergence rate and a 9-13% EER reduction compared with state-of-the-art PLDA. Comment: accepted by ICASSP201
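    For intuition on the simultaneous-diagonalization step in contribution 2), here is a minimal NumPy sketch of the standard two-step construction (whiten the within-class covariance, then rotate to diagonalize the between-class covariance). This is an illustration of the generic technique, not the authors' code.

```python
import numpy as np

def simultaneous_diagonalization(S_w, S_b):
    """Find W with W.T @ S_w @ W = I and W.T @ S_b @ W diagonal.

    Standard two-step construction: whiten S_w, then rotate to
    diagonalize the whitened S_b. Assumes S_w is positive definite.
    """
    # Step 1: whitening transform for the within-class covariance.
    d, U = np.linalg.eigh(S_w)
    P = U / np.sqrt(d)                 # columns scaled: P.T @ S_w @ P = I
    # Step 2: rotation diagonalizing the whitened between-class covariance.
    lam, V = np.linalg.eigh(P.T @ S_b @ P)
    return P @ V, lam

# Toy check with random symmetric positive definite matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); S_w = A @ A.T + 5 * np.eye(5)
B = rng.standard_normal((5, 5)); S_b = B @ B.T
W, lam = simultaneous_diagonalization(S_w, S_b)
assert np.allclose(W.T @ S_w @ W, np.eye(5), atol=1e-8)
assert np.allclose(W.T @ S_b @ W, np.diag(lam), atol=1e-8)
```

    After this transform both covariances are diagonal in the same basis, which is what makes per-dimension (and hence fast) scoring possible at test time.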

    Testing for Homogeneity with Kernel Fisher Discriminant Analysis

    We propose to investigate test statistics for testing homogeneity in reproducing kernel Hilbert spaces. The asymptotic distributions of the statistics under the null hypothesis are derived, and consistency against fixed and local alternatives is assessed. Finally, experimental evidence of the performance of the proposed approach is provided on both artificial data and a speaker verification task.
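    As a rough sketch of a kernel-Fisher-discriminant-style homogeneity statistic: to stay self-contained, the example below approximates a Gaussian-kernel RKHS with random Fourier features rather than using the paper's exact RKHS formulation, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def kfda_statistic(X, Y, n_features=200, gamma=1.0, reg=1e-3, seed=0):
    """Simplified two-sample homogeneity statistic in the spirit of
    kernel Fisher discriminant analysis.

    A Gaussian kernel is approximated with random Fourier features,
    then a regularized Fisher ratio between the two samples is computed:
        T = n * || (S_w + reg*I)^(-1/2) (mu_X - mu_Y) ||^2
    Large T suggests the samples come from different distributions;
    the null distribution can be calibrated by permutation.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    phi = lambda Z: np.sqrt(2.0 / n_features) * np.cos(Z @ W + b)

    PX, PY = phi(X), phi(Y)
    mu_diff = PX.mean(axis=0) - PY.mean(axis=0)
    # Pooled within-sample covariance, regularized for invertibility.
    S_w = 0.5 * (np.cov(PX, rowvar=False) + np.cov(PY, rowvar=False))
    n = len(X) + len(Y)
    return n * mu_diff @ np.linalg.solve(S_w + reg * np.eye(n_features), mu_diff)
```

    In practice one would recompute the statistic over random relabelings of the pooled data to obtain a permutation p-value, which sidesteps the asymptotic null distribution entirely.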

    Simulation technique for available bandwidth estimation

    The paper proposes a method for measuring available bandwidth based on probing the network with packets of various sizes (the Variable Packet Size, VPS, method). The boundaries of applicability of the model, which depend on the accuracy of packet-delay measurements, are established, and a formula for the upper limit of the measured bandwidth is derived. A computer simulation has been performed, and the relationship between the measurement error of the available bandwidth and the number of measurements has been found. Experimental verification using the RIPE Test Box measuring system has shown that the suggested method has advantages over existing measurement techniques. The Pathload utility was chosen as the alternative measurement technique, and to ensure reliable results, statistics were retrieved directly from the router via an SNMP agent.
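    To illustrate the basic VPS idea (a sketch of the generic technique, not the paper's full model): the minimum delay grows linearly with packet size, delay(s) ≈ a + s/C, so the capacity C falls out of the fitted slope. Function and variable names below are hypothetical.

```python
import numpy as np

def vps_capacity_estimate(sizes_bytes, delays_s):
    """Estimate link capacity from a Variable Packet Size probe.

    Fits delay(s) ~= intercept + slope * s and returns the capacity
    in bits/s implied by the slope. Real tools use per-size minimum
    delays to filter out queueing noise before fitting.
    """
    slope, intercept = np.polyfit(sizes_bytes, delays_s, 1)
    capacity_bps = 8.0 / slope     # slope is s/byte, so C = 8/slope bit/s
    return capacity_bps, intercept

# Toy example: a 10 Mbit/s link with a 2 ms size-independent delay.
rng = np.random.default_rng(1)
sizes = np.linspace(64, 1500, 50)                  # probe sizes, bytes
delays = 0.002 + sizes * 8 / 10e6                  # ideal delays, seconds
delays += rng.uniform(0.0, 1e-5, sizes.size)       # small measurement noise
cap, _ = vps_capacity_estimate(sizes, delays)
print(f"estimated capacity: {cap / 1e6:.2f} Mbit/s")
```

    The sensitivity of the slope estimate to delay-measurement accuracy is exactly why the abstract ties the method's applicability boundaries to how precisely packet delays can be measured.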

    Linear optics schemes for entanglement distribution with realistic single-photon sources

    We study the operation of linear optics schemes for entanglement distribution based on nonlocal photon subtraction when the input states, produced by imperfect single-photon sources, exhibit both vacuum and multiphoton contributions. Two models for realistic photon statistics with radically different properties of the multiphoton "tail" are considered. The first model assumes occasional emission of double photons and linear attenuation, while the second is motivated by heralded sources utilizing spontaneous parametric down-conversion. We find conditions on the photon statistics that guarantee generation of entanglement in the relevant qubit subspaces and compare them with classicality criteria. We also quantify the amount of entanglement that can be produced with imperfect single-photon sources, optimized over setup parameters, using the entanglement of formation as a measure. Finally, we discuss verification of the generated entanglement by testing Bell's inequalities. The analysis is carried out for two schemes. The first is the well-established one-photon scheme, which produces a photon in a delocalized superposition state between two nodes, each fed with a single photon at the input. As the second scheme, we introduce and analyze a linear-optics analog of the robust scheme based on interfering two Stokes photons emitted by atomic ensembles, which does not require phase stability between the nodes. Comment: 12 pages, 7 figures, title change, minor corrections in the text
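    To make the two source models concrete, here is a toy sketch of the photon-number distributions they induce. The parameterization (double-emission probability p2, attenuation eta, thermal parameter lam, herald-detector efficiency eta_h) is an assumption for illustration, not the paper's notation.

```python
import numpy as np

def double_emission_stats(p2, eta, n_max=5):
    """Model 1 (toy): the source emits 1 photon with prob 1-p2 or
    2 photons with prob p2; each photon then survives linear
    attenuation independently with probability eta (binomial loss)."""
    p = np.zeros(n_max + 1)
    p[0] = (1 - p2) * (1 - eta) + p2 * (1 - eta) ** 2
    p[1] = (1 - p2) * eta + p2 * 2 * eta * (1 - eta)
    p[2] = p2 * eta ** 2
    return p    # sums to 1 exactly: at most two photons possible

def heralded_spdc_stats(lam, eta_h, n_max=5):
    """Model 2 (toy): heralded SPDC. The signal mode is thermal with
    parameter lam; a bucket (non-number-resolving) herald detector of
    efficiency eta_h fires with probability 1-(1-eta_h)^n given n pairs.
    Truncated at n_max, which is adequate for small lam."""
    n = np.arange(n_max + 1)
    thermal = (1 - lam) * lam ** n
    herald = 1 - (1 - eta_h) ** n
    p = thermal * herald
    return p / p.sum()
```

    The qualitative difference the abstract points to is visible here: the first model has a hard cutoff at two photons, while the heralded-SPDC model retains a geometrically decaying multiphoton tail.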

    Model Verification and the Likelihood Principle

    The likelihood principle (LP) is typically understood as a constraint on any measure of evidence arising from a statistical experiment. It is not sufficiently often noted, however, that the LP assumes that the probability model giving rise to a particular concrete data set is statistically adequate: it must “fit” the data sufficiently. In practice, though, scientists must make modeling assumptions whose adequacy can nevertheless be verified afterwards using statistical tests. My present concern is to consider whether the LP applies to these techniques of model verification. If one does view model verification as part of the inferential procedures that the LP is intended to constrain, then there are certain crucial tests of model verification that no known method satisfying the LP can perform. But if one does not, the degree to which these assumptions have been verified is bracketed from the evidential evaluation under the LP. Although I conclude from this that the LP cannot be a universal constraint on any measure of evidence, proponents of the LP may hold out for a restricted version thereof, either as a kind of “ideal” or as defining one among many different forms of evidence.
