235 research outputs found

    Qualification Conditions for Calculus Rules of Coderivatives of Multivalued Mappings

    This paper establishes, by a general approach, a full calculus for the limiting Fréchet and the approximate coderivatives of multivalued mappings. This approach allows us to produce several new verifiable qualification conditions for such calculus rules.

    On the convexity of the value function for a class of nonconvex variational problems: existence and optimality conditions

    In this paper we study a class of perturbed constrained nonconvex variational problems depending on either time/state or time/state-derivative variables. Its (optimal) value function is proved to be convex, and several related properties are then obtained. Existence, strong duality results, and necessary/sufficient optimality conditions are established. Moreover, via a necessary optimality condition in terms of Mordukhovich's normal cone, it is shown that local minima are global. Such results are given in terms of the Hamiltonian function. Finally, various examples are exhibited, showing the wide applicability of our main results.

    On calmness of a class of multifunctions

    The paper deals with calmness of a class of multifunctions in finite dimensions. Its first part is devoted to various calmness criteria, which are derived in terms of coderivatives and subdifferentials. The second part demonstrates the importance of calmness in several areas of nonsmooth analysis. In particular, we focus on nonsmooth calculus and solution stability in mathematical programming and in equilibrium problems. The derived conditions find a number of applications there.

    Automatic speaker recognition with large-margin GMMs (Reconnaissance automatique du locuteur par des GMM à grande marge)

    For several decades, automatic speaker recognition has been the subject of research by numerous teams around the world. Most current systems are based on Gaussian Mixture Models (GMM) and/or discriminative SVM models, i.e., support vector machines. The general goal of our work is to propose new large-margin GMM models for speaker recognition as an alternative to classical generative GMM models and to the state-of-the-art discriminative GMM-SVM approach. We call these models LM-dGMM, for Large Margin diagonal GMM. Our models build on a recent discriminative technique for multiclass separation, which has been applied in speech recognition. Exploiting the properties of the GMM systems used in speaker recognition, we present in this thesis variants of discriminative GMM training algorithms that minimize a large-margin loss function. Tests carried out on the speaker recognition tasks of the NIST-SRE 2006 evaluation campaign demonstrate the value of these models for recognition.

    Most state-of-the-art speaker recognition systems are based on Gaussian Mixture Models (GMM), trained using maximum likelihood estimation and maximum a posteriori (MAP) estimation. The generative training of the GMM does not, however, directly optimize the classification performance. For this reason, discriminative models, e.g., Support Vector Machines (SVM), have been an interesting alternative, since they address the classification problem directly and lead to good performance. Recently a new discriminative approach for multiway classification has been proposed, the Large Margin Gaussian mixture models (LM-GMM). As in SVM, the parameters of LM-GMM are trained by solving a convex optimization problem. They differ from SVM, however, by using ellipsoids to model the classes directly in the input space, instead of half-spaces in an extended high-dimensional space. While LM-GMM have been used in speech recognition, they have not, to the best of our knowledge, been used in speaker recognition. In this thesis, we propose simplified, fast and more efficient versions of LM-GMM which exploit the properties and characteristics of speaker recognition applications and systems: the LM-dGMM models. In our LM-dGMM modeling, each class is initially modeled by a GMM trained by MAP adaptation of a Universal Background Model (UBM), or directly initialized by the UBM. The models' mean vectors are then re-estimated under certain large margin constraints. We carried out experiments on full speaker recognition tasks under the NIST-SRE 2006 core condition. The experimental results are very satisfactory and show that our large margin modeling approach is very promising.
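    The re-estimation step described above (class means initialized from a MAP-adapted GMM, then adjusted under large-margin constraints) can be sketched in a toy form. This is an illustrative simplification and not the thesis's actual LM-dGMM algorithm: it uses a single diagonal-covariance Gaussian per class and plain hinge-style gradient updates, and the names `large_margin_update` and `squared_dist` are invented for the sketch.

    ```python
    import numpy as np

    def squared_dist(x, mu, inv_var):
        # Mahalanobis-style squared distance under a diagonal covariance,
        # broadcast over all class means at once
        return np.sum(inv_var * (x - mu) ** 2, axis=-1)

    def large_margin_update(X, y, means, inv_var, lr=0.05, margin=1.0, epochs=50):
        # Re-estimate class means so that each training vector's distance to
        # its own class beats every competing class by at least `margin`
        means = means.copy()
        n_classes = means.shape[0]
        for _ in range(epochs):
            for x, c in zip(X, y):
                d = squared_dist(x, means, inv_var)  # distance to every class
                for k in range(n_classes):
                    if k == c:
                        continue
                    # hinge-style constraint: d[k] - d[c] >= margin
                    if d[k] - d[c] < margin:
                        # pull the correct mean toward x, push the violator away
                        means[c] += lr * inv_var * (x - means[c])
                        means[k] -= lr * inv_var * (x - means[k])
        return means

    # toy usage: two well-separated classes in 2-D, deliberately poor initial means
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
    y = np.array([0] * 20 + [1] * 20)
    means = np.array([[1.5, 1.5], [1.8, 1.8]])
    inv_var = np.ones(2)
    new_means = large_margin_update(X, y, means, inv_var)
    pred = np.argmin([squared_dist(x, new_means, inv_var) for x in X], axis=1)
    ```

    In the full LM-dGMM setting the update is instead posed as a convex optimization over all mixture components, but the sketch shows the core idea: only margin-violating examples move the means.
    
    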

    Error bounds and their application

    Our aim in this paper is to present sufficient conditions for error bounds in terms of Fréchet and limiting Fréchet subdifferentials outside of Asplund spaces. This allows us to develop sufficient conditions, in terms of the approximate subdifferential, for systems of the form (x, y) ∈ C × D, g(x, y, u) = 0, where g takes values in an infinite-dimensional space and u plays the role of a parameter. This symmetric structure offers us the choice to impose conditions either on C or on D. We use these results to prove nonemptiness and weak-star compactness of Fritz John and Karush-Kuhn-Tucker multiplier sets, to establish Lipschitz continuity of the value function and to compute its subdifferential, and finally to obtain results on local controllability in control problems of nonconvex unbounded differential inclusions.

    Discriminative large-margin GMM learning for automatic speaker verification (Apprentissage discriminant des GMM à grande marge pour la vérification automatique du locuteur)

    Gaussian mixture models (GMM) have been widely and successfully used in speaker recognition during the last decades. They are generally trained using the generative criterion of maximum likelihood estimation. In an earlier work, we proposed an algorithm for discriminative training of GMM with diagonal covariances under a large margin criterion. In this paper, we present a new version of this algorithm, which has the major advantage of being computationally highly efficient. The resulting algorithm is thus well suited to handling large scale databases. To show the effectiveness of the new algorithm, we carry out a full NIST speaker verification task using NIST-SRE'2006 data. The results show that our system outperforms the baseline GMM, and with high computational efficiency.

    Large Margin GMM for discriminative speaker verification

    Gaussian mixture models (GMM), trained using the generative criterion of maximum likelihood estimation, have been the most popular approach in speaker recognition during the last decades. This approach is also widely used in many other classification tasks and applications. Generative learning is not, however, the optimal way to address classification problems. In this paper we first present a new algorithm for discriminative learning of diagonal GMM under a large margin criterion. This algorithm has the major advantage of being highly efficient, which allows fast discriminative GMM training using large scale databases. We then evaluate its performance on a full NIST speaker verification task using NIST-SRE'2006 data. In particular, we use the popular Symmetrical Factor Analysis (SFA) for session variability compensation. The results show that our system outperforms the state-of-the-art approaches of GMM-SFA and the SVM-based one, GSL-NAP. Relative reductions of the Equal Error Rate of about 9.33% and 14.88% are respectively achieved over these systems.