
    Deformed Statistics Kullback-Leibler Divergence Minimization within a Scaled Bregman Framework

    The generalized Kullback-Leibler divergence (K-Ld) in Tsallis statistics [constrained by the additive duality of generalized statistics (dual generalized K-Ld)] is here reconciled with the theory of Bregman divergences for expectations defined by normal averages, within a measure-theoretic framework. Specifically, it is demonstrated that the dual generalized K-Ld is a scaled Bregman divergence. The Pythagorean theorem is derived from the minimum discrimination information principle using the dual generalized K-Ld as the measure of uncertainty, with constraints defined by normal averages. The minimization of the dual generalized K-Ld, with normal averages constraints, is shown to exhibit distinctive features.
    Comment: 16 pages. Iterative corrections and expansion
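
    For orientation, the basic objects at play can be written compactly. The conventions below (Tsallis q-logarithm, the dual index q* = 2 - q from additive duality, and the scaled Bregman form of Stummer and Vajda) are standard in this literature, though the paper's exact normalizations may differ:
        \[
          \ln_q x \equiv \frac{x^{1-q}-1}{1-q}, \qquad q^{*} \equiv 2-q, \qquad \ln_{q^{*}} x = -\ln_q\!\left(\tfrac{1}{x}\right),
        \]
        \[
          D_q(p\,\|\,r) = \sum_i p_i \ln_q \frac{p_i}{r_i}, \qquad
          B_f(p, r \mid m) = \sum_i m_i \Big[ f\big(\tfrac{p_i}{m_i}\big) - f\big(\tfrac{r_i}{m_i}\big) - f'\big(\tfrac{r_i}{m_i}\big)\big(\tfrac{p_i}{m_i} - \tfrac{r_i}{m_i}\big) \Big].
        \]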

    Nonextensive statistics: Theoretical, experimental and computational evidences and connections

    The domain of validity of standard thermodynamics and Boltzmann-Gibbs statistical mechanics is discussed and then formally enlarged in the hope of covering a variety of anomalous systems. The generalization concerns {\it nonextensive} systems, where nonextensivity is understood in the thermodynamical sense. This generalization was first proposed in 1988, inspired by the probabilistic description of multifractal geometries, and has been intensively studied during this decade. In the present effort, after introducing some historical background, we briefly describe the formalism and then exhibit the present status of theoretical, experimental and computational evidence and connections, as well as some perspectives for the future. In addition, we point out throughout various (possibly) relevant questions whose answers would certainly clarify our current understanding of the foundations of statistical mechanics and its thermodynamical implications.
    Comment: 15 figures
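
    For reference, the entropic form behind the generalization and the pseudo-additivity rule that makes the statistics "nonextensive" are (standard 1988 definitions; the paper's notation may differ slightly):
        \[
          S_q = k\,\frac{1-\sum_i p_i^{\,q}}{q-1}, \qquad \lim_{q\to 1} S_q = -k\sum_i p_i \ln p_i,
        \]
        \[
          \frac{S_q(A+B)}{k} = \frac{S_q(A)}{k} + \frac{S_q(B)}{k} + (1-q)\,\frac{S_q(A)}{k}\,\frac{S_q(B)}{k} \quad \text{(independent } A, B\text{)}.
        \]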

    Spectral Clustering with Jensen-type kernels and their multi-point extensions

    Motivated by multi-distribution divergences, which originate in information theory, we propose a notion of `multi-point' kernels and study their applications. We study a class of kernels based on Jensen-type divergences and show that these can be extended to measure similarity among multiple points. We study tensor flattening methods and develop a multi-point (kernel) spectral clustering (MSC) method. We further emphasize a special case of the proposed kernels, which is a multi-point extension of the linear (dot-product) kernel, and show the existence of a cubic-time tensor flattening algorithm in this case. Finally, we illustrate the usefulness of our contributions using standard data sets and image segmentation tasks.
    Comment: To appear in IEEE Computer Society Conference on Computer Vision and Pattern Recognition
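
    As a rough illustration of the multi-point idea, the sketch below computes the generalized (multi-distribution) Jensen-Shannon divergence with uniform weights and turns it into a pairwise similarity; the exponential construction and the names used here are illustrative assumptions, not the paper's actual kernel:
        import numpy as np

        def shannon_entropy(p):
            # Shannon entropy in nats; zero-probability bins contribute nothing.
            p = np.asarray(p, dtype=float)
            nz = p > 0
            return -np.sum(p[nz] * np.log(p[nz]))

        def multi_point_js(dists):
            # Generalized Jensen-Shannon divergence with uniform weights:
            # entropy of the mixture minus the mean of the individual entropies.
            dists = np.asarray(dists, dtype=float)
            return shannon_entropy(dists.mean(axis=0)) - np.mean([shannon_entropy(p) for p in dists])

        def js_kernel(p, q, gamma=1.0):
            # One common way to turn a Jensen-type divergence into a similarity.
            return np.exp(-gamma * multi_point_js([p, q]))

        # Three histograms over four bins: one divergence value for all three at once,
        # and an ordinary pairwise kernel value as a special case.
        p1, p2, p3 = [0.4, 0.3, 0.2, 0.1], [0.1, 0.2, 0.3, 0.4], [0.25, 0.25, 0.25, 0.25]
        print(multi_point_js([p1, p2, p3]))
        print(js_kernel(p1, p2))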

    Generalized Statistics Variational Perturbation Approximation using q-Deformed Calculus

    A principled framework to generalize variational perturbation approximations (VPAs), formulated within the ambit of the nonadditive Tsallis statistics, is introduced. This is accomplished by operating on the terms constituting the perturbation expansion of the generalized free energy (GFE) with a variational procedure formulated using \emph{q-deformed calculus}. A candidate \textit{q-deformed} generalized VPA (GVPA) is derived with the aid of the Hellmann-Feynman theorem. The generalized Bogoliubov inequality for the approximate GFE is derived for the case of canonical probability densities that maximize the Tsallis entropy. Numerical examples demonstrating the application of the \textit{q-deformed} GVPA are presented. The qualitative distinctions between the \textit{q-deformed} GVPA model and prior GVPA models are highlighted.
    Comment: 26 pages, 4 figures
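
    The deformed calculus rests on the q-exponential and q-logarithm pair, while the variational machinery generalizes the standard Hellmann-Feynman theorem and Bogoliubov inequality; the familiar forms being deformed are (standard definitions, recovered as q -> 1, not the paper's generalized versions):
        \[
          e_q^{x} \equiv \big[1+(1-q)x\big]_{+}^{\frac{1}{1-q}}, \qquad \ln_q x \equiv \frac{x^{1-q}-1}{1-q}, \qquad e_q^{\ln_q x} = x,
        \]
        \[
          \frac{\partial F(\lambda)}{\partial \lambda} = \Big\langle \frac{\partial \hat H(\lambda)}{\partial \lambda} \Big\rangle \ \text{(Hellmann-Feynman)}, \qquad
          F \le F_0 + \langle \hat H - \hat H_0 \rangle_0 \ \text{(Bogoliubov)}.
        \]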

    Learning from Distributions via Support Measure Machines

    This paper presents a kernel-based discriminative learning framework on probability measures. Rather than relying on large collections of vectorial training examples, our framework learns using a collection of probability distributions that have been constructed to meaningfully represent the training data. By representing these probability distributions as mean embeddings in a reproducing kernel Hilbert space (RKHS), we are able to apply many standard kernel-based learning techniques in a straightforward fashion. To accomplish this, we construct a generalization of the support vector machine (SVM) called a support measure machine (SMM). Our analyses of SMMs provide several insights into their relationship to traditional SVMs. Based on these insights, we propose a flexible SVM (Flex-SVM) that places a different kernel function on each training example. Experimental results on both synthetic and real-world data demonstrate the effectiveness of our proposed framework.
    Comment: Advances in Neural Information Processing Systems 2
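
    A minimal sketch of the embedding idea, not the paper's implementation: each training distribution is represented by a sample ("bag"), the kernel between two distributions is estimated as the empirical inner product of their mean embeddings under an assumed RBF base kernel, and the resulting Gram matrix is fed to an off-the-shelf SVM (scikit-learn's precomputed-kernel SVC here):
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics.pairwise import rbf_kernel

        def embedding_kernel(X, Y, gamma=1.0):
            # Empirical <mu_P, mu_Q>: average base-kernel value over all sample pairs.
            return rbf_kernel(X, Y, gamma=gamma).mean()

        def gram_matrix(bags_a, bags_b, gamma=1.0):
            # Each bag is an (n_i, d) sample drawn from one probability distribution.
            return np.array([[embedding_kernel(A, B, gamma) for B in bags_b] for A in bags_a])

        # Toy data: two classes of distributions, each bag a small 2-D Gaussian sample.
        rng = np.random.default_rng(0)
        bags = [rng.normal(loc=c, scale=0.5, size=(30, 2)) for c in (0, 0, 0, 2, 2, 2)]
        labels = np.array([0, 0, 0, 1, 1, 1])

        clf = SVC(kernel="precomputed").fit(gram_matrix(bags, bags), labels)
        test_bag = [rng.normal(loc=2, scale=0.5, size=(30, 2))]
        print(clf.predict(gram_matrix(test_bag, bags)))  # expected: [1]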