19 research outputs found

    Nonparametric “anti-Bayesian” quantile-based pattern classification

    Author's accepted manuscript. Available from 24/06/2021. This is a post-peer-review, pre-copyedit version of an article published in Pattern Analysis and Applications. The final authenticated version is available online at: https://doi.org/10.1007/s10044-020-00903-7

    On achieving near-optimal “Anti-Bayesian” Order Statistics-Based classification for asymmetric exponential distributions

    This paper considers the use of Order Statistics (OS) in the theory of Pattern Recognition (PR). The pioneering work on using OS for classification was presented in [1] for the Uniform distribution, where it was shown that optimal PR can be achieved in a counter-intuitive manner, diametrically opposed to the Bayesian paradigm: by comparing the testing sample to a few points distant from the mean, rather than to the mean itself. In [2], we showed that the results could be extended to a few symmetric distributions within the exponential family. In this paper, we extend these results significantly by considering asymmetric distributions within the exponential family, for some of which even closed-form expressions of the cumulative distribution functions are not available. These distributions include the Rayleigh, Gamma and certain Beta distributions. As in [1] and [2], the new scheme, referred to as Classification by Moments of Order Statistics (CMOS), attains an accuracy very close to the optimal Bayes’ bound, as has been shown both theoretically and by rigorous experimental testing.
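    An illustrative sketch of the anti-Bayesian rule for two one-dimensional classes (not the paper's exact construction): classify by comparing the test sample to empirical quantile points that lie away from each class's mean, toward the opposing class. The quantile position q = 2/3 matches the expected order statistics of two uniform samples; the distribution-specific OS moments derived in the paper would replace it.

    import numpy as np

    def cmos_prototypes(x1, x2, q=2/3):
        # For each class, take the empirical quantile that lies toward
        # the opposing class, i.e., away from that class's own mean.
        if np.mean(x1) <= np.mean(x2):
            return np.quantile(x1, q), np.quantile(x2, 1 - q)
        return np.quantile(x1, 1 - q), np.quantile(x2, q)

    def cmos_classify(x, p1, p2):
        # Anti-Bayesian rule: the nearest distant-quantile point wins,
        # instead of the nearest class mean.
        return 1 if abs(x - p1) <= abs(x - p2) else 2

    # Two overlapping uniform classes; classify a borderline test point.
    rng = np.random.default_rng(1)
    x1, x2 = rng.uniform(0, 2, 500), rng.uniform(1, 3, 500)
    p1, p2 = cmos_prototypes(x1, x2)
    print(cmos_classify(1.4, p1, p2))  # 1, agreeing with the Bayes boundary at 1.5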

    Ultimate Order Statistics-Based Prototype Reduction Schemes

    The objective of Prototype Reduction Schemes (PRSs) and Border Identification (BI) algorithms is to reduce the number of training vectors, while simultaneously attempting to guarantee that the classifier built on the reduced design set performs as well, or nearly as well, as the classifier built on the original design set. In this paper, we push the limit of the field of PRSs to see if we can obtain a classification accuracy comparable to the optimal by condensing the information in the data set into a single training point. We, indeed, demonstrate that such PRSs exist and are attainable, and show that the design and implementation of such schemes work with the recently-introduced paradigm of Order Statistics (OS)-based classifiers. These classifiers, referred to as Classification by Moments of Order Statistics (CMOS), are essentially anti-Bayesian in their modus operandi. In this paper, we demonstrate the power and potential of CMOS to yield single-element PRSs which are either “selective” or “creative”, where in each case we resort to a non-parametric or a parametric paradigm, respectively. We also report a single-feature single-element creative PRS. All of these solutions have been used to achieve classification for real-life data sets from the UCI Machine Learning Repository, where we have followed an approach that is similar to the Naïve-Bayes’ (NB) strategy, although it is essentially of an anti-Naïve-Bayes’ paradigm. The amazing facet of this approach is that the training set can be reduced to a single pattern from each of the classes, which is, in turn, determined by the CMOS features. It is even more fascinating to see that the scheme can be rendered operational by using the information in a single feature of such a single data point. In each of these cases, the accuracy of the proposed PRS-based approach is very close to the optimal Bayes’ bound and is almost comparable to that of the SVM.
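    A hedged sketch of the single-element idea, distinguishing the “creative” and “selective” variants (the quantile position q and the distance measure are illustrative assumptions; the paper derives class- and distribution-specific OS positions):

    import numpy as np

    def creative_prototype(x, q=2/3):
        # "Creative": synthesize a prototype at the target quantile of
        # each feature; it need not coincide with any training sample.
        return np.quantile(x, q, axis=0)

    def selective_prototype(x, q=2/3):
        # "Selective": keep the real training sample that lies closest
        # to the synthetic quantile point.
        target = np.quantile(x, q, axis=0)
        return x[np.argmin(np.linalg.norm(x - target, axis=1))]

    def classify(sample, prototypes):
        # One stored point per class; the nearest prototype decides.
        return min(prototypes, key=lambda c: np.linalg.norm(sample - prototypes[c]))

    With prototypes = {c: creative_prototype(Xc) for each class c with training matrix Xc}, the entire design set reduces to a single vector per class, mirroring the single-element PRS described above.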

    “Anti-Bayesian” Flat and Hierarchical Clustering Using Symmetric Quantiloids

    Accepted version. Paid Open Access.

    Optimal Neural Codes for Natural Stimuli

    The efficient coding hypothesis assumes that biological sensory systems use neural codes that are optimized to best represent the stimuli that occur in their environment. When formulating such an optimization problem for neural codes, two key components must be considered. The first is what types of constraints the neural codes must satisfy. The second is the objective function itself: what is the goal of the neural codes? We seek to provide a systematic framework to address these types of problems. Previous work often assumes one specific set of constraints and analytically or numerically solves the optimization problem. Here we put everything in a unified framework and show that these results can be understood from a much more general perspective. In particular, we provide analytical solutions for a variety of neural noise models and two types of constraints: a range constraint, which specifies the maximum/minimum neural activity, and a metabolic constraint, which upper-bounds the mean neural activity. In terms of objective functions, most common models rely on information-theoretic measures, whereas alternative formulations propose incorporating downstream decoding performance. We systematically evaluate different optimality criteria based upon the L_p reconstruction error of the maximum likelihood decoder. This parametric family of optimality criteria includes special cases such as the information-maximization criterion and the minimization of the mean squared decoding error. We analytically derive the optimal tuning curve of a single neuron in terms of the reconstruction error norm p to encode natural stimuli with an arbitrary input distribution. Under our framework, we can address questions such as: what objective function is the neural code actually using? Under what constraints do the predicted results provide a better fit to the actual data? Using different combinations of objective functions and constraints, we tested our analytical predictions against previously measured characteristics of some early visual systems found in biology. We find that solutions under the metabolic constraint with low values of p provide a better fit to physiological data on early visual perception systems.
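    One special case of the framework sketched above, the information-maximization objective under a range constraint, has a well-known closed form: the optimal monotone tuning curve of a single (noise-free) neuron is the stimulus CDF scaled to the response range, i.e., histogram equalization (cf. Laughlin, 1981). A minimal sketch, with a lognormal stimulus distribution as an illustrative stand-in for natural input statistics:

    import numpy as np

    def infomax_tuning_curve(stimuli, r_max=1.0):
        # Histogram equalization: under infomax with a range constraint
        # (responses confined to [0, r_max]), the optimal monotone tuning
        # curve is the empirical stimulus CDF scaled to that range.
        s = np.sort(np.asarray(stimuli))
        cdf = np.arange(1, s.size + 1) / s.size
        return lambda x: r_max * np.interp(x, s, cdf)

    rng = np.random.default_rng(0)
    stimuli = rng.lognormal(sigma=0.5, size=10_000)  # skewed "natural" input
    tune = infomax_tuning_curve(stimuli)
    print(tune(np.median(stimuli)))  # ~0.5: the median stimulus maps to half-range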

    Modeling decision-making processes using inverse Bayesian estimation

    Waseda University dissertation number: Shin 7962. Waseda University.

    Generalized information theory meets human cognition: Introducing a unified framework to model uncertainty and information search

    Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people’s goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics (Hartley, Quadratic, Tsallis, RĂ©nyi, and more) are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma-Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism.
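    A minimal sketch of the Sharma-Mittal family as standardly defined (order r, degree t, natural-log units), with the limiting cases handled explicitly; the parameter names follow the common convention and may differ from the paper's exact notation:

    import numpy as np

    def sharma_mittal(p, r, t, eps=1e-12):
        # Sharma-Mittal entropy of order r and degree t. Shannon (r,t -> 1),
        # Renyi (t -> 1), Tsallis (t = r), Hartley (r -> 0, t -> 1) and
        # quadratic (r = t = 2) entropies arise as special/limiting cases.
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        if abs(r - 1.0) < eps:
            h = -np.sum(p * np.log(p))            # Shannon entropy
            if abs(t - 1.0) < eps:
                return h
            return (np.exp((1.0 - t) * h) - 1.0) / (1.0 - t)
        s = np.sum(p ** r)                        # power sum of order r
        if abs(t - 1.0) < eps:
            return np.log(s) / (1.0 - r)          # Renyi entropy
        return (s ** ((1.0 - t) / (1.0 - r)) - 1.0) / (1.0 - t)

    p = [0.5, 0.25, 0.25]
    print(sharma_mittal(p, r=1, t=1))  # Shannon (nats)
    print(sharma_mittal(p, r=2, t=1))  # Renyi of order 2
    print(sharma_mittal(p, r=2, t=2))  # quadratic: 1 - sum(p^2) = 0.625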
