
    Distance Measures for Probabilistic Patterns

    Numerical measures of pattern dissimilarity are at the heart of pattern recognition and classification. Applications of pattern recognition grow more sophisticated every year, and consequently we require distance measures for patterns not easily expressible as feature vectors; examples include strings, parse trees, time series, random spatial fields, and random graphs [79], [117]. Distance measures are not arbitrary: they can be effective only when they incorporate information about the problem domain, a direct consequence of the Ugly Duckling theorem [37]. This thesis poses the question: how can the principles of information theory and statistics guide us in constructing distance measures? I examine distance functions for patterns that are maximum-likelihood model estimates for systems that have random inputs but are observed noiselessly. In particular, I look at distance measures for histograms, stationary ARMA time series, and discrete hidden Markov models. I show that for maximum-likelihood model estimates, the L2 distance weighted by the information matrix at the most likely model estimate minimizes the type II classification error for a fixed type I error. I also derive explicit L2 distance measures for ARMA(p, q) time series and discrete hidden Markov models, based on their respective information matrices.
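
    As a concrete illustration of the information-weighted L2 idea, the sketch below computes such a distance between two histogram estimates, i.e. multinomial maximum-likelihood models. The multinomial Fisher information formula is standard; the function names and the choice of reference point are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def fisher_information_multinomial(p):
    """Fisher information matrix of a multinomial model with free
    parameters p[0..k-2] (the last bin is 1 minus the rest):
    I_ij = delta_ij / p_i + 1 / p_k, per observation."""
    free = p[:-1]
    return np.diag(1.0 / free) + 1.0 / p[-1]

def info_l2_distance(p, q, ref):
    """Information-weighted L2 distance between two histogram
    estimates p and q, using the Fisher information evaluated at a
    reference estimate `ref` as the metric tensor."""
    I = fisher_information_multinomial(ref)
    d = (p - q)[:-1]  # work in the (k-1)-dimensional free-parameter space
    return float(np.sqrt(d @ I @ d))

# Two empirical histograms (maximum-likelihood multinomial estimates).
p = np.array([0.50, 0.30, 0.20])
q = np.array([0.40, 0.35, 0.25])
ref = 0.5 * (p + q)  # information evaluated midway: one common, assumed choice
print(info_l2_distance(p, q, ref))
```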

    Infotropism as the underlying principle of perceptual organization

    Whether perceptual organization favors the simplest or the most likely interpretation of a distal stimulus has long been debated; an unbridgeable gulf has seemed to separate these, the Gestalt and Helmholtzian viewpoints. In recent decades, however, the proposal that likelihood and simplicity are two sides of the same coin has been gaining ground, to the extent that their equivalence is now widely assumed. The question that then arises is whether the two principles can be reduced to one. Applying Occam's Razor in this way is particularly desirable because, as things stand, an account referencing one principle alone cannot be completely satisfactory. The present paper argues that unification of the two principles is possible, and that it can be achieved in terms of an incremental notion of 'information seeking' (infotropism). Perceptual processing that is infotropic can be shown to target both simplicity and likelihood. The ability to see perceptual organization as governed by either objective can then be explained by its being an infotropic process. Infotropism can thus be identified as the principle which underlies, and thereby generalizes, the principles of likelihood and simplicity.

    Multiscale Discriminant Saliency for Visual Attention

    Bottom-up saliency, an early stage of human visual attention, can be considered a binary classification problem between center and surround classes. The discriminant power of features for this classification is measured as the mutual information between the features and the distribution over the two classes. Because the estimated discrepancy between the two feature classes depends strongly on the scale considered, multi-scale structure and discriminant power are integrated by employing discrete wavelet features and a hidden Markov tree (HMT). With the wavelet coefficients and HMT parameters, quad-tree-like label structures are constructed and used for maximum a posteriori (MAP) estimation of the hidden class variables at the corresponding dyadic sub-squares. The saliency value for each dyadic square at each scale is then computed from the discriminant-power principle and the MAP labels. Finally, the final saliency map is integrated across multiple scales by an information-maximization rule. Both standard quantitative tools such as NSS, LCC, and AUC and qualitative assessments are used to evaluate the proposed multiscale discriminant saliency method (MDIS) against the well-known information-based saliency method AIM on the Bruce database with eye-tracking data. Simulation results are presented and analyzed to verify the validity of MDIS and to point out its disadvantages for further research directions.
    Comment: 16 pages, ICCSA 2013 - BIOCA session
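
    The discriminant-power term on its own admits a short sketch: estimating the mutual information between a quantized scalar feature and binary center/surround labels from empirical counts. This is a generic plug-in estimator on invented toy data, not the paper's wavelet/HMT pipeline.

```python
import numpy as np

def mutual_information(feature_bins, labels, n_bins):
    """Mutual information I(F; C) in bits between a quantized feature
    and a binary class label (0 = surround, 1 = center), estimated
    from the empirical joint distribution."""
    joint = np.zeros((n_bins, 2))
    for f, c in zip(feature_bins, labels):
        joint[f, c] += 1
    joint /= joint.sum()
    pf = joint.sum(axis=1, keepdims=True)  # marginal over feature bins
    pc = joint.sum(axis=0, keepdims=True)  # marginal over classes
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pf @ pc)[nz])).sum())

# Toy center/surround samples: center responses come from a shifted
# distribution, so the feature carries information about the class.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(2.0, 1.0, 500),   # center
                        rng.normal(0.0, 1.0, 500)])  # surround
labels = np.concatenate([np.ones(500, int), np.zeros(500, int)])
edges = np.linspace(feats.min(), feats.max(), 9)
bins = np.clip(np.digitize(feats, edges) - 1, 0, 8)
print(mutual_information(bins, labels, 9))  # discriminant power in bits
```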

    Bayesian Inference in Processing Experimental Data: Principles and Basic Applications

    This report introduces general ideas and some basic methods of Bayesian probability theory applied to physics measurements. Our aim is to make the reader familiar, through examples rather than rigorous formalism, with concepts such as: model comparison (including the automatic Ockham's Razor filter provided by the Bayesian approach); parametric inference; quantification of the uncertainty about the value of physical quantities, also taking into account systematic effects; the role of marginalization; posterior characterization; predictive distributions; hierarchical modelling and hyperparameters; Gaussian approximation of the posterior and recovery of conventional methods, especially maximum-likelihood and chi-square fits under well-defined conditions; conjugate priors, transformation invariance, and maximum-entropy-motivated priors; and Monte Carlo estimates of expectations, including a short introduction to Markov Chain Monte Carlo methods.
    Comment: 40 pages, 2 figures, invited paper for Reports on Progress in Physics
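
    One of the listed concepts, conjugate priors, fits in a very small worked example: a Gaussian prior on a physical quantity updated with repeated Gaussian measurements of known noise. The update formulas are the standard conjugate result; the numbers are invented for illustration.

```python
import numpy as np

def gaussian_posterior(prior_mu, prior_sigma, data, noise_sigma):
    """Conjugate update: Gaussian prior on a quantity mu, independent
    Gaussian measurements with known noise_sigma.  Precisions add;
    the posterior mean is the precision-weighted average."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_sigma**2 + n / noise_sigma**2)
    post_mu = post_var * (prior_mu / prior_sigma**2
                          + np.sum(data) / noise_sigma**2)
    return post_mu, np.sqrt(post_var)

# Five repeated measurements with known noise sigma = 0.5.
data = np.array([4.9, 5.1, 5.0, 4.8, 5.2])
mu, sigma = gaussian_posterior(prior_mu=0.0, prior_sigma=10.0,
                               data=data, noise_sigma=0.5)
print(f"posterior: {mu:.3f} +/- {sigma:.3f}")
# With a vague prior this recovers the maximum-likelihood answer,
# mean(data) +/- noise_sigma / sqrt(n), as the report discusses.
```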

    MaxEnt assisted MaxLik tomography

    Maximum likelihood estimation is a valuable tool often applied to inverse problems in quantum theory. Estimation from small data sets can, however, have non-unique solutions. We discuss this problem and propose to use Jaynes' maximum entropy principle to single out the most unbiased maximum-likelihood guess.
    Comment: 10 pages, 5 figures, presented at MaxEnt conference in Jackson, WY, 200
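
    A toy classical analogue (deliberately not quantum tomography) of this recipe: when the data constrain only a coarse-graining of a distribution, the maximum-likelihood solution is non-unique, and Jaynes' maximum entropy principle selects the least biased member of the ML set. The scenario and numbers below are invented for illustration.

```python
import numpy as np

# We roll a six-sided die n times but record only whether the face was
# "low" (1-3) or "high" (4-6).  Maximum likelihood fixes P(low) = k/n
# but leaves the six individual face probabilities non-unique; maximum
# entropy spreads the mass uniformly within each observed group.
n, k = 20, 12
p_low = k / n  # ML estimate of the observed coarse-graining
p = np.array([p_low / 3] * 3 + [(1 - p_low) / 3] * 3)
print(p, -np.sum(p * np.log(p)))
# Any other split consistent with P(low) = 0.6 (e.g. all low mass on
# face 1) has the same likelihood but strictly lower entropy.
```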

    Multi-scale Discriminant Saliency with Wavelet-based Hidden Markov Tree Modelling

    Bottom-up saliency, an early stage of human visual attention, can be considered a binary classification problem between centre and surround classes. The discriminant power of features for this classification is measured as the mutual information between the distributions of image features and the corresponding classes. As the estimated discrepancy depends strongly on the scale considered, multi-scale structure and discriminant power are integrated by employing discrete wavelet features and a hidden Markov tree (HMT). With the wavelet coefficients and HMT parameters, quad-tree-like label structures are constructed and used for maximum a posteriori (MAP) estimation of the hidden class variables at the corresponding dyadic sub-squares. A saliency value for each square block at each scale is then computed from the discriminant-power principle. Finally, the final saliency map is integrated across multiple scales by an information-maximization rule. Both standard quantitative tools such as NSS, LCC, and AUC and qualitative assessments are used to evaluate the proposed multi-scale discriminant saliency (MDIS) method against the well-known information-based approach AIM on its released image collection with eye-tracking data. Simulation results are presented and analysed to verify the validity of MDIS and to point out its limitations for further research directions.
    Comment: arXiv admin note: substantial text overlap with arXiv:1301.396
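
    The cross-scale integration step can also be sketched, with a caveat: the abstract names an information-maximization rule but does not spell it out, so the sketch below substitutes an assumed stand-in (upsample each dyadic-scale map to the finest resolution and keep the maximum response per location). The function and data are illustrative only, not the authors' rule.

```python
import numpy as np

def fuse_scales(saliency_maps):
    """Combine per-scale saliency maps, finest first.  Each coarse
    dyadic map is nearest-neighbour upsampled to full resolution and
    the per-location maximum is kept (an assumed fusion rule)."""
    target = saliency_maps[0].shape
    fused = np.full(target, -np.inf)
    for s in saliency_maps:
        ry, rx = target[0] // s.shape[0], target[1] // s.shape[1]
        fused = np.maximum(fused, np.kron(s, np.ones((ry, rx))))
    return fused

# Three dyadic scales of an 8x8 image: 8x8, 4x4 and 2x2 block maps.
maps = [np.random.rand(8, 8), np.random.rand(4, 4), np.random.rand(2, 2)]
print(fuse_scales(maps).shape)  # (8, 8)
```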

    Some procedures for computerized ability testing

    For computerized test systems to be operational, the use of item response theory is a prerequisite. In contrast to classical test theory, item response models parameterize the abilities of the examinees and the properties of the items separately. Hence, when measuring the abilities of examinees, the model implicitly corrects for the item properties, and measurement on an item-independent scale is possible. In addition, item response theory offers the use of test and item information as local reliability indices defined on the ability scale. In this chapter, it is shown how the main features of item response theory have given rise to the development of promising procedures for computerized testing. Among the topics discussed are procedures for item bank calibration, automated test construction, adaptive test administration, generating norm distributions, and diagnosing test scores.
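
    The role of item information in adaptive test administration can be made concrete with the standard two-parameter logistic (2PL) model: an item's Fisher information peaks near its difficulty, and an adaptive procedure administers the item that is most informative at the current ability estimate. The 2PL formulas are standard; the item pool and selection loop below are an illustrative sketch, not taken from the chapter.

```python
import numpy as np

def prob_2pl(theta, a, b):
    """2PL item response function: probability of a correct response
    given ability theta, discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = prob_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

# Adaptive selection: give the item most informative at the current
# ability estimate (hypothetical item pool of (a, b) pairs).
items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
theta_hat = 0.4
best = max(range(len(items)),
           key=lambda i: item_information(theta_hat, *items[i]))
print("administer item", best,
      "info =", round(item_information(theta_hat, *items[best]), 3))
```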