13,483 research outputs found

    The optimality of attaching unlinked labels to unlinked meanings

    Vocabulary learning by children is characterized by many biases. When encountering a new word, children, as well as adults, are biased towards assuming that it means something totally different from the words they already know. To the best of our knowledge, the first mathematical proof of the optimality of this bias is presented here. First, it is shown that this bias is a particular case of the maximization of mutual information between words and meanings. Second, the optimality is proven within a more general information-theoretic framework, where mutual information maximization competes with other information-theoretic principles. The bias is a prediction of modern information theory. The relationship between these information-theoretic principles and the principles of contrast and mutual exclusivity is also shown. Peer reviewed. Postprint (published version).
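    As a rough illustration of the mutual-information argument (a minimal sketch, not the paper's proof; the joint distributions below are assumed purely for the example), one can compare I(W; M) when a new word is attached to a previously unlinked meaning versus an already-named one:

    ```python
    # Minimal sketch: compare I(W; M) for two word-meaning assignments.
    # The joint distributions are illustrative assumptions, not data
    # or notation from the paper.
    import numpy as np

    def mutual_information(joint):
        """I(W; M) in bits for a joint probability matrix p(w, m)."""
        pw = joint.sum(axis=1, keepdims=True)   # marginal over meanings
        pm = joint.sum(axis=0, keepdims=True)   # marginal over words
        mask = joint > 0
        return float((joint[mask] * np.log2(joint[mask] / (pw @ pm)[mask])).sum())

    # A new third word attached to a previously unlinked meaning ...
    unlinked = np.array([[1, 0, 0],
                         [0, 1, 0],
                         [0, 0, 1]]) / 3.0
    # ... versus the same word attached to an already-named meaning.
    linked = np.array([[1, 0],
                       [0, 1],
                       [1, 0]]) / 3.0

    print(mutual_information(unlinked))  # log2(3) ~ 1.585 bits
    print(mutual_information(linked))    # ~ 0.918 bits, strictly lower
    ```

    Attaching the new label to a fresh meaning keeps the word-meaning mapping one-to-one, which is the configuration that maximizes I(W; M) for a fixed number of equally used words.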

    Evolutionary Neural Gas (ENG): A Model of Self Organizing Network from Input Categorization

    Despite their claimed biological plausibility, most self-organizing networks have strict topological constraints, and consequently they cannot take into account a wide range of external stimuli. Furthermore, their evolution is governed by deterministic laws that are often not correlated with the structural parameters and the global status of the network, as would happen in a real biological system. In nature, environmental inputs are noisy and fuzzy, which raises the question of whether emergent behaviour is possible in a network that is not strictly constrained and is subjected to varied inputs. A new model of Evolutionary Neural Gas (ENG) is presented here, free of any topological constraints and trained by probabilistic laws depending on the local distortion errors and the network dimension. The network is considered as a population of nodes that coexist in an ecosystem, sharing local and global resources. These particular features allow the network to quickly adapt to the environment, according to its dimensions. Analysis of the ENG model shows that the net evolves as a scale-free graph, and justifies, in a deeply physical sense, the term "gas" used here. Comment: 16 pages, 8 figures.
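    The abstract does not spell out the probabilistic laws, so the following is only a sketch of the general shape of such a model: a neural-gas-style population whose growth probability rises with a node's local distortion error and falls with network size. The learning rate, resource limit, and spawn rule below are assumptions, not the ENG equations from the paper.

    ```python
    # Minimal sketch of a topology-free neural-gas ecosystem with
    # probabilistic growth; update rules are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    nodes = rng.normal(size=(2, 2))          # initial population of nodes
    errors = np.zeros(len(nodes))            # accumulated local distortion
    eps, max_nodes = 0.05, 30                # learning rate, resource limit

    for _ in range(5000):
        x = rng.normal(size=2)               # noisy environmental input
        d = np.linalg.norm(nodes - x, axis=1)
        w = int(np.argmin(d))                # best-matching node
        errors[w] += d[w] ** 2               # track its local distortion
        nodes[w] += eps * (x - nodes[w])     # move winner toward input

        # Probabilistic growth: the larger a node's share of the total
        # distortion and the smaller the network, the likelier a new
        # node is spawned near it (competition for shared resources).
        p_spawn = (errors[w] / (errors.sum() + 1e-12)) * (1 - len(nodes) / max_nodes)
        if rng.random() < max(p_spawn, 0.0):
            nodes = np.vstack([nodes, nodes[w] + 0.01 * rng.normal(size=2)])
            errors = np.append(errors, 0.0)
            errors[w] *= 0.5                 # spawning consumes local error

    print(len(nodes), "nodes after adaptation")
    ```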

    One-class classifiers based on entropic spanning graphs

    One-class classifiers offer valuable tools to assess the presence of outliers in data. In this paper, we propose a design methodology for one-class classifiers based on entropic spanning graphs. Our approach can also process non-numeric data by means of an embedding procedure. The spanning graph is learned on the embedded input data, and the resulting partition of vertices defines the classifier. The final partition is derived by exploiting a criterion based on mutual information minimization. Here, we compute the mutual information by using a convenient formulation provided in terms of the α-Jensen difference. Once training is completed, a graph-based fuzzy model is constructed in order to associate a confidence level with the classifier decision. The fuzzification process is based only on topological information about the vertices of the entropic spanning graph. As such, the proposed one-class classifier is also suitable for data characterized by complex geometric structures. We provide experiments on well-known benchmarks containing both feature vectors and labeled graphs. In addition, we apply the method to the protein solubility recognition problem, considering several representations of the input samples. Experimental results demonstrate the effectiveness and versatility of the proposed method with respect to other state-of-the-art approaches. Comment: extended and revised version of the paper "One-Class Classification Through Mutual Information Minimization" presented at the 2016 IEEE IJCNN, Vancouver, Canada.
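    A minimal sketch of the underlying estimator: the total edge length of a Euclidean minimum spanning tree is a standard (Hero-Michel style) proxy for the Rényi α-entropy of a sample, and a Jensen-type gap between a pooled sample and its parts can be scored from MST lengths alone. The partition scoring below is an illustration in the spirit of the abstract, not the paper's exact algorithm; function names and constants are assumptions, and the estimator's normalizing constant is dropped (monotone proxy).

    ```python
    # Minimal sketch: Rényi entropy proxy from an entropic spanning
    # graph, and an alpha-Jensen-style gap between two samples.
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    def mst_length(X, gamma=1.0):
        """Sum of MST edge lengths^gamma for sample X (n x d)."""
        D = squareform(pdist(X)) ** gamma
        return minimum_spanning_tree(D).sum()

    def renyi_entropy_proxy(X, alpha=0.5):
        """Monotone proxy for H_alpha via the MST length functional."""
        n, d = X.shape
        gamma = d * (1 - alpha)
        return np.log(mst_length(X, gamma) / n ** alpha) / (1 - alpha)

    def alpha_jensen_difference(A, B, alpha=0.5):
        """Jensen-type gap between the pooled sample and its parts."""
        w = len(A) / (len(A) + len(B))
        return renyi_entropy_proxy(np.vstack([A, B]), alpha) - (
            w * renyi_entropy_proxy(A, alpha)
            + (1 - w) * renyi_entropy_proxy(B, alpha)
        )

    rng = np.random.default_rng(0)
    target = rng.normal(0, 1, size=(80, 2))    # nominal (one-class) data
    outliers = rng.normal(5, 1, size=(20, 2))  # shifted sample
    # A well-separated split typically yields a larger Jensen gap
    # than an arbitrary split of homogeneous data:
    print(alpha_jensen_difference(target, outliers))
    print(alpha_jensen_difference(target[:50], target[50:]))
    ```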