
    An algorithm for learning from hints

    To take advantage of prior knowledge (hints) about the function one wants to learn, we introduce a method that generalizes learning from examples to learning from hints. A canonical representation of hints is defined and illustrated. All hints are represented to the learning process by examples, and examples of the function are treated on equal footing with the rest of the hints. During learning, examples from different hints are selected for processing according to a given schedule. We present two types of schedules: fixed schedules, which specify the relative emphasis of each hint, and adaptive schedules, which are based on how well each hint has been learned so far. Our learning method is compatible with any descent technique.
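
    A minimal sketch of the two schedule types described above, assuming a hypothetical interface in which each hint can produce example batches (sample_examples), report how well it is currently satisfied (estimate_error), and feed any descent technique (descent_step); none of these names come from the paper.

```python
# Sketch of fixed vs. adaptive hint schedules; the hint/model interface
# (sample_examples, estimate_error, descent_step) is a hypothetical
# placeholder, not the paper's API.
import random

def train_with_hints(model, hints, steps, mode="adaptive"):
    weights = [1.0 / len(hints)] * len(hints)      # fixed relative emphasis
    for _ in range(steps):
        if mode == "fixed":
            # Fixed schedule: pick a hint according to a preset emphasis.
            hint = random.choices(hints, weights=weights, k=1)[0]
        else:
            # Adaptive schedule: emphasize the hint learned worst so far.
            errors = [h.estimate_error(model) for h in hints]
            hint = random.choices(hints, weights=errors, k=1)[0]
        # Hint examples and function examples are treated on equal footing.
        batch = hint.sample_examples()
        model.descent_step(batch)                  # any descent technique
    return model
```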

    Maximal codeword lengths in Huffman codes

    The following question about Huffman coding, which is an important technique for compressing data from a discrete source, is considered. If p is the smallest source probability, how long, in terms of p, can the longest Huffman codeword be? It is shown that if p is in the range 0 < p ≤ 1/2, and if K is the unique index such that 1/F(K+3) < p ≤ 1/F(K+2), where F(K) denotes the Kth Fibonacci number, then the longest Huffman codeword for a source whose least probability is p is at most K, and no better bound is possible. Asymptotically, this implies the surprising fact that for small values of p, a Huffman code's longest codeword can be as much as 44 percent larger than that of the corresponding Shannon code.
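
    A small sketch of the bound itself: given the smallest source probability p, it finds the unique index K with 1/F(K+3) < p ≤ 1/F(K+2) by walking up the Fibonacci sequence. The function name and the use of floating point are incidental choices for illustration.

```python
# Compute the bound K described above: for 0 < p <= 1/2, find the unique K
# with F(K+2) <= 1/p < F(K+3); no Huffman codeword for a source whose
# least probability is p can be longer than K.
def max_huffman_codeword_length(p: float) -> int:
    assert 0.0 < p <= 0.5
    inv = 1.0 / p
    fib = [0, 1, 1]                    # fib[k] == F(k), with F(1) = F(2) = 1
    while fib[-1] <= inv:              # extend until F(K+3) > 1/p is reached
        fib.append(fib[-1] + fib[-2])
    k = 1
    while not (fib[k + 2] <= inv < fib[k + 3]):
        k += 1
    return k

# Examples: p = 1/2 gives K = 1, p = 1/3 gives K = 2.
```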

    Diversity and Specialization in Collaborative Swarm Systems

    This paper addresses qualitative and quantitative diversity and specialization issues in the framework of self-organizing, distributed, artificial systems. Both diversity and specialization are obtained via distributed learning from initially homogeneous swarms. While measuring diversity essentially quantifies differences among the individuals, assessing the degree of specialization implies correlating the swarm’s heterogeneity with its overall performance. Starting from a stick-pulling experiment in collective robotics, a task that requires the collaboration of two robots, we abstract and generalize in simulation the task constraints to k robots collaborating sequentially or in parallel. We investigate quantitatively the influence of task constraints and the type of reinforcement signals on diversity and specialization in these collaborative experiments. Results show that, though diversity is not explicitly rewarded in our learning algorithm and there is no explicit communication among agents, the swarm becomes specialized after learning. The degree of specialization is strongly affected by environmental conditions and task constraints, and reveals characteristics related to performance and learning in a more consistent and clearer way than diversity does.
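
    A minimal sketch of one way "differences among the individuals" could be quantified as a diversity measure; using the mean pairwise distance between the robots' learned parameter vectors is an illustrative assumption, not the paper's exact metric.

```python
# Diversity as mean pairwise distance between the agents' learned
# parameter vectors; an illustrative measure, not the paper's definition.
import numpy as np

def swarm_diversity(agent_params: np.ndarray) -> float:
    """agent_params: one row per robot, holding its learned parameters."""
    n = len(agent_params)
    pairwise = [np.linalg.norm(agent_params[i] - agent_params[j])
                for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pairwise)) if pairwise else 0.0
```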

    Maximum Resilience of Artificial Neural Networks

    The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges. In particular, for ANN-enabled self-driving vehicles it is important to establish properties about the resilience of ANNs to noisy or even maliciously manipulated sensory input. We address these challenges by defining resilience properties of ANN-based classifiers as the maximal amount of input or sensor perturbation that is still tolerated. The problem of computing maximal perturbation bounds for ANNs is then reduced to solving mixed integer optimization problems (MIP). A number of MIP encoding heuristics are developed to drastically reduce MIP-solver runtimes, and parallelization of MIP solvers yields an almost linear speed-up in the number of computing cores (up to a certain limit) in our experiments. We demonstrate the effectiveness and scalability of our approach by computing maximal resilience bounds for a number of ANN benchmark sets ranging from typical image recognition scenarios to the autonomous maneuvering of robots.
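
    The reduction relies on encoding the network's pieces as mixed integer constraints. Below is a sketch of the standard big-M encoding of a single ReLU unit, the kind of building block such reductions use; the choice of the PuLP modelling library and of the constant M are assumptions for illustration, not the paper's tooling.

```python
# Big-M MIP encoding of y = max(0, w.x + b): in every feasible solution
# y equals the ReLU output, provided |w.x + b| <= M. PuLP is used here
# only as a convenient, generic MIP front end.
import pulp

def encode_relu(prob, x_vars, w, b, M=1e3, name="n0"):
    pre = pulp.lpSum(wi * xi for wi, xi in zip(w, x_vars)) + b
    y = pulp.LpVariable(f"y_{name}", lowBound=0)      # ReLU output, y >= 0
    d = pulp.LpVariable(f"d_{name}", cat="Binary")    # 1 iff the unit is active
    prob += y >= pre                                  # y >= w.x + b
    prob += y <= pre + M * (1 - d)                    # tight when d == 1
    prob += y <= M * d                                # forces y == 0 when d == 0
    return y
```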

    Deferring the learning for better generalization in radial basis neural networks

    Proceedings of: International Conference on Artificial Neural Networks — ICANN 2001, Vienna, Austria, August 21–25, 2001.
    The level of generalization of neural networks is heavily dependent on the quality of the training data. That is, some of the training patterns can be redundant or irrelevant. It has been shown that with careful dynamic selection of training patterns, better generalization performance may be obtained. Nevertheless, generalization is carried out independently of the novel patterns to be approximated. In this paper, we present a learning method that automatically selects the training patterns most appropriate to the new sample to be predicted. The proposed method has been applied to Radial Basis Neural Networks, whose generalization capability is usually very poor. The learning strategy slows down the response of the network in the generalization phase. However, this does not introduce a significant limitation in the application of the method, because of the fast training of Radial Basis Neural Networks.
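
    A minimal sketch of the deferred idea described above: the training patterns most relevant to each new sample are selected only at prediction time and then used to fit a small radial basis model. The k-nearest selection rule and the Gaussian width are illustrative assumptions, not the paper's exact selection scheme.

```python
# Lazy ("deferred") RBF prediction: select training patterns close to the
# query, fit a local Gaussian RBF model on that subset, then evaluate it.
import numpy as np

def deferred_rbf_predict(X_train, y_train, x_query, k=20, width=1.0):
    k = min(k, len(X_train))
    # 1. Select the k training patterns closest to the new sample.
    idx = np.argsort(np.linalg.norm(X_train - x_query, axis=1))[:k]
    Xs, ys = X_train[idx], y_train[idx]
    # 2. Fit a Gaussian RBF model with the selected patterns as centres.
    G = np.exp(-np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2) ** 2
               / (2 * width ** 2))
    w = np.linalg.solve(G + 1e-8 * np.eye(k), ys)     # small ridge for stability
    # 3. Evaluate the local model at the query point.
    g = np.exp(-np.linalg.norm(Xs - x_query, axis=1) ** 2 / (2 * width ** 2))
    return float(g @ w)
```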

    Learning and Measuring Specialization in Collaborative Swarm Systems

    This paper addresses qualitative and quantitative diversity and specialization issues in the framework of self-organizing, distributed, artificial systems. Both diversity and specialization are obtained via distributed learning from initially homogeneous swarms. While measuring diversity essentially quantifies differences among the individuals, assessing the degree of specialization implies correlating the swarm’s heterogeneity with its overall performance. Starting from the stick-pulling experiment in collective robotics, a task that requires the collaboration of two robots, we abstract and generalize in simulation the task constraints to k robots collaborating sequentially or in parallel. We investigate quantitatively the influence of task constraints and types of reinforcement signals on performance, diversity, and specialization in these collaborative experiments. Results show that, though diversity is not explicitly rewarded in our learning algorithm, even in scenarios without explicit communication among agents the swarm becomes specialized after learning. The degrees of both diversity and specialization are strongly affected by environmental conditions and task constraints. While the specialization measure reveals characteristics related to performance and learning in a clearer way than diversity does, the latter measure appears to be less sensitive to different noise conditions and learning parameters.
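
    A minimal sketch of the specialization idea described above: correlate the swarm's heterogeneity with its overall performance across independent runs. Using the Pearson coefficient for this correlation is an illustrative assumption, not necessarily the paper's exact measure.

```python
# Specialization as the correlation between heterogeneity and performance
# observed over several runs; a positive correlation indicates that the
# acquired diversity is actually exploited by the swarm.
import numpy as np

def specialization(heterogeneity, performance) -> float:
    """heterogeneity[i], performance[i]: diversity measure and task
    performance recorded in run i."""
    return float(np.corrcoef(heterogeneity, performance)[0, 1])
```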

    Information capacity of the Hopfield model

    On the K-Winners-Take-All Network

    We present and rigorously analyze a generalization of the Winner-Take-All Network: the K-Winners-Take-All Network. This network identifies the K largest of a set of N real numbers. The network model used is the continuous Hopfield model.
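
    A small simulation sketch in the spirit of such a network: each unit is driven by its own input plus a global inhibitory signal that pushes the total output towards K, so that at equilibrium the K units with the largest inputs are the only ones switched on. The particular gains, sigmoid steepness, and Euler integration are illustrative assumptions, not the exact network analyzed in the paper.

```python
# Continuous K-winners-take-all dynamics with global inhibition; after the
# dynamics settle, the units whose output exceeds 0.5 are the K "winners".
import numpy as np

def kwta(inputs, K, steps=5000, dt=0.01, gain=20.0, inhibition=2.0):
    a = np.asarray(inputs, dtype=float)
    u = np.zeros_like(a)                            # internal unit states
    for _ in range(steps):
        v = 1.0 / (1.0 + np.exp(-gain * u))         # unit outputs in (0, 1)
        # Each unit gets its input minus feedback keeping total output near K.
        u = u + dt * (-u + a - inhibition * (v.sum() - K))
    v = 1.0 / (1.0 + np.exp(-gain * u))
    return v > 0.5

# Example: kwta([0.3, 0.9, 0.1, 0.7, 0.5], K=2) marks the units with 0.9 and 0.7.
```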