
    An algorithm for learning from hints

    To take advantage of prior knowledge (hints) about the function one wants to learn, we introduce a method that generalizes learning from examples to learning from hints. A canonical representation of hints is defined and illustrated. All hints are represented to the learning process by examples, and examples of the function are treated on an equal footing with the rest of the hints. During learning, examples from different hints are selected for processing according to a given schedule. We present two types of schedules: fixed schedules, which specify the relative emphasis of each hint, and adaptive schedules, which are based on how well each hint has been learned so far. Our learning method is compatible with any descent technique.
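The two schedule types can be sketched as simple selection rules. This is a hedged illustration only: the function names and the error-proportional sampling rule are assumptions for the sketch, not the paper's exact formulas.

```python
import numpy as np

_rng = np.random.default_rng(0)

def next_hint_fixed(weights, rng=_rng):
    # Fixed schedule: each hint is selected with a preassigned relative
    # emphasis that does not change during training.
    p = np.asarray(weights, dtype=float)
    return int(rng.choice(len(p), p=p / p.sum()))

def next_hint_adaptive(errors, rng=_rng):
    # Adaptive schedule: emphasis tracks how poorly each hint is currently
    # learned, so badly-learned hints receive more examples.
    e = np.asarray(errors, dtype=float)
    return int(rng.choice(len(e), p=e / e.sum()))
```

With errors [0, 0, 1] the adaptive rule always picks the third hint, mirroring the idea that emphasis shifts to whatever has been learned worst so far.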

    Maximal codeword lengths in Huffman codes

    The following question about Huffman coding, which is an important technique for compressing data from a discrete source, is considered. If p is the smallest source probability, how long, in terms of p, can the longest Huffman codeword be? It is shown that if p is in the range 0 < p <= 1/2, and if K is the unique index such that 1/F_{K+3} < p <= 1/F_{K+2}, where F_K denotes the Kth Fibonacci number, then the longest Huffman codeword for a source whose least probability is p is at most K, and no better bound is possible. Asymptotically, this implies the surprising fact that for small values of p, a Huffman code's longest codeword can be as much as 44 percent larger than that of the corresponding Shannon code.
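    The Fibonacci bound can be evaluated directly. The sketch below (helper names are mine; the indexing follows the abstract, with F_1 = F_2 = 1) finds the unique K for a given smallest probability p:

```python
def fib(n):
    # Fibonacci numbers with F_1 = F_2 = 1.
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def max_codeword_bound(p):
    # Unique K with 1/F_{K+3} < p <= 1/F_{K+2}; the longest Huffman
    # codeword for smallest source probability p is at most K.
    K = 1
    while not (1 / fib(K + 3) < p <= 1 / fib(K + 2)):
        K += 1
    return K
```

For example, p = 1/2 gives K = 1 (a two-symbol source needs only one-bit codewords), while p = 1/5 gives K = 3.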

    Maximum Resilience of Artificial Neural Networks

    The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges. In particular, for ANN-enabled self-driving vehicles it is important to establish properties about the resilience of ANNs to noisy or even maliciously manipulated sensory input. We address these challenges by defining resilience properties of ANN-based classifiers as the maximal amount of input or sensor perturbation that is still tolerated. The problem of computing maximal perturbation bounds for ANNs is then reduced to solving mixed integer optimization problems (MIP). A number of MIP encoding heuristics are developed for drastically reducing MIP-solver runtimes, and parallelization of MIP-solvers results in an almost linear speed-up in the number of computing cores (up to a certain limit) in our experiments. We demonstrate the effectiveness and scalability of our approach by computing maximal resilience bounds for a number of ANN benchmark sets ranging from typical image recognition scenarios to the autonomous maneuvering of robots.
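    The reduction to MIP rests on encoding each ReLU unit with mixed-integer constraints. A standard textbook big-M formulation (the paper's exact encoding and heuristics may differ) models y = max(0, x) with a binary indicator \delta and a sufficiently large constant M:

```latex
y \ge x, \qquad y \ge 0, \qquad y \le x + M(1-\delta), \qquad y \le M\delta, \qquad \delta \in \{0,1\}
```

When \delta = 1 the constraints force y = x (the unit is active), and when \delta = 0 they force y = 0 and x \le 0, so every feasible assignment corresponds to a valid ReLU evaluation.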

    Deferring the learning for better generalization in radial basis neural networks

    Proceedings of: International Conference on Artificial Neural Networks (ICANN 2001), Vienna, Austria, August 21–25, 2001. The level of generalization of neural networks is heavily dependent on the quality of the training data; some of the training patterns can be redundant or irrelevant. It has been shown that careful dynamic selection of training patterns may yield better generalization performance. Nevertheless, such selection is usually carried out independently of the novel patterns to be approximated. In this paper, we present a learning method that automatically selects the training patterns most appropriate for each new sample to be predicted. The proposed method has been applied to Radial Basis Neural Networks, whose generalization capability is usually very poor. The learning strategy slows down the response of the network in the generalization phase. However, this does not introduce a significant limitation in the application of the method, because of the fast training of Radial Basis Neural Networks.
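    The core idea, deferring training until the query is known, can be sketched as a lazy RBF fit on the training patterns nearest the query. This is a hedged illustration: the nearest-neighbour selection, the shared width sigma, and the function name are my assumptions, not the paper's exact procedure.

```python
import numpy as np

def lazy_rbf_predict(X_train, y_train, x_query, k=10, sigma=0.2):
    # Deferred learning: select the k training patterns nearest to the
    # query, then fit a small RBF network on just those patterns.
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]
    Xs, ys = X_train[idx], y_train[idx]
    # Selected patterns double as Gaussian centres; output weights are
    # obtained by linear least squares on the local design matrix.
    sq = ((Xs[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-sq / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(Phi, ys, rcond=None)
    phi_q = np.exp(-((Xs - x_query) ** 2).sum(-1) / (2 * sigma ** 2))
    return float(phi_q @ w)
```

Because the network is rebuilt per query from only local patterns, prediction is slower than a pre-trained network, but, as the abstract notes, the fast training of RBF networks keeps this overhead small.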

    Data-driven-based vector space decomposition modeling of multiphase induction machines

    For contemporary variable-speed electric drives, the accuracy of the machine's mathematical model is critical for optimal control performance. Phase variables of multiphase machines are preferably decomposed into multiple orthogonal subspaces based on vector space decomposition (VSD). In the available literature, work on identifying the correlation between the states governed by the dynamic equations and the parameter estimates of the different subspaces of a multiphase IM remains scarce, especially under unbalanced conditions, where the effect of the secondary subspaces becomes significant. Most available literature has relied on a simple RL circuit representation to model these secondary subspaces. To this end, this paper presents an effective data-driven space harmonic model for n-phase IMs, using sparsity-promoting techniques and machine learning for nonlinear dynamical systems to discover the IM governing equations. Moreover, the proposed approach is computationally efficient, and it precisely identifies both the electrical and mechanical dynamics of all subspaces of an IM using a single transient startup run. Additionally, the derived model can be reformulated into the standard canonical form of the induction machine model to easily extract the parameters of all subspaces based on online measurements. Eventually, the proposed modeling approach is experimentally validated using a 1.5 Hp asymmetrical six-phase induction machine.
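    The sparsity-promoting discovery of governing equations is in the spirit of SINDy-style regression. Below is a minimal sketch of sequentially thresholded least squares on a toy one-state system; the candidate library and threshold are illustrative, and the paper's library for the IM subspaces is of course far richer.

```python
import numpy as np

def stlsq(Theta, dxdt, threshold=0.1, iters=10):
    # Sequentially thresholded least squares: the sparsity-promoting
    # regression commonly used to discover governing equations from data.
    xi, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            # Refit only the surviving library terms.
            xi[big], *_ = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)
    return xi

# Toy demonstration: recover dx/dt = -2 x from trajectory data.
t = np.linspace(0.0, 2.0, 200)
x = np.exp(-2.0 * t)                                    # solution of dx/dt = -2 x
dxdt = -2.0 * x
Theta = np.column_stack([np.ones_like(x), x, x ** 2])   # candidate library [1, x, x^2]
xi = stlsq(Theta, dxdt)                                 # expect approximately [0, -2, 0]
```

The same mechanism, applied to measured currents and speed from a single transient startup, is what allows the subspace dynamics to be extracted without assuming a fixed RL structure.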