
    Modelling Identity Rules with Neural Networks

    In this paper, we show that standard feed-forward and recurrent neural networks fail to learn abstract patterns based on identity rules. We propose Repetition Based Pattern (RBP) extensions to neural network structures that solve this problem and answer, as well as raise, questions about integrating structures for inductive bias into neural networks. Examples of abstract patterns are the sequence patterns ABA and ABB, where A and B can be any object. These were introduced by Marcus et al. (1999), who also found that 7-month-old infants recognise these patterns in sequences over an unfamiliar vocabulary, while simple recurrent neural networks do not. This result has been contested in the literature but is confirmed by our experiments. We also show that the inability to generalise extends to different, previously untested settings. We propose a new approach to modifying standard neural network architectures, called Repetition Based Patterns (RBP), with different variants for classification and prediction. Our experiments show that neural networks with the appropriate RBP structure achieve perfect classification and prediction performance on synthetic data, including mixed concrete and abstract patterns. RBP also improves neural network performance in experiments with real-world sequence prediction tasks. We discuss these findings in terms of challenges for neural network models and identify consequences of this result for developing inductive biases for neural network learning.
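    To make the task concrete, the following is a minimal sketch of generating ABA/ABB identity-rule sequences and augmenting the input with repetition-based features. It is not the authors' implementation: the vocabulary size, one-hot encoding, and the pairwise equality-indicator form of the features are assumptions made here for illustration, motivated by the idea that RBP structures make token identity directly visible to the network.

```python
import numpy as np

# Hypothetical vocabulary of integer tokens, one-hot encoded.
# The paper's actual vocabularies and encodings may differ.
VOCAB_SIZE = 12

def make_sequence(pattern, rng):
    """Sample a 3-token sequence following an abstract pattern ('ABA' or 'ABB')."""
    a, b = rng.choice(VOCAB_SIZE, size=2, replace=False)
    tokens = {"A": a, "B": b}
    return np.array([tokens[p] for p in pattern])

def one_hot(seq):
    """Flatten a token sequence into concatenated one-hot vectors."""
    out = np.zeros((len(seq), VOCAB_SIZE))
    out[np.arange(len(seq)), seq] = 1.0
    return out.ravel()

def repetition_features(seq):
    """Assumed RBP-style features: an equality indicator for every token
    pair, exposing identity relations that a plain network must otherwise
    infer from arbitrary one-hot codes."""
    n = len(seq)
    return np.array([float(seq[i] == seq[j])
                     for i in range(n) for j in range(i + 1, n)])

rng = np.random.default_rng(0)
seq = make_sequence("ABA", rng)
x = np.concatenate([one_hot(seq), repetition_features(seq)])
# For ABA the features are [0, 1, 0]: only positions 1 and 3 repeat,
# and this holds regardless of which concrete tokens play A and B.
```

    Because the equality indicators are identical for every instantiation of ABA, a classifier over the augmented input can generalise to unseen vocabularies, which is exactly where the unaugmented networks fail.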

    Enhancement of speed and efficiency of an Internet based gear design optimisation

    An internet-based gear design optimisation program has been developed to allow geographically dispersed teams to collaborate over the internet. The optimisation program implements a genetic algorithm. A novel methodology is presented that improves the execution speed of the optimisation program by integrating artificial neural networks into the system. The paper also proposes a method for improving the performance of the back-propagation learning algorithm: the output data patterns are rescaled so that they lie slightly inside the two extreme values of the activation function's full range. Experimental tests show a reduction in execution time of approximately 50%, as well as improvements in the training and generalisation errors and in the network's rate of learning.
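    As a sketch of the target-rescaling idea, assuming a logistic sigmoid output with range [0, 1] and illustrative margins of 0.1 (the paper's exact margins and activation function are not given here), targets can be mapped away from the asymptotes where sigmoid gradients vanish:

```python
import numpy as np

def rescale_targets(y, lo=0.1, hi=0.9):
    """Map targets from the sigmoid's full range [0, 1] into [lo, hi],
    keeping them away from the saturated tails of the activation.
    The margins 0.1/0.9 are illustrative assumptions."""
    return lo + (hi - lo) * y

def restore_targets(y_scaled, lo=0.1, hi=0.9):
    """Invert the rescaling when interpreting network outputs."""
    return (y_scaled - lo) / (hi - lo)

y = np.array([0.0, 0.5, 1.0])
print(rescale_targets(y))                    # [0.1 0.5 0.9]
print(restore_targets(rescale_targets(y)))   # [0.  0.5 1. ]
```

    Keeping targets off the extremes avoids driving the weights toward the regions where the sigmoid derivative approaches zero, which is one plausible reason such rescaling speeds up back-propagation training.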