
    The Geometry of Stimulus Control

    Many studies, both in ethology and comparative psychology, have shown that animals react to modifications of familiar stimuli. This phenomenon is often referred to as generalisation. Most modifications lead to a decrease in responding, but to certain new stimuli an increase in responding is observed. This holds for both innate and learned behaviour. Here we propose a heuristic approach to stimulus control, or stimulus selection, with the aim of explaining these phenomena. The model has two key elements. First, we choose the receptor level as the fundamental stimulus space: each stimulus is represented as the pattern of activation it induces in the sense organs. Second, in this space we introduce a simple measure of `similarity' between stimuli by calculating how their activation patterns overlap. The main advantage we recognise in this approach is that the generalisation of acquired responses emerges from a few simple principles grounded in how animals actually perceive stimuli. Many traditional problems that face theories of stimulus control (e.g. the Spence-Hull theory of gradient interaction or ethological theories of stimulus summation) do not arise in the present framework. These problems include the amount of generalisation along different dimensions, peak-shift phenomena (with respect to both positive and negative shifts), intensity generalisation, and generalisation after conditioning on two positive stimuli.
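    The overlap-based similarity the abstract describes can be illustrated with a minimal sketch. The normalisation by total activation below is an assumption for illustration; the paper's exact measure may differ.

    ```python
    import numpy as np

    def overlap_similarity(a, b):
        """Similarity between two stimuli represented as receptor
        activation patterns: the shared activation (element-wise
        minimum) normalised by the larger total activation.
        This normalisation is an illustrative choice, not
        necessarily the paper's exact definition."""
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        overlap = np.minimum(a, b).sum()
        total = max(a.sum(), b.sum())
        return overlap / total if total > 0 else 0.0

    # Identical activation patterns overlap fully:
    print(overlap_similarity([1, 0, 2], [1, 0, 2]))  # 1.0
    # Disjoint patterns do not overlap at all:
    print(overlap_similarity([1, 0, 0], [0, 1, 0]))  # 0.0
    ```

    Under a measure of this kind, a modified stimulus that still activates most of the same receptors remains similar to the training stimulus, which is the sense in which generalisation falls out of the representation itself.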

    Factors for the Generalisation of Identity Relations by Neural Networks

    Many researchers implicitly assume that neural networks learn relations and generalise them to new, unseen data. It has been shown recently, however, that the generalisation of feed-forward networks fails for identity relations. The proposed solution for this problem is to create an inductive bias with Differential Rectifier (DR) units. In this work we explore whether various factors in the neural network architecture and learning process make a difference to generalisation on equality detection, for networks without and with DR units, in early and mid fusion architectures. In experiments with synthetic data we find effects of the number of hidden layers, the activation function, and the data representation. The training set size relative to the total possible set of vectors also makes a difference. However, without DR units the accuracy never exceeds 61%, against a chance level of 50%. DR units improve generalisation in all tasks and lead to almost perfect test accuracy in the Mid Fusion setting. Thus, DR units seem to be a promising approach for creating generalisation abilities that standard networks lack.
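    A minimal sketch of the idea behind DR units, assuming (as the name "Differential Rectifier" suggests) that each unit outputs the rectified, i.e. absolute, difference of a pair of corresponding inputs; details of how the pairs are wired into the fusion architectures are in the paper itself.

    ```python
    import numpy as np

    def dr_units(x1, x2):
        """Differential Rectifier (DR) units, sketched as the
        element-wise absolute difference of two input vectors.
        For identical inputs every DR output is zero, so a
        downstream layer can detect identity from this signal
        alone -- the inductive bias the abstract refers to."""
        x1 = np.asarray(x1, dtype=float)
        x2 = np.asarray(x2, dtype=float)
        return np.abs(x1 - x2)

    x = np.array([0.3, 0.7, 0.1])
    # Identical pair: all DR outputs are zero.
    print(dr_units(x, x).sum())                 # 0.0
    # Differing pair: nonzero DR output flags the mismatch.
    print(dr_units(x, [0.3, 0.2, 0.1]).sum())   # 0.5
    ```

    The point of such units is that equality becomes linearly detectable for *any* input vector, seen or unseen, which is why they generalise where an unconstrained feed-forward network does not.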