6,526 research outputs found

    An efficient initialization scheme for the self-organizing feature map algorithm

    [[abstract]]It is often reported in the technical literature that the success of self-organizing feature map formation depends critically on the initial weights and on the selection of the algorithm's main parameters, namely the learning-rate parameter and the neighborhood function. In this paper, we propose an efficient initialization scheme to construct an initial map. We then use the self-organizing feature map algorithm to make small subsequent adjustments so as to improve the accuracy of the initial map. Two data sets are tested to illustrate the performance of the proposed method.[[conferencetype]]International[[conferencedate]]19990710~19990716[[booktype]]Print[[conferencelocation]]Washington, DC, US

    Improving the self-organizing feature map algorithm using an efficient initialization scheme

    [[abstract]]It is often reported in the technical literature that the success of self-organizing feature map formation depends critically on the initial weights and on the selection of the algorithm's main parameters (i.e. the learning-rate parameter and the neighborhood set). These are usually determined by trial and error; consequently, time-consuming retraining procedures often have to be carried out before a neighborhood-preserving feature map is obtained. In this paper, we propose an efficient initialization scheme to construct an initial map. We then use the self-organizing feature map algorithm to make small subsequent adjustments so as to improve the accuracy of the initial map. Several data sets are tested to illustrate the performance of the proposed method.[[notice]]Correction completed
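The abstracts above do not spell out the initialization construction itself. One widely used scheme of this kind places the initial codebook on a grid spanned by the first two principal components of the data, after which a short SOM run makes the small subsequent adjustments the abstracts describe. Below is a minimal sketch assuming such a PCA-style linear initialization; the function names, grid size, and learning schedule are illustrative, not the papers' exact method.

```python
import numpy as np

def init_map_linear(data, rows, cols):
    """Initialize SOM weights on a grid spanned by the two leading
    principal components of the data (a common initialization scheme;
    not necessarily the one proposed in the papers above)."""
    mean = data.mean(axis=0)
    # Principal directions via SVD of the centered data.
    _, s, vt = np.linalg.svd(data - mean, full_matrices=False)
    gi = np.linspace(-1.0, 1.0, rows)   # grid coordinates along PC1
    gj = np.linspace(-1.0, 1.0, cols)   # grid coordinates along PC2
    scale0 = s[0] / np.sqrt(len(data))  # std of data along PC1
    scale1 = s[1] / np.sqrt(len(data))  # std of data along PC2
    w = np.empty((rows, cols, data.shape[1]))
    for i in range(rows):
        for j in range(cols):
            w[i, j] = mean + gi[i] * scale0 * vt[0] + gj[j] * scale1 * vt[1]
    return w

def som_refine(w, data, epochs=5, lr0=0.1, sigma0=1.0):
    """A few standard SOM passes to fine-tune the initial map."""
    rows, cols, _ = w.shape
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    rng = np.random.default_rng(0)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                      # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.3)      # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            d = ((w - x) ** 2).sum(axis=2)               # distances to all units
            bi, bj = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)             # pull neighborhood toward x
    return w

# Demo on synthetic data.
rng = np.random.default_rng(1)
data = rng.normal(size=(200, 3))
w0 = init_map_linear(data, 5, 5)
w = som_refine(w0.copy(), data)

def quantization_error(w, data):
    return float(np.mean([((w - x) ** 2).sum(axis=2).min() for x in data]))
```

Because the initial map already approximates the data's principal plane, only a short, low-learning-rate refinement is needed, which is the time saving both abstracts claim over random initialization.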

    An Adaptive Locally Connected Neuron Model: Focusing Neuron

    This paper presents a new artificial neuron model capable of learning its receptive field in the topological domain of its inputs. The model provides adaptive, differentiable local connectivity (plasticity) applicable to any domain, and requires no tool other than the backpropagation algorithm to learn the parameters that control the receptive field locations and apertures. This research explores whether this ability makes the neuron focus on informative inputs and yields any advantage over fully connected neurons. The experiments include tests of focusing-neuron networks with one or two hidden layers on synthetic and well-known image recognition data sets. The results demonstrate that focusing neurons can move their receptive fields towards more informative inputs. In the simple two-hidden-layer networks, the focusing layers outperformed the dense layers in the classification of the 2D spatial data sets. Moreover, the focusing networks performed better than the dense networks even when 70% of the weights were pruned. Tests on convolutional networks revealed that using focusing layers instead of dense layers for the classification of convolutional features may work better on some data sets. Comment: 45 pages, a national patent filed, submitted to Turkish Patent Office, No: -2017/17601, Date: 09.11.201
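The core idea, a neuron whose receptive field center is itself a differentiable parameter, can be illustrated with a toy example: inputs are weighted by a Gaussian envelope over their topological positions, and the envelope's center is trained by gradient descent alongside the ordinary weights. This is an assumption-laden sketch, not the paper's formulation: the aperture sigma is kept fixed for simplicity (the paper also learns apertures), and the choice of index 7 as the informative input is arbitrary.

```python
import numpy as np

# Toy data: 10 inputs, but the target depends on input index 7 only.
rng = np.random.default_rng(0)
N, D, INFORMATIVE = 200, 10, 7
X = rng.normal(size=(N, D))
t = 2.0 * X[:, INFORMATIVE]

pos = np.arange(D, dtype=float)    # topological positions of the inputs
mu, sigma = 2.0, 3.0               # receptive-field center and (fixed) aperture
w = rng.normal(scale=0.1, size=D)  # ordinary connection weights

def forward(X, mu, sigma, w):
    m = np.exp(-((pos - mu) ** 2) / (2 * sigma ** 2))  # differentiable focus mask
    return (X * (m * w)).sum(axis=1), m

lr = 0.05
y0, _ = forward(X, mu, sigma, w)
loss0 = float(np.mean((y0 - t) ** 2))
for _ in range(500):
    y, m = forward(X, mu, sigma, w)
    g = 2.0 * (y - t) / N                                   # dL/dy per sample
    dw = (g[:, None] * X * m).sum(axis=0)                   # dL/dw
    dmu = float((g[:, None] * X * (w * m)
                 * (pos - mu) / sigma ** 2).sum())          # dL/dmu
    w -= lr * dw
    mu -= lr * dmu
y, m = forward(X, mu, sigma, w)
loss = float(np.mean((y - t) ** 2))
```

After training, the effective connectivity `|w * m|` peaks at the informative input, mirroring the paper's observation that focusing neurons move their receptive fields toward informative inputs; the backpropagation-only training works because the mask is differentiable in `mu`.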