4 research outputs found

    An Improved Back Propagation Learning Algorithm using Second Order Methods with Gain Parameter

    The Back Propagation (BP) algorithm is one of the oldest learning techniques used by artificial neural networks (ANN) and has been applied successfully to many practical problems. However, the algorithm still has drawbacks: it easily becomes stuck at local minima and needs a long time to converge to an acceptable solution. The recent introduction of Second Order methods has brought a significant improvement to learning in BP, but these methods still suffer from slow convergence and high complexity. To overcome these limitations, this research proposes a modified approach for BP that combines two Second Order methods, Conjugate Gradient and Quasi-Newton, with a ‘gain’ parameter. The performance of the proposed approach is evaluated in terms of the lowest number of epochs, the lowest CPU time and the highest accuracy on five benchmark classification datasets: Glass, Horse, 7Bit Parity, Indian Liver Patient and Lung Cancer. The results show that the proposed Second Order methods with ‘gain’ perform better than the standard BP algorithm.
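
    As a rough illustration of where a ‘gain’ parameter can enter back propagation, here is a minimal sketch in which each sigmoid activation carries a trainable gain that scales its slope, and the gain receives its own gradient update. This is an assumption-laden toy: the network shape, the data, the learning rate and the plain gradient-descent step (standing in for the Conjugate Gradient and Quasi-Newton updates the abstract actually describes) are all illustrative, not the paper's method.

    ```python
    import numpy as np

    def sigmoid(x, gain):
        # Logistic activation whose slope is scaled by a 'gain' parameter.
        return 1.0 / (1.0 + np.exp(-gain * x))

    # Toy data and a one-hidden-layer network (shapes chosen arbitrarily).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(32, 4))
    y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)
    W1, W2 = rng.normal(size=(4, 6)), rng.normal(size=(6, 1))
    g1, g2 = 1.0, 1.0            # per-layer gains, initialised to 1
    lr, n = 0.1, len(X)

    for epoch in range(500):
        # Forward pass through both gained sigmoids.
        a1 = sigmoid(X @ W1, g1)
        a2 = sigmoid(a1 @ W2, g2)

        # Backward pass: the sigmoid derivative is gain * a * (1 - a),
        # so the gain rescales every delta term.
        err = a2 - y                      # dE/da2 for squared error
        d2 = err * g2 * a2 * (1 - a2)
        d1 = (d2 @ W2.T) * g1 * a1 * (1 - a1)

        # First-order updates for the weights and the gains alike.
        W2 -= lr * (a1.T @ d2) / n
        W1 -= lr * (X.T @ d1) / n
        g2 -= lr * np.sum(err * a2 * (1 - a2) * (a1 @ W2)) / n
        g1 -= lr * np.sum((d2 @ W2.T) * a1 * (1 - a1) * (X @ W1)) / n

    print("final mse:", float(np.mean((a2 - y) ** 2)))
    ```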

    An Optimized Back Propagation Learning Algorithm with Adaptive Learning Rate

    Back Propagation (BP) is a commonly used algorithm for training multilayer feed-forward artificial neural networks. However, BP is inherently slow to learn and sometimes becomes trapped at local minima. These problems occur mainly because of a constant, non-optimum learning rate (a fixed step size): the learning rate is set to an initial value before training begins and never changes. This fixed learning rate often leads the BP network towards failure during steepest descent. Therefore, to overcome this limitation of BP, this paper introduces an improvement to back propagation gradient descent with an adaptive learning rate (BPGD-AL) that changes the value of the learning rate locally during the learning process. The simulation results on selected benchmark datasets show that the adaptive learning rate significantly improves the learning efficiency of the Back Propagation algorithm.
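
    The abstract does not spell out the BPGD-AL update rule, so the sketch below uses one standard adaptive-rate heuristic, the ‘bold driver’ (grow the step while the loss keeps falling; shrink it and undo the step when the loss rises), purely as an assumed stand-in. The function name and the grow/shrink constants are illustrative.

    ```python
    def train_adaptive_lr(grad_fn, loss_fn, w, lr=0.01,
                          grow=1.05, shrink=0.5, epochs=100):
        # Gradient descent with a 'bold driver' style adaptive step size.
        prev_loss = loss_fn(w)
        for _ in range(epochs):
            w_new = w - lr * grad_fn(w)
            loss = loss_fn(w_new)
            if loss <= prev_loss:
                w, prev_loss = w_new, loss   # accept the step...
                lr *= grow                   # ...and speed up
            else:
                lr *= shrink                 # reject the step, slow down
        return w

    # Usage on a 1-D quadratic with its minimum at w = 3.
    loss = lambda w: (w - 3.0) ** 2
    grad = lambda w: 2.0 * (w - 3.0)
    print(round(train_adaptive_lr(grad, loss, w=0.0), 3))  # ~3.0
    ```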

    Multiple Instance Learning for Breast Cancer Magnetic Resonance Imaging


    Group-based meta-classification

    Virtually all existing classification techniques label one sample at a time. In this paper, we highlight the potential benefits of group-based classification (GBC), where the classifier labels a group of homogeneous samples as a whole. In this way, GBC can take advantage of the additional prior knowledge that all samples belong to the same, unknown, class. We pose GBC in a generic hypothesis-testing framework requiring the selection of an appropriate sample and test statistic. We then evaluate one simple example of GBC on both synthetic and real data sets and demonstrate that GBC may be a promising approach in applications where the test data can be arranged into homogeneous subsets.
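
    The abstract leaves the choice of test statistic open, so the sketch below assumes one simple option: pool the per-sample class log-probabilities from any probabilistic classifier and give the whole group the class that maximises the joint log-likelihood. The statistic, the numbers and the function name are illustrative, not the paper's.

    ```python
    import numpy as np

    def group_label(log_probs):
        # log_probs: (n_samples, n_classes) per-sample class log-probabilities.
        # Summing over the group encodes the prior knowledge that every
        # sample shares one unknown class, so the group label maximises
        # the joint log-likelihood.
        return int(np.argmax(log_probs.sum(axis=0)))

    # Usage: five samples, two classes. The second sample is individually
    # ambiguous (alone it would be labelled class 1), but the pooled
    # evidence clearly favours class 0 for the whole group.
    probs = np.array([[0.55, 0.45],
                      [0.48, 0.52],
                      [0.60, 0.40],
                      [0.52, 0.48],
                      [0.57, 0.43]])
    print(group_label(np.log(probs)))  # -> 0
    ```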