3 research outputs found

    A hybrid constructive algorithm incorporating teaching-learning based optimization for neural network training

    In neural networks, simultaneously determining the optimum structure and weights is a challenge. This paper proposes a combination of the teaching-learning based optimization (TLBO) algorithm and a constructive algorithm (CA) to address this challenge. In the literature, TLBO is used to choose proper weights, while a CA is adopted to construct different structures and select the proper one. In this study, both the basic TLBO algorithm and an improved version of it are used to select the network weights, while, as the constructive algorithm, a novel modification of multiple operations using statistical tests (MOST) is applied and tested to choose the proper structure. The proposed combined algorithms are benchmarked on ten classification problems and two time-series prediction problems, and the results are evaluated in terms of training and testing error, network complexity, and mean-square error. The experiments show that the proposed hybrid of the modified MOST constructive algorithm and the improved TLBO (MCO-ITLBO) algorithm outperforms the others, as confirmed by Wilcoxon statistical tests: it achieves a lower average error with a less complex network structure.
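    Below is a minimal sketch of the basic TLBO loop applied to weight selection for a fixed network structure, written in Python with NumPy. The paper's improved TLBO (ITLBO) variant and the modified MOST constructive step are not reproduced here; the function names and the XOR fitting example are illustrative assumptions, not the authors' code.

```python
import numpy as np

def tlbo_minimize(loss, dim, pop_size=20, iters=100, bounds=(-1.0, 1.0), seed=0):
    """Basic teaching-learning based optimization of `loss` over R^dim."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([loss(x) for x in pop])
    for _ in range(iters):
        # Teacher phase: move each learner toward the best solution and
        # away from the population mean; teaching factor TF is 1 or 2.
        teacher = pop[fit.argmin()]
        mean = pop.mean(axis=0)
        for i in range(pop_size):
            tf = rng.integers(1, 3)
            cand = np.clip(pop[i] + rng.random(dim) * (teacher - tf * mean), lo, hi)
            f = loss(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
        # Learner phase: move toward a better random partner (or away from
        # a worse one) and keep the move only if it lowers the loss.
        for i in range(pop_size):
            j = int(rng.integers(pop_size))
            if j == i:
                continue
            step = pop[j] - pop[i] if fit[j] < fit[i] else pop[i] - pop[j]
            cand = np.clip(pop[i] + rng.random(dim) * step, lo, hi)
            f = loss(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
    best = int(fit.argmin())
    return pop[best], fit[best]

# Illustration: fit a 2-4-1 tanh network to XOR by minimizing mean-square
# error over its 17 parameters (8 + 4 hidden, 4 + 1 output).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def mse(w):
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((out - y) ** 2)

w_best, err = tlbo_minimize(mse, dim=17, pop_size=30, iters=300, bounds=(-5, 5))
```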

    Image compression approach for improving deep learning applications

    In deep learning, the dataset plays a central role in training and in obtaining accurate object detection and recognition in images. A training model needs a large dataset to be accurate, so handling dataset size efficiently is a research problem that calls for improvement. In this paper, an image compression approach was developed that reduces dataset size, improves classification accuracy for a model trained with a convolutional neural network (CNN), and speeds up the machine learning process, while maintaining image quality. The results revealed that the best scenario, giving good and acceptable classification accuracy for the deep learning models, used the following parameters: 80×80 image size, 10 epochs, batch size 64, image quality setting of 40 (images compressed by 60%), and grayscale mode. For this scenario, on the Dogs vs. Cats dataset, training time was 48 minutes, classification accuracy was 86%, and the image dataset occupied 317 MB of storage. That is 58% of the size of the original image dataset, saving 42% of the storage space and reducing processing resource consumption.
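    As a rough illustration of the preprocessing described above, the sketch below resizes each image to 80×80, converts it to grayscale, and re-encodes it as JPEG at quality 40 using Pillow. The directory paths are hypothetical placeholders, and the authors' exact compression pipeline may differ.

```python
from pathlib import Path
from PIL import Image

def compress_dataset(src_dir, dst_dir, size=(80, 80), quality=40):
    """Shrink a folder of JPEGs: resize, grayscale, and recompress."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.jpg"):
        img = Image.open(path).convert("L")  # "L" = 8-bit grayscale
        img = img.resize(size)               # 80x80, as in the best scenario
        img.save(dst / path.name, "JPEG", quality=quality)

# Hypothetical paths for the Dogs vs. Cats training images.
compress_dataset("dogs-vs-cats/train", "dogs-vs-cats/train_compressed")
```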

    Performance improvement of deep neural network classifiers by a simple training strategy

    Improving the classification performance of Deep Neural Networks (DNNs) is of primary interest in many areas of science and technology that rely on DNN classifiers. In this study, we present a simple training strategy to improve the classification performance of a DNN: we divide the internal parameter space of the DNN into partitions and optimize these partitions individually. We apply the proposed strategy with the popular L-BFGS optimization algorithm, although it can be used with any optimization algorithm. We evaluate the resulting performance improvement on a number of well-known classification benchmark data sets and apply statistical analysis procedures to the classification results. The DNN classifier trained with the proposed strategy is also compared with state-of-the-art classifiers to demonstrate its effectiveness. Our experiments show that the proposed method significantly enhances the training of the DNN classifier and yields considerable improvements in classification accuracy.
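    The partition-wise idea can be sketched as block-coordinate descent: split the parameter vector into blocks and run L-BFGS on each block while the others are held fixed. The sketch below uses SciPy's L-BFGS-B implementation; the equal-size contiguous partitioning and the number of sweeps are assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy.optimize import minimize

def train_in_partitions(loss, w0, n_parts=4, sweeps=3):
    """Optimize `loss` over w by cycling L-BFGS over parameter blocks."""
    w = w0.copy()
    # Assumed partitioning: equal-size contiguous index blocks.
    blocks = np.array_split(np.arange(w.size), n_parts)
    for _ in range(sweeps):
        for idx in blocks:
            def block_loss(wb, idx=idx):
                w_full = w.copy()
                w_full[idx] = wb      # vary only this block's entries
                return loss(w_full)
            res = minimize(block_loss, w[idx], method="L-BFGS-B")
            w[idx] = res.x            # keep the optimized block
    return w

# Toy check: the blockwise scheme recovers the minimizer of ||w - 1||^2.
w = train_in_partitions(lambda w: np.sum((w - 1.0) ** 2), np.zeros(20))
```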