
    An efficient and effective convolutional neural network for visual pattern recognition

    Convolutional neural networks (CNNs) are a variant of deep neural networks (DNNs) optimized for visual pattern recognition, and are typically trained using first-order learning algorithms, particularly stochastic gradient descent (SGD). Training deeper CNNs (deep learning) on large data sets (big data) has led to the concept of distributed machine learning (ML), contributing to state-of-the-art performance on computer vision problems. However, several outstanding issues remain with currently defined models and learning algorithms. Propagation through a convolutional layer requires flipping of kernel weights, which increases the computation time of a CNN. Sigmoidal activation functions suffer from the gradient diffusion problem, which degrades training efficiency, while others cause numerical instability due to unbounded outputs. Common learning algorithms converge slowly and are prone to hyperparameter overfitting. To date, most distributed learning algorithms are still based on first-order methods that are susceptible to various learning issues. This thesis presents an efficient CNN model, proposes an effective learning algorithm to train it, and maps both onto parallel and distributed computing platforms for improved training speedup. The proposed CNN consists of convolutional layers that use correlation filtering, together with novel bounded activation functions, for faster performance (up to 1.36x), improved learning performance (up to 74.99% better), and better training stability (up to 100% improvement). The bounded stochastic diagonal Levenberg-Marquardt (B-SDLM) learning algorithm is proposed to encourage fast convergence (up to 5.30% faster and 35.83% better than first-order methods) while having only a single hyperparameter. B-SDLM also supports a mini-batch learning mode for high parallelism. Based on known previous work, this is among the first successful attempts to deploy a stochastic second-order learning algorithm on distributed ML platforms. Running the distributed B-SDLM on a 16-core cluster reaches a given convergence state and accuracy on the Modified National Institute of Standards and Technology (MNIST) data set up to 12.08x and 8.72x faster, respectively. All three complex case studies tested with the proposed algorithms give comparable or better classification accuracies than those reported in previous work, with better efficiency. For example, the proposed solutions achieve 99.14% classification accuracy on the MNIST case study and 100% for face recognition on the AR Purdue data set, demonstrating the feasibility of the proposed algorithms for visual pattern recognition tasks.
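
    The abstract does not spell out the correlation filter or the B-SDLM update rule, so the following Python sketch is illustrative only. It checks the textbook identity that convolution equals cross-correlation with a doubly flipped kernel (the flip that correlation filtering avoids), and shows a generic stochastic diagonal Levenberg-Marquardt step. The name sdlm_step and its parameters are our own assumptions, and the classic form shown has two knobs (lr and mu) where the thesis's B-SDLM claims a single hyperparameter.

        import numpy as np
        from scipy.signal import convolve2d, correlate2d

        # Convolution slides a doubly flipped kernel over the input;
        # cross-correlation skips the flip, saving work in each layer.
        x = np.random.randn(8, 8)
        k = np.random.randn(3, 3)
        assert np.allclose(
            convolve2d(x, k, mode="valid"),
            correlate2d(x, k[::-1, ::-1], mode="valid"),
        )

        def sdlm_step(w, g, h, lr=0.01, mu=0.1):
            """One stochastic diagonal Levenberg-Marquardt step (a sketch,
            not the thesis's B-SDLM): each gradient component g is scaled
            by an estimate h of the corresponding diagonal Hessian entry
            plus a damping term mu, which bounds the effective step size."""
            return w - lr * g / (h + mu)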

    Active disturbance cancellation in nonlinear dynamical systems using neural networks

    A time-delay CMAC neural network is proposed for disturbance cancellation in nonlinear dynamical systems. Appropriate modifications to the CMAC training algorithm are derived that allow convergent adaptation for a variety of secondary signal paths. Analytical bounds on the maximum learning gain are presented that guarantee convergence of the algorithm and provide insight into the necessary reduction in learning gain as a function of the system parameters. The effectiveness of the algorithm is evaluated through mathematical analysis, simulation studies, and experimental application of the technique to an acoustic duct laboratory model.
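
    The abstract likewise does not reproduce the modified CMAC update. As a rough illustration of the secondary-path problem it addresses, below is a minimal filtered-x LMS loop, a standard linear technique standing in for the CMAC; the function name, signature, and signals are our own assumptions. The reference signal is filtered through a model of the secondary path before it enters the weight update, and the learning gain mu must stay below a stability bound for the adaptation to converge, mirroring the learning-gain analysis mentioned above.

        import numpy as np

        def fxlms(d, x, s, s_hat, n_taps=32, mu=0.01):
            """Minimal filtered-x LMS disturbance canceller (illustrative only).

            d     : disturbance measured at the error sensor
            x     : reference signal correlated with the disturbance
            s     : true secondary-path FIR taps, used here only to simulate it
            s_hat : model of the secondary path used to filter the reference
            mu    : learning gain; exceeding its stability bound breaks convergence
            """
            w = np.zeros(n_taps)                 # cancellation filter weights
            y = np.zeros(len(x))                 # controller output
            e = np.zeros(len(x))                 # residual at the error sensor
            xf = np.convolve(x, s_hat)[:len(x)]  # reference filtered through s_hat
            for n in range(max(n_taps, len(s)), len(x)):
                xb = x[n - n_taps + 1 : n + 1][::-1]    # reference history
                xfb = xf[n - n_taps + 1 : n + 1][::-1]  # filtered-reference history
                y[n] = w @ xb                           # anti-disturbance output
                yb = y[n - len(s) + 1 : n + 1][::-1]    # recent outputs through s
                e[n] = d[n] + yb @ s                    # residual after secondary path
                w -= mu * e[n] * xfb                    # gradient step on filtered x
            return w, e

    A CMAC replaces the linear combination w @ xb with a table-lookup associative memory, which is what motivates the modified training algorithm and convergence analysis described in the abstract.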

    Neural Networks in Mechanical System Simulation, Identification, and Assessment
