Efficient Calculation of the Gauss-Newton Approximation of the Hessian Matrix in Neural Networks
The Levenberg-Marquardt (LM) learning algorithm is a popular algorithm for training neural networks; however, for large neural networks, it becomes prohibitively expensive in terms of running time and memory requirements. The most time-critical step of the algorithm is the calculation of the Gauss-Newton matrix, which is formed by multiplying two large Jacobian matrices together. We propose a method that uses back-propagation to reduce the time of this matrix-matrix multiplication. This reduces the overall asymptotic running time of the LM algorithm by a factor of the order of the number of output nodes in the neural network.
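For context only (this is not the paper's accelerated method), the sketch below assembles the Gauss-Newton approximation G = JᵀJ for a tiny one-hidden-layer network, where each row of the Jacobian of the outputs with respect to the weights is obtained by back-propagating a unit vector from one output node. All names, shapes, and the tanh nonlinearity are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: Gauss-Newton matrix G = J^T J for a tiny one-hidden-layer
# network. Each row of J (outputs w.r.t. all weights) is obtained by
# back-propagating a unit vector from one output node.

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 6, 3
W1 = rng.standard_normal((n_hid, n_in))
W2 = rng.standard_normal((n_out, n_hid))

def forward(x):
    a1 = W1 @ x          # hidden pre-activation
    h = np.tanh(a1)      # hidden activation
    y = W2 @ h           # network output
    return a1, h, y

def jacobian_row(x, a1, h, k):
    """Backprop a unit vector from output k to get one row of J."""
    dy = np.zeros(n_out); dy[k] = 1.0
    dW2 = np.outer(dy, h)
    dh = W2.T @ dy
    da1 = dh * (1.0 - np.tanh(a1) ** 2)
    dW1 = np.outer(da1, x)
    return np.concatenate([dW1.ravel(), dW2.ravel()])

x = rng.standard_normal(n_in)
a1, h, _y = forward(x)
J = np.stack([jacobian_row(x, a1, h, k) for k in range(n_out)])  # (n_out, n_params)
G = J.T @ J  # Gauss-Newton approximation of the Hessian for this sample
```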
Parallel Trajectory Training of Recurrent Neural Network Controllers with Levenberg–Marquardt and Forward Accumulation Through Time in Closed-loop Control Systems
This paper introduces a novel parallel trajectory mechanism that combines the Levenberg-Marquardt and Forward Accumulation Through Time algorithms to train a recurrent neural network controller in a closed-loop control system, distributing the calculation of trajectories across Central Processing Unit (CPU) cores/workers according to the computing platforms, programming languages, and software packages available. Without loss of generality, the recurrent neural network controller of a grid-connected converter for solar integration into a power system was selected as the benchmark closed-loop control system. Two software packages were developed, in Matlab and C++, to verify and demonstrate the efficiency of the proposed parallel training method. The training of the deep neural network controller was migrated from a single workstation to both cloud computing platforms and High-Performance Computing clusters. The training results show excellent speed-up performance, which significantly reduces the training time for a large number of trajectories with high sampling frequency, and further demonstrates the effectiveness and scalability of the proposed parallel mechanism.
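A minimal sketch of the parallel-trajectory idea is given below, assuming a pool of CPU workers that each roll out one closed-loop trajectory and return their Jacobian and error contributions for a Levenberg-Marquardt step. simulate_trajectory is a hypothetical placeholder, not the paper's Forward Accumulation Through Time implementation, and all names and shapes are illustrative.

```python
from multiprocessing import Pool
import numpy as np

def simulate_trajectory(args):
    """Placeholder worker: roll out one trajectory for the current weights
    and return its contributions J^T J and J^T e to the LM system."""
    weights, trajectory_id = args
    rng = np.random.default_rng(trajectory_id)
    n_params = weights.size
    e = rng.standard_normal(10)              # tracking errors along the rollout
    J = rng.standard_normal((10, n_params))  # sensitivities de/dw
    return J.T @ J, J.T @ e

def lm_step(weights, n_trajectories=8, mu=1e-2, workers=4):
    """Distribute trajectory rollouts across workers, then take one LM step."""
    with Pool(workers) as pool:
        parts = pool.map(simulate_trajectory,
                         [(weights, i) for i in range(n_trajectories)])
    JtJ = sum(p[0] for p in parts)
    Jte = sum(p[1] for p in parts)
    # Levenberg-Marquardt update: (J^T J + mu I) dw = -J^T e
    dw = np.linalg.solve(JtJ + mu * np.eye(weights.size), -Jte)
    return weights + dw

if __name__ == "__main__":
    w = np.zeros(20)
    w = lm_step(w)
```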
Approaches for MATLAB Applications Acceleration Using High Performance Reconfigurable Computers
A lot of raw computing power is needed in many scientific computing applications and simulations. MATLAB®† is one of the popular choices as a language for technical computing. Presented here are approaches for accelerating MATLAB-based applications using High Performance Reconfigurable Computing (HPRC) machines. Typically, these are a cluster of Von Neumann architecture based systems with zero or more FPGA reconfigurable boards. As a case study, an Image Correlation Algorithm has been ported to this architecture platform. As a second case study, the recursive training process of an Artificial Neural Network (ANN) to realize an optimum network has been accelerated by porting it to HPC systems. The approaches taken are analyzed with respect to target scenarios, end users' perspective, programming efficiency, and performance. Disclaimer: Some material in this text has been used and reproduced with appropriate references and permissions where required. † MATLAB® is a registered trademark of The Mathworks, Inc. ©1994-2003
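For context on the first case study, a plain NumPy sketch of 2-D normalized cross-correlation (one common form of image correlation) is shown below. It only illustrates the kind of compute-heavy kernel an HPRC platform would accelerate; it is not the ported MATLAB/FPGA implementation, and all names are illustrative.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide the template over the image and score each position in [-1, 1]."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt(np.sum(t * t))
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.sqrt(np.sum(p * p)) * t_norm
            out[i, j] = np.sum(p * t) / denom if denom > 0 else 0.0
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32))
tpl = img[10:18, 12:20]              # template cut directly from the image
scores = normalized_cross_correlation(img, tpl)
print(np.unravel_index(scores.argmax(), scores.shape))  # best match at (10, 12)
```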
Optimisation of a water company’s waste pumping asset base with a focus on energy reduction
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Water companies use a significant quantity of electricity for the operation of their clean and wastewater assets. Rising energy prices have led to higher energy bills within the water companies, which has increased operating costs. Thus, improvements in demand side energy management are needed to increase efficiency and reduce costs, which forms the premise for this research project.
Thames Water Utilities Ltd has identified that improvements in demand side energy management are required and is currently researching various methods to reduce energy consumption. One initiative included the upgrade of a variety of site telemetry assets. By deploying these new telemetry assets, Thames Water Utilities Ltd is better able to liberate the asset data and, as such, make informed decisions on how better to control and optimise the target sites, which is where this research project has seen further opportunities. This enhanced telemetry and SCADA infrastructure will enable successful research to further develop an intelligent integrated system that tackles pump scheduling and process control with the emphasis on energy management.
The use of modern techniques, such as artificial intelligence, to optimise the network operation is gradually gaining traction. The balance between implementing new technology (with the benefits it may bring) and reluctance to change from the incumbent operating model will always provide challenges in the technology adoption agenda.
The main work of this research project included the physical surveying of a wastewater hydraulic catchment, inclusive of all wet well dimensions, lidar overlays, and pump electrical power characteristics. These survey results were then programmed by the researcher into the company's hydraulic model, enabling a higher degree of accuracy in the modelling as well as electrical power as a measurable output. From here, the model was optimised, focussing on electrical energy as the output variable to be reduced.
The research concluded that electrical energy consumption over time can be reduced using the aforementioned strategies and, as such, recommends further work to move from the model environment to physical architecture. It does so with the key message that risk tolerances on water levels must be pre-agreed with hydraulic specialists prior to deployment.
An efficient and effective convolutional neural network for visual pattern recognition
Convolutional neural networks (CNNs) are a variant of deep neural networks (DNNs) optimized for visual pattern recognition, which are typically trained using first order learning algorithms, particularly stochastic gradient descent (SGD). Training deeper CNNs (deep learning) using large data sets (big data) has led to the concept of distributed machine learning (ML), contributing to state-of-the-art performances in solving computer vision problems. However, there are still several outstanding issues to be resolved with currently defined models and learning algorithms. Propagations through a convolutional layer require flipping of kernel weights, thus increasing the computation time of a CNN. Sigmoidal activation functions suffer from the gradient diffusion problem that degrades training efficiency, while others cause numerical instability due to unbounded outputs. Common learning algorithms converge slowly and are prone to hyperparameter overfitting. To date, most distributed learning algorithms are still based on first order methods that are susceptible to various learning issues. This thesis presents an efficient CNN model, proposes an effective learning algorithm to train CNNs, and maps it onto parallel and distributed computing platforms for improved training speedup. The proposed CNN consists of convolutional layers with correlation filtering, and uses novel bounded activation functions for faster performance (up to 1.36x), improved learning performance (up to 74.99% better), and better training stability (up to 100% improvement). The bounded stochastic diagonal Levenberg-Marquardt (B-SDLM) learning algorithm is proposed to encourage fast convergence (up to 5.30% faster and 35.83% better than first order methods) while having only a single hyperparameter. B-SDLM also supports mini-batch learning mode for high parallelism. Based on known previous works, this is among the first successful attempts to map a stochastic second order learning algorithm onto distributed ML platforms. Running the distributed B-SDLM on a 16-core cluster achieves up to 12.08x and 8.72x speed-ups in reaching a given convergence state and accuracy on the Mixed National Institute of Standards and Technology (MNIST) data set. All three complex case studies tested with the proposed algorithms give comparable or better classification accuracies than those reported in previous works, but with better efficiency. As an example, the proposed solutions achieved 99.14% classification accuracy for the MNIST case study, and 100% for face recognition using the AR Purdue data set, which demonstrates the feasibility of the proposed algorithms in visual pattern recognition tasks.
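As a hedged sketch of the classic stochastic diagonal Levenberg-Marquardt (SDLM) idea that B-SDLM builds on, the example below gives each parameter its own step size from a running estimate of the diagonal Gauss-Newton curvature, regularized by mu. The thesis-specific bounds and single-hyperparameter formulation of B-SDLM are not reproduced; eta, mu, and gamma below are illustrative assumptions, demonstrated on a toy quadratic loss.

```python
import numpy as np

def sdlm_step(w, grad, diag_curv, curv_avg, eta=0.1, mu=0.01, gamma=0.05):
    """One SDLM step; curv_avg is a running average of the diagonal curvature."""
    curv_avg = (1.0 - gamma) * curv_avg + gamma * diag_curv
    w = w - eta * grad / (curv_avg + mu)   # per-parameter learning rates
    return w, curv_avg

# Toy quadratic loss 0.5 * sum(c_i * w_i**2), whose exact diagonal curvature is c.
c = np.array([100.0, 1.0, 0.01])
w = np.ones(3)
curv_avg = np.zeros(3)
for _ in range(200):
    grad = c * w
    w, curv_avg = sdlm_step(w, grad, diag_curv=c, curv_avg=curv_avg)
print(w)  # every coordinate shrinks toward 0 despite curvatures spanning 1e4
```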