New acceleration technique for the backpropagation algorithm
Artificial neural networks have been studied for many years in the hope of achieving human-like performance in areas such as pattern recognition, speech synthesis and higher-level cognitive processing. In the connectionist model there are several interconnected processing elements, called neurons, each with limited processing capability. Even though the rate of information transmitted between these elements is limited, their complex interconnection and cooperative interaction result in vastly increased computing power. Neural network models are specified by an organized network topology of interconnected neurons, and these networks must be trained before they can be used for a specific purpose. Backpropagation is one of the most popular methods for training neural networks, and the speed of convergence of the standard backpropagation algorithm has improved considerably in the recent past. Herein we present a new technique for accelerating the existing backpropagation algorithm without modifying it. We use a fourth-order interpolation method for the dominant eigenvalues and, based on these, change the slope of the activation function, thereby increasing the speed of convergence of the backpropagation algorithm. Our experiments have shown significant improvement in convergence time on problems widely used in benchmarking: a three- to ten-fold decrease in convergence time is achieved, and the decrease grows as the complexity of the problem increases. The technique adjusts the energy state of the system so as to escape from local minima.
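The fourth-order eigenvalue interpolation is not spelled out in the abstract, but the core idea it feeds, tuning the slope of the activation function used by backpropagation, can be sketched. The network size, learning rate, and XOR task below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def sigmoid(x, beta=1.0):
    # Logistic activation with adjustable slope beta; the paper tunes
    # this slope (via eigenvalue interpolation) to speed convergence.
    return 1.0 / (1.0 + np.exp(-beta * x))

def train_xor(beta, epochs=2000, lr=0.5, seed=0):
    # Minimal 2-2-1 network trained with plain backpropagation;
    # beta scales every activation and hence every local gradient.
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)
    W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)
    W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1, beta)
        o = sigmoid(h @ W2 + b2, beta)
        # Backpropagate squared error; the derivative of the
        # sigmoid picks up a factor of beta.
        do = (o - y) * beta * o * (1 - o)
        dh = (do @ W2.T) * beta * h * (1 - h)
        W2 -= lr * h.T @ do; b2 -= lr * do.sum(0)
        W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)
    return float(np.mean((o - y) ** 2))
```

Comparing `train_xor(beta)` for different slopes shows how the convergence speed of otherwise-unmodified backpropagation depends on this single parameter.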
Multilayer optical learning networks
A new approach to learning in a multilayer optical neural network based on holographically interconnected nonlinear devices is presented. The proposed network can learn the interconnections that form a distributed representation of a desired pattern transformation operation. The interconnections are formed in an adaptive and self-aligning fashion as volume holographic gratings in photorefractive crystals. Parallel arrays of globally space-integrated inner products diffracted by the interconnecting hologram illuminate arrays of nonlinear Fabry-Perot etalons for fast thresholding of the transformed patterns. A phase-conjugated reference wave interferes with a backward-propagating error signal to form holographic interference patterns which are time-integrated in the volume of a photorefractive crystal to slowly modify and learn the appropriate self-aligning interconnections. This multilayer system performs an approximate implementation of the backpropagation learning procedure in a massively parallel high-speed nonlinear optical network.
Causative factors of construction and demolition waste generation in the Iraqi construction industry
The construction industry harms the environment through the waste generated during construction activities, which calls for serious measures to determine the causative factors of construction waste generation. There are limited studies on the factors causing construction and demolition (C&D) waste generation, and these focus only on the quantification of construction waste. This study set out to identify the causative factors of C&D waste generation, to determine the risk level of each causal factor, and to establish the most important minimization methods for avoiding waste generation. The study was carried out using a quantitative approach. A total of 39 factors causing construction waste generation were identified from the literature review and clustered into 4 groups. The questionnaire was refined by 38 construction experts (consultants, contractors and clients) during the pilot study. The actual survey was conducted with a total of 380 questionnaires, achieving a response rate of 83.3%. Data analysis was performed using SPSS software. Ranking analysis using the mean-score approach found the five most significant causative factors to be poor site management, poor planning, lack of experience, rework and poor controlling. The results also indicated that the majority of the identified factors carry a high risk level, and that the best minimization method is environmental awareness. A structural model was developed based on the 4 groups of causative factors using the Partial Least Squares Structural Equation Modelling (PLS-SEM) technique. The model was found to fit well, with a goodness of fit (GoF = 0.658) above the 0.36 threshold for a substantial fit. Based on the outcome of this study, 39 factors were relevant to the generation of construction and demolition waste in Iraq, and these groups of factors should be avoided during construction works to reduce the waste generated. The findings of this study are helpful to authorities and stakeholders in formulating laws and regulations, and they provide opportunities for future researchers to conduct additional research on the factors that contribute to construction waste generation.
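The reported fit statistic can be reproduced in form, though not in data: the Tenenhaus global goodness of fit for PLS-SEM is the square root of the product of the mean communality and the mean R². The communality and R² values below are hypothetical placeholders, not the study's numbers.

```python
from math import sqrt

def goodness_of_fit(communalities, r_squared):
    # Global PLS-SEM goodness of fit:
    # GoF = sqrt(mean communality * mean R^2).
    mean_com = sum(communalities) / len(communalities)
    mean_r2 = sum(r_squared) / len(r_squared)
    return sqrt(mean_com * mean_r2)

# Hypothetical values for the 4 causative-factor groups; the study
# reports GoF = 0.658, above the 0.36 cutoff for a substantial fit.
gof = goodness_of_fit([0.62, 0.58, 0.66, 0.60], [0.45, 0.40])
```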
Fixed-Point Performance Analysis of Recurrent Neural Networks
Recurrent neural networks have shown excellent performance in many applications; however, they require increased complexity in hardware- or software-based implementations. The hardware complexity can be lowered considerably by minimizing the word-length of weights and signals. This work analyzes the fixed-point performance of recurrent neural networks using a retrain-based quantization method. The quantization sensitivity of each layer in RNNs is studied, and overall fixed-point optimization results are presented that minimize the capacity of the weights without sacrificing performance. Language modeling and phoneme recognition examples are used.
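A minimal sketch of the word-length reduction step, assuming a per-tensor uniform symmetric quantizer; the paper's retrain step (fine-tuning after quantization) is omitted here.

```python
import numpy as np

def quantize(w, bits):
    # Uniform symmetric quantizer: map weights onto 2^(bits-1)-1
    # signed fixed-point levels with a per-tensor step size.
    levels = 2 ** (bits - 1) - 1
    step = np.max(np.abs(w)) / levels
    return np.round(w / step) * step, step

# Sensitivity probe: quantize a weight tensor at several word-lengths
# and measure the distortion, mimicking the per-layer analysis above.
rng = np.random.default_rng(1)
w = rng.normal(0, 0.1, 1000)
mse_by_bits = {b: float(np.mean((w - quantize(w, b)[0]) ** 2))
               for b in (2, 4, 8)}
```

In the retrain-based method described in the abstract, such a quantizer would be applied inside the training loop so that the remaining weights can compensate for the quantization error of each layer.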
Metaheuristic design of feedforward neural networks: a review of two decades of research
Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, the learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NN), cooperative coevolution NN, complex-valued NN, deep learning, extreme learning machines, quantum NN, etc. Additionally, it presents interesting research challenges for future work to cope with the present information-processing era.
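As a concrete instance of the metaheuristic alternative to gradient descent, the sketch below trains a tiny FNN's weights with a bare-bones (1+λ) evolution strategy. The network size, mutation scale, and XOR task are illustrative assumptions, not taken from the review.

```python
import numpy as np

def fnn(x, w):
    # Tiny 2-3-1 feedforward net; w is a flat 13-parameter vector.
    W1, b1 = w[:6].reshape(2, 3), w[6:9]
    W2, b2 = w[9:12].reshape(3, 1), w[12:13]
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def evolve(loss, dim=13, pop=30, gens=200, sigma=0.3, seed=0):
    # Bare-bones (1 + lambda) evolution strategy: keep the best
    # individual, generate Gaussian mutations of it, keep any
    # mutation that improves the loss. No gradients needed.
    rng = np.random.default_rng(seed)
    best = rng.normal(0, 1, dim)
    best_loss = loss(best)
    for _ in range(gens):
        for cand in best + rng.normal(0, sigma, (pop, dim)):
            l = loss(cand)
            if l < best_loss:
                best, best_loss = cand, l
    return best, best_loss

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
w, err = evolve(lambda w: float(np.mean((fnn(X, w) - y) ** 2)))
```

Swapping the mutation-and-select loop for crossover-based recombination or a particle-swarm update yields the other metaheuristic families the review surveys, without changing the fitness function.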
Adaptive Moment Estimation To Minimize Square Error In Backpropagation Algorithm
The backpropagation algorithm is one of the most widely used artificial neural network training algorithms, but it has weaknesses such as slow convergence of the gradient-descent training of the error function, overly long training times, overfitting, and a tendency to get stuck in local optima. Backpropagation is used to minimize the error in each iteration. This paper investigates and evaluates the performance of Adaptive Moment Estimation (ADAM) in minimizing the squared error in the backpropagation gradient-descent algorithm. Adaptive Moment Estimation can speed up training and achieve near-linear acceleration. ADAM can adapt to changes in the system and can optimize many parameters at a low computational cost. The results of the study indicate that adaptive moment estimation can minimize the squared error in the output of neural networks.
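The ADAM update itself is standard and can be stated compactly; the quadratic toy objective below is an illustration, not the paper's experiment.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # One ADAM update: exponential moving averages of the gradient (m)
    # and its square (v), with bias correction for the early steps.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize the squared error f(w) = w^2 (gradient 2w), starting at 5.0.
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 5001):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.05)
# After training, w has moved from 5.0 toward the minimum at 0.
```

Because the step is normalized by `sqrt(v_hat)`, the effective per-parameter learning rate adapts automatically, which is the property the paper exploits to speed up backpropagation training.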
PROPOSED METHODOLOGY FOR OPTIMIZING THE TRAINING PARAMETERS OF A MULTILAYER FEED-FORWARD ARTIFICIAL NEURAL NETWORK USING A GENETIC ALGORITHM
An artificial neural network (ANN), or simply "neural network" (NN), is a powerful mathematical or computational model inspired by the structure and/or functional characteristics of biological neural networks. Despite the fact that ANNs have been developing rapidly for many years, there are still challenges in developing an ANN model that performs effectively for the problem at hand. ANNs can be categorized into three main types: single-layer networks, recurrent networks and multilayer feed-forward networks. In a multilayer feed-forward ANN, the actual performance is highly dependent on the selection of architecture and training parameters, yet a systematic method for optimizing these parameters is still an active research area. This work focuses on multilayer feed-forward ANNs because of their generalization capability, structural simplicity, and ease of mathematical analysis. Even though several rules for the optimization of multilayer feed-forward ANN parameters are available in the literature, most networks are still calibrated via a trial-and-error procedure, which depends mainly on the type of problem and on the past experience and intuition of the expert. To overcome these limitations, there have been attempts to use genetic algorithms (GA) to optimize some of these parameters; however, most, if not all, of the existing approaches address only a subset of the architecture and training parameters. In contrast, the GA-ANN approach presented here covers most aspects of the multilayer feed-forward ANN in a more comprehensive way. This research focuses on the use of a binary-encoded genetic algorithm (GA) to implement efficient search strategies for the optimal architecture and training parameters of a multilayer feed-forward ANN. In particular, the GA is used to determine the optimal number of hidden layers, number of neurons in each hidden layer, type of training algorithm, type of activation function of the hidden and output neurons, initial weights, learning rate, momentum term, and epoch size of a multilayer feed-forward ANN. In this thesis, the approach has been analyzed and algorithms that simulate the new approach have been mapped out.
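A sketch of the binary-encoding idea: a hypothetical gene map slices a bit-string chromosome into a few of the training parameters listed above, and a plain generational GA searches over chromosomes. The gene widths, value ranges, and toy fitness below are assumptions, not the thesis's actual encoding.

```python
import random

# Hypothetical gene map: each slice of the bit-string encodes one
# hyperparameter of a feed-forward ANN (a subset of those in the text).
GENES = {
    "hidden_layers": (0, 2),   # 2 bits -> 1..4 layers
    "neurons":       (2, 6),   # 4 bits -> 1..16 neurons per layer
    "activation":    (6, 8),   # 2 bits -> one of 4 activations
    "learning_rate": (8, 12),  # 4 bits -> 0.001 * 2^k
}
ACTS = ["sigmoid", "tanh", "relu", "linear"]

def decode(bits):
    # Turn a 12-bit chromosome into a hyperparameter dictionary.
    def val(name):
        a, b = GENES[name]
        return int("".join(map(str, bits[a:b])), 2)
    return {
        "hidden_layers": val("hidden_layers") + 1,
        "neurons": val("neurons") + 1,
        "activation": ACTS[val("activation")],
        "learning_rate": 0.001 * 2 ** val("learning_rate"),
    }

def ga(fitness, bits=12, pop=20, gens=30, pm=0.05, seed=0):
    # Plain generational GA: binary tournament selection, one-point
    # crossover, bit-flip mutation with probability pm per gene.
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(popn, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, bits)
            child = p1[:cut] + p2[cut:]
            nxt.append([b ^ (rng.random() < pm) for b in child])
        popn = nxt
    return max(popn, key=fitness)
```

In the full GA-ANN approach the fitness of a chromosome would be the validation performance of an ANN trained with the decoded parameters; here any function of `decode(bits)` can stand in for that expensive evaluation.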