
    Determination of baseflow quantity by using unmanned aerial vehicle (UAV) and Google Earth

    Baseflow is the most important component of low-flow hydrological behaviour [1]. It is a function of a large number of variables, including topography, geology, soil, vegetation, and climate. In many catchments, baseflow is an important component of streamflow; consequently, baseflow separation has been widely studied and has a long history in hydrology. Baseflow separation methods can be divided into two main groups: non-tracer-based and tracer-based separation methods. Alternatively, baseflow can be determined by fitting a unit hydrograph model with information from the recession limb of the hydrograph and extrapolating it backward
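One widely used non-tracer separation technique is the Lyne-Hollick recursive digital filter. The sketch below is a minimal single-pass version; the filter parameter value and the clamping convention follow common hydrological practice, not this paper specifically:

```python
def lyne_hollick_baseflow(q, alpha=0.925):
    """Single-pass Lyne-Hollick digital filter (non-tracer baseflow separation).

    q     : sequence of streamflow values (any consistent units)
    alpha : filter parameter, commonly 0.9-0.95 in the literature
    Returns the baseflow series, constrained to 0 <= baseflow <= streamflow.
    """
    quick = 0.0          # high-frequency (quickflow) component of the filter
    base = []
    prev = q[0]
    for y in q:
        # recursive high-pass filter on the hydrograph
        quick = alpha * quick + 0.5 * (1 + alpha) * (y - prev)
        quick = min(max(quick, 0.0), y)   # keep both components physical
        base.append(y - quick)            # baseflow = total flow - quickflow
        prev = y
    return base
```

In practice the filter is often run in several forward and backward passes to smooth the separation; a single pass is shown here for clarity.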

    Optimal Parameter Selection Using Three-term Back Propagation Algorithm for Data Classification

    The back propagation (BP) algorithm is the most popular supervised learning method for multi-layered feed-forward neural networks. It has been successfully deployed in numerous practical problems and disciplines. Despite its popularity, BP suffers from some major drawbacks, such as easily getting stuck in local minima and slow convergence, since it uses the Gradient Descent (GD) method to train the network. Over the years, researchers have proposed many modifications of the BP learning algorithm, but the local minima problem remains unresolved. Therefore, to address these inherent problems, this paper proposes the BPGD-A3T algorithm, which introduces three adaptive parameters into BP: gain, momentum, and learning rate. The performance of the proposed BPGD-A3T algorithm is compared with BPGD with two adaptive terms (BPGD-2T), BP with adaptive gain (BPGD-AG), and the conventional BP algorithm (BPGD) by means of simulations on classification datasets. The simulation results show that the proposed BPGD-A3T achieves the highest accuracy on all datasets compared to the other algorithms
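A generic three-term weight update combining learning rate, momentum, and a gain factor scaling the gradient can be sketched as below; this is an illustrative form of the idea, not the paper's exact BPGD-A3T update rule:

```python
def three_term_update(w, grad, velocity, lr, momentum, gain):
    """One weight update driven by three parameters.

    w        : current weights
    grad     : error gradient w.r.t. each weight
    velocity : previous update step (carries the momentum term)
    lr, momentum, gain : the three parameters that BPGD-A3T adapts
    Returns (new weights, new velocity).
    """
    # momentum keeps a fraction of the previous step;
    # the gain scales the gradient contribution before the lr is applied
    new_velocity = [momentum * v - lr * gain * g
                    for v, g in zip(velocity, grad)]
    new_w = [wi + dv for wi, dv in zip(w, new_velocity)]
    return new_w, new_velocity
```

In the adaptive schemes discussed here, lr, momentum, and gain would each be adjusted between iterations rather than held fixed.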

    An Optimized Back Propagation Learning Algorithm with Adaptive Learning Rate

    Back Propagation (BP) is a commonly used algorithm for training multilayer feed-forward artificial neural networks. However, BP is inherently slow in learning and sometimes gets trapped at local minima. These problems occur mainly because of a constant, non-optimum learning rate (a fixed step size), whose value is set once before training begins. This fixed learning rate often leads the BP network towards failure during steepest descent. Therefore, to overcome the limitations of BP, this paper introduces an improved back propagation gradient descent with adaptive learning rate (BPGD-AL) that changes the learning rate locally during the learning process. The simulation results on selected benchmark datasets show that the adaptive learning rate significantly improves the learning efficiency of the Back Propagation algorithm
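One common heuristic for varying the learning rate during training is the "bold driver" rule: grow the step size slightly while the error keeps falling, and cut it sharply when the error rises. The paper's exact adaptation rule may differ; this is a minimal sketch of the general idea:

```python
def adapt_learning_rate(lr, err, prev_err, up=1.05, down=0.7):
    """Adjust the learning rate from one epoch to the next.

    lr       : current learning rate
    err      : training error after this epoch
    prev_err : training error after the previous epoch
    up, down : growth/shrink factors (typical illustrative values)
    """
    # reward progress with a slightly larger step,
    # punish divergence with a much smaller one
    return lr * up if err < prev_err else lr * down
```

Because the shrink factor is far more aggressive than the growth factor, the rate recovers quickly from overshooting while still accelerating on smooth error surfaces.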

    The Effect of Adaptive Gain and Adaptive Momentum in Improving Training Time of Gradient Descent Back Propagation Algorithm on Classification Problems

    The back propagation algorithm has been successfully applied to a wide range of practical problems. Since this algorithm uses a gradient descent method, it has some limitations, namely slow learning convergence and easy convergence to local minima. The convergence behaviour of the back propagation algorithm depends on the choice of initial weights and biases, network topology, learning rate, momentum, activation function, and the value of the gain in the activation function. Previous researchers demonstrated that in feed-forward networks the slope of the activation function is directly influenced by a parameter referred to as ‘gain’. This research proposes an algorithm that improves on the current Gradient Descent Method with Adaptive Gain by also changing the momentum coefficient adaptively for each node. The influence of adaptive momentum together with adaptive gain on the learning ability of a neural network is analysed, multilayer feed-forward neural networks are assessed, and a physical interpretation of the relationship between the momentum value, the learning rate, and the weight values is given. The efficiency of the proposed algorithm was compared with the conventional Gradient Descent Method and the current Gradient Descent Method with Adaptive Gain by means of simulation on three benchmark problems. The simulation results demonstrate that the proposed algorithm converged faster, with an improvement ratio of nearly 1.8 on the Wisconsin breast cancer dataset, 6.6 on the Mushroom problem, and 36% better on the Soybean dataset. The results clearly show that the proposed algorithm significantly improves the learning speed of the current gradient descent back-propagation algorithm
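The claim that the gain directly controls the slope of the activation function is easy to verify for the logistic sigmoid: with gain c, the activation is s(x) = 1/(1 + e^(-cx)) and its derivative is c·s·(1-s), so the slope at the origin scales linearly with c. A small sketch:

```python
import math

def sigmoid_with_gain(x, gain):
    """Logistic activation with gain c: s(x) = 1 / (1 + exp(-c*x))."""
    return 1.0 / (1.0 + math.exp(-gain * x))

def slope_with_gain(x, gain):
    """Derivative d/dx s(x) = c * s * (1 - s): the slope scales with the gain."""
    s = sigmoid_with_gain(x, gain)
    return gain * s * (1.0 - s)
```

At x = 0 the slope is c/4, so doubling the gain doubles the steepness of the activation, which is why adapting the gain behaves much like adapting an effective learning rate per node.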

    The effect of adaptive parameters on the performance of back propagation

    The Back Propagation algorithm and its variants on Multilayered Feedforward Networks are widely used in many applications. However, this algorithm is well known to have difficulties with local minima, particularly those caused by neuron saturation in the hidden layer. Most existing approaches modify the learning model to add a random factor, which counteracts the tendency to sink into local minima. However, random perturbations of the search direction and various kinds of stochastic adjustment to the current set of weights are not effective in enabling a network to escape from local minima, causing the network to fail to converge to a global minimum within a reasonable number of iterations. Thus, this research proposes a new method, Back Propagation Gradient Descent with Adaptive Gain, Adaptive Momentum and Adaptive Learning Rate (BPGD-AGAMAL), which modifies the existing Back Propagation Gradient Descent algorithm by adaptively changing the gain, momentum coefficient, and learning rate. In this method, each training pattern has its own activation functions for the neurons in the hidden layer, and these are adjusted by adapting the gain parameters together with the momentum and learning rate values during the learning process. The efficiency of the proposed algorithm is compared with conventional Back Propagation Gradient Descent and Back Propagation Gradient Descent with Adaptive Gain by means of simulation on six benchmark problems, namely breast cancer, card, glass, iris, soybean, and thyroid. The results show that the proposed algorithm substantially improves the learning process of the conventional Back Propagation algorithm

    Best Architecture Recommendations of ANN Backpropagation Based on Combination of Learning Rate, Momentum, and Number of Hidden Layers

    This article discusses the results of research on combinations of learning rate, momentum, and number of neurons in the hidden layer of the ANN Backpropagation (ANN-BP) architecture using meta-analysis. The study aims to identify the most recommended values within the learning rate and momentum interval [0, 1], as well as the number of neurons in the hidden layer used during training. We conducted a meta-analysis of the use of learning rate, momentum, and number of hidden-layer neurons in ANN-BP. Of the 63 eligible records, 44 had complete learning-rate data, 30 had complete momentum data, and 45 had complete data on the number of hidden-layer neurons. The analysis showed that a learning rate in the interval 0.1-0.2 is recommended, with an RE model value of 0.938 (very high); momentum in the interval 0.7-0.9, with an RE model value of 0.925 (very high); and a number of neurons in the input layer smaller than the number in the hidden layer, with an RE model value of 0.932 (very high). These recommendations were obtained by analysing the data in JASP and examining the effect size of the accuracy levels reported in the sampled studies

    Improved cuckoo search based neural network learning algorithms for data classification

    Among Artificial Neural Network (ANN) techniques, the Back-Propagation Neural Network (BPNN) algorithm is widely used as a tool for learning a mapping function from a known set of input-output examples. Such networks can be trained with gradient descent back propagation, but the algorithm is not guaranteed to find the global minimum of the error function, since gradient descent may get stuck in local minima, where it may stay indefinitely. Among the conventional methods, some researchers prefer Levenberg-Marquardt (LM) because of its convergence speed and performance; however, LM, being a derivative-based algorithm, still risks getting stuck in local minima. Recently, a meta-heuristic search technique called cuckoo search (CS) has gained a great deal of attention from researchers due to its efficient convergence towards optimal solutions. However, CS is prone to suboptimal solutions during the exploration and exploitation process because of the large step lengths produced by Lévy flights; improving the balance between exploration and exploitation also increases the chances of an egg’s survival. This research proposes an improved CS, the hybrid Accelerated Cuckoo Particle Swarm Optimization (HACPSO) algorithm, which combines CS with Accelerated Particle Swarm Optimization (APSO). In the proposed HACPSO algorithm, APSO first searches the search space and finds the best sub-search space, and then CS selects the best nest by traversing that sub-search space. This exploration and exploitation scheme enables HACPSO to converge to global optima more efficiently than the original Cuckoo Search (CS) algorithm.
Finally, the proposed CS hybrid variants, HACPSO, HACPSO-BP, HACPSO-LM, CSBP, CSLM, CSERN, and CSLMERN, are evaluated and compared with the conventional Back propagation Neural Network (BPNN), Artificial Bee Colony Neural Network (ABCNN), Artificial Bee Colony Back propagation algorithm (ABC-BP), and Artificial Bee Colony Levenberg-Marquardt algorithm (ABC-LM). Specifically, 6 benchmark classification datasets are used for training the hybrid Artificial Neural Network algorithms. Overall, the simulation results show that the proposed CS-based NN algorithms perform better than all the other proposed and conventional models in terms of CPU time, MSE, SD, and accuracy
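The heavy-tailed step lengths that CS inherits from Lévy flights can be generated with Mantegna's algorithm, as is standard in cuckoo search implementations. The sketch below shows a single step draw (parameter values are typical, not specific to this work); occasional very large steps are exactly the behaviour the hybrid tempers with APSO:

```python
import math
import random

def levy_step(beta=1.5, rng=None):
    """Draw one Levy-flight step length via Mantegna's algorithm.

    beta : stability index, commonly 1.5 in cuckoo search
    rng  : random.Random instance (a fresh one is created if omitted)
    """
    rng = rng or random.Random()
    # scale sigma chosen so u / |v|^(1/beta) follows a Levy-stable tail
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

In a full CS loop, each candidate nest is perturbed by such a step scaled by a step-size factor, and worse nests are abandoned with some probability per generation.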

    Potential of support-vector regression for forecasting stream flow

    Stream flow is an important input for hydrology studies because it determines the water variability and magnitude of a river. Water resources engineering always deals with historical data and tries to estimate forecast records in order to give a better prediction for water resources applications, such as designing the water potential of hydroelectric dams, estimating low flow, and maintaining the water supply. This paper presents three soft-computing approaches for dealing with these issues, i.e. artificial neural networks (ANNs), adaptive-neuro-fuzzy inference systems (ANFISs), and support vector machines (SVMs). The Telom River, located in the Cameron Highlands district of Pahang, Malaysia, was used in making the estimation. The Telom River’s daily mean discharge records, together with rainfall and river-level data, were used for the period of March 1984 – January 2013 for training, testing, and validating the selected models. The SVM approach provided better results than ANFIS and ANNs in estimating the daily mean fluctuation of the stream’s flow
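Before any of the three models (ANN, ANFIS, SVM) can be trained on a daily discharge series, the time series must be framed as supervised input-output pairs. A common framing, sketched here as an assumption rather than the paper's documented preprocessing, uses the previous n days of flow to predict the next day:

```python
def make_lag_features(series, n_lags):
    """Frame a time series as supervised learning pairs.

    series : chronological observations (e.g. daily mean discharge)
    n_lags : number of preceding values used as predictors
    Returns (X, y): each X[i] holds n_lags consecutive values and
    y[i] is the value that immediately follows them.
    """
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])   # predictor window
        y.append(series[t])              # next-day target
    return X, y
```

Exogenous inputs such as rainfall and river level would be appended as additional columns of each window; the pairs are then split chronologically into training, testing, and validation sets.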