21,554 research outputs found

    ARTIFICIAL NEURAL NETWORKS PRUNING APPROACH FOR GEODETIC VELOCITY FIELD DETERMINATION

    There has been a need for geodetic network densification since the early days of traditional surveying. In order to densify geodetic networks in a way that will produce the most effective reference frame improvements, the crustal velocity field must be modelled. Artificial Neural Networks (ANNs) are widely used as function approximators in diverse fields of geoinformatics, including velocity field determination. Deciding the number of hidden neurons required for the implementation of an arbitrary function is one of the major problems of ANNs that still deserves further exploration. Generally, the number of hidden neurons is decided on the basis of experience. This paper attempts to quantify the significance of pruning away hidden neurons in the ANN architecture for velocity field determination. An initial back-propagation artificial neural network (BPANN) with 30 hidden neurons is trained on the training data, and the resulting BPANN is applied to the test and validation data. The number of hidden neurons is subsequently decreased, in steps of two from 30 to 2, to find the best predictive model. These pruned BPANNs are retrained and applied to the test and validation data. Some existing methods for selecting the number of hidden neurons are also used. The results are evaluated in terms of the root mean square error (RMSE) over a study area, optimizing the number of hidden neurons for estimating densification point velocity by BPANN.
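    A minimal sketch of the neuron-count search the abstract describes, using scikit-learn's MLPRegressor as a stand-in for the BPANN. The step of two and the 30-to-2 range follow the abstract; the data arrays and all other choices are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Synthetic placeholder data: 2-D coordinates -> one velocity component.
X = rng.uniform(size=(300, 2))
y = np.sin(X[:, 0] * 6) + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=300)
X_train, y_train = X[:200], y[:200]
X_val, y_val = X[200:], y[200:]

results = {}
for n_hidden in range(30, 1, -2):  # 30, 28, ..., 2 hidden neurons
    model = MLPRegressor(hidden_layer_sizes=(n_hidden,),
                         solver="adam", max_iter=2000, random_state=0)
    model.fit(X_train, y_train)  # retrain at each pruned size
    rmse = mean_squared_error(y_val, model.predict(X_val)) ** 0.5
    results[n_hidden] = rmse

best = min(results, key=results.get)
print(f"best hidden-neuron count: {best} (RMSE={results[best]:.4f})")
```

    The RMSE-over-a-validation-set criterion mirrors the abstract's evaluation; in practice the comparison would use the study area's densification points rather than synthetic data.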

    Adaptive Regularization in Neural Network Modeling

    In this paper we address the important problem of optimizing regularization parameters in neural network modeling. The suggested optimization scheme is an extended version of the recently presented algorithm [24]. The idea is to minimize an empirical estimate of the generalization error, such as the cross-validation estimate, with respect to the regularization parameters. This is done by employing a simple iterative gradient descent scheme with virtually no additional programming overhead compared to standard training. Experiments with feed-forward neural network models on time series prediction and classification tasks showed the viability and robustness of the algorithm. Moreover, we provide some simple theoretical examples to illustrate the potential and limitations of the proposed regularization framework. 1 Introduction. Neural networks are flexible tools for time series processing and pattern recognition. By increasing the number of hidden neurons in a 2-layer architecture...
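    A minimal sketch of the idea in this abstract: tune a regularization parameter by gradient descent on a validation estimate of the generalization error. For tractability it uses a ridge (linear) model, where the trained weights w(lam) = (X'X + lam*I)^(-1) X'y have a closed form, so dw/dlam is exact; the paper's neural-network setting follows the same scheme with approximate gradients. All names and data here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 8))
w_true = rng.normal(size=8)
y = X @ w_true + rng.normal(scale=0.5, size=120)
X_tr, y_tr, X_va, y_va = X[:80], y[:80], X[80:], y[80:]

theta = 0.0          # optimize log(lambda) so lambda stays positive
lr = 0.05
A0 = X_tr.T @ X_tr
b = X_tr.T @ y_tr
for step in range(200):
    lam = np.exp(theta)
    A = A0 + lam * np.eye(8)
    w = np.linalg.solve(A, b)                # trained weights w(lam)
    resid = X_va @ w - y_va
    grad_w = 2 * X_va.T @ resid / len(y_va)  # dE_val/dw
    dw_dlam = -np.linalg.solve(A, w)         # dw/dlam from the normal equations
    dE_dtheta = lam * grad_w @ dw_dlam       # chain rule into log-space
    theta -= lr * dE_dtheta                  # gradient step on log(lambda)

print(f"optimized lambda: {np.exp(theta):.4f}, "
      f"val MSE: {np.mean((X_va @ w - y_va) ** 2):.4f}")
```

    Working in log-space is one common way to keep the regularization parameter positive during the descent; the abstract does not specify this detail.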

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it offers interesting research challenges for future research to cope with the present information-processing era.
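    A toy illustration of the metaheuristic weight-optimization idea the review surveys: a simple (1+1) evolution strategy perturbs the flat weight vector of a tiny one-hidden-layer network and keeps mutations that reduce the loss, with no gradients at all. This is only a sketch; the metaheuristics covered by the review (GAs, PSO, differential evolution, etc.) maintain populations and richer operators.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(3 * X[:, 0])

def loss(w):
    # Unpack a flat 13-parameter vector into a 1-4-1 network; return MSE.
    W1, b1 = w[:4].reshape(1, 4), w[4:8]
    W2, b2 = w[8:12].reshape(4, 1), w[12]
    h = np.tanh(X @ W1 + b1)
    return np.mean(((h @ W2).ravel() + b2 - y) ** 2)

w = rng.normal(scale=0.5, size=13)
sigma = 0.2
for step in range(3000):
    cand = w + rng.normal(scale=sigma, size=13)  # mutate all weights
    if loss(cand) <= loss(w):                    # keep only improving mutations
        w = cand

print(f"final MSE: {loss(w):.4f}")
```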