
    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers have adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it poses interesting challenges for future research to cope with the present information-processing era.
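To make the "optimization of weights" idea concrete: metaheuristics treat the network's weights as a flat vector and search it directly, with no gradients. The sketch below is purely illustrative (not from the paper) and uses a hypothetical tiny 2-2-1 network on XOR, optimized by a simple (1+1) evolution strategy — one of the simplest metaheuristics in the family the review surveys.

```python
import math
import random

random.seed(0)

# Hypothetical toy setup: a 2-2-1 feedforward network whose 9 weights
# (2*2 hidden weights + 2 hidden biases + 2 output weights + 1 output bias)
# are stored as a flat vector and optimized without gradients.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def mse(w):
    return sum((forward(w, x) - y) ** 2 for x, y in XOR) / len(XOR)

# (1+1) evolution strategy: mutate the current best weight vector with
# Gaussian noise and keep the child only if it lowers the error.
best = [random.uniform(-1, 1) for _ in range(9)]
for generation in range(3000):
    child = [w + random.gauss(0, 0.3) for w in best]
    if mse(child) < mse(best):
        best = child

print(mse(best))  # error after the metaheuristic search
```

The same loop generalizes to any of the metaheuristics discussed (GA, PSO, etc.) by swapping the mutate-and-select step; the fitness function stays the network's error on the data.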

    Deep Learning: Our Miraculous Year 1990-1991

    In 2020, we will celebrate that many of the basic ideas behind the deep learning revolution were published three decades ago within fewer than 12 months in our "Annus Mirabilis" or "Miraculous Year" 1990-1991 at TU Munich. Back then, few people were interested, but a quarter century later, neural networks based on these ideas were on over 3 billion devices such as smartphones, and used many billions of times per day, consuming a significant fraction of the world's compute. (Comment: 37 pages, 188 references, based on work of 4 Oct 201)

    Automated Feature Engineering for Deep Neural Networks with Genetic Programming

    Feature engineering is a process that augments the feature vector of a machine learning model with calculated values designed to enhance the accuracy of the model's predictions. Research has shown that the accuracy of models such as deep neural networks, support vector machines, and tree/forest-based algorithms sometimes benefits from feature engineering. These engineered features are usually created from expressions that combine one or more of the original features. The choice of the exact structure of an engineered feature depends on the type of machine learning model in use. Previous research demonstrated that different model families benefit from different types of engineered features: random forests, gradient-boosting machines, and other tree-based models might not see the same accuracy gain from an engineered feature that neural networks, generalized linear models, or other dot-product-based models achieve on the same data set. This dissertation presents a genetic programming-based algorithm that automatically engineers features that increase the accuracy of deep neural networks for some data sets. For a genetic programming algorithm to be effective, it must prioritize the search space and efficiently evaluate what it finds. This dissertation's algorithm faced a potential search space composed of all possible mathematical combinations of the original feature vector. Five experiments were designed to guide the search process to efficiently evolve good engineered features. The result is an automated feature engineering (AFE) algorithm that is computationally efficient, even though a neural network is used to evaluate each candidate feature. This approach gave the algorithm a greater opportunity to specifically target deep neural networks in its search for engineered features that improve accuracy. Finally, a sixth experiment empirically demonstrated the degree to which this algorithm improved the accuracy of neural networks on data sets augmented by the algorithm's engineered features.
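The core loop the abstract describes — propose expressions over the original features, score each candidate, keep the best — can be sketched in miniature. This is a heavily reduced illustration, not the dissertation's algorithm: the data is made up, the candidates are depth-1 expressions rather than evolved GP trees, and a cheap correlation score stands in for retraining a neural network on the augmented vector.

```python
import math
import random

random.seed(1)

# Hypothetical toy data: the target depends nonlinearly on two raw
# features, y = x0 * x1, which no single raw feature captures alone.
rows = [(random.uniform(1, 5), random.uniform(1, 5)) for _ in range(200)]
y = [a * b for a, b in rows]

# Candidate engineered features: simple binary expressions over the
# original feature vector, the kind a depth-1 GP tree would encode.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "div": lambda a, b: a / b if b else 0.0,
}

def correlation(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv) if su and sv else 0.0

# Score each candidate by |correlation| with the target -- a cheap
# stand-in for the neural-network evaluation used in the dissertation.
scores = {
    name: abs(correlation([op(a, b) for a, b in rows], y))
    for name, op in OPS.items()
}
best = max(scores, key=scores.get)
print(best)  # → mul
```

A full GP version would additionally mutate and recombine the winning expressions over many generations; the scoring step shown here is what makes that search "efficient evaluation" in the abstract's terms.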

    Feedforward backpropagation, genetic algorithm approaches for predicting reference evapotranspiration

    Water scarcity is a global concern: the demand for water is increasing tremendously, and poor management of water resources will dramatically accelerate the depletion of available supplies. Precise prediction of evapotranspiration (ET), which consumes almost 100% of the supplied irrigation water, is one goal that should be adopted to avoid further squandering of water, especially in arid and semiarid regions. This paper evaluates the capabilities of feedforward backpropagation neural networks (FFBP) in predicting reference evapotranspiration (ET0) against the empirical FAO Penman-Monteith (P-M) equation; a hybrid FFBP+genetic algorithm (GA) model is then implemented for the same evaluation purpose. The study location is the main station in Iraq, namely Baghdad Station. Monthly mean records of maximum air temperature (Tmax), minimum air temperature (Tmin), sunshine hours (Rn), relative humidity (Rh), and wind speed (U2) from the related meteorological station are used in the prediction of ET0 values. The performance of both simulation models was evaluated using statistical coefficients such as the root mean squared error (RMSE), mean absolute error (MAE), and coefficient of determination (R2). The results of both models are promising; however, the hybrid model shows higher efficiency in predicting ET0 and could be recommended for modeling ET0 in arid and semiarid regions.
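The three agreement measures the paper uses (RMSE, MAE, R2) have standard definitions, sketched below on a few hypothetical ET0 values (the numbers are invented for illustration, not from the study).

```python
import math

# Standard definitions of the three statistics used to compare
# predicted and observed ET0 series of equal length.
def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def r2(obs, pred):
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [3.1, 4.2, 5.0, 6.3]    # hypothetical monthly mean ET0 (mm/day)
pred = [3.0, 4.4, 4.8, 6.5]   # hypothetical model output
print(rmse(obs, pred), mae(obs, pred), r2(obs, pred))
```

RMSE and MAE are in the units of ET0 itself (lower is better), while R2 is dimensionless (closer to 1 is better), which is why papers typically report all three together.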

    Review of Nature-Inspired Forecast Combination Techniques

    Effective and efficient planning in many areas can be significantly supported by forecasting a variable, such as an economic growth rate or product demand, for a future point in time. More than one forecast for the same variable is often available, raising the question of whether one should choose a single model or combine several of them to obtain a forecast with improved accuracy. In almost 40 years of research on forecast combination, an impressive amount of work has been done. This paper reviews forecast combination techniques that are nonlinear and have in some way been inspired by nature.
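The simplest instance of the combination question is mixing two forecasters with a single weight fitted on past data. The sketch below uses invented numbers and a tiny stochastic hill climb as a stand-in for the nature-inspired optimizers the review surveys; it is illustrative only, not a method from the paper.

```python
import random

random.seed(2)

# Hypothetical past data: two forecasters for the same series,
# plus the values that were actually realized.
actual = [10.0, 12.0, 11.0, 13.0, 12.5]
f1 = [9.0, 11.0, 10.5, 12.0, 12.0]   # tends to undershoot
f2 = [11.5, 13.5, 11.5, 14.5, 13.5]  # tends to overshoot

def sse(w):
    # Squared error of the convex combination w*f1 + (1-w)*f2.
    return sum((w * a + (1 - w) * b - y) ** 2
               for a, b, y in zip(f1, f2, actual))

# Tiny stochastic search over the mixing weight -- a minimal stand-in
# for the nature-inspired optimizers (GA, PSO, ...) reviewed above.
best_w = random.random()
for _ in range(500):
    cand = min(1.0, max(0.0, best_w + random.gauss(0, 0.1)))
    if sse(cand) < sse(best_w):
        best_w = cand

print(best_w, sse(best_w))
```

Because the two forecasters err in opposite directions, the fitted combination beats either single model on this data; the nonlinear techniques the paper reviews generalize this idea beyond a fixed convex weight.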
