
    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged from FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future research to cope with the present information-processing era.
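To illustrate the metaheuristic alternative the abstract contrasts with backpropagation, the following is a minimal sketch of one such approach: a (mu + lambda) evolution strategy searching the flattened weight vector of a single-hidden-layer FNN on the XOR task. The network size, population sizes, and mutation scale are illustrative assumptions, not taken from the reviewed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR task: a classic test case that a linear model cannot solve
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

H = 4                            # hidden units (assumption for this sketch)
n_params = 2 * H + H + H + 1     # W1 (2xH), b1 (H), w2 (H), b2 (1) flattened

def forward(theta):
    """Decode a flat parameter vector and run the FNN forward pass."""
    W1 = theta[:2 * H].reshape(2, H)
    b1 = theta[2 * H:3 * H]
    w2 = theta[3 * H:4 * H]
    b2 = theta[4 * H]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))   # sigmoid output

def loss(theta):
    return float(np.mean((forward(theta) - y) ** 2))

# (mu + lambda) evolution strategy: no gradients, only loss evaluations
mu, lam, sigma = 10, 40, 0.5
pop = rng.normal(0.0, 1.0, (mu, n_params))
for _ in range(300):
    parents = pop[rng.integers(0, mu, lam)]            # pick parents at random
    children = parents + sigma * rng.normal(size=(lam, n_params))  # mutate
    both = np.vstack([pop, children])
    both = both[np.argsort([loss(t) for t in both])]   # rank by fitness
    pop = both[:mu]                                    # survival of the fittest

best = pop[0]
```

Because selection uses only loss values, the same loop works for non-differentiable activations or discrete architecture choices, which is one motivation the review gives for metaheuristics.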

    Variational Hamiltonian Monte Carlo via Score Matching

    Traditionally, the field of computational Bayesian statistics has been divided into two main subfields: variational methods and Markov chain Monte Carlo (MCMC). In recent years, however, several methods have been proposed that combine variational Bayesian inference and MCMC simulation in order to improve their overall accuracy and computational efficiency. This marriage of fast evaluation and flexible approximation provides a promising means of designing scalable Bayesian inference methods. In this paper, we explore the possibility of incorporating a variational approximation into a state-of-the-art MCMC method, Hamiltonian Monte Carlo (HMC), to reduce the required gradient computation in the simulation of Hamiltonian flow, which is the bottleneck for many applications of HMC in big-data problems. To this end, we use a free-form approximation induced by a fast and flexible surrogate function based on single-hidden-layer feedforward neural networks. The surrogate provides a sufficiently accurate approximation while allowing for fast exploration of the parameter space, resulting in an efficient approximate inference algorithm. We demonstrate the advantages of our method on both synthetic and real data problems.
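The core idea, a leapfrog integrator driven by a cheap surrogate gradient while the Metropolis correction still uses the exact log-density, can be sketched as follows. This is not the paper's method: the target here is a standard normal, and the surrogate is a simple polynomial fit rather than the neural-network surrogate the abstract describes; all names and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: standard normal. In the paper's setting the exact gradient
# is expensive, so the leapfrog steps use a surrogate gradient instead.
def logp(q):
    return -0.5 * q ** 2

def exact_grad(q):
    return -q

# Hypothetical surrogate: a linear fit to a few exact-gradient evaluations
# (standing in for the single-hidden-layer FNN surrogate of the paper).
qs = np.linspace(-3.0, 3.0, 7)
coef = np.polyfit(qs, exact_grad(qs), 1)
def surrogate_grad(q):
    return np.polyval(coef, q)

def hmc_step(q, eps=0.2, L=10):
    p = rng.normal()
    cur = logp(q) - 0.5 * p ** 2
    qn, pn = q, p
    pn += 0.5 * eps * surrogate_grad(qn)       # leapfrog: half momentum step
    for i in range(L):
        qn += eps * pn                         # full position step
        if i < L - 1:
            pn += eps * surrogate_grad(qn)     # full momentum step
    pn += 0.5 * eps * surrogate_grad(qn)       # final half momentum step
    # Accept/reject with the EXACT log-density, so the chain still targets
    # the true distribution even when the surrogate gradient is approximate.
    new = logp(qn) - 0.5 * pn ** 2
    return qn if np.log(rng.uniform()) < new - cur else q

q, samples = 0.0, []
for _ in range(3000):
    q = hmc_step(q)
    samples.append(q)
```

The key design point is that surrogate error only degrades the acceptance rate, not the stationary distribution, which is why a cheap approximate gradient is acceptable inside the integrator.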

    Exemplar-Centered Supervised Shallow Parametric Data Embedding

    Metric learning methods for dimensionality reduction in combination with k-Nearest Neighbors (kNN) have been extensively deployed in many classification, data embedding, and information retrieval applications. However, most of these approaches involve pairwise comparisons of training data and thus have quadratic computational complexity with respect to the size of the training set, preventing them from scaling to fairly large datasets. Moreover, during testing, comparing test data against all training data points is also expensive in terms of both computational cost and resources. Furthermore, previous metrics are either too constrained or too expressive to be well learned. To effectively address these issues, we present an exemplar-centered supervised shallow parametric data embedding model using a Maximally Collapsing Metric Learning (MCML) objective. Our strategy learns a shallow high-order parametric embedding function and compares training/test data only with learned or precomputed exemplars, resulting in a cost function with linear computational complexity for both training and testing. We also empirically demonstrate, using several benchmark datasets, that for classification in a two-dimensional embedding space, our approach not only speeds up kNN by hundreds of times but also outperforms state-of-the-art supervised embedding approaches.
    Comment: accepted to IJCAI201
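The complexity argument, comparing each point against a few exemplars instead of against all n training points, can be made concrete with a minimal sketch. Note this is not the paper's model: the embedding here is a fixed identity map and the exemplars are simple class means, standing in for the learned high-order embedding and MCML-learned exemplars; the point is only the O(n·k) distance computation versus O(n²) pairwise comparisons.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic two-class data: two well-separated Gaussian blobs
X0 = rng.normal([-2.0, 0.0], 0.5, (100, 2))
X1 = rng.normal([2.0, 0.0], 0.5, (100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Precomputed exemplars: one per class (k exemplars, k << n).
exemplars = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

A = np.eye(2)  # placeholder embedding map; the paper learns this instead

def predict(Xq):
    Z = Xq @ A.T            # embed queries
    E = exemplars @ A.T     # embed exemplars
    # Distances to k exemplars only: O(n*k), not O(n^2) pairwise
    d = ((Z[:, None, :] - E[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Held-out points drawn from the same blobs
Xt = np.vstack([rng.normal([-2.0, 0.0], 0.5, (100, 2)),
                rng.normal([2.0, 0.0], 0.5, (100, 2))])
yt = np.array([0] * 100 + [1] * 100)
acc = (predict(Xt) == yt).mean()
```

At test time nothing from the training set is needed beyond the k exemplars and the embedding parameters, which is the source of the claimed hundreds-fold kNN speedup.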

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones.
    Comment: 44 pages. 1 of USQCD whitepapers

    A weather forecast model accuracy analysis and ECMWF enhancement proposal by neural network

    This paper presents a neural network approach to weather forecast improvement. Predicted parameters, such as air temperature or precipitation, play a crucial role not only in the transportation sector but also in people's everyday activities. Numerical weather models require real measured data for a correct forecast run. These data are obtained from automatic weather stations via intelligent sensors. Sensor data collection and processing are necessary for finding the optimal estimation of weather conditions. The European Centre for Medium-Range Weather Forecasts (ECMWF) model serves as the main basis for medium-range predictions among European countries. This model can provide forecasts up to 10 days ahead with a horizontal resolution of 9 km. Although ECMWF is currently the global weather system with the highest horizontal resolution, this resolution is still two times coarser than that offered by limited-area (regional) numerical models (e.g., ALADIN, which is used in many European and North African countries). Regional models use a global forecasting model and a sensor-based weather monitoring network as input parameters (the global atmospheric situation at the regional model's geographic boundaries, and a numerical description of atmospheric conditions), and because the analysed area is much smaller (typically one country), the available computing power allows them to use an even higher resolution for predicting key meteorological parameters. However, forecast data obtained from regional models are available only for a specific country, and end-users cannot find them all in one place. Furthermore, not all members provide open access to these data. Although the ECMWF model is commercial, several web services offer its outputs free of charge. Additionally, because this model delivers forecasts for the whole of Europe (and indeed for the whole world), this approach is more user-friendly and attractive for potential customers. Therefore, the proposed novel hybrid method based on machine learning is capable of increasing the accuracy of ECMWF forecast outputs to the level provided by limited-area models, and it can deliver a more accurate forecast in real time.
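The general post-processing idea behind such hybrid methods, learning a correction from past (model forecast, station observation) pairs, can be sketched minimally. This is not the paper's neural network model: the sketch uses synthetic data and a simple linear correction (in the spirit of classical model output statistics), with all numbers invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: model temperature forecasts with a systematic
# scale error and bias relative to station observations (degC).
truth = rng.normal(15.0, 8.0, 500)                       # observed values
forecast = 1.1 * truth + 2.0 + rng.normal(0.0, 1.0, 500) # biased model output

# Learn a correction: fit forecast -> observation by least squares
A = np.vstack([forecast, np.ones_like(forecast)]).T
coef, *_ = np.linalg.lstsq(A, truth, rcond=None)
corrected = A @ coef

raw_rmse = np.sqrt(np.mean((forecast - truth) ** 2))
corr_rmse = np.sqrt(np.mean((corrected - truth) ** 2))
```

A neural network, as proposed in the paper, replaces the linear map with a nonlinear one and can take additional predictors (humidity, wind, terrain) as inputs, but the training signal, historical station observations, is the same.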