
    Wavelet Neural Networks: A Practical Guide

    Wavelet networks (WNs) are a new class of networks that have been used with great success in a wide range of applications. However, a generally accepted framework for applying WNs is missing from the literature. In this study, we present a complete statistical model identification framework for applying WNs in various applications. The following subjects were thoroughly examined: the structure of a WN, training methods, initialization algorithms, variable significance and variable selection algorithms, model selection methods, and finally methods to construct confidence and prediction intervals. In addition, the complexity of each algorithm is discussed. Our proposed framework was tested on two simulated cases, on one chaotic time series described by the Mackey-Glass equation, and on three real datasets: daily temperatures in Berlin, daily wind speeds in New York, and breast cancer classification. Our results show that the proposed algorithms produce stable and robust results, indicating that our proposed framework can be applied in various applications.
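
    As a rough illustration of the kind of model the abstract describes, the sketch below implements a forward pass through a single-hidden-layer wavelet network with product Mexican-hat (Ricker) wavelons. The layer sizes, choice of mother wavelet, and random initialization are illustrative assumptions, not the paper's framework.

```python
# A minimal sketch of a wavelet network forward pass, assuming the common
# single-hidden-layer form with Mexican-hat (Ricker) wavelons. Sizes and
# parameters are illustrative, not taken from the paper.
import numpy as np

def mexican_hat(z):
    # Ricker wavelet: psi(z) = (1 - z^2) * exp(-z^2 / 2)
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

def wavelet_network(x, translations, dilations, weights, bias):
    """x: (n_features,); translations/dilations: (n_wavelons, n_features);
    weights: (n_wavelons,); returns a scalar prediction."""
    z = (x - translations) / dilations          # per-dimension shift and scale
    psi = np.prod(mexican_hat(z), axis=1)       # product wavelon activation
    return float(weights @ psi + bias)

# Toy usage with random parameters
rng = np.random.default_rng(0)
x = rng.normal(size=3)
print(wavelet_network(x, rng.normal(size=(5, 3)),
                      np.abs(rng.normal(size=(5, 3))) + 0.5,
                      rng.normal(size=5), 0.1))
```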

    Multilayered feed forward Artificial Neural Network model to predict the average summer-monsoon rainfall in India

    In the present research, the possibility of predicting average summer-monsoon rainfall over India has been analyzed through Artificial Neural Network models. In formulating the Artificial Neural Network based predictive model, three-layered networks have been constructed with sigmoid non-linearity. The models under study differ in the number of hidden neurons. After a thorough training and test procedure, the neural net with three nodes in the hidden layer is found to be the best predictive model.
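
    As an illustrative sketch of the selection step the abstract describes (three-layer sigmoid networks differing only in hidden-layer size), the snippet below fits a few candidate sizes on a synthetic series and compares held-out scores. The data, the candidate sizes, and the use of scikit-learn's MLPRegressor are assumptions for illustration only.

```python
# A minimal sketch of choosing the hidden-layer size of a three-layer
# sigmoid network by held-out score. The synthetic data and candidate
# sizes are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))                   # e.g. lagged rainfall predictors
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for n_hidden in (1, 2, 3, 4, 5):
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="logistic",
                       max_iter=5000, random_state=0)
    net.fit(X_tr, y_tr)
    print(n_hidden, round(net.score(X_te, y_te), 3))   # keep the best R^2
```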

    ClusterGAN: Latent Space Clustering in Generative Adversarial Networks

    Generative Adversarial Networks (GANs) have obtained remarkable success in many unsupervised learning tasks, and clustering is unarguably an important unsupervised learning problem. While one can potentially exploit the latent-space back-projection in GANs to cluster, we demonstrate that the cluster structure is not retained in the GAN latent space. In this paper, we propose ClusterGAN as a new mechanism for clustering using GANs. By sampling latent variables from a mixture of one-hot encoded variables and continuous latent variables, coupled with an inverse network (which projects the data to the latent space) trained jointly with a clustering-specific loss, we are able to achieve clustering in the latent space. Our results show a remarkable phenomenon: GANs can preserve latent space interpolation across categories, even though the discriminator is never exposed to such vectors. We compare our results with various clustering baselines and demonstrate superior performance on both synthetic and real datasets.
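
    A minimal sketch of the ClusterGAN-style latent sampling the abstract describes: a continuous Gaussian part concatenated with a one-hot categorical part. The dimensions and noise scale below are illustrative assumptions, not the paper's exact settings.

```python
# A sketch of sampling from a mixture of continuous and one-hot latent
# variables, as in the abstract. Dimensions and sigma are assumptions.
import numpy as np

def sample_latent(batch_size, n_continuous=30, n_clusters=10, sigma=0.1, rng=None):
    rng = rng or np.random.default_rng()
    z_n = sigma * rng.normal(size=(batch_size, n_continuous))   # continuous part
    labels = rng.integers(0, n_clusters, size=batch_size)
    z_c = np.eye(n_clusters)[labels]                            # one-hot part
    return np.concatenate([z_n, z_c], axis=1), labels

z, labels = sample_latent(4)
print(z.shape, labels)   # (4, 40) plus the sampled cluster ids
```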

    Incremental construction of LSTM recurrent neural network

    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the network to remember significant events distant in the past input sequence, in order to solve long time-lag tasks where other RNN approaches fail. Throughout this work we have performed experiments using LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training a growing LSTM have been compared. These methods include cascade and fully connected hidden layers, as well as two different levels of freezing previous weights in the cascade case. GLSTM has been applied to a forecasting problem in a biomedical domain, where the input/output behavior of five controllers of the central nervous system has to be modelled. We have compared growing LSTM results against other neural network approaches and against our work applying conventional LSTM to the task at hand.
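
    The sketch below illustrates the "grow then freeze" idea behind the cascade variant, using PyTorch purely as an illustration (an assumption; the original work does not use it): the previous block's parameters are frozen and a new LSTM block is cascaded on its outputs. Module names and sizes are hypothetical.

```python
# A minimal sketch of cascade growth with frozen previous weights.
# Architecture details are illustrative, not the GLSTM implementation.
import torch
import torch.nn as nn

class GrowingLSTM(nn.Module):
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.blocks = nn.ModuleList([nn.LSTM(n_in, n_hidden, batch_first=True)])
        self.head = nn.Linear(n_hidden, 1)

    def grow(self, n_hidden, freeze_previous=True):
        # Cascade growth: the new block consumes the last block's hidden states.
        last_size = self.blocks[-1].hidden_size
        if freeze_previous:
            for p in self.blocks[-1].parameters():
                p.requires_grad = False
        self.blocks.append(nn.LSTM(last_size, n_hidden, batch_first=True))
        self.head = nn.Linear(n_hidden, 1)

    def forward(self, x):
        for lstm in self.blocks:
            x, _ = lstm(x)
        return self.head(x[:, -1])

model = GrowingLSTM(n_in=3, n_hidden=8)
model.grow(n_hidden=8)                       # add a frozen-cascade stage
print(model(torch.randn(2, 20, 3)).shape)    # torch.Size([2, 1])
```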

    Shallow Water Depth Inversion Based on Data Mining Models

    This thesis focuses on applying machine-learning algorithms to water depth inversion from remote sensing images, with a case study in the Michigan lake area. The goal is to assess the use of publicly available Landsat images for shallow water depth inversion. Firstly, ICESat elevation data were used to determine the absolute water surface elevation. Airborne bathymetric lidar data provide a systematic measure of water bottom elevation. Subtracting water bottom elevation from water surface elevation yields water depth. Water depth is associated with reflectance, recorded as DN values in Landsat images. Water depth inversion was tested on ANN models, SVM models with four different kernel functions, and a regression tree model, all of which exploit the correlation between water depth and image band ratios. The results showed that the RMSE (root-mean-square error) of all models is smaller than 1.5 meters and their R² values are greater than 0.81. The conclusion is that Landsat images can be used to measure water depth in shallow areas of the lakes. Potentially, water volume change of the Great Lakes can be monitored using the procedure explored in this research.
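
    A minimal sketch of the pipeline the abstract outlines: depth is derived as water surface elevation minus bottom elevation, then regressed on a Landsat band ratio, here with a regression tree standing in for one of the compared models. All values below are synthetic and illustrative, not the thesis data.

```python
# Sketch: depth = surface elevation - bottom elevation, then regress depth
# on a blue/green log band ratio. Data are synthetic; the thesis compares
# ANN, SVM, and regression tree models.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
surface_elev = 176.0                                    # illustrative ICESat surface (m)
bottom_elev = 176.0 - rng.uniform(0.5, 10.0, size=200)  # illustrative lidar bottom (m)
depth = surface_elev - bottom_elev                      # training target (m)

# Fake blue/green DN values whose log-ratio correlates with depth
blue = 8000 - 300 * depth + rng.normal(0, 50, 200)
green = 7000 - 120 * depth + rng.normal(0, 50, 200)
band_ratio = np.log(blue) / np.log(green)

model = DecisionTreeRegressor(max_depth=5, random_state=0)
model.fit(band_ratio[:150, None], depth[:150])
pred = model.predict(band_ratio[150:, None])
rmse = mean_squared_error(depth[150:], pred) ** 0.5
print(f"RMSE: {rmse:.2f} m")                            # abstract reports < 1.5 m
```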

    Neural Network Based Models for Efficiency Frontier Analysis: An Application to East Asian Economies' Growth Decomposition

    There has been a long tradition in business and economics of using frontier analysis to assess a production unit's performance. The first attempt utilized data envelopment analysis (DEA), which is based on a piecewise linear, mathematical programming approach, whilst the other employed a parametric approach to estimate stochastic frontier functions. Both approaches have their advantages as well as limitations. This paper sets out to use an alternative approach, i.e. artificial neural networks (ANNs), for measuring efficiency and productivity growth for seven East Asian economies at the manufacturing level over the period 1963 to 1998. The relevant comparisons are carried out between DEA and ANN, and between stochastic frontier analysis (SFA) and ANN, in order to test the ANNs' ability to assess the performance of production units. The results suggest that ANNs are a promising alternative to traditional approaches for approximating production functions more accurately and measuring efficiency and productivity in non-linear contexts, with minimal assumptions.
    Keywords: total factor productivity, neural networks, stochastic frontier analysis, DEA, East Asian economies
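
    As a hedged sketch of the general idea only (not the paper's estimator), the snippet below fits an ANN production function to synthetic input-output data and scores each unit by the ratio of observed to fitted output, normalised so the best-performing unit scores one.

```python
# Illustrative ANN "frontier" scoring on synthetic data; the data and the
# efficiency definition are assumptions, not the paper's methodology.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
capital = rng.uniform(1, 10, 200)
labour = rng.uniform(1, 10, 200)
output = (capital ** 0.4) * (labour ** 0.6) * rng.uniform(0.7, 1.0, 200)  # inefficiency shock

X = np.column_stack([capital, labour])
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, output)

efficiency = output / net.predict(X)        # ratio of observed to fitted output
efficiency /= efficiency.max()              # normalise so the best unit scores 1
print(efficiency[:5].round(3))
```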

    Application of Higher-Order Neural Networks to Financial Time-Series Prediction

    Financial time series data are characterized by non-linearities, discontinuities and high-frequency, multi-polynomial components. Not surprisingly, conventional Artificial Neural Networks (ANNs) have difficulty modelling such complex data. A more appropriate approach is to apply Higher-Order ANNs (HONNs), which are capable of extracting higher-order polynomial coefficients in the data. Moreover, since there is a one-to-one correspondence between network weights and polynomial coefficients, HONNs (unlike ANNs generally) can be considered 'open box', rather than 'closed box', solutions, and thus hold more appeal to the financial community. After developing Polynomial and Trigonometric HONNs, we introduce the concept of HONN groups. The latter incorporate piecewise continuous activation functions and thresholds, and as a result are capable of modelling discontinuous (piecewise continuous) data, and moreover to any degree of accuracy. Several other PHONN variants are also described. The performance of P(T)HONNs and HONN groups on representative financial time series (credit ratings and exchange rates) is described. In short, HONNs offer roughly twice the performance of MLP/BP on financial time series prediction, and HONN groups around a further 10% improvement.
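
    The 'open box' property the abstract highlights can be illustrated with a tiny higher-order fit: expand the input into polynomial and trigonometric terms so that each trained weight corresponds to one coefficient. The toy series and chosen terms below are assumptions, not the paper's PHONN/THONN designs.

```python
# Illustrative higher-order fit: one weight per polynomial/trigonometric
# term, so the learned weights are directly readable as coefficients.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)                        # e.g. a normalised exchange-rate input
y = 0.5 + 1.2 * x - 0.8 * x**2 + 0.3 * np.sin(np.pi * x) + 0.05 * rng.normal(size=200)

# Higher-order basis: constant, x, x^2, sin(pi x)  ->  one weight per term
Phi = np.column_stack([np.ones_like(x), x, x**2, np.sin(np.pi * x)])
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(weights.round(2))   # recovers roughly [0.5, 1.2, -0.8, 0.3]
```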