27,876 research outputs found

    Lattice dynamical wavelet neural networks implemented using particle swarm optimization for spatio-temporal system identification

    In this brief, by combining an efficient wavelet representation with a coupled map lattice model, a new family of adaptive wavelet neural networks, called lattice dynamical wavelet neural networks (LDWNNs), is introduced for spatio-temporal system identification. A new orthogonal projection pursuit (OPP) method, coupled with a particle swarm optimization (PSO) algorithm, is proposed for training the proposed network. A novel two-stage hybrid training scheme is developed for constructing a parsimonious network model. In the first stage, the OPP algorithm adaptively and successively recruits significant wavelet neurons into the network, with the adjustable parameters of each wavelet neuron optimized by a particle swarm optimizer. The network model obtained in the first stage may, however, be redundant. In the second stage, an orthogonal least squares algorithm is applied to refine and improve the initially trained network by removing redundant wavelet neurons. An example of a real spatio-temporal system identification problem is presented to demonstrate the performance of the proposed modeling framework.
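    The particle-swarm step that tunes each neuron's adjustable parameters can be illustrated with a minimal sketch. The loss below, fitting the scale and translation of a single Mexican-hat wavelet neuron, is an invented toy problem standing in for the paper's spatio-temporal model, and the PSO hyperparameters are generic defaults, not the paper's settings:

    ```python
    import numpy as np

    def pso(loss, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimiser (illustrative, not the paper's exact variant)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1, 1, (n_particles, dim))    # particle positions
        v = np.zeros_like(x)                          # particle velocities
        pbest = x.copy()                              # per-particle best positions
        pbest_f = np.array([loss(p) for p in x])
        g = pbest[pbest_f.argmin()].copy()            # global best position
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            # velocity update: inertia + cognitive pull + social pull
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            f = np.array([loss(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()

    # Toy target: a Mexican-hat wavelet with scale 0.8, translation 0
    t = np.linspace(-3, 3, 50)
    target = (1 - (t / 0.8) ** 2) * np.exp(-0.5 * (t / 0.8) ** 2)

    def loss(p):
        a, b = abs(p[0]) + 1e-6, p[1]   # scale and translation of the neuron
        u = (t - b) / a
        return np.mean(((1 - u ** 2) * np.exp(-0.5 * u ** 2) - target) ** 2)

    best, err = pso(loss, dim=2)
    ```

    In the paper's scheme this optimization would run once per recruited neuron, inside the OPP recruitment loop.
    
    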

    Dreaming of atmospheres

    Here we introduce the RobERt (Robotic Exoplanet Recognition) algorithm for the classification of exoplanetary emission spectra. Spectral retrievals of exoplanetary atmospheres frequently require the preselection of molecular/atomic opacities to be defined by the user. In the era of open-source, automated and self-sufficient retrieval algorithms, manual input should be avoided: user-dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is based on deep belief networks (DBNs) trained to accurately recognise molecular signatures for a wide range of planets, atmospheric thermal profiles and compositions. Reconstructions of the learned features, also referred to as `dreams' of the network, indicate good convergence and an accurate representation of molecular features in the DBN. Using these deep neural networks, we work towards retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used in the quantitative stage of the retrieval process. Comment: ApJ accepted
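    A DBN is built by stacking restricted Boltzmann machines (RBMs) trained layer by layer. A minimal sketch of one-step contrastive divergence for a single binary RBM, the building block of such a network, might look as follows; the binary "spectra" here are invented toy patterns standing in for real emission data, not RobERt's training set:

    ```python
    import numpy as np

    def train_rbm(data, n_hidden=8, lr=0.1, epochs=500, seed=0):
        """One-step contrastive divergence (CD-1) for a binary RBM."""
        rng = np.random.default_rng(seed)
        n_vis = data.shape[1]
        W = rng.normal(0, 0.01, (n_vis, n_hidden))
        a = np.zeros(n_vis)     # visible biases
        b = np.zeros(n_hidden)  # hidden biases
        sig = lambda x: 1.0 / (1.0 + np.exp(-x))
        for _ in range(epochs):
            h_p = sig(data @ W + b)                        # positive-phase hidden probs
            h = (rng.random(h_p.shape) < h_p).astype(float)  # sampled hidden states
            v_p = sig(h @ W.T + a)                         # reconstructed visibles
            h_p2 = sig(v_p @ W + b)                        # negative-phase hidden probs
            W += lr * (data.T @ h_p - v_p.T @ h_p2) / len(data)
            a += lr * (data - v_p).mean(0)
            b += lr * (h_p - h_p2).mean(0)
        return W, a, b

    # Toy "spectra": two binary absorption patterns standing in for molecular bands
    X = np.array([[1, 1, 0, 0, 1, 1, 0, 0], [0, 0, 1, 1, 0, 0, 1, 1]] * 20, float)
    W, a, b = train_rbm(X)

    # Reconstruction quality after training (mean-field up-down pass)
    h_p = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    recon = 1.0 / (1.0 + np.exp(-(h_p @ W.T + a)))
    mse = np.mean((recon - X) ** 2)
    ```

    A deep belief network would stack several such layers, feeding each RBM's hidden activations to the next as training data, before a supervised fine-tuning pass.
    
    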

    Artificial neural network optimisation of shielding gas flow rate in gas metal arc welding subjected to cross drafts when using alternating shielding gases

    This study implemented an iterative experimental approach to determine the shielding gas flow required to produce high-quality welds in the gas metal arc welding (GMAW) process with alternating shielding gases when subjected to varying cross-draft velocities, thus determining the transitional zone where weld quality deteriorates as a function of cross-draft velocity. An Artificial Neural Network (ANN) was developed from the experimental data to predict weld quality based primarily on shielding gas composition, alternating frequency, flow rate and cross-draft velocity, but it also incorporated other important input parameters, including voltage and current. A series of weld trials was conducted to validate and test the robustness of the generated model. It was found that the alternating shielding gas process does not provide the same level of resistance to the adverse effects of cross drafts as a conventional argon/carbon dioxide mixture. Such a prediction tool benefits industry in that it allows the adoption of a more efficient shielding gas flow rate whilst removing uncertainty about the resultant weld quality.
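    The kind of feed-forward predictor described can be sketched as a one-hidden-layer regression MLP trained by gradient descent. The feature layout, ranges and the synthetic "weld quality" target below are assumptions for illustration only, not the study's measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical feature layout mirroring the inputs named in the abstract:
    # [CO2 fraction, alternating frequency (Hz), flow rate (L/min),
    #  cross-draft velocity (m/s), voltage (V), current (A)]
    X = rng.uniform([0.05, 2, 6, 0, 18, 150], [0.25, 10, 18, 3, 30, 250], (200, 6))
    Xn = (X - X.mean(0)) / X.std(0)   # standardise features

    # Synthetic quality score: degrades with draft velocity, improves with flow rate
    y = 1.0 - 0.25 * Xn[:, 3] + 0.15 * Xn[:, 2] + 0.05 * rng.normal(size=200)

    # One-hidden-layer MLP (tanh hidden, linear output), plain gradient descent
    W1 = rng.normal(0, 0.5, (6, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
    lr = 0.1
    for _ in range(2000):
        h = np.tanh(Xn @ W1 + b1)
        err = (h @ W2 + b2).ravel() - y            # prediction error
        gW2 = h.T @ err[:, None] / 200; gb2 = err.mean()
        dh = (err[:, None] @ W2.T) * (1 - h ** 2)  # backprop through tanh
        gW1 = Xn.T @ dh / 200; gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    pred = (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()
    mse = np.mean((pred - y) ** 2)
    ```

    The study's actual network would be trained on measured weld-quality outcomes rather than this fabricated target, but the fitting mechanics are the same.
    
    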

    Data-free parameter pruning for Deep Neural Networks

    Deep Neural nets (NNs) with millions of parameters are at the heart of many state-of-the-art computer vision systems today. However, recent works have shown that much smaller models can achieve similar levels of performance. In this work, we address the problem of pruning parameters in a trained NN model. Instead of removing individual weights one at a time as done in previous works, we remove one neuron at a time. We show how similar neurons are redundant, and propose a systematic way to remove them. Our experiments in pruning the densely connected layers show that we can remove up to 85% of the total parameters in an MNIST-trained network, and about 35% for AlexNet, without significantly affecting performance. Our method can be applied on top of most networks with a fully connected layer to give a smaller network. Comment: BMVC 201
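    The neuron-level pruning idea can be sketched as repeatedly merging the pair of hidden neurons with the most similar incoming weights, folding the removed neuron's outgoing weights into its twin. The plain Euclidean similarity used below is a simplified stand-in for the saliency measure in the paper:

    ```python
    import numpy as np

    def prune_similar_neurons(W_in, W_out, n_remove):
        """Data-free pruning sketch: merge near-duplicate hidden neurons.

        W_in:  (n_inputs, n_hidden) incoming weights of one dense layer
        W_out: (n_hidden, n_outputs) outgoing weights to the next layer
        """
        W_in, W_out = W_in.copy(), W_out.copy()
        keep = list(range(W_in.shape[1]))
        for _ in range(n_remove):
            cols = W_in[:, keep]
            # pairwise squared distances between incoming-weight vectors
            d = ((cols[:, :, None] - cols[:, None, :]) ** 2).sum(0)
            np.fill_diagonal(d, np.inf)
            i, j = np.unravel_index(d.argmin(), d.shape)
            # fold neuron j's contribution into neuron i, then drop j
            W_out[keep[i]] += W_out[keep[j]]
            keep.pop(j)
        return W_in[:, keep], W_out[keep]

    # Two duplicate hidden neurons: pruning one leaves the output unchanged,
    # since identical incoming weights give identical ReLU activations
    W_in = np.array([[1.0, 1.0, -0.5],
                     [0.5, 0.5,  2.0]])     # 2 inputs -> 3 hidden
    W_out = np.array([[0.3], [0.4], [1.0]])  # 3 hidden -> 1 output
    Wi, Wo = prune_similar_neurons(W_in, W_out, n_remove=1)

    x = np.array([1.0, 0.5])
    before = np.maximum(x @ W_in, 0) @ W_out
    after = np.maximum(x @ Wi, 0) @ Wo
    ```

    The merge is exact only when the two neurons' incoming weights coincide; for merely similar neurons it introduces a small error, which is why the paper ranks candidate merges and prunes the least harmful ones first.
    
    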