3 research outputs found

    Neurocomputing Techniques to Predict the 2D Structures by Using Lattice Dynamics of Surfaces

    A theoretical study of artificial neural network modelling, based on vibrational dynamics data for 2D lattices, is proposed in this paper. The main purpose is to establish a neurocomputing model able to predict the 2D structures of crystal surfaces. On material surfaces, atoms can be arranged in different ways, defining several 2D configurations such as triangular and square lattices. To describe these structures, one usually employs Wood's notation, which is the simplest and most frequently used way to identify surfaces in physics. Our contribution consists in using the lattice vibrations of perfect 2D structures, together with the matrix and Wood notations, to build an input-output set to feed the neural model. The input data are the frequency modes at the high-symmetry points and the group velocity. The output data are the basis vectors of the surface reconstruction and the rotation angle that aligns the unit cell of the reconstructed surface. The results showed that this way of collecting the dataset is well suited to building a neurocomputing model able to predict and classify the 2D surfaces of crystals. Moreover, the model was able to generate the lattice spacing for a given structure.
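    As a rough illustration of the input-output set described above, the sketch below pairs phonon features (frequency modes at a few high-symmetry points plus group velocities) with reconstruction parameters (matrix-notation entries and a rotation angle) and fits a small feed-forward regressor. The paper does not publish its data or architecture, so the array shapes, the random placeholder data, and the scikit-learn model are all assumptions.

```python
# Hedged sketch of the input-output mapping, NOT the authors' code.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Input: phonon frequencies at high-symmetry points (e.g. Gamma, M, K)
# plus group velocities; random placeholders stand in for real
# lattice-dynamics data here.
n_samples, n_freq, n_vel = 200, 6, 6
X = rng.random((n_samples, n_freq + n_vel))

# Output: the four entries of the matrix notation (basis vectors of the
# reconstructed unit cell) plus the rotation angle, as in e.g.
# the (sqrt(3) x sqrt(3))R30 reconstruction.
Y = rng.random((n_samples, 5))

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0)
model.fit(X, Y)

# Predict the surface-reconstruction parameters for one set of modes.
print(model.predict(X[:1]))
```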

    Comparison of Second Order Algorithms for Function Approximation with Neural Networks

    Neural networks are massively parallel, distributed processing systems representing a new computational technology built on an analogy to the human information-processing system. They are usually considered naturally parallel computing models. The combination of wavelets with neural networks can remedy each other's weaknesses, resulting in wavelet-based neural networks capable of approximating any function with arbitrary precision. A wavelet-based neural network is a nonlinear regression structure that represents nonlinear mappings as the superposition of dilated and translated versions of a function that is localized in both the space and frequency domains. The desired task is usually obtained by a learning procedure that consists in adjusting the "synaptic weights", and many learning algorithms have been proposed to update these weights. The convergence of these learning algorithms is a crucial criterion for neural networks to be useful in different applications. In this paper, we use different training algorithms for feed-forward wavelet networks applied to function approximation. Training is based on the minimization of the least-squares cost function, performed by iterative first- and second-order gradient-based methods. We use the Levenberg-Marquardt algorithm to train the chosen network architecture; the training procedure then starts with a simple gradient method, followed by the BFGS (Broyden, Fletcher, Goldfarb and Shanno) algorithm, and finally the conjugate gradient method. The performances of the different algorithms are then compared. We find that the advantage of the last training algorithm, the conjugate gradient method, over many other optimization algorithms is its relative simplicity, efficiency and quick convergence.
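    The training setup the abstract describes (a least-squares cost minimized by second-order methods) can be sketched with SciPy's general-purpose optimizers. The choice of mother wavelet (a Mexican hat), the network size, and the target function below are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch, assuming a 1-D wavelet network trained by the
# second-order methods the abstract compares (BFGS, conjugate gradient).
import numpy as np
from scipy.optimize import minimize

def mexican_hat(u):
    return (1.0 - u**2) * np.exp(-0.5 * u**2)

def wavenet(params, x, n_units=5):
    # params = [weights | translations | dilations], one triple per unit
    w, t, d = np.split(params, 3)
    return sum(w[i] * mexican_hat((x - t[i]) / d[i]) for i in range(n_units))

def cost(params, x, y):
    # Least-squares cost function minimized during training
    return 0.5 * np.sum((wavenet(params, x) - y) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) * np.exp(-0.1 * x**2)   # target function to approximate

p0 = rng.normal(size=15)                  # 5 units x 3 parameters each
p0[10:] = np.abs(p0[10:]) + 0.5           # keep dilations positive

for method in ("CG", "BFGS"):             # conjugate gradient vs. BFGS
    res = minimize(cost, p0, args=(x, y), method=method)
    print(f"{method}: final cost = {res.fun:.4f}, iterations = {res.nit}")
```

    Levenberg-Marquardt could be plugged into the same skeleton via scipy.optimize.least_squares(method="lm"), which works on the residual vector rather than the summed cost.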

    Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping

    The artificial neural network is one of the interesting techniques that have been used advantageously to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to the information processing of a one-dimensional task. We aim to introduce a new method based on a new coding approach for generating the input-output mapping, in which the number of neuron units in the last layer is increased. To show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the neuron units in the last layer makes it possible to find the optimal network parameters that fit the mapping data. Moreover, it decreases the training time during the computation, which avoids the need for computers with large memory.
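    The abstract does not spell out the coding scheme, so the sketch below is one speculative reading of "increasing the neuron units in the last layer": each scalar target is spread over several output neurons (here via Gaussian soft binning around hypothetical bin centers) and decoded back to a scalar, in place of the conventional single-output coding. The encoding, bin centers, and width are all assumptions for illustration.

```python
# Speculative sketch of a multi-unit output coding; the exact scheme in
# the paper is not given, so this is one possible reading only.
import numpy as np
from sklearn.neural_network import MLPRegressor

def encode(y, centers, width=0.2):
    # Distribute each scalar target over the last-layer units as Gaussian
    # activations around fixed bin centers (illustrative encoding only).
    return np.exp(-((y[:, None] - centers[None, :]) / width) ** 2)

x = np.linspace(0, 1, 300)[:, None]
y = np.sin(2 * np.pi * x).ravel()         # one-dimensional task

centers = np.linspace(-1, 1, 10)          # 10 output units instead of 1
Y = encode(y, centers)

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
net.fit(x, Y)

# Decode by taking the activation-weighted mean of the bin centers.
act = np.clip(net.predict(x), 1e-9, None)
y_hat = (act * centers).sum(axis=1) / act.sum(axis=1)
print("mean absolute error:", np.abs(y_hat - y).mean())
```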