
    Neural networks based recognition of 3D freeform surface from 2D sketch

    In this paper, the Back Propagation (BP) network and the Radial Basis Function (RBF) neural network are employed to recognize and reconstruct 3D freeform surfaces from 2D freehand sketches. Tests and comparison experiments were carried out on simulation data to evaluate the performance of both networks for freeform surface reconstruction. The experimental results show that both the BP-based and the RBF-based reconstruction methods are feasible, and that the RBF network performs better: its average point error between the reconstructed and the desired 3D surface data is less than 0.05 over all 75 test samples.
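
    As a rough illustration of the kind of mapping described above (not the authors' implementation), the sketch below fits a Gaussian RBF regressor from 2D sketch coordinates to surface depth on synthetic data; the toy surface, centre count and kernel width are all invented for the example.

        import numpy as np

        def rbf_fit(X, y, centers, gamma=1.0):
            """Fit RBF output weights by least squares (Gaussian kernels)."""
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            Phi = np.exp(-gamma * d2)                 # hidden-layer activations
            W, *_ = np.linalg.lstsq(Phi, y, rcond=None)
            return W

        def rbf_predict(X, centers, W, gamma=1.0):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2) @ W

        # Toy data: 2D sketch coordinates (u, v) -> depth z of a freeform surface
        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(200, 2))
        z = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])   # synthetic surface
        centers = X[rng.choice(len(X), 25, replace=False)]      # RBF centres
        W = rbf_fit(X, z, centers, gamma=4.0)
        z_hat = rbf_predict(X, centers, W, gamma=4.0)
        print("mean point error:", np.abs(z_hat - z).mean())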

    Surface profile prediction and analysis applied to turning process

    An approach for predicting the surface profile in the turning process using Radial Basis Function (RBF) neural networks is presented. The inputs of the RBF networks are cutting speed, depth of cut and feed rate; the output is the Fast Fourier Transform (FFT) vector of the surface profile. The RBF networks are trained with adaptive optimal training parameters related to the cutting parameters, and predict the surface profile using the corresponding optimal network topology for each new cutting condition. Very good prediction performance, in terms of agreement with experimental data, was achieved with high accuracy, low cost and high speed, and the RBF networks were found to have an advantage over Back Propagation (BP) neural networks. Furthermore, a new group of training and testing data was used to analyse the influence of tool wear and chip formation on prediction accuracy with RBF neural networks.
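
    The sketch below illustrates the general idea on synthetic data, assuming the setup described in the abstract rather than the authors' actual networks or measurements: an RBF regressor maps (cutting speed, depth of cut, feed rate) to the FFT vector of a profile, and a profile for a new cutting condition is then recovered by an inverse FFT.

        import numpy as np

        def gaussian_design(X, centers, gamma):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        # Toy training set: cutting conditions -> surface profile (N points each)
        rng = np.random.default_rng(1)
        N = 64
        cond = rng.uniform([100, 0.5, 0.05], [300, 2.0, 0.3], size=(40, 3))  # speed, depth, feed
        x = np.arange(N)
        profiles = np.array([c[2] * np.sin(2 * np.pi * x / (N / 4))
                             + 0.01 * c[1] * rng.standard_normal(N) for c in cond])

        # Targets are the FFT vectors of the profiles (real and imaginary parts stacked)
        F = np.fft.rfft(profiles, axis=1)
        Y = np.hstack([F.real, F.imag])

        centers = cond[rng.choice(len(cond), 15, replace=False)]
        Phi = gaussian_design(cond, centers, gamma=1e-4)
        W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

        # Predict the profile for a new cutting condition via the inverse FFT
        new = np.array([[220.0, 1.2, 0.15]])
        y_hat = gaussian_design(new, centers, gamma=1e-4) @ W
        half = y_hat.shape[1] // 2
        profile_hat = np.fft.irfft(y_hat[0, :half] + 1j * y_hat[0, half:], n=N)
        print(profile_hat[:5])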

    An empirical learning-based validation procedure for simulation workflow

    A simulation workflow is a top-level model for the design and control of a simulation process. It connects multiple simulation components with timing and interaction restrictions to form a complete simulation system. Before the component models are constructed and evaluated, validating the upper-layer simulation workflow is of prime importance in a simulation system. However, methods specifically for validating simulation workflows are very limited, and many existing validation techniques are domain-dependent, relying on cumbersome questionnaire design and expert scoring. This paper therefore presents an empirical learning-based validation procedure that implements a semi-automated evaluation of simulation workflows. First, representative features of general simulation workflows and their relations to validation indices are proposed. The calculation of workflow credibility based on the Analytic Hierarchy Process (AHP) is then introduced. To make full use of historical data and enable more efficient validation, four learning algorithms, back propagation neural network (BPNN), extreme learning machine (ELM), evolving neo-fuzzy neuron (eNFN) and fast incremental Gaussian mixture model (FIGMN), are used to construct the empirical relation between workflow credibility and workflow features. A case study on a landing-process simulation workflow is established to test the feasibility of the proposed procedure. The experimental results also provide a useful overview of state-of-the-art learning algorithms for the credibility evaluation of simulation models.
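
    As a hedged illustration of the AHP step mentioned above (the pairwise comparison values and per-index scores are invented for the example, not taken from the paper), index weights can be obtained from the principal eigenvector of a comparison matrix, checked for consistency, and combined into a single credibility score.

        import numpy as np

        def ahp_weights(pairwise):
            """Principal-eigenvector weights and consistency ratio for an AHP matrix."""
            vals, vecs = np.linalg.eig(pairwise)
            k = np.argmax(vals.real)
            w = np.abs(vecs[:, k].real)
            w /= w.sum()
            n = pairwise.shape[0]
            ci = (vals[k].real - n) / (n - 1)              # consistency index
            ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)  # tabulated random index
            return w, ci / ri

        # Illustrative pairwise comparisons of three workflow validation indices
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        w, cr = ahp_weights(A)
        index_scores = np.array([0.9, 0.7, 0.8])           # per-index credibility scores
        print("weights:", w, "consistency ratio:", cr)
        print("workflow credibility:", float(w @ index_scores))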

    Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking

    This paper illustrates a framework for applying formal methods techniques, which are symbolic in nature, to specifying and verifying neural networks, which are sub-symbolic in nature. The paper describes a communicating automata [Bowman & Gomez, 2006] model of neural networks. We also implement the model using timed automata [Alur & Dill, 1994] and then verify these models using the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. The paper also discusses a number of broad issues concerning cognitive neuroscience and the debate as to whether symbolic processing or connectionism is a suitable representation of cognitive systems. Additionally, the issue of integrating symbolic techniques, such as formal methods, with complex neural networks is discussed. We then argue that symbolic verification may give theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.
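
    The paper's models are timed automata verified in Uppaal; purely as an informal analogue (not the authors' model), the toy sketch below treats a single neuron as a communicating automaton whose state changes on input events and whose execution trace is the kind of object a model checker would explore. All names and values are illustrative.

        from dataclasses import dataclass, field

        # A toy discrete-event view of a neuron as a communicating automaton:
        # it idles, accumulates weighted inputs received on named "channels",
        # and emits a "fire" event once the activation crosses a threshold.
        @dataclass
        class NeuronAutomaton:
            weights: dict            # channel name -> synaptic weight
            threshold: float
            state: str = "idle"
            activation: float = 0.0
            trace: list = field(default_factory=list)

            def receive(self, channel, value):
                self.state = "accumulating"
                self.activation += self.weights[channel] * value
                self.trace.append(("recv", channel, value))
                if self.activation >= self.threshold:
                    self.trace.append(("fire", self.activation))
                    self.activation = 0.0
                    self.state = "idle"

        n = NeuronAutomaton(weights={"x1": 0.6, "x2": 0.9}, threshold=1.0)
        n.receive("x1", 1.0)
        n.receive("x2", 1.0)
        print(n.trace)   # the trace is what a model checker would reason about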

    Online Learning Algorithm for Time Series Forecasting Suitable for Low Cost Wireless Sensor Networks Nodes

    Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. With this purpose, the paper describes how to implement an Artificial Neural Network (ANN) algorithm on a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An online learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning: the model is trained with every new data sample that arrives at the system, without saving large quantities of data to build a historical database, i.e., without prior knowledge. To validate the approach, a simulation study against a Bayesian baseline model was carried out on a database from a real application in order to assess performance and accuracy. The core of the paper is a new algorithm, based on BP, which is described in detail; the challenge was how to implement a computationally demanding algorithm on a simple architecture with very few hardware resources. Comment: 28 pages, published 21 April 2015 in MDPI's journal Sensors.
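
    As a rough sketch of the online scheme described above (not the authors' algorithm or firmware), a small multilayer perceptron can be updated by single-sample backpropagation as each reading arrives, so no historical database is kept; the network sizes, learning rate and synthetic temperature stream are illustrative.

        import numpy as np

        # Minimal online backpropagation for one-step-ahead forecasting:
        # a tiny one-hidden-layer MLP updated with each new sample.
        rng = np.random.default_rng(2)
        window, hidden, lr = 4, 6, 0.05
        W1 = rng.normal(0, 0.5, (hidden, window)); b1 = np.zeros(hidden)
        W2 = rng.normal(0, 0.5, hidden);           b2 = 0.0

        def forward(x):
            h = np.tanh(W1 @ x + b1)
            return h, W2 @ h + b2

        buf = []
        for t in range(500):                       # stream of "sensor readings"
            temp = 20 + 3 * np.sin(2 * np.pi * t / 48) + 0.1 * rng.standard_normal()
            if len(buf) == window:
                x = np.array(buf)
                h, y_hat = forward(x)
                err = y_hat - temp                 # error on the newly arrived sample
                # Single-sample SGD step through both layers
                gW2 = err * h;            gb2 = err
                gh = err * W2 * (1 - h ** 2)
                gW1 = np.outer(gh, x);    gb1 = gh
                W2 -= lr * gW2; b2 -= lr * gb2
                W1 -= lr * gW1; b1 -= lr * gb1
                buf.pop(0)                         # keep only the sliding window
            buf.append(temp)
        print("last prediction error:", float(err))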

    Contrastive Hebbian Learning with Random Feedback Weights

    Neural networks are commonly trained to make predictions through learning algorithms. Contrastive Hebbian learning, a powerful rule inspired by gradient backpropagation, is based on Hebb's rule and the contrastive divergence algorithm. It operates in two phases: a forward (or free) phase, where the data are fed to the network, and a backward (or clamped) phase, where the target signals are clamped to the output layer of the network and the feedback signals are transformed through the transposed synaptic weight matrices. This implies symmetries at the synaptic level, for which there is no evidence in the brain. In this work, we propose a new variant of the algorithm, called random contrastive Hebbian learning, which does not rely on any synaptic weight symmetries. Instead, it uses random matrices to transform the feedback signals during the clamped phase, and the neural dynamics are described by first-order non-linear differential equations. The algorithm is experimentally verified by solving a Boolean logic task, classification tasks (handwritten digits and letters), and an autoencoding task. The article also shows how the parameters, especially the random matrices, affect learning. We use pseudospectra analysis to investigate further how random matrices impact the learning process. Finally, we discuss the biological plausibility of the proposed algorithm and how it can give rise to better computational models of learning.
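
    The following is a simplified discrete-time schematic of the two-phase idea, not the paper's continuous-time formulation: feedback in the clamped phase passes through a fixed random matrix G instead of the transposed forward weights, and the weights move by the difference of Hebbian products between the clamped and free phases. Layer sizes, learning rate and the XOR toy task are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        n_in, n_hid, n_out = 2, 8, 1
        W1 = rng.normal(0, 0.5, (n_hid, n_in))
        W2 = rng.normal(0, 0.5, (n_out, n_hid))
        G  = rng.normal(0, 0.5, (n_hid, n_out))    # fixed random feedback (replaces W2.T)

        def sigma(z):
            return 1 / (1 + np.exp(-z))

        def relax(x, y_clamp=None, steps=15):
            h = np.zeros(n_hid); y = np.zeros(n_out)
            for _ in range(steps):
                fb = y_clamp if y_clamp is not None else y
                h = sigma(W1 @ x + G @ fb)         # feedback enters through G, not W2.T
                y = y_clamp if y_clamp is not None else sigma(W2 @ h)
            return h, y

        # XOR as a toy Boolean task
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
        T = np.array([[0], [1], [1], [0]], float)
        lr = 0.5
        for epoch in range(3000):
            for x, t in zip(X, T):
                h_f, y_f = relax(x)                # free phase
                h_c, y_c = relax(x, y_clamp=t)     # clamped phase
                # Contrastive Hebbian update: clamped minus free co-activations
                W2 += lr * (np.outer(y_c, h_c) - np.outer(y_f, h_f))
                W1 += lr * (np.outer(h_c, x) - np.outer(h_f, x))
        print(np.round([relax(x)[1][0] for x in X], 2))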

    A Hybrid Neural Network Framework and Application to Radar Automatic Target Recognition

    Deep neural networks (DNNs) have found applications in diverse signal processing (SP) problems. Most efforts either adopt the DNN directly as a black box to perform certain SP tasks, without taking into account any known properties of the signal models, or insert a pre-defined SP operator into a DNN as an add-on data processing stage. This paper presents a novel hybrid-NN framework in which one or more SP layers are inserted into the DNN architecture in a coherent manner to enhance the network's capability and efficiency in feature extraction. These SP layers are designed to make good use of the available models and properties of the data. The training algorithm of the hybrid-NN actively involves the SP layers in the learning goal, simultaneously optimizing both the weights of the DNN and the unknown tuning parameters of the SP operators. The proposed hybrid-NN is tested on a radar automatic target recognition (ATR) problem, where it achieves a high validation accuracy of 96% with 5,000 training images. Compared with an ordinary DNN, the hybrid-NN can markedly reduce the required amount of training data and improve learning performance.
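
    As a generic illustration of inserting a trainable SP layer into a network (assuming PyTorch, and not reproducing the paper's architecture), the sketch below places a learnable frequency-domain filter in front of a small classifier, so that the filter's tuning parameters and the downstream network weights are optimized jointly by the same optimizer.

        import torch
        import torch.nn as nn

        # A learnable band-pass "SP layer": the centre/width parameters are trained
        # jointly with the downstream network weights.
        class SpectralFilter(nn.Module):
            def __init__(self, n_bins):
                super().__init__()
                self.center = nn.Parameter(torch.tensor(0.25))   # tunable SP parameters
                self.width = nn.Parameter(torch.tensor(0.10))
                self.register_buffer("freqs", torch.linspace(0, 0.5, n_bins))

            def forward(self, x):                  # x: (batch, n_samples)
                X = torch.fft.rfft(x, dim=-1)
                gain = torch.exp(-((self.freqs - self.center) / self.width) ** 2)
                return torch.fft.irfft(X * gain, n=x.shape[-1], dim=-1)

        n_samples, n_classes = 128, 3
        model = nn.Sequential(
            SpectralFilter(n_samples // 2 + 1),
            nn.Linear(n_samples, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # updates DNN weights *and* SP params
        x = torch.randn(8, n_samples)              # stand-in for radar range profiles
        y = torch.randint(0, n_classes, (8,))
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
        print(float(loss))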