
    Supervised learning in Spiking Neural Networks with Limited Precision: SNN/LP

    A new supervised learning algorithm, SNN/LP, is proposed for Spiking Neural Networks. The algorithm uses limited precision for both synaptic weights and synaptic delays: 3 bits in each case. A genetic algorithm is used for the supervised training. The results are comparable to, or better than, previously published work, and they are applicable to the realization of large-scale hardware neural networks. One of the trained networks is implemented in programmable hardware.
    Comment: 7 pages, originally submitted to IJCNN 201
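    As a rough illustration of the limited-precision idea described in the abstract (not the paper's implementation), the Python sketch below maps synaptic weights and delays onto 3-bit grids before evaluation; the value ranges and the helper name quantize_to_bits are assumptions made for illustration.

```python
import numpy as np

def quantize_to_bits(values, lo, hi, bits=3):
    """Map continuous values onto a uniform grid with 2**bits levels in [lo, hi]."""
    levels = 2 ** bits                      # 3 bits -> 8 representable values
    step = (hi - lo) / (levels - 1)         # spacing between adjacent levels
    clipped = np.clip(values, lo, hi)       # keep values inside the representable range
    return lo + np.round((clipped - lo) / step) * step

# Hypothetical example: quantize synaptic weights and delays before evaluation.
# The ranges [-1, 1] for weights and [0, 7] ms for delays are assumptions.
rng = np.random.default_rng(0)
weights = rng.uniform(-1.0, 1.0, size=10)
delays = rng.uniform(0.0, 7.0, size=10)

q_weights = quantize_to_bits(weights, -1.0, 1.0, bits=3)
q_delays = quantize_to_bits(delays, 0.0, 7.0, bits=3)
print(q_weights)
print(q_delays)
```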

    The Improvement of Neural Network Cascade-Correlation Algorithm and Its Application in Picking Seismic First Break

    Neural networks are widely used for automatic picking of seismic wave travel times. Most commercial software, such as Promax, uses a Back Propagation (BP) neural network. Here we introduce a cascade-correlation algorithm for constructing the network. It converges faster than the BP algorithm, determines its own architecture from the training samples, and can expand its topology to learn new samples. We further improve the cascade-correlation algorithm: unlike the standard algorithm, whose initial network contains only an input layer and an output layer, the improved algorithm starts from an appropriate BP network architecture that already contains hidden units. In addition, to prevent excessive weight growth, a regularization term added to the objective function when training candidate hidden units decays the weights; a sketch of this step follows below. Simulation experiments demonstrate that the improved cascade-correlation algorithm converges faster and generalizes better. Five attributes are analyzed, including instantaneous intensity ratio, amplitude, frequency, curve length ratio, and adjacent-trace correlation; cross-plots show that these attributes distinguish the first break and are stable. The neural network first-break picking method presented here achieves good results on real seismic data.
    Key words: Neural network; Cascade-correlation algorithm; Picking seismic first break
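    As a minimal NumPy sketch of the regularized candidate-unit training mentioned above: one candidate hidden unit is trained by gradient ascent on the covariance between its output and the network's residual error, with a weight-decay term subtracted from the objective. The single-error-column simplification, learning rate, and the function name train_candidate are assumptions, not the paper's code.

```python
import numpy as np

def train_candidate(X, residual_err, steps=200, lr=0.05, weight_decay=1e-3, seed=0):
    """Train one cascade-correlation candidate hidden unit by gradient ascent on the
    covariance between its output and the residual error, minus a weight-decay term."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])      # candidate input weights
    for _ in range(steps):
        v = np.tanh(X @ w)                          # candidate output per training pattern
        e_c = residual_err - residual_err.mean()    # centered residual error
        s = (v - v.mean()) @ e_c                    # correlation-style score S
        dv_dw = (1.0 - v ** 2)[:, None] * X         # d(tanh(X @ w)) / dw for every pattern
        grad = np.sign(s) * (e_c @ dv_dw) - 2.0 * weight_decay * w
        w += lr * grad                              # ascend S while decaying the weights
    return w

# Hypothetical usage on random data (single residual-error column for brevity)
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
residual = rng.normal(size=50)
w = train_candidate(X, residual)
```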

    XFSL: A tool for supervised learning of fuzzy systems

    This paper presents Xfsl, a tool for the automatic tuning of fuzzy systems using supervised learning algorithms. The tool provides a wide set of learning algorithms, which can be used to tune complex systems. Notably, Xfsl is integrated into the fuzzy system development environment Xfuzzy 3.0, and hence it can be easily employed within the design flow of a fuzzy system.
    Comisión Interministerial de Ciencia y Tecnología TIC98-0869; Fondo Europeo de Desarrollo Regional 1FD97-0956-C3-0
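    Purely as a hedged illustration of what supervised tuning of a fuzzy system means (the abstract does not describe Xfsl's algorithms or API), the sketch below fits the rule outputs of a tiny zero-order Takagi-Sugeno system to sample data by gradient descent; every name, value, and the target function are assumptions.

```python
import numpy as np

def fire(x, centers, widths):
    """Gaussian membership (firing strength) of each rule for inputs x."""
    return np.exp(-((x[:, None] - centers) ** 2) / (2.0 * widths ** 2))

def predict(x, centers, widths, outputs):
    """Weighted-average defuzzification of the rule outputs."""
    mu = fire(x, centers, widths)
    return (mu * outputs).sum(axis=1) / mu.sum(axis=1)

x = np.linspace(-3.0, 3.0, 100)
y = np.sin(x)                               # assumed target behaviour to learn

centers = np.linspace(-3.0, 3.0, 5)         # membership centers (kept fixed here)
widths = np.full(5, 1.0)                    # membership widths (kept fixed here)
outputs = np.zeros(5)                       # tunable rule consequents

lr = 0.5
for _ in range(500):
    mu = fire(x, centers, widths)
    w = mu / mu.sum(axis=1, keepdims=True)  # normalized firing strengths
    err = predict(x, centers, widths, outputs) - y
    outputs -= lr * 2.0 * (err[:, None] * w).mean(axis=0)  # MSE gradient w.r.t. outputs

print(np.mean((predict(x, centers, widths, outputs) - y) ** 2))
```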

    Automated Propulsion Data Screening demonstration system

    A fully instrumented firing of a propulsion system typically generates a very large quantity of data. In the case of the Space Shuttle Main Engine (SSME), analysis of data from ground tests and flights is currently a labor-intensive process: human experts spend a great deal of time examining the large volume of sensor data generated by each engine firing, looking for anomalies that might indicate engine conditions warranting further investigation. The contract effort was to develop a 'first-cut' screening system for SSME engine firings that identifies the relatively small volume of data which is unusual or anomalous in some way, so that limited and expensive human resources can focus on that small volume for thorough analysis. The overall project objective was a fully operational Automated Propulsion Data Screening (APDS) system capable of detecting significant trends and anomalies in transient and steady-state data; however, screening of transient data was limited to ground-test data for throttle-down cases typical of the 3-g acceleration and for the engine throttling required to stay within the maximum dynamic pressure limits imposed on the Space Shuttle. The APDS is based on neural networks designed to detect anomalies in propulsion system data that are not part of the data used for neural network training. The delivered system allows engineers to build their own screening sets for application to completed or planned SSME firings. ERC developers also built generic screening sets that NASA engineers could apply immediately to their data analysis efforts.
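    The abstract does not describe the network architecture; purely as a hedged sketch of the underlying novelty-detection idea (flagging data unlike the training set), the NumPy code below trains a one-hidden-layer autoencoder on nominal sensor data and scores samples by reconstruction error. The sizes, names, and percentile threshold are assumptions, not the APDS design.

```python
import numpy as np

def train_autoencoder(X, hidden=8, epochs=500, lr=0.01, seed=0):
    """Fit a one-hidden-layer autoencoder to nominal sensor data X (samples, features)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, d))
    for _ in range(epochs):
        H = np.tanh(X @ W1)            # encode
        R = H @ W2                     # decode (linear output)
        err = R - X                    # reconstruction error
        gW2 = H.T @ err / n            # mean-squared-error backpropagation
        gH = err @ W2.T * (1.0 - H ** 2)
        gW1 = X.T @ gH / n
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

def anomaly_scores(X, W1, W2):
    """Per-sample reconstruction error; large values indicate data unlike the training set."""
    R = np.tanh(X @ W1) @ W2
    return np.mean((R - X) ** 2, axis=1)

# Hypothetical usage: fit on nominal firings, flag samples above a percentile threshold
nominal = np.random.default_rng(1).normal(size=(200, 6))
W1, W2 = train_autoencoder(nominal)
threshold = np.percentile(anomaly_scores(nominal, W1, W2), 99)
```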

    Soft margin estimation for automatic speech recognition

    In this study, a new discriminative learning framework, called soft margin estimation (SME), is proposed for estimating the parameters of continuous-density hidden Markov models (HMMs). The proposed method makes direct use of the margin idea that has proved successful in support vector machines to improve generalization capability, and of decision feedback learning in discriminative training to enhance model separation in classifier design. SME directly maximizes the separation of competing models, so that test samples approach a correct decision as long as their deviation from the training samples stays within a safe margin. Frame and utterance selection are integrated into a unified framework to select the training utterances and frames critical for discriminating competing models. SME offers a flexible and rigorous framework that facilitates the incorporation of new margin-based optimization criteria into HMM training. The choice of various loss functions is illustrated and different kinds of separation measures are defined under a unified SME framework. SME is also shown to be able to jointly optimize feature extraction and HMMs. Both the generalized probabilistic descent algorithm and the Extended Baum-Welch algorithm are applied to solve SME. SME has demonstrated a clear advantage over other discriminative training methods on several speech recognition tasks. On the TIDIGITS digit recognition task, the proposed SME approach achieves a string accuracy of 99.61%, the best result reported in the literature. On the 5k-word Wall Street Journal task, SME reduced the word error rate (WER) from 5.06% for MLE models to 3.81%, a relative WER reduction of 25%. This is the first attempt to show the effectiveness of margin-based acoustic modeling for large-vocabulary continuous speech recognition in an HMM framework. The generalization of SME was also demonstrated on the Aurora 2 robust speech recognition task, with around 30% relative WER reduction from the clean-trained baseline.
    Ph.D. Committee Chair: Dr. Chin-Hui Lee; Committee Member: Dr. Anthony Joseph Yezzi; Committee Member: Dr. Biing-Hwang (Fred) Juang; Committee Member: Dr. Mark Clements; Committee Member: Dr. Ming Yua
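    As a hedged sketch of the margin idea only (not the thesis's exact SME objective, which also couples the margin with frame and utterance selection), the snippet below applies a hinge-style loss to the separation between the correct model's score and its best competitor's score; the function name, scores, and margin value are assumptions.

```python
import numpy as np

def sme_style_loss(correct_scores, competitor_scores, margin=1.0):
    """Hinge-style margin loss over the separation d = correct - best competitor.
    Illustrative only; not the thesis's exact SME formulation."""
    d = correct_scores - competitor_scores          # per-utterance separation measure
    return np.mean(np.maximum(0.0, margin - d))     # penalize separations inside the margin

# Hypothetical usage with per-utterance log-likelihood scores
correct = np.array([12.3, 9.8, 15.1])
competing = np.array([11.9, 10.2, 13.0])
print(sme_style_loss(correct, competing, margin=1.0))
```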

    Decision Support System for Fertilizer Use
