
    The Power of Linear Recurrent Neural Networks

    Recurrent neural networks are a powerful means to cope with time series. We show how a type of linearly activated recurrent neural networks, which we call predictive neural networks, can approximate any time-dependent function f(t) given by a number of function values. The approximation can effectively be learned by simply solving a linear equation system; no backpropagation or similar methods are needed. Furthermore, the network size can be reduced by taking only the most relevant components. Thus, in contrast to other approaches, ours learns not only the network weights but also the network architecture. The networks have interesting properties: they end up in ellipse trajectories in the long run and allow the prediction of further values and compact representations of functions. We demonstrate this with several experiments, among them multiple superimposed oscillators (MSO), robotic soccer, and predicting stock prices. Predictive neural networks outperform the previous state of the art for the MSO task with a minimal number of units. Comment: 22 pages, 14 figures and tables, revised implementation.
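
    As an illustration of the "one linear solve instead of backpropagation" claim, the following minimal Python sketch (our own construction, not the authors' code; the delay-embedded state, the function names, and the toy signal are all assumptions) fits a linear transition matrix W by a single least-squares solve so that each state vector is mapped onto its successor, then forecasts by iterating W:

        import numpy as np

        def fit_linear_rnn(series, dim):
            # Delay-embed the scalar series: x_t = (f(t), ..., f(t+dim-1)).
            X = np.array([series[t:t + dim] for t in range(len(series) - dim + 1)])
            # One least-squares solve maps consecutive states X[:-1] -> X[1:].
            B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
            return B.T  # W such that x_{t+1} = W @ x_t

        def predict(W, state, steps):
            # Iterate the learned linear map; the newest component of each
            # successor state is the next forecast value.
            out = []
            for _ in range(steps):
                state = W @ state
                out.append(state[-1])
            return np.array(out)

        # Toy signal in the style of the MSO task: a sum of two sinusoids.
        t = np.arange(400)
        f = np.sin(0.2 * t) + 0.5 * np.sin(0.311 * t)
        W = fit_linear_rnn(f, dim=8)
        print(predict(W, f[-8:], steps=5))

    A linear map fitted this way has complex-conjugate eigenvalue pairs, which squares with the ellipse trajectories the abstract mentions: in the long run the state rotates on ellipses determined by the dominant eigenvalues.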

    Phoneme recognition with statistical modeling of the prediction error of neural networks

    This paper presents a speech recognition system which incorporates predictive neural networks. The neural networks are used to predict observation vectors of speech. The prediction error vectors are modeled at the state level by Gaussian densities, which provide the local similarity measure for the Viterbi algorithm during recognition. The system is evaluated on a continuous speech phoneme recognition task. Compared with an HMM reference system, the proposed system obtained better results in the speech recognition experiments.
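
    A hedged sketch of the scoring scheme described above (class and function names are ours; the trivial "repeat the last frame" predictor merely stands in for a trained network): each state owns a predictor and a Gaussian over its prediction-error vectors, and the Gaussian log-density is the local score a Viterbi decoder would consume.

        import numpy as np

        class StatePredictor:
            """Per-state predictive model plus Gaussian error density."""
            def __init__(self, predict_fn, mean, cov):
                self.predict_fn = predict_fn        # past frames -> predicted frame
                self.mean = np.asarray(mean)        # mean of the error vectors
                self.cov_inv = np.linalg.inv(cov)
                _, logdet = np.linalg.slogdet(cov)
                self.log_norm = -0.5 * (len(self.mean) * np.log(2 * np.pi) + logdet)

            def local_log_likelihood(self, past_frames, frame):
                # Gaussian log-density of the prediction error; this is the
                # local similarity measure fed to the Viterbi recursion.
                e = frame - self.predict_fn(past_frames)
                d = e - self.mean
                return self.log_norm - 0.5 * d @ self.cov_inv @ d

        dim = 13                                        # e.g. cepstral coefficients
        state = StatePredictor(lambda past: past[-1],   # stand-in predictor
                               mean=np.zeros(dim), cov=np.eye(dim))
        frames = np.random.randn(10, dim)
        print(state.local_log_likelihood(frames[:-1], frames[-1]))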

    Comparative assessment of methods for forecasting river runoff with different conditions of organization

    The article presents the results of a comparative analysis of traditional statistical methods and nonlinear multilayer neural networks applied to multi-year river-runoff records with the aim of forecasting the runoff. The optimal parameters for training predictive neural networks obtained in this work are recommended for the transformation and forecasting of river runoff under permanent anthropogenic influence on the basin.

    Adaptive motor control using predictive neural networks

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 1995. Includes bibliographical references (p. 97-101). By Wey Fun.

    Predictive neural networks applied to phoneme recognition

    In this paper a phoneme recognition system based on predictive neural networks is proposed. Neural networks are used to predict observation vectors of speech frames. The resulting prediction error is used for phoneme recognition 1) as a distortion measure at the frame level and 2) as a feature, which is statistically modeled by the Rayleigh distribution. Continuous speech phoneme recognition experiments are performed, and different settings of the system are evaluated.
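
    The Rayleigh modeling of the error feature can be sketched in a few lines (an illustration under our own naming, with random stand-in errors rather than real network output): the scale parameter has a closed-form maximum-likelihood estimate from the error magnitudes, and the log-density then scores a frame.

        import numpy as np

        def rayleigh_fit(error_vectors):
            # ML estimate of the Rayleigh scale from error magnitudes r >= 0:
            # sigma^2 = (1 / 2N) * sum(r_i^2).
            r = np.linalg.norm(error_vectors, axis=1)
            return np.sqrt(np.mean(r ** 2) / 2.0)

        def rayleigh_log_pdf(r, sigma):
            # log p(r) = log r - 2 log sigma - r^2 / (2 sigma^2).
            return np.log(r) - 2.0 * np.log(sigma) - r ** 2 / (2.0 * sigma ** 2)

        errors = np.random.randn(1000, 13)          # stand-in prediction errors
        sigma = rayleigh_fit(errors)
        print(rayleigh_log_pdf(np.linalg.norm(errors[0]), sigma))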

    SVMs for Automatic Speech Recognition: a Survey

    Hidden Markov Models (HMMs) are, undoubtedly, the most widely employed core technique for Automatic Speech Recognition (ASR). Nevertheless, we are still far from achieving high-performance ASR systems. Some alternative approaches, most of them based on Artificial Neural Networks (ANNs), were proposed during the late eighties and early nineties. Some of them tackled the ASR problem using predictive ANNs, while others proposed hybrid HMM/ANN systems. However, despite some achievements, the preponderance of Markov models remains a fact today. During the last decade, however, a new tool appeared in the field of machine learning that has proved able to cope with hard classification problems in several fields of application: the Support Vector Machine (SVM). SVMs are effective discriminative classifiers with several outstanding characteristics: their solution is the one with maximum margin; they can deal with samples of very high dimensionality; and their convergence to the minimum of the associated cost function is guaranteed. These characteristics have made SVMs very popular and successful. In this chapter we discuss their strengths and weaknesses in the ASR context and review the current state-of-the-art techniques. We organize the contributions in two parts: isolated-word recognition and continuous speech recognition. Within the first part we review several techniques to produce the fixed-dimension vectors needed by original SVMs. Afterwards we explore more sophisticated techniques based on kernels capable of dealing with sequences of different lengths. Among them is the DTAK kernel, simple and effective, which revives an old technique of speech recognition: Dynamic Time Warping (DTW). Within the second part, we describe some recent approaches to tackling more complex tasks such as connected-digit recognition or continuous speech recognition using SVMs. Finally we draw some conclusions and outline several ongoing lines of research.
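
    To make the DTW-kernel idea concrete, here is a simplified DTW-style sequence kernel in Python (a sketch under our assumptions: the Gaussian local kernel, the max-path accumulation, and the length normalization are choices of this illustration, not the original DTAK formulation, which uses specific path weights):

        import numpy as np

        def local_kernel(x, y, gamma=1.0):
            # Gaussian (RBF) kernel between two individual frames.
            return np.exp(-gamma * np.sum((x - y) ** 2))

        def dtw_kernel(X, Y, gamma=1.0):
            # Accumulate local kernel values along the best DTW alignment
            # path, then normalize by path length so sequences of different
            # lengths stay comparable.
            n, m = len(X), len(Y)
            G = np.full((n + 1, m + 1), -np.inf)
            G[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    k = local_kernel(X[i - 1], Y[j - 1], gamma)
                    G[i, j] = k + max(G[i - 1, j], G[i - 1, j - 1], G[i, j - 1])
            return G[n, m] / (n + m)

        # Two MFCC-like sequences of different lengths can now be compared:
        X, Y = np.random.randn(50, 13), np.random.randn(70, 13)
        print(dtw_kernel(X, Y))

    Note that kernels built on a single best-path alignment are not guaranteed to be positive semidefinite, a known caveat when plugging them into SVMs.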