37 research outputs found

    Speech and neural network dynamics

    Get PDF

    Electrocardiogram pattern recognition and analysis based on artificial neural networks and support vector machines: a review.

    Get PDF
    Computer systems for Electrocardiogram (ECG) analysis support the clinician in tedious tasks (e.g., Holter ECG monitored in Intensive Care Units) or in the prompt detection of dangerous events (e.g., ventricular fibrillation). Together with clinical applications (arrhythmia detection and heart rate variability analysis), ECG is currently being investigated in biometrics (human identification), an emerging area receiving increasing attention. Methodologies for clinical applications can have both differences and similarities with respect to biometrics. This paper reviews methods of ECG processing from a pattern recognition perspective. In particular, we focus on features commonly used for heartbeat classification. Considering the vast literature in the field and the limited space of this review, we dedicate a detailed discussion only to a few classifiers (Artificial Neural Networks and Support Vector Machines) because of their popularity; however, other techniques such as Hidden Markov Models and Kalman Filtering will also be mentioned.
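
    A minimal sketch of the kind of heartbeat-classification pipeline this review surveys is given below (not the authors' implementation): an SVM and a small MLP from scikit-learn are trained on pre-extracted beat features. The feature matrix, labels, and feature dimensionality are placeholders standing in for a real QRS-detection and feature-extraction stage.

    ```python
    # Hypothetical illustration of heartbeat classification with the two classifier
    # families discussed in the review (ANN and SVM). Feature vectors (e.g., RR
    # intervals, QRS width, wavelet coefficients) are assumed to have been extracted
    # beforehand; random data stands in for them here.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 12))       # 1000 beats, 12 morphological/timing features
    y = rng.integers(0, 2, size=1000)     # 0 = normal beat, 1 = arrhythmic beat (placeholder labels)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
        "ANN (MLP)": make_pipeline(StandardScaler(),
                                   MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)),
    }

    for name, model in models.items():
        model.fit(X_train, y_train)
        print(name)
        print(classification_report(y_test, model.predict(X_test)))
    ```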

    Stability and weight smoothing in CMAC neural networks

    Get PDF
    Although the CMAC (Cerebellar Model Articulation Controller) neural network has been successfully used in control systems for many years, its property of local generalization (the availability of trained information for network responses at adjacent untrained locations), although responsible for the network's rapid learning and efficient implementation, results in network responses that are spiky in nature when the network is trained with sparse or widely spaced training data, even when the underlying function being learned is quite smooth. Since the derivative of such a network response can vary widely, the CMAC's usefulness for solving optimization problems, as well as for certain other control system applications, can be severely limited. This dissertation presents the CMAC algorithm in sufficient detail to explore its strengths and weaknesses. Its properties of information generalization and storage are discussed, and comparisons are made with other neural network algorithms and with other adaptive control algorithms. A synopsis of the development of the fields of neural networks and adaptive control is included to lend historical perspective. A stability analysis of the CMAC algorithm for open-loop function learning is developed. This stability analysis casts the function learning problem as a unique implementation of the model reference structure and develops a Lyapunov function to prove convergence of the CMAC to the target model. A new CMAC learning rule is developed by treating the CMAC as a set of simultaneous equations in a constrained optimization problem and making appropriate choices for the weight penalty matrix in the cost equation. This dissertation then presents a new CMAC learning algorithm which has the property of weight smoothing, improving generalization, function approximation in partially trained networks, and the partial derivatives of learned functions. This new learning algorithm is significant in that it derives from an optimum solution and demonstrates a dramatic performance improvement for function learning in the presence of widely spaced training data. Developed from a completely unique analytical direction, this algorithm represents a coupling and extension of single- and multi-resolution CMAC algorithms developed by other researchers. The insights derived from the analysis of the optimum solution and the resulting new learning rules are discussed, and suggestions for future work are presented.
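
    The sketch below is a toy one-dimensional CMAC (a standard Albus-style table lookup with local generalization, not the dissertation's weight-smoothing rule). It illustrates how widely spaced training points are learned quickly but yield a spiky interpolated response; the quantization resolution, generalization width, and learning rate are arbitrary choices.

    ```python
    import numpy as np

    class CMAC1D:
        """Toy one-dimensional CMAC with local generalization width C."""

        def __init__(self, n_cells=100, C=8, beta=0.5):
            self.n_cells = n_cells          # input quantization resolution
            self.C = C                      # number of overlapping weights active per input
            self.beta = beta                # learning rate
            self.w = np.zeros(n_cells + C)  # weight table

        def _active(self, x):
            # Quantize x in [0, 1] and return the indices of the C active weights.
            q = int(np.clip(x, 0.0, 1.0) * (self.n_cells - 1))
            return np.arange(q, q + self.C)

        def predict(self, x):
            return self.w[self._active(x)].sum()

        def train(self, x, target):
            idx = self._active(x)
            error = target - self.w[idx].sum()
            self.w[idx] += self.beta * error / self.C  # spread the correction over the active weights

    # Widely spaced samples of a smooth function: the CMAC fits the training points
    # quickly, but the interpolated response between them is spiky -- the behaviour
    # the dissertation's weight-smoothing rule is designed to improve.
    cmac = CMAC1D()
    train_x = np.linspace(0.0, 1.0, 6)
    for _ in range(50):
        for x in train_x:
            cmac.train(x, np.sin(2 * np.pi * x))

    print([round(cmac.predict(x), 2) for x in np.linspace(0.0, 1.0, 21)])
    ```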

    Theory and applications of artificial neural networks

    Get PDF
    In this thesis some fundamental theoretical problems about artificial neural networks and their application in communication and control systems are discussed. We consider the convergence properties of the Back-Propagation algorithm, which is widely used for training artificial neural networks, and two stepsize variation techniques are proposed to accelerate convergence. Simulation results demonstrate significant improvement over conventional Back-Propagation algorithms. We also discuss the relationship between the generalization performance of artificial neural networks and their structure and representation strategy. It is shown that the structure of the network, which represents a priori knowledge of the environment, has a strong influence on generalization performance. A theorem about the number of hidden units and the capacity of self-association MLP (Multi-Layer Perceptron) type networks is also given in the thesis. In the application part of the thesis, we discuss the feasibility of using artificial neural networks for nonlinear system identification. Some advantages and disadvantages of this approach are analyzed. The thesis continues with a study of artificial neural networks applied to communication channel equalization and the problem of call access control in broadband ATM (Asynchronous Transfer Mode) communication networks. A final chapter provides overall conclusions and suggestions for further work.
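
    The thesis's particular stepsize variation techniques are not reproduced here; as a generic illustration of the idea, the sketch below trains a small one-hidden-layer network with back-propagation and a simple "bold driver" style learning-rate adaptation (grow the step while the error falls, shrink it when the error rises). The network size, learning-rate factors, and XOR task are arbitrary choices.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)        # input -> hidden
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)        # hidden -> output
    lr, prev_err = 0.5, np.inf

    for epoch in range(5000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        err = np.mean((out - y) ** 2)

        # "bold driver" stepsize variation: accelerate while the error decreases,
        # back off when it increases (a stand-in for the thesis's techniques)
        lr = min(lr * 1.05, 1.0) if err < prev_err else lr * 0.5
        prev_err = err

        # backward pass: standard back-propagation for MSE loss and sigmoid units
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print("final MSE:", round(float(err), 4))
    ```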

    A Predictive Fuzzy-Neural Autopilot for the Guidance of Small Motorised Marine Craft

    Get PDF
    This thesis investigates the design and evaluation of a control system that is able to adapt quickly to changes in environment and steering characteristics. This type of controller is particularly suited to applications with wide-ranging working conditions such as those experienced by small motorised craft. A small motorised craft is assumed to be highly agile and prone to disturbances, being thrown off-course very easily when travelling at high speed but rather heavy and sluggish at low speeds. Unlike large vessels, the steering characteristics of the craft will change tremendously with a change in forward speed. Any new design of autopilot needs to be able to compensate for these changes in dynamic characteristics to maintain near optimal levels of performance. This study identifies the problems that need to be overcome and the variables involved. A self-organising fuzzy logic controller is developed and tested in simulation. This type of controller learns on-line but has certain performance limitations. The major original contribution of this research investigation is the development of an improved self-adaptive and predictive control concept, the Predictive Self-organising Fuzzy Logic Controller (PSoFLC). The novel feature of the control algorithm is that it uses a neural network as a predictive simulator of the boat's future response, and this network is then incorporated into the control loop to improve the course-changing as well as course-keeping capabilities of the autopilot investigated. The autopilot is tested in simulation to validate the working principle of the concept and to demonstrate the self-tuning of the control parameters. Further work is required to establish the suitability of the proposed novel concept to other control applications.
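
    The PSoFLC itself is not reconstructed here; the sketch below only illustrates the underlying principle of keeping a learned predictor of the craft's heading response inside the control loop and choosing the rudder command whose predicted future heading comes closest to the set course. The first-order yaw dynamics and the online-trained linear predictor are invented stand-ins for the real boat and its neural network simulator.

    ```python
    import numpy as np

    # Hypothetical first-order yaw dynamics standing in for the boat:
    # the heading rate relaxes towards K * rudder with time constant T.
    def boat_step(heading, rate, rudder, K=0.5, T=2.0, dt=0.1):
        rate += dt * (K * rudder - rate) / T
        return heading + dt * rate, rate

    # A tiny online-trained linear predictor plays the role of the neural simulator:
    # it learns next_rate from (rate, rudder) by LMS and is used for look-ahead.
    w = np.zeros(3)  # coefficients for [bias, rate, rudder] -> next rate

    def predict(rate, rudder):
        return w @ np.array([1.0, rate, rudder])

    def choose_rudder(heading, rate, target, candidates=np.linspace(-0.5, 0.5, 21),
                      horizon=10, dt=0.1):
        best, best_cost = 0.0, np.inf
        for r in candidates:                 # evaluate each candidate command on the model
            h, v = heading, rate
            for _ in range(horizon):         # roll the predictor forward in time
                v = predict(v, r)
                h += dt * v
            cost = (target - h) ** 2
            if cost < best_cost:
                best, best_cost = r, cost
        return best

    heading, rate, target = 0.0, 0.0, np.deg2rad(30)     # 30-degree course change
    for step in range(600):
        rudder = choose_rudder(heading, rate, target)
        new_heading, new_rate = boat_step(heading, rate, rudder)
        # online LMS update of the predictor from the observed response
        x = np.array([1.0, rate, rudder])
        w += 0.1 * (new_rate - w @ x) * x
        heading, rate = new_heading, new_rate

    print("final heading error (deg):", round(np.rad2deg(target - heading), 2))
    ```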

    Power System Stability Analysis using Neural Network

    Full text link
    This work focuses on the design of modern power system controllers for automatic voltage regulators (AVR) and the application of machine learning (ML) algorithms to correctly classify the stability of the IEEE 14-bus system. The LQG controller gives the best time-domain characteristics compared to the PID and LQR controllers while the sensor and amplifier gains are changed dynamically. After that, the IEEE 14-bus system is modeled, and contingency scenarios are simulated in the Modelica (Dymola) environment. An application of the Monte Carlo principle with a modified Poisson probability distribution, reviewed from the literature, reduces the total number of contingencies from 1000k to 20k. The damping ratios of the contingencies are then extracted, pre-processed, and fed to ML algorithms such as logistic regression, support vector machines, decision trees, random forests, Naive Bayes, and k-nearest neighbor. Neural networks (NN) with one, two, three, five, seven, and ten hidden layers and with 25%, 50%, 75%, and 100% data sizes are considered to observe and compare the prediction time, accuracy, precision, and recall values. At the lowest data size of 25%, the networks with two hidden layers and with a single hidden layer reach accuracies of 95.70% and 97.38%, respectively. Increasing the number of hidden layers beyond two does not increase the overall score and takes a much longer prediction time, so such configurations can be discarded for similar analyses. Moreover, when five, seven, and ten hidden layers are used, the F1 score decreases. However, in practical scenarios, where the data set contains more features and a variety of classes, a larger data size is required for the NN to train properly. This research provides more insight into damping-ratio-based system stability prediction with traditional ML algorithms and neural networks.
    Comment: Master's Thesis Dissertation
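
    As an illustrative sketch only (the Dymola-derived damping-ratio data set is not available here), the snippet below compares the same family of scikit-learn classifiers on synthetic damping-ratio features, labelling a contingency unstable when its minimum damping ratio falls below zero; the number of modes, the labelling rule, and the network sizes are assumptions.

    ```python
    # Hypothetical stand-in for the damping-ratio data set: each contingency is
    # described by the damping ratios of a few dominant oscillatory modes, and a
    # case is labelled unstable (1) when its minimum damping ratio is negative.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(42)
    X = rng.normal(loc=0.05, scale=0.05, size=(5000, 4))  # damping ratios of 4 modes per case
    y = (X.min(axis=1) < 0.0).astype(int)                 # 1 = unstable contingency

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "SVM": SVC(),
        "decision tree": DecisionTreeClassifier(random_state=0),
        "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
        "naive Bayes": GaussianNB(),
        "k-NN": KNeighborsClassifier(),
        "NN, 1 hidden layer": MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0),
        "NN, 2 hidden layers": MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=1000, random_state=0),
    }

    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(f"{name:22s} F1 = {f1_score(y_te, model.predict(X_te)):.3f}")
    ```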

    An instruction systolic array architecture for multiple neural network types

    Get PDF
    Modern electronic systems, especially sensor and imaging systems, are beginning to incorporate their own neural network subsystems. In order for these neural systems to learn in real time they must be implemented using VLSI technology, with as much of the learning process incorporated on-chip as possible. The majority of current VLSI implementations literally implement a series of neural processing cells which can be connected together in an arbitrary fashion. Many do not perform the entire neural learning process on-chip, instead relying on other external systems to carry out part of the computation requirements of the algorithm. The work presented here utilises two-dimensional instruction systolic arrays in an attempt to define a general neural architecture which is closer to the biological basis of neural networks: it is the synapses themselves, rather than the neurons, that have dedicated processing units. A unified architecture is described which can be programmed at the microcode level in order to facilitate the processing of multiple neural network types. An essential part of neural network processing is the neuron activation function, which can range from a sequential algorithm to a discrete mathematical expression. The architecture presented can easily carry out the sequential functions, and introduces a fast method of mathematical approximation for the more complex functions. This can be evaluated on-chip, thus implementing the entire neural process within a single system. VHDL circuit descriptions for the chip have been generated, and the systolic processing algorithms and associated microcode instruction set for three different neural paradigms have been designed. A software simulator of the architecture has been written, giving results for several common applications in the field.
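
    The abstract does not specify the on-chip approximation scheme; as a generic illustration of replacing a "complex" activation function with hardware-friendly arithmetic, the sketch below builds a piecewise-linear, lookup-table approximation of the sigmoid (one table read plus one multiply-add per evaluation) and reports its worst-case error. The segment count and input range are arbitrary.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Precompute breakpoints and slopes: the kind of small table a fixed-point
    # datapath could evaluate with a single multiply-add per activation.
    LO, HI, SEGMENTS = -8.0, 8.0, 32
    xs = np.linspace(LO, HI, SEGMENTS + 1)
    ys = sigmoid(xs)
    slopes = np.diff(ys) / np.diff(xs)

    def sigmoid_pwl(x):
        x = np.clip(x, LO, HI)                                       # saturate outside the table
        i = np.minimum(((x - LO) / (HI - LO) * SEGMENTS).astype(int),
                       SEGMENTS - 1)                                 # segment index
        return ys[i] + slopes[i] * (x - xs[i])                       # linear interpolation

    test = np.linspace(-10, 10, 10001)
    print("max abs error:", float(np.max(np.abs(sigmoid_pwl(test) - sigmoid(test)))))
    ```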

    Nonlinear neural networks: Principles, mechanisms, and architectures

    Full text link

    Design of Neural Network Filters

    Get PDF
    The subject of the present licentiate thesis is the design of neural network filters. Filters based on neural networks can be regarded as extensions of the classical linear adaptive filter aimed at modelling nonlinear relationships. The main emphasis is placed on a neural network implementation of the nonrecursive, nonlinear adaptive model with additive noise. The aim is to clarify a number of phases involved in the design of neural network architectures for carrying out various "black-box" modelling tasks such as system identification, inverse modelling, and time-series prediction. The principal contributions include the formulation of a neural network based canonical filter representation, which forms the basis for the development of an architecture classification scheme. Essentially, this concerns a distinction between global and local models. This allows a number of well-known neural network architectures to be classified, and it furthermore opens the possibility of developing entirely new structures. In this connection, a review of a number of well-known architectures is given; particular emphasis is placed on the treatment of the multi-layer perceptron neural network.
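
    As a hedged sketch of the nonrecursive nonlinear filter structure studied in the thesis (a tapped delay line feeding a small multi-layer perceptron), the snippet below performs one-step prediction of a synthetic nonlinear time series with additive noise; the lag length, network size, and generating signal are arbitrary choices.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic nonlinear time series with additive noise (stand-in for real data).
    n = 2000
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = 0.6 * x[t - 1] - 0.4 * x[t - 2] + 0.3 * np.sin(x[t - 1]) + 0.05 * rng.normal()

    # Nonrecursive (NFIR-style) neural filter: a tapped delay line of past samples
    # feeding an MLP that predicts the next sample.
    LAGS = 5
    X = np.array([x[t - LAGS:t] for t in range(LAGS, n)])  # delay-line inputs
    y = x[LAGS:]                                           # one-step-ahead targets

    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X[:split], y[:split])

    pred = model.predict(X[split:])
    print("one-step prediction MSE:", round(float(np.mean((pred - y[split:]) ** 2)), 5))
    ```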