100 research outputs found

    Stability and dissipativity analysis of static neural networks with time delay

    This paper is concerned with the problems of stability and dissipativity analysis for static neural networks (NNs) with time delay. Improved delay-dependent stability criteria are established for static NNs with time-varying or time-invariant delay using the delay partitioning technique. Based on these criteria, several delay-dependent sufficient conditions are given to guarantee the dissipativity of static NNs with time delay. All results given in this paper depend not only on the time delay but also on the number of delay partitions. Examples are given to illustrate the effectiveness and reduced conservatism of the proposed results.
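    For context, the class of models studied in such work is usually written as follows; this is the standard static NN formulation with bounded time-varying delay (generic notation, not necessarily the paper's own):

```latex
% Standard static NN model with bounded time-varying delay:
%   A = diag(a_1,...,a_n) > 0 (self-feedback), W (weight matrix),
%   b (bias), f(.) activation, 0 <= tau(t) <= tau, tau'(t) <= mu.
\dot{x}(t) = -A\,x(t) + f\bigl(W x(t-\tau(t)) + b\bigr)
% Delay partitioning divides [0, tau] into m equal segments
% [(i-1)tau/m, i*tau/m], i = 1,...,m, and attaches a
% Lyapunov-Krasovskii term to each segment; the resulting LMI
% conditions become less conservative as the partition number m grows.
```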

    Obstacle avoidance for wheeled mobile robotic systems


    Development of neural units with higher-order synaptic operations and their applications to logic circuits and control problems

    Neural networks play an important role in the execution of goal-oriented paradigms. They offer flexibility, adaptability and versatility, so that a variety of approaches may be used to meet a specific goal, depending upon the circumstances and the requirements of the design specifications. The development of higher-order neural units with higher-order synaptic operations opens a new window on complex problems such as control of aerospace vehicles, pattern recognition, and image processing. The neural models described in this thesis treat the behavior of a single neuron as the basic computing unit in neural information processing operations. Each computing unit in the network is based on the concept of an idealized neuron in the central nervous system (CNS).

    Recent mathematical models and architectures for neuro-control systems have generated considerable theoretical and industrial interest, and advances in static and dynamic neural networks have had a profound impact on the field of neuro-control. Neural networks consisting of several layers of neurons with linear synaptic operations have been used extensively in applications such as pattern recognition, system identification, and control of complex systems such as flexible structures and intelligent robotic systems. The conventional linear neural models are highly simplified models of the biological neuron; using this model, many neural morphologies, usually referred to as multilayer feedforward neural networks (MFNNs), have been reported in the literature. The performance of the neurons is greatly affected when a layer of neurons is implemented for system identification, pattern recognition, and control problems. Simulation studies of the XOR logic showed that neurons with linear synaptic operations are limited to linearly separable pattern distributions, although they can perform a variety of complex mathematical operations when implemented in the form of a network structure (a sketch of the higher-order alternative follows this abstract). Such networks suffer from limitations in computational efficiency and learning capability, and these models ignore many salient features of biological neurons, such as time delays, cross- and self-correlations, and feedback paths, which are otherwise very important in neural activity.

    In this thesis an effort is made to develop new mathematical models of neurons that belong to the class of higher-order neural units (HONUs) with higher-order synaptic operations, such as quadratic and cubic synaptic operations. The advantage of this type of neural unit lies in improved neural performance, but that performance comes at the cost of an exponential increase in parameters, which slows the training process. In this context, a novel method of representing the weight parameters without sacrificing neural performance is introduced. A generalised representation of the higher-order synaptic operation for these neural structures is proposed, and it is shown that many existing neural structures can be derived from this generalised representation. In 1943, McCulloch and Pitts modeled the stimulus-response behavior of the primitive neuron using threshold logic, and it has since become common practice to implement logic circuits using neural structures. In this research, logic circuits such as OR, AND, and XOR were realized using the proposed neural structures. These neural structures were also implemented as neuro-controllers for control problems such as satellite attitude control and model reference adaptive control, and a comparative study of their performance against that of conventional linear controllers is presented. The simulation results obtained in this research apply only to the simplified models presented in the simulation studies.
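    As a minimal illustration of why a quadratic synaptic operation escapes the linear-separability limit noted above, the following sketch realizes XOR with a single second-order unit. The weight matrix is hand-picked for the illustration, not taken from the thesis:

```python
import numpy as np

def quadratic_synaptic_operation(x, W):
    """Second-order synaptic operation of a HONU:
    z = x_a^T W x_a, where x_a = [1, x] is the bias-augmented input."""
    xa = np.concatenate(([1.0], x))   # prepend the bias term
    return xa @ W @ xa

# Symmetric weights encoding z = x1 + x2 - 2*x1*x2, which equals XOR
# on Boolean inputs (hypothetical values chosen for the demonstration).
W = np.array([[0.0,  0.5,  0.5],
              [0.5,  0.0, -1.0],
              [0.5, -1.0,  0.0]])

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    z = quadratic_synaptic_operation(np.array(x, dtype=float), W)
    print(x, "->", int(round(z)))   # prints 0, 1, 1, 0
```

    The same construction with n inputs needs on the order of n^2 weights (and n^3 for cubic synapses), which is exactly the parameter growth that the compact weight representation developed in the thesis is meant to tame.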

    PROPOSED METHODOLOGY FOR OPTIMIZING THE TRAINING PARAMETERS OF A MULTILAYER FEED-FORWARD ARTIFICIAL NEURAL NETWORK USING A GENETIC ALGORITHM

    An artificial neural network (ANN), or simply "neural network" (NN), is a powerful mathematical or computational model inspired by the structure and/or functional characteristics of biological neural networks. Although ANNs have been developing rapidly for many years, there are still challenges in developing an ANN model that performs effectively for the problem at hand. ANNs can be categorized into three main types: single-layer networks, recurrent networks, and multilayer feed-forward networks. In a multilayer feed-forward ANN, actual performance is highly dependent on the selection of architecture and training parameters, yet a systematic method for optimizing these parameters is still an active research area. This work focuses on multilayer feed-forward ANNs because of their generalization capability, structural simplicity, and ease of mathematical analysis. Even though several rules for the optimization of multilayer feed-forward ANN parameters are available in the literature, most networks are still calibrated via a trial-and-error procedure that depends mainly on the type of problem and on the past experience and intuition of the expert. To overcome these limitations, there have been attempts to use genetic algorithms (GAs) to optimize some of these parameters; however, most if not all of the existing approaches address only a subset of the architecture and training parameters. In contrast, the GAANN approach presented here covers most aspects of the multilayer feed-forward ANN in a more comprehensive way. This research focuses on the use of a binary-encoded genetic algorithm to implement efficient search strategies for the optimal architecture and training parameters of a multilayer feed-forward ANN. In particular, the GA is used to determine the optimal number of hidden layers, number of neurons in each hidden layer, type of training algorithm, type of activation function of the hidden and output neurons, initial weights, learning rate, momentum term, and epoch size (a chromosome layout along these lines is sketched below). In this thesis, the approach has been analyzed and algorithms that simulate the new approach have been mapped out.
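    To make the binary encoding concrete, here is a minimal sketch of how such a chromosome might be decoded into training parameters. The gene names, bit widths, and value ranges are illustrative assumptions, not the thesis's actual layout:

```python
import random

# Hypothetical gene layout: (name, bit width, decoder for the raw integer).
GENES = [
    ("hidden_layers", 2, lambda v: 1 + v),                 # 1..4 layers
    ("neurons",       4, lambda v: 2 + v),                 # 2..17 per layer
    ("activation",    2, lambda v: ["logsig", "tanh", "relu", "linear"][v]),
    ("learning_rate", 4, lambda v: 0.01 + 0.02 * v),       # 0.01..0.31
    ("momentum",      3, lambda v: v / 8.0),               # 0.0..0.875
    ("epochs",        3, lambda v: 50 * (v + 1)),          # 50..400
]
CHROM_LEN = sum(width for _, width, _ in GENES)

def decode(bits):
    """Map a binary chromosome to a dict of ANN training parameters."""
    params, i = {}, 0
    for name, width, mapper in GENES:
        raw = int("".join(map(str, bits[i:i + width])), 2)
        params[name] = mapper(raw)
        i += width
    return params

chromosome = [random.randint(0, 1) for _ in range(CHROM_LEN)]
print(decode(chromosome))  # one candidate configuration for fitness evaluation
```

    A GA would then evolve populations of such bit strings with crossover and mutation, using the validation error of each decoded network as its fitness.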

    Tracking Control Based on Recurrent Neural Networks for Nonlinear Systems with Multiple Inputs and Unknown Deadzone

    This paper deals with the problem of trajectory tracking for a broad class of uncertain nonlinear systems with multiple inputs, each subject to an unknown symmetric deadzone. On the basis of a model of the deadzone as a combination of a linear term and a disturbance-like term (written out below), a continuous-time recurrent neural network is employed directly to identify the uncertain dynamics. Using a Lyapunov analysis, the exponential convergence of the identification error to a bounded zone is demonstrated. Subsequently, by a proper control law, the state of the neural network is compelled to follow a bounded reference trajectory. This control law is designed in such a way that the singularity problem is conveniently avoided and the exponential convergence to a bounded zone of the difference between the state of the neural identifier and the reference trajectory can be proven. Thus, the exponential convergence of the tracking error to a bounded zone and the boundedness of all closed-loop signals can be guaranteed. One of the main advantages of the proposed strategy is that the controller works satisfactorily without any specific knowledge of an upper bound for the unmodeled dynamics and/or the disturbance term.
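    The decomposition referred to above can be written as follows for a standard symmetric deadzone; the slope m and break-point b are generic symbols, not necessarily the paper's notation:

```latex
% Symmetric deadzone with slope m > 0 and break-point b > 0:
\Phi(u) =
\begin{cases}
m\,(u - b), & u \ge b,\\
0,          & |u| < b,\\
m\,(u + b), & u \le -b,
\end{cases}
\qquad\Longrightarrow\qquad
\Phi(u) = m\,u + \delta(u), \quad |\delta(u)| \le m\,b
% A linear term plus a bounded disturbance-like term: the bounded
% residual is what the neural identifier and the Lyapunov bounds absorb.
```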

    NASA SBIR abstracts of 1991 phase 1 projects

    The objectives of 301 projects placed under contract by the Small Business Innovation Research (SBIR) program of the National Aeronautics and Space Administration (NASA) are described. These projects were selected competitively from among proposals submitted to NASA in response to the 1991 SBIR Program Solicitation. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 301, in order of its appearance in the body of the report. Appendixes are included that provide additional information about the SBIR program and permit cross-referencing of the 1991 Phase 1 projects by company name, state, principal investigator, responsible NASA Field Center, and NASA contract number.

    Eddy current defect response analysis using sum of Gaussian methods

    This dissertation studies methods to automatically detect eddy current differential-coil defect signatures and approximate them as a summed collection of Gaussian functions (SoG). Datasets of varying material, defect size, inspection frequency, and coil diameter were investigated. Dimensionally reduced representations of the defect responses were obtained using common existing reduction methods and novel SoG-based enhancements to them. The efficacy of the SoG-enhanced representations was studied using common interpretable machine learning (ML) classifier designs, with the SoG representations showing significant improvement in common analysis metrics.
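    As an illustration of the SoG idea (not the dissertation's actual pipeline), the following sketch fits a sum of Gaussians to a synthetic two-lobe signal resembling a differential-coil defect response; the data and the choice of two terms are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def sum_of_gaussians(x, *p):
    """Sum of K Gaussians; p = (a1, mu1, s1, a2, mu2, s2, ...)."""
    y = np.zeros_like(x)
    for a, mu, s in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-0.5 * ((x - mu) / s) ** 2)
    return y

# Synthetic stand-in for a differential-coil signature: two
# opposite-signed lobes plus noise (not real eddy current data).
x = np.linspace(-1.0, 1.0, 400)
signal = (0.8 * np.exp(-0.5 * ((x + 0.25) / 0.1) ** 2)
          - 0.8 * np.exp(-0.5 * ((x - 0.25) / 0.1) ** 2))
signal += 0.02 * np.random.default_rng(0).standard_normal(x.size)

p0 = [1.0, -0.2, 0.15, -1.0, 0.2, 0.15]   # initial guess for K = 2 terms
popt, _ = curve_fit(sum_of_gaussians, x, signal, p0=p0)
print(popt.reshape(-1, 3))  # fitted (amplitude, center, width) per Gaussian
```

    The fitted triplets then serve as a compact, low-dimensional representation of the defect response that can feed an interpretable classifier.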

    Analysis and synthesis techniques of nonlinear dynamical systems with applications to diagnostics of controlled thermonuclear fusion reactors

    Nonlinear dynamical systems are of wide interest to engineers, physicists and mathematicians, because most physical systems in nature are inherently nonlinear. The nonlinearity of these systems has consequences for their time evolution, which in some cases can be completely unpredictable and apparently random, although fundamentally deterministic; chaotic systems are striking examples of this. In most cases there are no hard and fast rules for analysing these systems. Often their solutions cannot be obtained in closed form, and it is necessary to resort to numerical integration techniques, which, in the case of high sensitivity to initial conditions, lead to ill-conditioning problems and high computational costs. Dynamical systems theory, the branch of mathematics used to describe the behaviour of these systems, focuses not on finding exact solutions to the equations describing the dynamical system, but rather on determining whether the system settles to a steady state in the long term and what the possible attractors are, e.g. quasi-periodic or chaotic attractors. Regarding synthesis, from both a practical and a theoretical standpoint it is very desirable to develop methods of synthesizing these systems; although an extensive theory has been developed for linear systems, no complete formulation for nonlinear system synthesis exists today.

    The main topic of this thesis is the solution of engineering problems related to the analysis and synthesis of nonlinear and chaotic systems. In particular, a new algorithm that optimizes Lyapunov exponent estimation in piecewise linear systems has been applied to PWL and polynomial chaotic systems. In the field of complex system synthesis, a systematic method to design systems of order 2n characterized by two positive Lyapunov exponents has been proposed; this procedure couples nth-order chaotic systems through a suitable nonlinear coupling function. Furthermore, a method for fault detection has been developed. In the field of time series analysis, a new denoising method, based on the wavelet transform of the noisy signal, has been described; the method implements a variable thresholding, whose optimal value is determined by analysing the cross-correlation between the denoised signal and the residuals and by applying different criteria depending on the particular decomposition level. Finally, a study of the dynamical behaviour of Type I ELMs has been performed with a view to future modelling of the phenomenon; in this context, a statistical analysis of the time intervals between successive Type I ELMs has been proposed.

    The dynamical characterization of a nonlinear system identifies its qualitative long-term behaviour. Lyapunov exponents are tools for determining the asymptotic behaviour of a dynamical system: they quantify the rate of divergence of nearby trajectories, a key signature of chaotic dynamics. Existing techniques for computing Lyapunov exponents are computationally expensive, which has somewhat precluded their extensive use in large-scale problems, and numerical difficulties arise during the computation, so it must be approached with care; the implementation of fast and accurate algorithms for computing Lyapunov exponents is therefore a problem of current interest. In many practical cases the state vector of the system is not available, and a time series is the only information at hand. Time series analysis extracts meaningful statistics and other characteristics from such data, with the aim of understanding the structure and the underlying factors that produced the observations. For example, one problem in controlled thermonuclear fusion reactors is the analysis of time series of the Dα radiation, characteristic of the phenomenon called Edge Localized Modes (ELMs). Understanding and controlling ELMs are crucial problems for the operation of ITER, where the type-I ELMy H-mode has been chosen as the standard operating scenario, and determining whether ELM dynamics is chaotic or random is crucial for a correct description of the ELM cycle; the dynamical characterization of the time series, carried out in the so-called embedding space, can be used to distinguish random from chaotic series. One of the most frequent problems encountered in the analysis of experimental time series is the presence of noise, which in some cases can reach 10% or 20% of the signal, so it is essential, before any analysis, to develop an appropriate and robust denoising technique. When a model of the system is known, time series analysis can be applied to fault detection; this problem can be formalized as a parameter identification problem, and in such cases differential algebra provides useful information about the nature of the relations between the scalar observable, the state variables, and the other parameters of the system. Finally, the synthesis of chaotic systems is a fundamental and interesting problem, providing a method of realizing not only existing mathematical models but also important real physical systems. Most methods presented in the literature demonstrate the presence of chaotic dynamics numerically, by computing the Lyapunov exponents; in particular, hyperchaotic dynamics are identified by the presence of two positive Lyapunov exponents.
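    A minimal sketch of wavelet denoising with level-dependent soft thresholding, in the spirit of the method described above; the per-level rule shown here is a common illustrative choice, not the thesis's cross-correlation criterion:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=5):
    """Denoise via the discrete wavelet transform with a soft
    threshold that varies by decomposition level (noise level sigma
    estimated robustly from the finest detail coefficients)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
    denoised = [coeffs[0]]                                  # keep approximation
    for j, detail in enumerate(coeffs[1:], start=1):
        # Illustrative level-dependent rule: shrink the universal
        # threshold at coarser levels (an assumption for the sketch).
        thr = sigma * np.sqrt(2 * np.log(len(signal))) / np.sqrt(j)
        denoised.append(pywt.threshold(detail, thr, mode="soft"))
    return pywt.waverec(denoised, wavelet)

t = np.linspace(0.0, 1.0, 1024)
noisy = np.sin(12 * np.pi * t) + 0.2 * np.random.default_rng(1).standard_normal(t.size)
clean = wavelet_denoise(noisy)
```

    In the thesis's scheme the threshold at each level would instead be tuned by examining the cross-correlation between the denoised signal and the residuals.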

    LSTM Networks for Detection and Classification of Anomalies in Raw Sensor Data

    In order to ensure the validity of sensor data, it must be thoroughly analyzed for various types of anomalies. Traditional machine learning methods for anomaly detection in sensor data are based on domain-specific feature engineering: domain knowledge is used to analyze the sensor data and manually create statistics-based features, which are then used to train machine learning models to detect and classify the anomalies. Although this methodology is used in practice, it has a significant drawback: feature extraction is usually labor-intensive and requires considerable effort from domain experts. An alternative approach is to use deep learning algorithms. Research has shown that modern deep neural networks are very effective at automatically extracting abstract features from raw data in classification tasks. Long short-term memory networks, or LSTMs for short, are a special kind of recurrent neural network capable of learning long-term dependencies, and they have proved especially effective in the classification of raw time-series data in various domains. This dissertation systematically investigates the effectiveness of the LSTM model for anomaly detection and classification in raw time-series sensor data. As a proof of concept, this work used time-series data from sensors that measure blood glucose levels. A large number of time-series sequences was created from a genuine medical diabetes dataset, and anomalous series were constructed by six methods that interspersed patterns of common anomaly types in the data. An LSTM network model was trained with k-fold cross-validation on both anomalous and valid series to classify raw time-series sequences into one of seven classes: non-anomalous, and one class for each of the six anomaly types (a minimal model sketch follows this abstract). As a control, the detection and classification accuracy of the LSTM was compared with that of four traditional machine learning classifiers: support vector machines, random forests, naive Bayes, and shallow neural networks. The performance of all classifiers was evaluated on nine metrics: precision, recall, and the F1-score, each measured from the micro, macro, and weighted perspectives. While the traditional models were trained on feature vectors derived from the raw data, based on knowledge of common sources of anomaly, the LSTM was trained on the raw time series. Experimental results indicate that the performance of the LSTM was comparable to the best traditional classifiers, achieving 99% accuracy on all nine metrics. The model requires no labor-intensive feature engineering, and the fine-tuning of its architecture and hyper-parameters can be carried out in a fully automated way. This study therefore finds LSTM networks an effective solution for anomaly detection and classification in sensor data.
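    A minimal sketch of such an LSTM classifier in Keras; the sequence length, layer size, and training data are hypothetical stand-ins for the study's setup:

```python
import numpy as np
from tensorflow import keras

# Hypothetical shapes: sequences of 288 glucose readings, 7 classes
# (non-anomalous plus six anomaly types); the real dataset differs.
SEQ_LEN, N_CLASSES = 288, 7

model = keras.Sequential([
    keras.layers.Input(shape=(SEQ_LEN, 1)),   # raw univariate time series
    keras.layers.LSTM(64),                    # learns temporal dependencies
    keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy stand-in data; the study used k-fold cross-validation on
# sequences derived from a genuine diabetes dataset.
X = np.random.rand(256, SEQ_LEN, 1).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2)
```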