
    Variable neural networks for adaptive control of nonlinear systems

    This paper is concerned with the adaptive control of continuous-time nonlinear dynamical systems using neural networks. A novel neural network architecture, referred to as a variable neural network, is proposed and shown to be useful in approximating the unknown nonlinearities of dynamical systems. In a variable neural network, the number of basis functions can be either increased or decreased over time, according to specified design strategies, so that the network neither overfits nor underfits the data set. Based on the Gaussian radial basis function (GRBF) variable neural network, an adaptive control scheme is presented. The locations of the centers and the widths of the GRBFs in the variable neural network are chosen to strike a compromise between orthogonality and smoothness. The weight-adaptation laws, developed using the Lyapunov synthesis approach, guarantee the stability of the overall control scheme even in the presence of modeling errors. The tracking errors converge to the required accuracy through the adaptive control algorithm derived by combining the variable neural network and Lyapunov synthesis techniques. The operation of the adaptive control scheme is demonstrated on two simulated examples.
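
    To make the mechanics concrete, below is a minimal sketch of a GRBF-based adaptive tracking loop. It assumes a scalar plant x_dot = f(x) + u with f unknown, a fixed grid of centers (the paper grows and shrinks this set over time), and illustrative constants gamma, k, and width; the weight law w_dot = gamma * e * phi(x) is the standard Lyapunov-derived update, not the paper's exact algorithm.

```python
import numpy as np

def phi(x, centers, width):
    """Gaussian radial basis functions evaluated at scalar x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

f_true = lambda x: x * np.sin(x)          # the "unknown" nonlinearity (simulation only)
centers = np.linspace(-3.0, 3.0, 15)      # fixed grid; the paper adapts this set
width, gamma, k = 0.5, 20.0, 5.0          # illustrative design constants (assumed)
w = np.zeros_like(centers)                # network weights, adapted online

x, dt = 0.5, 1e-3
for step in range(20000):
    t = step * dt
    xd, xd_dot = np.sin(t), np.cos(t)     # reference trajectory and its derivative
    e = x - xd                            # tracking error
    f_hat = w @ phi(x, centers, width)    # network estimate of f(x)
    u = -k * e - f_hat + xd_dot           # cancel the estimate, add feedforward
    w += dt * gamma * e * phi(x, centers, width)  # Lyapunov-derived weight law
    x += dt * (f_true(x) + u)             # Euler step of the plant
print("final tracking error:", abs(x - np.sin(20000 * dt)))
```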

    Unconstraining Graph-Constrained Group Testing

    In network tomography, one goal is to identify a small set of failed links in a network using as little information as possible. One way of setting up this problem is called graph-constrained group testing, a variant of the classical combinatorial group testing problem in which the tests one is allowed to perform are additionally constrained by a graph; here, the graph is given by the underlying network topology. The main contribution of this work is to show that for most graphs, the constraints imposed by the graph are no constraint at all. That is, the number of tests required to identify the failed links in graph-constrained group testing is near-optimal even for the corresponding group testing problem with no graph constraints. Our approach is based on a simple randomized construction of tests. To analyze our construction, we prove new results about the size of giant components in randomly sparsified graphs. Finally, we provide empirical results which suggest that our connected-subgraph tests perform well not just in theory but also in practice, and in particular on a real-world network topology.
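
    The sketch below illustrates one plausible reading of the randomized construction; the parameter names and the simple COMP-style decoder are ours, not the paper's. Each test keeps every edge independently with probability p and pools the edges of the largest surviving component; a link is declared healthy if it appears in any negative test.

```python
import random
import networkx as nx

def make_test(G, p, rng):
    """One connected-subgraph test: sparsify, then pool the giant component's edges."""
    H = nx.Graph(e for e in G.edges if rng.random() < p)
    if H.number_of_edges() == 0:
        return frozenset()
    giant = max(nx.connected_components(H), key=len)
    return frozenset(H.subgraph(giant).edges)

rng = random.Random(0)
G = nx.random_regular_graph(4, 100, seed=0)
failed = set(rng.sample(list(G.edges), 2))          # unknown failed links

tests = [make_test(G, p=0.4, rng=rng) for _ in range(60)]
outcomes = [any(e in t or (e[1], e[0]) in t for e in failed) for t in tests]

# COMP-style decoding: any link covered by a negative test must be healthy.
healthy = set().union(*(t for t, out in zip(tests, outcomes) if not out))
candidates = {e for e in G.edges
              if e not in healthy and (e[1], e[0]) not in healthy}
# With enough tests, candidates shrinks to (approximately) the failed set.
```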

    Context-Aware Performance Benchmarking of a Fleet of Industrial Assets

    Industrial assets are instrumented with sensors, connected, and continuously monitored. The collected data, generally in the form of time series, are used for corrective and preventive maintenance. More advanced exploitation of these data for diverse purposes, e.g. identifying underperformance, operational optimization, or predictive maintenance, is currently an active area of research. The general methods used to analyze the time series lead to models that are either too simple to be used in complex operational contexts or too asset-specific to be generalized to the whole fleet. We have therefore conceived an alternative methodology that better characterizes the operational context of an asset and quantifies its impact on performance. The proposed methodology makes it possible to benchmark and profile fleet assets in a context-aware fashion and is applicable in multiple domains, even without ground truth. The methodology is evaluated on real-world data coming from a fleet of wind turbines and compared to the standard approach used in the domain. We also illustrate how asset performance (in terms of energy production) is influenced by the operational context (in terms of environmental conditions). Moreover, we investigate how the same operational context impacts the performance of the different assets in the fleet, and how groups of similarly behaving assets can be determined.
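
    A minimal sketch of the context-aware idea, in our own simplified form: bin a single context variable (wind speed) and score each asset against the fleet median within the same bin, so turbines are compared only under comparable conditions. The column names and the equal-width binning are assumptions for illustration, not the paper's method.

```python
import pandas as pd

def context_benchmark(df, n_bins=10):
    """df columns assumed: asset, wind_speed, power."""
    df = df.copy()
    df["ctx"] = pd.cut(df["wind_speed"], bins=n_bins)        # discretize the context
    fleet = (df.groupby("ctx", observed=True)["power"]
               .median().rename("fleet_median"))             # fleet baseline per bin
    df = df.join(fleet, on="ctx")
    df["rel_perf"] = df["power"] / df["fleet_median"]        # >1: outperforming peers
    return df.groupby("asset")["rel_perf"].mean()            # one score per asset

# scores = context_benchmark(telemetry_df)   # telemetry_df assumed to exist
```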

    Automated Detection of Autism Spectrum Disorder Using Bio-Inspired Swarm Intelligence Based Feature Selection and Classification Techniques

    Autism spectrum disorders (ASDs) are neurological conditions that affect humans. ASDs typically come with sensory issues such as sensitivity to touch, sound, or odour. Though genetics is the main cause, early discovery and treatment are imperative. In recent years, intelligent diagnosis using MLTs (Machine Learning Techniques) has been developed to support conventional clinical methods in the healthcare domain. Feature selection from healthcare databases is an NP-hard task where, again, MLTs have been of great use. AGWOs (Adaptive Grey Wolf Optimizations) were used in this study to determine the most significant features and efficient classification strategies for ASD datasets. Initially, pre-processing strategies based on SMOTEs (Synthetic Minority Oversampling Techniques) removed extraneous data from the ASD datasets; subsequently, AGWO iterated to find the smallest feature subset with the highest recall and accuracy. Finally, KSVMs (Kernel Support Vector Machines) classify ASD instances from the input datasets. The experimental results of the suggested method are evaluated for classifying ASDs from dataset instances of toddlers, children, adolescents, and adults in terms of recall, precision, F-measure, and classification error.
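
    The pipeline's shape can be sketched as follows. Note that the adaptive grey wolf optimizer itself is not reproduced here; a plain random-subset wrapper search stands in for it, with SMOTE balancing and an RBF-kernel SVM scorer as in the paper.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def select_features(X, y, n_iter=30, seed=0):
    """X: NumPy feature matrix, y: labels (assumed imbalanced)."""
    rng = np.random.default_rng(seed)
    X_bal, y_bal = SMOTE(random_state=seed).fit_resample(X, y)  # balance classes
    best_mask, best_score = None, -np.inf
    for _ in range(n_iter):
        mask = rng.random(X.shape[1]) < 0.5         # candidate feature subset
        if not mask.any():
            continue
        score = cross_val_score(SVC(kernel="rbf"),
                                X_bal[:, mask], y_bal, cv=5).mean()
        if score > best_score:                      # AGWO would update wolf positions here
            best_mask, best_score = mask, score
    return best_mask, best_score
```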

    ITER: An algorithm for predictive regression rule extraction. Data warehousing and knowledge discovery. Proceedings.

    Various benchmarking studies have shown that artificial neural networks and support vector machines have superior performance when compared to more traditional machine learning techniques. The main resistance against these newer techniques is based on their lack of interpretability: it is difficult for the human analyst to understand the motivation behind these models' decisions. Various rule extraction techniques have been proposed to overcome this opacity restriction. However, most of these extraction techniques are devised for classification, and only a few algorithms can deal with regression problems.
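
    As a toy illustration of hypercube-style regression rule extraction in general (not the ITER algorithm itself), the sketch below grows an interval around a seed point while the opaque model's output stays within a tolerance, then reports the interval as a readable rule.

```python
import numpy as np

def extract_rule(model, seed, step=0.05, tol=0.1, bounds=(-5.0, 5.0)):
    """Grow [lo, hi] around seed while model output stays within tol of the consequent."""
    lo = hi = seed
    c = model(seed)                               # rule consequent
    while lo - step >= bounds[0] and abs(model(lo - step) - c) <= tol:
        lo -= step                                # expand leftwards
    while hi + step <= bounds[1] and abs(model(hi + step) - c) <= tol:
        hi += step                                # expand rightwards
    return lo, hi, c

black_box = lambda x: np.tanh(x)                  # stands in for a trained ANN/SVM
print("IF %.2f <= x <= %.2f THEN y = %.2f" % extract_rule(black_box, 0.0))
```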

    Real-valued feature selection for process approximation and prediction

    The selection of features for classification, clustering, and approximation is an important task in pattern recognition, data mining, and soft computing. For real-valued features, this contribution shows how feature selection for a high number of features can be implemented using mutual information. In particular, the common problem in mutual information computation of estimating joint probabilities over many dimensions from only a few samples is treated by using the Rényi mutual information of order two as the computational base. For this, the Grassberger-Takens correlation integral is used, which was developed for estimating probability densities in chaos theory. Additionally, an adaptive procedure for computing the hypercube size is introduced, and for real-world applications the treatment of missing values is included. The computation procedure is accelerated by exploiting the ranking of the set of real feature values, especially for the example of time series. As an example, a small black-box/glass-box experiment shows how the relevant features and their time lags are determined in a time series even if the input feature time series determine the output nonlinearly. A more realistic example from the chemical industry shows that this enables a better approximation of the input-output mapping than the best neural network approach developed for an international contest. Thanks to this computationally efficient implementation, mutual information becomes an attractive tool for feature selection even for a high number of real-valued features.
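
    The estimator can be sketched as follows, under our reading: the correlation sum C(eps) estimates the sum of squared cell probabilities, so H2 = -log C(eps) is the order-2 Rényi entropy and I2(X;Y) = H2(X) + H2(Y) - H2(X,Y). The hypercube size eps is fixed here, whereas the paper adapts it to the data; the ranking-based acceleration is omitted.

```python
import numpy as np
from scipy.spatial.distance import pdist

def corr_sum(data, eps):
    """Grassberger-Takens correlation sum: fraction of pairs within eps (max-norm)."""
    data = data.reshape(len(data), -1)
    d = pdist(data, metric="chebyshev")           # max-norm, i.e. hypercubes
    return max(np.mean(d < eps), 1e-12)           # guard against log(0)

def renyi_mi2(x, y, eps=0.25):
    """Order-2 Renyi mutual information: I2 = H2(X) + H2(Y) - H2(X,Y)."""
    joint = np.column_stack([x, y])
    return (np.log(corr_sum(joint, eps))
            - np.log(corr_sum(x, eps)) - np.log(corr_sum(y, eps)))

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
print(renyi_mi2(x, x + 0.1 * rng.normal(size=2000)))   # dependent: clearly > 0
print(renyi_mi2(x, rng.normal(size=2000)))             # independent: near 0
```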

    Parallel Architectures for Planetary Exploration Requirements (PAPER)

    The Parallel Architectures for Planetary Exploration Requirements (PAPER) project is research oriented towards technology-insertion issues for NASA's unmanned planetary probes. It was initiated to complement and augment the long-term efforts for space exploration, with particular reference to the research needs of NASA/LaRC (NASA Langley Research Center) for planetary exploration missions of the mid and late 1990s. The requirements for space missions as given in the somewhat dated Advanced Information Processing Systems (AIPS) requirements document are contrasted with the new requirements from JPL/Caltech involving sensor data capture and scene analysis. It is shown that more stringent requirements have arisen as a result of technological advancements. Two possible architectures, the AIPS Proof of Concept (POC) configuration and the MAX fault-tolerant dataflow multiprocessor, were evaluated. The main observation was that the AIPS design is biased towards fault tolerance and may not be an ideal architecture for planetary and deep-space probes due to its high cost and complexity. The MAX concept appears to be a promising candidate, although more detailed information is required. The feasibility of adding neural computation capability to this architecture needs to be studied. Key impact issues for the architectural design of computing systems meant for planetary missions were also identified.

    Hypercube-Based Methods for Symbolic Knowledge Extraction: Towards a Unified Model

    Symbolic knowledge-extraction (SKE) algorithms, proposed by the XAI community to obtain human-intelligible explanations for opaque machine learning predictors, are being studied and developed with growing interest, in part to achieve believability in interactions. However, choosing the most adequate extraction procedure among the many available in the literature is becoming more and more challenging as the number of methods grows. In fact, most of the proposed algorithms come with constraints on their applicability. In this paper we focus on a quite general class of SKE techniques, namely hypercube-based methods. Despite being commonly considered regression-specific, we discuss why hypercube-based SKE methods are flexible enough to deal with classification problems as well. More generally, we propose a common generalised model for hypercube-based methods, and we show how they can be exploited to perform SKE on datasets, predictors, or learning tasks of any sort. As a concrete example, we also report the implementation of the proposed generalisation in the PSyKE framework.
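
    A minimal sketch of the unified view (ours, not the PSyKE implementation): partition the input space into a grid of hypercubes, probe the opaque predictor inside each cube, and emit one rule per cube. The same loop serves regression (mean output) and classification (majority label, assumed encoded as non-negative integers), which is exactly the flexibility argued for above.

```python
import numpy as np

def hypercube_rules(predict, lows, highs, n=4, samples=32, classify=False, seed=0):
    """One rule per grid cube: IF lo <= x <= hi THEN val."""
    rng = np.random.default_rng(seed)
    lows, highs = np.asarray(lows, float), np.asarray(highs, float)
    edges = [np.linspace(l, h, n + 1) for l, h in zip(lows, highs)]
    rules = []
    for idx in np.ndindex(*(n,) * len(lows)):
        lo = np.array([edges[d][i] for d, i in enumerate(idx)])
        hi = np.array([edges[d][i + 1] for d, i in enumerate(idx)])
        pts = rng.uniform(lo, hi, size=(samples, len(lo)))   # probe the cube
        out = np.asarray(predict(pts))
        val = np.bincount(out.astype(int)).argmax() if classify else out.mean()
        rules.append((lo, hi, val))
    return rules

# Example: extract rules from an opaque regressor on [0,1]^2.
rules = hypercube_rules(lambda p: np.sin(p[:, 0]) + p[:, 1], [0, 0], [1, 1])
```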