An Incremental Construction of Deep Neuro Fuzzy System for Continual Learning of Non-stationary Data Streams
Existing FNNs are mostly developed under a shallow network configuration,
which offers lower generalization power than deep structures. This paper
proposes a novel self-organizing deep FNN, namely DEVFNN. Fuzzy rules can be
automatically extracted from data streams or removed if they play a limited role
during their lifespan. The structure of the network can be deepened on demand
by stacking additional layers using a drift detection method which not only
detects the covariate drift, variations of input space, but also accurately
identifies the real drift, dynamic changes of both feature space and target
space. DEVFNN is developed under the stacked generalization principle via the
feature augmentation concept where a recently developed algorithm, namely
gClass, drives the hidden layer. It is equipped with an automatic feature
selection method which controls activation and deactivation of input attributes
to induce varying subsets of input features. A deep network simplification
procedure is put forward using the concept of hidden layer merging to prevent
uncontrollable growth of the input space dimensionality due to the nature of the
feature augmentation approach in building a deep network structure. DEVFNN
works in a sample-wise fashion and is compatible with data stream
applications. The efficacy of DEVFNN has been thoroughly evaluated using seven
datasets with non-stationary properties under the prequential test-then-train
protocol. It has been compared with four popular continual learning algorithms
and its shallow counterpart, where DEVFNN demonstrates an improvement in
classification accuracy. Moreover, it is also shown that the concept drift
detection method is an effective tool to control the depth of network structure
while the hidden layer merging scenario is capable of simplifying the network
complexity of a deep network with negligible compromise of generalization
performance. Comment: This paper has been published in IEEE Transactions on Fuzzy Systems.
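The prequential test-then-train protocol mentioned above is simple to state in code: each arriving sample is first used to test the model, then to update it, so accuracy is always measured on unseen data. A minimal sketch, assuming a hypothetical incremental-learner interface (`predict`/`partial_fit`), not the paper's code:

```python
# Sketch of the prequential test-then-train protocol used to evaluate stream
# learners such as DEVFNN: every sample is tested on first, then trained on,
# so accuracy reflects data the model has not yet seen.
def prequential_accuracy(model, stream):
    correct, total = 0, 0
    for x, y in stream:
        if model.predict(x) == y:    # 1) test on the incoming sample
            correct += 1
        model.partial_fit(x, y)      # 2) then train on that same sample
        total += 1
    return correct / total
```

Any incremental learner exposing those two (hypothetically named) methods can be plugged in.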
The application of neural networks in active suspension
This thesis considers the application of neural networks to automotive suspension
systems. In particular, their ability to learn non-linear feedback control
relationships. The speed of processing, once trained, means that neural networks
open up new opportunities and allow increased complexity in the control
strategies employed.
The suitability of neural networks for this task is demonstrated here using
multilayer perceptron (MLP) feed-forward neural networks applied to a
quarter-vehicle simulation model. Initially, neural networks are trained on a data set
created using a non-linear optimal control strategy, the complexity of which
prohibits its direct use. They are shown to be successful in learning the
relationship between the current system states and the optimal control. [Continues.]
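Distilling an expensive optimal controller into a fast network amounts to supervised regression on (state, control) pairs. In the sketch below, the quarter-car states, the linear "optimal" law, and the tiny two-layer MLP are all toy stand-ins assumed for illustration, not the thesis model:

```python
import numpy as np

# Toy sketch: an MLP learns a feedback law u = f(states) from
# (state, optimal-control) pairs. A linear law stands in for the
# non-linear optimal controller whose online cost is prohibitive.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))      # two toy suspension states
U = -1.5 * X[:, :1] - 0.5 * X[:, 1:]       # "optimal" control targets

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):                      # plain full-batch gradient descent
    H = np.tanh(X @ W1 + b1)               # hidden layer
    err = H @ W2 + b2 - U                  # prediction error
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)       # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - U) ** 2))
```

Once trained, evaluating the network is a handful of matrix products, which is the speed advantage the thesis exploits.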
Neuro-fuzzy software for intelligent control and education
Integrated master's dissertation. Electrical and Computer Engineering (Major: Automation). Faculdade de Engenharia, Universidade do Porto. 200
Application of Spectral Solution and Neural Network Techniques in Plasma Modeling for Electric Propulsion
A solver for Poisson's equation was developed using the Radix-2 FFT method first invented by Carl Friedrich Gauss. Its performance was characterized using simulated data and boundary conditions identical to those found in a Hall Effect Thruster. The characterization showed errors below machine zero with noise-free data; above 20% noise-to-signal strength, the error increased linearly with the noise. This solver can be integrated into AFRL's plasma simulator, the Thermophysics Universal Research Framework (TURF), and used to quickly and accurately compute the electric field from charge distributions. The validity of a machine learning approach and of a data-based complex-system modeling approach was demonstrated. To this end, several multilayer perceptrons were created and validated against AFRL-provided Hall Thruster test data, with two networks showing mean error below 1% and standard deviations below 10%. These results, while not ready to replace lookup tables, strongly suggest paths for future work and the development of networks that would be acceptable in such a role, saving both RAM space and time in plasma simulations.
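The FFT route to Poisson's equation reduces the PDE to a per-wavenumber division. A one-dimensional periodic sketch of that idea follows; the actual solver is tied to TURF's grids and Hall-thruster boundary conditions, none of which are shown here:

```python
import numpy as np

# Illustrative FFT-based periodic Poisson solve in 1-D: transforming
# u'' = f gives -k^2 * u_hat = f_hat, so each mode is solved by division.
def poisson_fft_1d(f, L=2 * np.pi):
    n = f.size                                    # power of two for radix-2
    k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi    # angular wavenumbers
    fh = np.fft.fft(f)
    uh = np.zeros_like(fh)
    nz = k != 0                                   # zero mode: zero-mean gauge
    uh[nz] = -fh[nz] / k[nz] ** 2                 # solve in Fourier space
    return np.fft.ifft(uh).real

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = poisson_fft_1d(-np.sin(x))                    # u'' = -sin(x)  =>  u = sin(x)
```

With smooth noise-free data the error sits at round-off level, consistent with the "below machine zero" characterization above.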
Evolving Ensemble Fuzzy Classifier
The concept of ensemble learning offers a promising avenue in learning from
data streams under complex environments because it addresses the bias and
variance dilemma better than its single model counterpart and features a
reconfigurable structure, which is well suited to the given context. While
various extensions of ensemble learning for mining non-stationary data streams
can be found in the literature, most of them are crafted around a static base
classifier and revisit preceding samples in a sliding window for a retraining
step. This causes computationally prohibitive complexity and is not flexible
enough to cope with rapidly changing environments. Their complexity is often
demanding because they involve a large collection of offline classifiers and
lack both structural complexity reduction and online feature selection
mechanisms. A novel evolving
ensemble classifier, namely Parsimonious Ensemble (pENsemble), is proposed in
this paper. pENsemble differs from existing architectures in the fact that it
is built upon an evolving classifier from data streams, termed Parsimonious
Classifier (pClass). pENsemble is equipped with an ensemble pruning mechanism,
which estimates a localized generalization error of a base classifier. A
dynamic online feature selection scenario is integrated into the pENsemble.
This method allows for dynamic selection and deselection of input features on
the fly. pENsemble adopts a dynamic ensemble structure to output a final
classification decision where it features a novel drift detection scenario to
grow the ensemble structure. The efficacy of pENsemble has been demonstrated
through rigorous numerical studies with dynamic and evolving data streams,
where it delivers the most encouraging performance in attaining a tradeoff
between accuracy and complexity. Comment: This paper has been published in IEEE Transactions on Fuzzy Systems.
An instruction systolic array architecture for multiple neural network types
Modern electronic systems, especially sensor and imaging systems, are beginning to
incorporate their own neural network subsystems. In order for these neural systems to learn in
real-time they must be implemented using VLSI technology, with as much of the learning
processes incorporated on-chip as is possible. The majority of current VLSI implementations
literally implement a series of neural processing cells, which can be connected together in an
arbitrary fashion. Many do not perform the entire neural learning process on-chip, instead
relying on other external systems to carry out part of the computation requirements of the
algorithm.
The work presented here utilises two dimensional instruction systolic arrays in an attempt to
define a general neural architecture which is closer to the biological basis of neural networks: it
is the synapses themselves, rather than the neurons, that have dedicated processing units. A
unified architecture is described which can be programmed at the microcode level in order to
facilitate the processing of multiple neural network types.
An essential part of neural network processing is the neuron activation function, which can
range from a sequential algorithm to a discrete mathematical expression. The architecture
presented can easily carry out the sequential functions, and introduces a fast method of
mathematical approximation for the more complex functions. This can be evaluated on-chip,
thus implementing the entire neural process within a single system.
VHDL circuit descriptions for the chip have been generated, and the systolic processing
algorithms and associated microcode instruction set for three different neural paradigms have
been designed. A software simulator of the architecture has been written, giving results for
several common applications in the field.
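The abstract's "fast method of mathematical approximation" for complex activation functions is not spelled out; one classic hardware-friendly option of this kind is a piecewise-linear sigmoid (PLAN-style), sketched here purely as an example of what an on-chip activation unit can evaluate with shifts and adds. Whether the thesis uses exactly this scheme is an open assumption:

```python
# PLAN-style piecewise-linear approximation of the logistic sigmoid:
# four segments per side, symmetric about (0, 0.5), with slopes that are
# negative powers of two so hardware can use shift-and-add instead of exp().
def sigmoid_pwl(x):
    a = abs(x)
    if a >= 5.0:
        y = 1.0
    elif a >= 2.375:
        y = 0.03125 * a + 0.84375   # slope 1/32
    elif a >= 1.0:
        y = 0.125 * a + 0.625       # slope 1/8
    else:
        y = 0.25 * a + 0.5          # slope 1/4
    return y if x >= 0 else 1.0 - y
```

The worst-case error of this approximation is about 0.02, which is tolerable for many neural paradigms while avoiding any sequential exponential evaluation.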
Assessment, integration and implementation of computationally efficient models to simulate biological neuronal networks on parallel hardware
The simulation of large-scale biological spiking neural networks (SNNs) is computationally onerous. In this work we develop a simulation tool that exploits the computational capabilities of modern graphics processors (GPUs) to speed up simulations of SNNs closely mimicking physiological phenomena. Different models of these phenomena are analyzed, and the best, in terms of predictions and compatibility with the SIMD architecture of GPUs, are implemented. The performance of the simulator is evaluated.
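Why SNN updates map well to SIMD hardware can be seen from a leaky integrate-and-fire (LIF) step: every neuron undergoes the same arithmetic, so a whole population updates as array operations. NumPy stands in for the GPU here, and the LIF model and parameter values are illustrative assumptions, not the models selected in the work:

```python
import numpy as np

# One LIF time step applied to all neurons at once as elementwise array ops,
# the access pattern that makes SNN simulation SIMD/GPU-friendly.
def lif_step(v, i_syn, dt=1.0, tau=20.0, v_th=1.0, v_reset=0.0):
    v = v + dt * (-v / tau + i_syn)    # leak toward rest plus synaptic drive
    spiked = v >= v_th                 # elementwise threshold test
    v = np.where(spiked, v_reset, v)   # reset only the neurons that fired
    return v, spiked
```

On a GPU the same three lines become one fused kernel over millions of neurons, which is where the reported speed-up comes from.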
Automated Feature Engineering for Deep Neural Networks with Genetic Programming
Feature engineering is a process that augments the feature vector of a machine learning model with calculated values designed to enhance the accuracy of the model's predictions. Research has shown that the accuracy of models such as deep neural networks, support vector machines, and tree/forest-based algorithms sometimes benefits from feature engineering. Expressions that combine one or more of the original features usually create these engineered features. The choice of the exact structure of an engineered feature depends on the type of machine learning model in use. Previous research demonstrated that various model families benefit from different types of engineered features. Random forests, gradient-boosting machines, or other tree-based models might not see the same accuracy gain that an engineered feature allowed neural networks, generalized linear models, or other dot-product-based models to achieve on the same data set. This dissertation presents a genetic programming-based algorithm that automatically engineers features that increase the accuracy of deep neural networks for some data sets. For a genetic programming algorithm to be effective, it must prioritize the search space and efficiently evaluate what it finds. The algorithm in this dissertation faced a potential search space composed of all possible mathematical combinations of the original feature vector. Five experiments were designed to guide the search process to efficiently evolve good engineered features. The result of this dissertation is an automated feature engineering (AFE) algorithm that is computationally efficient, even though a neural network is used to evaluate each candidate feature. This approach gave the algorithm a greater opportunity to specifically target deep neural networks in its search for engineered features that improve accuracy.
Finally, a sixth experiment empirically demonstrated the degree to which this algorithm improved the accuracy of neural networks on data sets augmented by the algorithm's engineered features.
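The search loop can be caricatured as: generate candidate expressions over the original features, score each candidate, keep the best. In the sketch below, plain random search stands in for the dissertation's genetic programming, and a squared-correlation score stands in for its neural-network evaluation; both substitutions are assumptions made to keep the example self-contained:

```python
import random

# Candidate engineered features are small expressions over the original
# feature vector, scored by how well they track the target.
OPS = [('add', lambda a, b: a + b),
       ('mul', lambda a, b: a * b),
       ('sub', lambda a, b: a - b)]

def random_candidate(n_features, rng):
    name, fn = rng.choice(OPS)
    i, j = rng.randrange(n_features), rng.randrange(n_features)
    return (f'{name}(x{i},x{j})',
            lambda row, fn=fn, i=i, j=j: fn(row[i], row[j]))

def fitness(feature, X, y):
    vals = [feature(row) for row in X]
    n = len(vals)
    mv, my = sum(vals) / n, sum(y) / n
    cov = sum((v - mv) * (t - my) for v, t in zip(vals, y))
    var_v = sum((v - mv) ** 2 for v in vals)
    var_y = sum((t - my) ** 2 for t in y)
    if var_v == 0 or var_y == 0:
        return 0.0
    return cov * cov / (var_v * var_y)   # squared correlation with target

def search(X, y, generations=200, seed=0):
    rng = random.Random(seed)
    return max((random_candidate(len(X[0]), rng) for _ in range(generations)),
               key=lambda c: fitness(c[1], X, y))
```

On a target like y = x0 * x1, the search recovers a `mul` candidate with near-perfect score, the kind of multiplicative interaction that dot-product models cannot express on their own.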