Evolving Ensemble Fuzzy Classifier
The concept of ensemble learning offers a promising avenue in learning from
data streams under complex environments because it addresses the bias and
variance dilemma better than its single model counterpart and features a
reconfigurable structure, which is well suited to the given context. While
various extensions of ensemble learning for mining non-stationary data streams
can be found in the literature, most of them are crafted around a static base
classifier and revisit preceding samples in a sliding window for a retraining
step. This makes them computationally prohibitive and not flexible enough to
cope with rapidly changing environments. Their complexity is often demanding
because they maintain a large collection of offline classifiers, lacking both a
structural complexity reduction mechanism and an online feature selection
mechanism. A novel evolving
ensemble classifier, namely the Parsimonious Ensemble (pENsemble), is proposed
in this paper. pENsemble differs from existing architectures in that it is
built upon an evolving classifier for data streams, termed the Parsimonious
Classifier (pClass). pENsemble is equipped with an ensemble pruning mechanism,
which estimates a localized generalization error of a base classifier. A
dynamic online feature selection scenario is integrated into the pENsemble.
This method allows for dynamic selection and deselection of input features on
the fly. pENsemble adopts a dynamic ensemble structure to output a final
classification decision where it features a novel drift detection scenario to
grow the ensemble structure. The efficacy of pENsemble has been demonstrated
through rigorous numerical studies with dynamic and evolving data streams,
where it delivers the most encouraging performance in attaining a tradeoff
between accuracy and complexity.
Comment: This paper has been published in IEEE Transactions on Fuzzy Systems.
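The drift-triggered growth and error-based pruning described above can be sketched as a toy streaming ensemble. This is a minimal illustration, not the published pENsemble algorithm: the `MajorityClass` base learner, the exponentially weighted running error standing in for the localized generalization error, and the pruning threshold are all assumptions made for the sketch.

```python
class MajorityClass:
    """Trivial stand-in base learner: predicts the most frequent label seen."""
    def __init__(self):
        self.counts = {}

    def learn(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else 0


class StreamingEnsemble:
    """Hypothetical sketch of pENsemble-style dynamics: grow a new base
    classifier when drift is flagged, prune members whose estimated local
    error exceeds a threshold. No sliding-window revisit of past samples."""

    def __init__(self, make_base, error_threshold=0.4):
        self.make_base = make_base           # factory for base classifiers
        self.error_threshold = error_threshold
        self.members = [make_base()]
        self.errors = [0.0]                  # running local error estimates

    def update(self, x, y, drift_detected=False):
        # Update each member's running error on the newest sample only.
        for i, m in enumerate(self.members):
            err = 0.0 if m.predict(x) == y else 1.0
            self.errors[i] = 0.9 * self.errors[i] + 0.1 * err
            m.learn(x, y)
        if drift_detected:                   # grow the ensemble on drift
            self.members.append(self.make_base())
            self.errors.append(0.0)
        # Prune members with a poor localized error (keep at least one).
        keep = [i for i, e in enumerate(self.errors) if e < self.error_threshold]
        if keep:
            self.members = [self.members[i] for i in keep]
            self.errors = [self.errors[i] for i in keep]

    def predict(self, x):
        votes = [m.predict(x) for m in self.members]
        return max(set(votes), key=votes.count)   # majority vote
```

The single-pass update is what keeps the complexity bounded: each sample is seen once, and structure changes (growth, pruning) happen on the fly.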
Online Tool Condition Monitoring Based on Parsimonious Ensemble+
Accurate diagnosis of tool wear in metal turning process remains an open
challenge for both scientists and industrial practitioners because of
inhomogeneities in workpiece material, nonstationary machining settings to suit
production requirements, and nonlinear relations between measured variables and
tool wear. Common methodologies for tool condition monitoring still rely on
batch approaches, which cannot cope with the fast sampling rate of the metal
cutting process. Furthermore, they require a retraining process to be completed
from scratch when dealing with a new set of machining parameters. This paper
presents an online tool condition monitoring approach based on Parsimonious
Ensemble+, pENsemble+. The unique feature of pENsemble+ lies in its highly
flexible principle where both ensemble structure and base-classifier structure
can automatically grow and shrink on the fly based on the characteristics of
data streams. Moreover, the online feature selection scenario is integrated to
actively sample relevant input attributes. The paper presents an advancement of
the newly developed ensemble learning algorithm pENsemble+, in which an online
active learning scenario is incorporated to reduce operator labelling effort.
An ensemble merging scenario is proposed which allows reduction of ensemble
complexity while retaining its diversity. Experimental studies utilising
real-world manufacturing data streams and comparisons with well known
algorithms were carried out. Furthermore, the efficacy of pENsemble+ was
examined using benchmark concept drift data streams. It has been found that
pENsemble+ incurs low structural complexity and results in a significant
reduction of operator labelling effort.
Comment: This paper has been published in IEEE Transactions on Cybernetics.
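The labelling-effort reduction via online active learning can be illustrated with a simple uncertainty-gated query loop: the operator is asked for a label only when the model is unsure and a budget allows it. Everything here (the `ToyModel`, the confidence threshold, the budget rule) is hypothetical and stands in for the pENsemble+ machinery, which the abstract does not specify in detail.

```python
class ToyModel:
    """Stand-in streaming classifier keyed on input parity."""
    def __init__(self):
        self.seen = {}

    def predict_with_confidence(self, x):
        key = x[0] % 2
        if key in self.seen:
            return self.seen[key], 0.9   # confident once the case is known
        return 0, 0.1                    # unsure about unseen cases

    def learn(self, x, y):
        self.seen[x[0] % 2] = y


def active_learning_pass(stream, model, budget=0.2, threshold=0.7):
    """Sketch of an online active-learning loop: request a ground-truth
    label only for low-confidence samples, capping operator effort at
    roughly `budget` as a fraction of the stream seen so far."""
    queried = 0
    for i, (x, y_true) in enumerate(stream, start=1):
        _, conf = model.predict_with_confidence(x)
        if conf < threshold and queried < budget * i + 1:
            model.learn(x, y_true)       # operator supplies the true label
            queried += 1
    return queried
```

On a 50-sample stream, this toy model only needs the operator twice (once per parity class); the rest of the stream is processed without labels, which is the kind of effort reduction the abstract reports.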
An investigation into adaptive power reduction techniques for neural hardware
In light of the growing applicability of Artificial Neural Networks (ANNs) in the signal processing field [1] and the present thrust of the semiconductor industry towards low-power SoCs for mobile devices [2], the power consumption of ANN hardware has become a very important implementation issue. Adaptability is a powerful and useful feature of neural networks. All current approaches for low-power ANN hardware are ‘non-adaptive’ with respect to the power consumption of the network (i.e. power reduction is not an objective of the adaptation/learning process). In the research work presented in this thesis, investigations into possible adaptive power reduction techniques have been carried out, which attempt to exploit the adaptability of neural networks in order to reduce power consumption. Three separate approaches for such adaptive power reduction are proposed: adaptation of size, adaptation of network weights, and adaptation of calculation precision. Initial case studies exhibit promising results with significant power reduction.
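Of the three proposed directions, adaptation of size is the easiest to sketch: zero out the smallest-magnitude weights so that fewer multiply-accumulate (MAC) units are active. The linear power-per-MAC cost model and all parameter names below are assumptions for illustration, not the thesis's hardware model.

```python
def prune_for_power(weights, power_per_mac=1.0, keep_fraction=0.5):
    """Illustrative size adaptation: keep only the largest-magnitude
    weights and report the estimated dynamic-power saving under an
    assumed constant power cost per active MAC operation."""
    n_keep = max(1, int(len(weights) * keep_fraction))
    # Rank weight indices by magnitude, largest first.
    ranked = sorted(range(len(weights)), key=lambda i: abs(weights[i]),
                    reverse=True)
    keep = set(ranked[:n_keep])
    pruned = [w if i in keep else 0.0 for i, w in enumerate(weights)]
    saved = power_per_mac * (len(weights) - n_keep)   # deactivated MACs
    return pruned, saved
```

In an adaptive scheme, `keep_fraction` would itself be driven by the learning process (power as part of the training objective) rather than fixed up front.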
Deep Stacked Stochastic Configuration Networks for Lifelong Learning of Non-Stationary Data Streams
The concept of the stochastic configuration network (SCN) offers a fast
framework with a universal approximation guarantee for lifelong learning of
non-stationary data streams. Its adaptive scope selection property enables
proper random generation of hidden-unit parameters, advancing conventional
randomized approaches constrained to a fixed scope of random parameters. This
paper proposes the deep stacked stochastic configuration network (DSSCN) for
continual learning of non-stationary data streams, which contributes two major
aspects: 1) DSSCN features a self-constructing methodology of the deep stacked
network structure, where hidden units and hidden layers are extracted
automatically from continuously generated data streams; 2) the concept of SCN
is developed to randomly assign the inverse covariance matrix of the
multivariate Gaussian function in the hidden-node addition step, bypassing its
computationally prohibitive tuning phase. Numerical evaluation and comparison
with prominent data stream algorithms under two procedures, periodic hold-out
and prequential test-then-train, demonstrate the advantage of the proposed
methodology.
Comment: This paper has been published in Information Sciences.
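The adaptive scope selection idea can be sketched as follows: candidate hidden-node parameters are drawn from a symmetric interval that is widened until some candidate correlates usefully with the current residual error. This is a simplified 1-D illustration of the general SCN flavor, not the exact published construction (which uses a supervisory inequality condition and, in DSSCN, multivariate Gaussian nodes).

```python
import math
import random


def add_scn_node(X, residual, scopes=(1, 5, 10, 50), n_candidates=20):
    """Sketch of SCN-style hidden-node addition with adaptive scope
    selection: draw random (w, b) from [-s, s], score each candidate by
    its squared correlation with the residual, and widen the scope only
    if no acceptable candidate is found."""
    best = None
    for s in scopes:                          # adaptive scope selection
        for _ in range(n_candidates):
            w = random.uniform(-s, s)
            b = random.uniform(-s, s)
            h = [math.tanh(w * x + b) for x in X]    # candidate node output
            num = sum(hi * ri for hi, ri in zip(h, residual)) ** 2
            den = sum(hi * hi for hi in h) or 1e-12
            score = num / den                 # squared correlation proxy
            if best is None or score > best[0]:
                best = (score, w, b)
        if best and best[0] > 1e-6:           # acceptance check passed:
            break                             # stop widening the scope
    return best[1], best[2]
```

Because the parameters are drawn randomly and only checked against an acceptance condition, node addition avoids any iterative tuning phase, which is the property the abstract exploits for the covariance assignment.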
Dynamic Distribution Pruning for Efficient Network Architecture Search
Network architectures obtained by Neural Architecture Search (NAS) have shown
state-of-the-art performance in various computer vision tasks. Despite the
exciting progress, the computational complexity of the forward-backward
propagation and the search process makes it difficult to apply NAS in practice.
In particular, most previous methods require thousands of GPU days for the
search process to converge. In this paper, we propose a dynamic distribution
pruning method towards extremely efficient NAS, which samples architectures
from a joint categorical distribution. The search space is dynamically pruned
every few epochs to update this distribution, and the optimal neural
architecture is obtained when only one structure remains. We conduct
experiments on two widely-used datasets in NAS. On CIFAR-10, the optimal
structure obtained by our method achieves a state-of-the-art \% test
error, while the search process is more than  times faster (only
GPU hours on a Tesla V100) than state-of-the-art NAS algorithms. On
ImageNet, our model achieves 75.2\% top-1 accuracy under the MobileNet
settings, with a time cost of only  GPU days, an  acceleration
over the fastest NAS algorithm. The code is available at
\url{https://github.com/tanglang96/DDPNAS}.
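The sample-score-prune loop can be sketched on a toy search space of candidate operations. This is an illustration of the pruning schedule only, not the DDPNAS implementation: `reward_fn` stands in for validation accuracy, and for determinism this sketch scores every surviving candidate each epoch rather than sampling one architecture stochastically from the categorical distribution.

```python
def ddp_search(ops, reward_fn, prune_every=3, epochs=30):
    """Toy dynamic distribution pruning: maintain a running score per
    candidate operation, and every `prune_every` epochs drop the
    lowest-scoring candidate until only one structure remains."""
    scores = {op: 0.0 for op in ops}
    counts = {op: 0 for op in ops}
    alive = list(ops)
    for epoch in range(1, epochs + 1):
        if len(alive) == 1:
            break                             # search has converged
        for op in alive:                      # score surviving candidates
            counts[op] += 1
            # Incremental running mean of the observed reward.
            scores[op] += (reward_fn(op) - scores[op]) / counts[op]
        if epoch % prune_every == 0:          # dynamic pruning step
            alive.remove(min(alive, key=lambda o: scores[o]))
    return alive[0]
```

The efficiency claim in the abstract comes from exactly this shrinkage: each pruning step permanently removes a candidate, so the cost of later epochs keeps falling until a single architecture is left.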
Autonomously Reconfigurable Artificial Neural Network on a Chip
Artificial neural network (ANN), an established bio-inspired computing paradigm, has proved very effective in a variety of real-world problems and particularly useful for various emerging biomedical applications using specialized ANN hardware. Unfortunately, these ANN-based systems are increasingly vulnerable to both transient and permanent faults due to unrelenting advances in CMOS technology scaling, which sometimes can be catastrophic. The considerable resource and energy consumption and the lack of dynamic adaptability make conventional fault-tolerant techniques unsuitable for future portable medical solutions. Inspired by the self-healing and self-recovery mechanisms of the human nervous system, this research seeks to address reliability issues of ANN-based hardware by proposing an Autonomously Reconfigurable Artificial Neural Network (ARANN) architectural framework. Leveraging the homogeneous structural characteristics of neural networks, ARANN is capable of adapting its structures and operations, both algorithmically and microarchitecturally, to react to unexpected neuron failures. Specifically, we propose three key techniques (Distributed ANN, Decoupled Virtual-to-Physical Neuron Mapping, and Dual-Layer Synchronization) to achieve cost-effective structural adaptation and ensure accurate system recovery. Moreover, an ARANN-enabled self-optimizing workflow is presented to adaptively explore a "Pareto-optimal" neural network structure for a given application, on the fly. Implemented and demonstrated on a Virtex-5 FPGA, ARANN can cover and adapt 93% of the chip area (neurons) with less than 1% chip overhead and O(n) reconfiguration latency. A detailed performance analysis has been completed based on various recovery scenarios.
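The decoupled virtual-to-physical neuron mapping can be sketched as an indirection table: the network always addresses virtual neuron IDs, and when a physical neuron fails, its virtual IDs are transparently remapped to spares. This is a behavioral toy in software, not the ARANN microarchitecture; the class and method names are illustrative.

```python
class NeuronMap:
    """Toy decoupled virtual-to-physical neuron mapping with spares."""

    def __init__(self, n_virtual, n_spare):
        # Start with an identity mapping; spares sit beyond the live range.
        self.map = {v: v for v in range(n_virtual)}
        self.spares = list(range(n_virtual, n_virtual + n_spare))
        self.faulty = set()

    def report_fault(self, physical_id):
        """Remap every virtual neuron hosted on the failed unit to a spare."""
        self.faulty.add(physical_id)
        for v, p in self.map.items():
            if p == physical_id:
                if not self.spares:
                    raise RuntimeError("no spare neurons left")
                self.map[v] = self.spares.pop(0)

    def physical(self, virtual_id):
        """Resolve a virtual neuron ID to its current physical neuron."""
        return self.map[virtual_id]
```

Because callers only ever see virtual IDs, recovery is invisible to the rest of the system, which is the point of decoupling the two namespaces.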
Thermal Neural Networks: Lumped-Parameter Thermal Modeling With State-Space Machine Learning
With electric power systems becoming more compact and increasingly powerful,
the relevance of thermal stress especially during overload operation is
expected to increase ceaselessly. Whenever critical temperatures cannot be
measured economically on a sensor base, a thermal model lends itself to
estimate those unknown quantities. Thermal models for electric power systems
are usually required to be both real-time capable and highly accurate in their
estimates. Moreover, ease of implementation and time to production play an
increasingly important role. In this work, the thermal neural network (TNN) is
introduced, which unifies consolidated knowledge in the form of
heat-transfer-based lumped-parameter models with data-driven nonlinear function
approximation through supervised machine learning. A quasi-linear
parameter-varying system is identified solely from empirical data, where
relationships between scheduling variables and system matrices are inferred
statistically and automatically. At the same time, a TNN has physically
interpretable states through its state-space representation, is end-to-end
trainable -- similar to deep learning models -- with automatic differentiation,
and requires no material, geometry, or expert knowledge for its design.
Experiments on an electric motor data set show that a TNN achieves higher
temperature estimation accuracies than previous white-, grey-, or black-box
models, with a mean squared error of  and a worst-case error of
at 64 model parameters.
Comment: Preprint; fixed typos, streamlined mathematical notation; 10 pages.
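The lumped-parameter backbone of such a model can be sketched as a one-node thermal network: a thermal mass exchanging heat with ambient through a conductance that depends on a scheduling variable, integrated with an explicit Euler step. In a TNN the conductance function would be a small neural network trained end-to-end; here it is a plain callable, and all parameter values are illustrative assumptions.

```python
def simulate_thermal(powers, ambient, conductance_fn, capacitance=50.0,
                     t0=25.0, dt=1.0):
    """Minimal lumped-parameter thermal state-space sketch:
    C * dT/dt = P_loss - g(k) * (T - T_ambient),
    where g(k) is a scheduling-dependent thermal conductance."""
    T = t0
    trace = []
    for k, p in enumerate(powers):
        g = conductance_fn(k)                       # scheduling-dependent conductance
        dT = (p - g * (T - ambient)) / capacitance  # energy balance
        T += dt * dT                                # explicit Euler step
        trace.append(T)
    return trace
```

With constant power P and conductance g, the state settles at T_ambient + P/g, so the state remains physically interpretable (a temperature), which is the property the abstract highlights for the TNN's state-space representation.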