811 research outputs found
Self-Organized Operational Neural Networks for Severe Image Restoration Problems
Discriminative learning based on convolutional neural networks (CNNs) aims to
perform image restoration by learning from training examples of noisy-clean
image pairs. It has become the go-to methodology for tackling image restoration
and has outperformed the traditional non-local class of methods. However, the
top-performing networks are generally composed of many convolutional layers and
hundreds of neurons, with trainable parameters in excess of several million.
We claim that this is due to the inherent linear nature of convolution-based
transformation, which is inadequate for handling severe restoration problems.
Recently, a non-linear generalization of CNNs, called the operational neural
networks (ONN), has been shown to outperform CNNs on AWGN denoising. However,
its formulation is burdened by a fixed collection of well-known nonlinear
operators and an exhaustive search to find the best possible configuration for
a given architecture, whose efficacy is further limited by a fixed output layer
operator assignment. In this study, we leverage the Taylor series-based
function approximation to propose a self-organizing variant of ONNs, Self-ONNs,
for image restoration, which synthesizes novel nodal transformations on-the-fly
as part of the learning process, thus eliminating the need for redundant
training runs for operator search. In addition, it enables a finer level of
operator heterogeneity by diversifying individual connections of the receptive
fields and weights. We perform a series of extensive ablation experiments
across three severe image restoration tasks. Even when a strict equivalence of
learnable parameters is imposed, Self-ONNs surpass CNNs by a considerable
margin across all problems, improving the generalization performance by up to 3
dB in terms of PSNR.
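The Taylor-series idea behind the generative neurons of Self-ONNs can be sketched as follows: each connection learns the coefficients of a truncated Maclaurin expansion, so the nodal non-linearity is synthesized during training rather than chosen from a fixed operator library. The function below is an illustrative sketch, not the paper's implementation; names and shapes are assumptions.

```python
import numpy as np

def generative_nodal_transform(x, w):
    """Apply a learned Maclaurin-series nodal transform to input x.

    Each connection holds Q coefficients w[0..Q-1]; the transform is
    psi(x) = w[0]*x + w[1]*x**2 + ... + w[Q-1]*x**Q, so the effective
    non-linearity emerges from the learned coefficients instead of a
    fixed, pre-selected operator.
    """
    Q = len(w)
    powers = np.stack([x ** (q + 1) for q in range(Q)])  # shape (Q, ...)
    return np.tensordot(w, powers, axes=1)               # sum over Q

# A linear (CNN-style) neuron is the special case Q = 1:
x = np.array([0.5, -1.0, 2.0])
assert np.allclose(generative_nodal_transform(x, np.array([0.7])), 0.7 * x)

# With Q = 3 the same connection can realize a non-linear transform,
# e.g. psi(x) = x - 0.5*x**3:
print(generative_nodal_transform(x, np.array([1.0, 0.0, -0.5])))
```

Because the coefficients differ per connection, each receptive-field weight can realize its own transform, which is the "finer level of operator heterogeneity" the abstract refers to.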
Fleet Prognosis with Physics-informed Recurrent Neural Networks
Services and warranties for large fleets of engineering assets are a very
profitable business. The success of companies in that area is often related to
predictive maintenance driven by advanced analytics. Therefore, accurate
modeling, as a way to understand how the complex interactions between operating
conditions and component capability define useful life, is key for services
profitability. Unfortunately, building prognosis models for large fleets is a
daunting task as factors such as duty cycle variation, harsh environments,
inadequate maintenance, and problems with mass production can lead to large
discrepancies between designed and observed useful lives. This paper introduces
a novel physics-informed neural network approach to prognosis by extending
recurrent neural networks to cumulative damage models. We propose a new
recurrent neural network cell designed to merge physics-informed and
data-driven layers. With that, engineers and scientists have the chance to use
physics-informed layers to model parts that are well understood (e.g., fatigue
crack growth) and use data-driven layers to model parts that are poorly
characterized (e.g., internal loads). A simple numerical experiment is used to
present the main features of the proposed physics-informed recurrent neural
network for damage accumulation. The test problem consists of predicting fatigue
crack length for a synthetic fleet of airplanes subject to different mission
mixes. The model is trained using full observation inputs (far-field loads) and
very limited observation of outputs (crack length at inspection for only a
portion of the fleet). The results demonstrate that our proposed hybrid
physics-informed recurrent neural network is able to accurately model fatigue
crack growth even when the observed distribution of crack length does not match
with the (unobservable) fleet distribution.
Comment: Data and code (including our implementation of the multi-layer
perceptron, the stress intensity and Paris law layers, the cumulative damage
cell, as well as the Python driver scripts) used in this manuscript are publicly
available on GitHub at https://github.com/PML-UCF/pinn. The data and code are
released under the MIT License.
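The physics-informed part of such a cumulative damage cell can be illustrated with Paris's law for fatigue crack growth. The step below is a minimal sketch of one recurrent update, not the paper's cell; the constants C, m, and F are illustrative, not values from the manuscript.

```python
import math

def paris_law_cell(a, dS, C=1.5e-11, m=3.8, F=1.0):
    """One physics-informed recurrent step: update crack length a (m)
    after a load cycle with far-field stress range dS (MPa).

    Paris's law: da/dN = C * (dK)^m, with stress intensity range
    dK = F * dS * sqrt(pi * a).  C, m, F are illustrative
    material/geometry constants.
    """
    dK = F * dS * math.sqrt(math.pi * a)   # stress intensity range
    return a + C * dK ** m                 # cumulative damage update

# Unrolling the cell over a mission's load history plays the role of the
# recurrent pass; a data-driven layer could replace the dK computation
# where internal loads are poorly characterized.
a = 0.005                                  # initial crack length (m)
for dS in [80.0] * 10000:                  # 10k identical load cycles
    a = paris_law_cell(a, dS)
print(a)                                   # grown crack length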
Operational Neural Networks
Feed-forward, fully-connected Artificial Neural Networks (ANNs) or the
so-called Multi-Layer Perceptrons (MLPs) are well-known universal
approximators. However, their learning performance varies significantly
depending on the function or the solution space that they attempt to
approximate. This is mainly because of their homogeneous configuration based
solely on the linear neuron model. Therefore, while they learn very well on
problems with a monotonic, relatively simple, and linearly separable solution
space, they may entirely fail to do so when the solution space is highly
nonlinear and complex. Since conventional Convolutional Neural Networks (CNNs)
share the same linear neuron model, with two additional constraints (local
connections and weight sharing), the same is true for them, and it is therefore
not surprising that in many challenging problems only deep CNNs with massive
complexity and depth can achieve the required diversity and learning
performance. In order to address this drawback and also to accomplish a more
generalized model over the convolutional neurons, this study proposes a novel
network model, called Operational Neural Networks (ONNs), which can be
heterogeneous and encapsulate neurons with any set of operators to boost
diversity and to learn highly complex and multi-modal functions or spaces with
minimal network complexity and training data. Finally, a novel training method
is formulated to back-propagate the error through the operational layers of
ONNs. Experimental results over highly challenging problems demonstrate the
superior learning capabilities of ONNs even with few neurons and hidden layers.
Comment: 21 pages
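The generalized neuron model the abstract describes can be sketched as y = P(Psi(w, x)): a nodal operator Psi applied element-wise over the receptive field, followed by a pool operator P. The operator names and library below are illustrative examples under that formulation, not the paper's actual operator set.

```python
import numpy as np

# Candidate nodal and pool operators; an ONN neuron pairs one of each.
NODAL = {
    "mul": lambda w, x: w * x,            # linear neuron (the CNN case)
    "sin": lambda w, x: np.sin(w * x),
    "exp": lambda w, x: np.exp(w * x) - 1.0,
}
POOL = {"sum": np.sum, "median": np.median, "max": np.max}

def operational_neuron(x, w, nodal="sin", pool="median"):
    """y = P(Psi(w, x)): apply the nodal operator element-wise over the
    receptive field, then collapse the result with the pool operator."""
    return POOL[pool](NODAL[nodal](w, x))

x = np.array([0.2, -0.4, 1.0])
w = np.array([1.0, 0.5, -2.0])
# The conventional convolutional neuron is the ("mul", "sum") pairing:
assert np.isclose(operational_neuron(x, w, "mul", "sum"), np.dot(w, x))
```

Heterogeneity then means different neurons (or layers) use different (nodal, pool) pairs, which is what the operator search over the library has to decide.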
Exploiting Heterogeneity in Operational Neural Networks by Synaptic Plasticity
The recently proposed network model, Operational Neural Networks (ONNs), can
generalize the conventional Convolutional Neural Networks (CNNs), which are
homogeneous networks with only a linear neuron model. As a heterogeneous network
model, ONNs are based on a generalized neuron model that can encapsulate any set of
ONNs are based on a generalized neuron model that can encapsulate any set of
non-linear operators to boost diversity and to learn highly complex and
multi-modal functions or spaces with minimal network complexity and training
data. However, the default search method to find optimal operators in ONNs, the
so-called Greedy Iterative Search (GIS) method, usually takes several training
sessions to find a single operator set per layer. This is not only
computationally demanding, but the network heterogeneity is also limited since the
same set of operators will then be used for all neurons in each layer. To
address this deficiency and exploit a superior level of heterogeneity, in this
study the focus is drawn on searching the best-possible operator set(s) for the
hidden neurons of the network based on the Synaptic Plasticity paradigm that
poses the essential learning theory in biological neurons. During training,
each operator set in the library can be evaluated by its synaptic plasticity
level, ranked from the worst to the best, and an elite ONN can then be
configured using the top ranked operator sets found at each hidden layer.
Experimental results over highly challenging problems demonstrate that the
elite ONNs, even with few neurons and layers, can achieve superior learning
performance compared to GIS-based ONNs, and as a result the performance gap over
CNNs widens further.
Comment: 15 pages, 19 figures, journal manuscript
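The ranking procedure the abstract outlines — score every operator set per hidden layer by its synaptic-plasticity level during training, then configure an elite network from the top-ranked sets — can be sketched as follows. The data layout and function name are assumptions for illustration, not the paper's code.

```python
def configure_elite(plasticity, top_k=1):
    """Rank operator sets per hidden layer by their measured
    synaptic-plasticity score (higher = better) and keep the
    top-ranked set(s) for each layer.

    `plasticity[layer][operator_set] = score` is an assumed layout.
    """
    elite = {}
    for layer, scores in plasticity.items():
        ranked = sorted(scores, key=scores.get, reverse=True)
        elite[layer] = ranked[:top_k]
    return elite

# Hypothetical plasticity scores gathered during a training session:
scores = {
    "hidden1": {"sin/sum": 0.83, "mul/sum": 0.41, "exp/max": 0.67},
    "hidden2": {"sin/sum": 0.52, "mul/sum": 0.78, "exp/max": 0.30},
}
print(configure_elite(scores))   # best-ranked operator set per layer
```

Compared with Greedy Iterative Search, a single training session suffices because every candidate set is scored in place rather than retrained.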
Zero-Shot Motor Health Monitoring by Blind Domain Transition
Continuous long-term monitoring of motor health is crucial for the early
detection of abnormalities such as bearing faults (up to 51% of motor failures
are attributed to bearing faults). Despite numerous methodologies proposed for
bearing fault detection, most of them require normal (healthy) and abnormal
(faulty) data for training. Even with the recent deep learning (DL)
methodologies trained on the labeled data from the same machine, the
classification accuracy significantly deteriorates when one or few conditions
are altered. Furthermore, their performance suffers significantly or may
entirely fail when they are tested on another machine with entirely different
healthy and faulty signal patterns. To address this problem, in this pilot study,
we propose a zero-shot bearing fault detection method that can detect any fault
on a new (target) machine regardless of the working conditions, sensor
parameters, or fault characteristics. To accomplish this objective, a 1D
Operational Generative Adversarial Network (Op-GAN) first characterizes the
transition between normal and fault vibration signals of (a) source machine(s)
under various conditions, sensor parameters, and fault types. Then for a target
machine, the potential faulty signals can be generated, and over its actual
healthy and synthesized faulty signals, a compact and lightweight 1D Self-ONN
fault detector can then be trained to detect the real faulty condition in real
time whenever it occurs. To validate the proposed approach, a new benchmark
dataset is created using two different motors working under different
conditions and sensor locations. Experimental results demonstrate that this
novel approach can accurately detect any bearing fault regardless of its type,
severity, and location, achieving average recall rates of around 89% and 95% on
the two target machines.
Comment: 13 pages, 9 figures, Journal
Self-Organized Operational Neural Networks with Generative Neurons
Operational Neural Networks (ONNs) have recently been proposed to address the
well-known limitations and drawbacks of conventional Convolutional Neural
Networks (CNNs) such as network homogeneity with the sole linear neuron model.
ONNs are heterogeneous networks with a generalized neuron model that can
encapsulate any set of non-linear operators to boost diversity and to learn
highly complex and multi-modal functions or spaces with minimal network
complexity and training data. However, the Greedy Iterative Search (GIS) method,
which is used to find optimal operators in ONNs, takes many
training sessions to find a single operator set per layer. This is not only
computationally demanding, but the network heterogeneity is also limited since
the same set of operators will then be used for all neurons in each layer.
Moreover, the performance of ONNs directly depends on the operator set library
used, which introduces a certain risk of performance degradation especially
when the optimal operator set required for a particular task is missing from
the library. In order to address these issues and achieve an ultimate
heterogeneity level to boost the network diversity along with computational
efficiency, in this study we propose Self-organized ONNs (Self-ONNs) with
generative neurons that have the ability to adapt (optimize) the nodal operator
of each connection during the training process. Therefore, Self-ONNs can have
the utmost heterogeneity level required by the learning problem at hand.
Moreover, this ability obviates the need for a fixed operator set library and
for a prior operator search within the library to find the best possible set of
operators. We further formulate the training method to back-propagate the error
through the operational layers of Self-ONNs.
Comment: 14 pages, 14 figures, journal article
Blind Restoration of Real-World Audio by 1D Operational GANs
Objective: Despite numerous studies proposed for audio restoration in the
literature, most of them focus on an isolated restoration problem such as
denoising or dereverberation, ignoring other artifacts. Moreover, assuming a
noisy or reverberant environment with a limited number of fixed
signal-to-distortion ratio (SDR) levels is a common practice. However,
real-world audio is often corrupted by a blend of artifacts such as
reverberation, sensor noise, and background audio mixture with varying types,
severities, and duration. In this study, we propose a novel approach for blind
restoration of real-world audio signals by Operational Generative Adversarial
Networks (Op-GANs) with temporal and spectral objective metrics to enhance the
quality of restored audio signal regardless of the type and severity of each
artifact corrupting it. Methods: 1D Operational GANs are used with a generative
neuron model optimized for blind restoration of any corrupted audio signal.
Results: The proposed approach has been evaluated extensively over the
benchmark TIMIT-RAR (speech) and GTZAN-RAR (non-speech) datasets corrupted with
a random blend of artifacts each with a random severity to mimic real-world
audio signals. Average SDR improvements of over 7.2 dB and 4.9 dB are achieved,
respectively, which are substantial when compared with the baseline methods.
Significance: This is a pioneer study in blind audio restoration with the
unique capability of direct (time-domain) restoration of real-world audio
whilst achieving an unprecedented level of performance for a wide SDR range and
artifact types. Conclusion: 1D Op-GANs can achieve robust and computationally
effective real-world audio restoration with significantly improved performance.
The source code and the generated real-world audio datasets are shared
publicly with the research community in a dedicated GitHub repository.