An Adaptive Locally Connected Neuron Model: Focusing Neuron
This paper presents a new artificial neuron model capable of learning its
receptive field in the topological domain of inputs. The model provides
adaptive and differentiable local connectivity (plasticity) applicable to any
domain. It requires no other tool than the backpropagation algorithm to learn
its parameters which control the receptive field locations and apertures. This
research explores whether this ability makes the neuron focus on informative
inputs and yields any advantage over fully connected neurons. The experiments
include tests of focusing neuron networks of one or two hidden layers on
synthetic and well-known image recognition data sets. The results demonstrated
that the focusing neurons can move their receptive fields towards more
informative inputs. In the simple two-hidden layer networks, the focusing
layers outperformed the dense layers in the classification of the 2D spatial
data sets. Moreover, the focusing networks performed better than the dense
networks even when 70% of the weights were pruned. The tests on
convolutional networks revealed that using focusing layers instead of dense
layers for the classification of convolutional features may work better in some
data sets.

Comment: 45 pages, a national patent filed, submitted to Turkish Patent Office, No: -2017/17601, Date: 09.11.201
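The focusing mechanism described above can be illustrated with a minimal NumPy sketch: a single neuron weights its inputs by a Gaussian envelope whose center (receptive-field location) and width (aperture) are trainable parameters. The envelope form, parameter names, and the numeric gradient used here are illustrative assumptions, not the paper's exact formulation, which trains all parameters by backpropagation.

```python
import numpy as np

def focusing_weights(mu, sigma, n_inputs):
    """Gaussian envelope over input positions normalized to [0, 1]."""
    pos = np.linspace(0.0, 1.0, n_inputs)
    w = np.exp(-0.5 * ((pos - mu) / sigma) ** 2)
    return w / w.sum()

class FocusingNeuron:
    """A neuron whose receptive-field center (mu) and aperture (sigma)
    are learned alongside the ordinary weights. For brevity, only mu is
    updated here, via a numeric gradient; the paper backpropagates
    through all parameters."""

    def __init__(self, n_inputs, seed=0):
        rng = np.random.default_rng(seed)
        self.n = n_inputs
        self.w = rng.normal(scale=0.1, size=n_inputs)
        self.mu, self.sigma = 0.5, 0.3  # receptive-field location / aperture

    def forward(self, x):
        env = focusing_weights(self.mu, self.sigma, self.n)
        return float(np.dot(self.w * env, x))

    def train_step(self, x, y, lr=0.5, eps=1e-4):
        # Numeric gradient of squared error w.r.t. mu (illustration only).
        def loss(mu):
            env = focusing_weights(mu, self.sigma, self.n)
            return (np.dot(self.w * env, x) - y) ** 2
        g = (loss(self.mu + eps) - loss(self.mu - eps)) / (2 * eps)
        self.mu = float(np.clip(self.mu - lr * g, 0.0, 1.0))
```

Because the envelope is differentiable in mu and sigma, gradient descent can shift the receptive field toward whichever input region reduces the loss, which is the "focusing" behavior the abstract reports.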
Biological Signals Identification by a Dynamic Recurrent Neural Network: from Oculomotor Neural Integrator to Complex Human Movements and Locomotion
A cognitive based Intrusion detection system
Intrusion detection is one of the primary mechanisms to provide computer
networks with security. With attacks increasing and fields such as medicine, commerce, and engineering growing ever more dependent on networked services, securing networks has become a significant issue. The purpose
of Intrusion Detection Systems (IDS) is to make models which can recognize
regular communications from abnormal ones and take necessary actions. Among
different methods in this field, Artificial Neural Networks (ANNs) have been
widely used. However, ANN-based IDS have two main disadvantages: (1) low detection precision and (2) weak detection stability. To overcome these issues, this paper proposes a new approach based on a Deep Neural Network (DNN). The
general mechanism of our model is as follows: first, some of the data in the dataset is ranked; the dataset is then normalized with a Min-Max normalizer to fit a bounded range. Next, dimensionality reduction is applied to remove useless dimensions and lower the computational cost. After this preprocessing, the Mean-Shift clustering algorithm is used to create different subsets and reduce the complexity of the dataset. Based on each
subset, two models are trained by Support Vector Machine (SVM) and deep
learning method. For each subset, the model with the higher accuracy is chosen, an idea inspired by the divide-and-conquer philosophy; the DNN can thus learn each subset quickly and robustly. Finally, to reduce the error from the previous step, an ANN model is trained on these results to predict attacks. The approach reaches 95.4 percent accuracy. Despite its simple structure and small number of tunable parameters, the proposed model generalizes well and achieves high accuracy compared to other methods such as SVM, Bayes networks, and STL.

Comment: 18 pages, 6 figures
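The pipeline the abstract walks through (Min-Max normalization, dimensionality reduction, Mean-Shift subsetting, per-subset SVM-vs-network selection, then routing) can be sketched with scikit-learn. The synthetic data, PCA as the reduction step, and all hyperparameters below are assumptions standing in for the paper's actual dataset and settings.

```python
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Synthetic stand-in for an intrusion-detection dataset (assumption).
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Steps 1-2: Min-Max normalization, then dimensionality reduction (PCA here).
scaler = MinMaxScaler().fit(X_tr)
pca = PCA(n_components=8).fit(scaler.transform(X_tr))
Z_tr = pca.transform(scaler.transform(X_tr))
Z_te = pca.transform(scaler.transform(X_te))

# Step 3: Mean-Shift splits the training data into subsets.
ms = MeanShift().fit(Z_tr)
labels = ms.labels_

# Step 4: per subset, keep whichever of SVM / neural net fits better.
experts = {}
for c in np.unique(labels):
    idx = labels == c
    if len(np.unique(y_tr[idx])) < 2:       # single-class cluster
        experts[c] = ("const", y_tr[idx][0])
        continue
    svm = SVC().fit(Z_tr[idx], y_tr[idx])
    dnn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=0).fit(Z_tr[idx], y_tr[idx])
    best = svm if svm.score(Z_tr[idx], y_tr[idx]) >= \
                  dnn.score(Z_tr[idx], y_tr[idx]) else dnn
    experts[c] = ("model", best)

# Step 5: route each test point to its cluster's chosen expert.
assign = ms.predict(Z_te)
pred = np.array([e[1] if e[0] == "const"
                 else e[1].predict(z.reshape(1, -1))[0]
                 for e, z in ((experts[c], z)
                              for c, z in zip(assign, Z_te))])
accuracy = (pred == y_te).mean()
```

The paper's final stage additionally trains an ANN over the experts' outputs to correct residual errors; this sketch stops at the routing step to keep the divide-and-conquer structure visible.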
Word class representations spontaneously emerge in a deep neural network trained on next word prediction
How do humans learn language, and can the first language be learned at all?
These fundamental questions are still hotly debated. In contemporary
linguistics, there are two major schools of thought that give completely
opposite answers. According to Chomsky's theory of universal grammar, language
cannot be learned because children are not exposed to sufficient data in their
linguistic environment. In contrast, usage-based models of language assume a
profound relationship between language structure and language use. In
particular, contextual mental processing and mental representations are assumed
to have the cognitive capacity to capture the complexity of actual language use
at all levels. The prime example is syntax, i.e., the rules by which words are
assembled into larger units such as sentences. Typically, syntactic rules are
expressed as sequences of word classes. However, it remains unclear whether
word classes are innate, as implied by universal grammar, or whether they
emerge during language acquisition, as suggested by usage-based approaches.
Here, we address this issue from a machine learning and natural language
processing perspective. In particular, we trained an artificial deep neural
network on predicting the next word, provided sequences of consecutive words as
input. Subsequently, we analyzed the emerging activation patterns in the hidden
layers of the neural network. Strikingly, we find that the internal
representations of nine-word input sequences cluster according to the word
class of the tenth word to be predicted as output, even though the neural
network did not receive any explicit information about syntactic rules or word
classes during training. This surprising result suggests that, in the human brain as well, abstract representational categories such as word classes may emerge naturally as a consequence of predictive coding and processing during language acquisition.

Comment: arXiv admin note: text overlap with arXiv:2301.0675
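The experimental setup, train a network on next-word prediction and then inspect its hidden activations, can be sketched at toy scale with NumPy. The artificial grammar, two-word context window (the paper uses nine-word sequences), and network dimensions below are assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy grammar (assumption): every sentence is DET NOUN VERB DET NOUN.
dets, nouns, verbs = ["the", "a"], ["cat", "dog"], ["sees", "bites"]
vocab = dets + nouns + verbs
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

def sample_sentence():
    c = rng.choice
    return [c(dets), c(nouns), c(verbs), c(dets), c(nouns)]

# Training pairs: two-word one-hot context -> index of the next word.
X, y = [], []
for _ in range(300):
    s = sample_sentence()
    for t in range(2, len(s)):
        ctx = np.zeros(2 * V)
        ctx[idx[s[t - 2]]] = 1.0
        ctx[V + idx[s[t - 1]]] = 1.0
        X.append(ctx)
        y.append(idx[s[t]])
X, y = np.array(X), np.array(y)

# One-hidden-layer softmax network, trained by plain gradient descent.
H = 8
W1 = rng.normal(scale=0.1, size=(2 * V, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, V));     b2 = np.zeros(V)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

losses = []
for _ in range(200):
    h, p = forward(X)
    losses.append(-np.log(p[np.arange(len(y)), y] + 1e-12).mean())
    d = p.copy(); d[np.arange(len(y)), y] -= 1.0; d /= len(y)
    dW2, db2 = h.T @ d, d.sum(axis=0)
    dh = (d @ W2.T) * (1.0 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W1 -= dW1; b1 -= db1; W2 -= dW2; b2 -= db2

# Hidden states per context; the paper's analysis clusters these and finds
# the clusters align with the word class of the predicted next word.
hidden, _ = forward(X)
```

In this toy grammar the context fully determines the next word's class, so the hidden vectors separate by class after training; the paper's finding is that comparable structure emerges from naturalistic text without any explicit class labels.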