A cognitive based Intrusion detection system
Intrusion detection is one of the primary mechanisms for securing computer
networks. With attacks increasing and growing reliance on networks to deliver
services in fields such as medicine, commerce, and engineering, securing
networks has become a significant issue. The purpose
of an Intrusion Detection System (IDS) is to build models that can distinguish
normal communications from abnormal ones and take the necessary actions. Among
different methods in this field, Artificial Neural Networks (ANNs) have been
widely used. However, ANN-based IDSs have two main disadvantages: (1) low
detection precision and (2) weak detection stability. To overcome these issues,
this paper proposes a new approach based on a Deep Neural Network (DNN). The
general mechanism of our model is as follows: first, part of the data in the
dataset is ranked; the dataset is then normalized with a Min-Max normalizer to
fit a bounded domain. Dimensionality reduction is then applied to remove
useless dimensions and lower the computational cost. After this preprocessing,
the Mean-Shift clustering algorithm is used to create subsets and reduce the
complexity of the dataset. Based on each
subset, two models are trained by Support Vector Machine (SVM) and deep
learning method. For each subset, the model with the higher accuracy is
chosen. This idea is inspired by the divide-and-conquer philosophy: the DNN can
learn each subset quickly and robustly. Finally, to
reduce the error from the previous step, an ANN model is trained on these
results to predict attacks. The approach reaches 95.4 percent accuracy.
Despite its simple structure and smaller number of tunable parameters, the
proposed model still generalizes well, with a high level of accuracy compared
to other methods such as SVM, Bayes networks, and STL.
Comment: 18 pages, 6 figures
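The Min-Max normalization step the pipeline describes can be sketched in plain NumPy. This is an illustrative sketch, not the paper's code; the function name and feature range are assumptions.

```python
import numpy as np

def min_max_normalize(X, feature_range=(0.0, 1.0)):
    """Scale each column of X into feature_range (Min-Max normalization).

    Illustrative sketch of the preprocessing step; columns with zero spread
    are mapped to the lower bound to avoid division by zero.
    """
    X = np.asarray(X, dtype=float)
    lo, hi = feature_range
    col_min = X.min(axis=0)
    col_max = X.max(axis=0)
    span = np.where(col_max > col_min, col_max - col_min, 1.0)
    return lo + (X - col_min) / span * (hi - lo)

# Toy example: two features with very different scales.
X = np.array([[1.0, 10.0],
              [2.0, 30.0],
              [3.0, 50.0]])
Xn = min_max_normalize(X)  # each column now spans [0, 1]
```

After this step, every feature lies in the same bounded domain, which is what the clustering and the SVM/DNN models downstream would consume.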
Unsupervised Network Pretraining via Encoding Human Design
Over the years, computer vision researchers have spent an immense amount of
effort on designing image features for the visual object recognition task. We
propose to incorporate this valuable experience to guide the task of training
deep neural networks. Our idea is to pretrain the network through the task of
replicating the process of hand-designed feature extraction. By learning to
replicate the process, the neural network integrates previous research
knowledge and learns to model visual objects in a way similar to the
hand-designed features. In the succeeding finetuning step, it further learns
object-specific representations from labeled data and this boosts its
classification power. We pretrain two convolutional neural networks where one
replicates the process of histogram of oriented gradients feature extraction,
and the other replicates the process of region covariance feature extraction.
After finetuning, we achieve substantially better performance than the baseline
methods.
Comment: 9 pages, 11 figures, WACV 2016: IEEE Conference on Applications of
Computer Vision
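A network pretrained this way regresses hand-designed descriptors such as HOG before finetuning on labels. As a rough, simplified illustration of the kind of target involved (not the paper's actual HOG pipeline, which uses cells, blocks, and overlapping normalization), the sketch below computes a single global histogram of gradient orientations weighted by gradient magnitude:

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """Simplified HOG-style descriptor: a global, magnitude-weighted
    histogram of unsigned gradient orientations, L2-normalized.

    Illustrative only; real HOG aggregates per-cell histograms with
    block normalization.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A vertical brightness ramp: all gradient energy points in one direction,
# so the histogram mass concentrates in a single orientation bin.
img = np.tile(np.arange(8.0), (8, 1)).T
h = orientation_histogram(img)
```

Pretraining would then mean training a convolutional network to predict such descriptors from raw pixels, before finetuning on labeled object classes.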
One-Class Classification: Taxonomy of Study and Review of Techniques
One-class classification (OCC) algorithms aim to build classification models
when the negative class is either absent, poorly sampled or not well defined.
This unique situation constrains the learning of efficient classifiers by
defining the class boundary using only knowledge of the positive class. The OCC
problem has been considered and applied under many research themes, such as
outlier/novelty detection and concept learning. In this paper we present a
unified view of the general problem of OCC by presenting a taxonomy of OCC
studies based on the availability of training data, the algorithms used, and
the application domains. We further delve into each
of the categories of the proposed taxonomy and present a comprehensive
literature review of the OCC algorithms, techniques and methodologies with a
focus on their significance, limitations and applications. We conclude our
paper by discussing some open research problems in the field of OCC and present
our vision for future research.
Comment: 24 pages + 11 pages of references, 8 figures
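The core OCC setting, fitting a boundary from positive samples alone, can be illustrated with a deliberately minimal toy classifier (a distance-to-centroid threshold; the class name and quantile parameter are assumptions, and real OCC methods such as one-class SVM are far more flexible):

```python
import numpy as np

class CentroidOCC:
    """Toy one-class classifier: fit on positive samples only, then flag
    points whose distance to the positive-class centroid exceeds a
    quantile of the training distances. A minimal stand-in for the
    boundary-from-positives idea behind OCC methods.
    """
    def __init__(self, quantile=0.95):
        self.quantile = quantile

    def fit(self, X_pos):
        X_pos = np.asarray(X_pos, dtype=float)
        self.centroid_ = X_pos.mean(axis=0)
        d = np.linalg.norm(X_pos - self.centroid_, axis=1)
        self.threshold_ = np.quantile(d, self.quantile)
        return self

    def predict(self, X):
        d = np.linalg.norm(np.asarray(X, dtype=float) - self.centroid_, axis=1)
        return np.where(d <= self.threshold_, 1, -1)  # 1 = target, -1 = outlier

# Train on positives only; no negative examples are ever seen.
rng = np.random.default_rng(0)
X_pos = rng.normal(0.0, 1.0, size=(200, 2))
occ = CentroidOCC(quantile=0.95).fit(X_pos)
preds = occ.predict(np.array([[0.1, -0.2], [8.0, 8.0]]))
```

The key property shared with the methods surveyed is that the decision boundary is estimated entirely from the target class, with novel points rejected by how far they fall outside it.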
A survey of outlier detection methodologies
Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise due to mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error, or simply natural deviations in populations. Detecting them can identify system faults and fraud before they escalate with potentially catastrophic consequences; it can also identify errors and remove their contaminating effect on the data set, thereby purifying the data for processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of computer science and statistics. In this paper, we present a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
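One classic statistical technique of the kind such surveys cover is Tukey's IQR fence; a minimal NumPy sketch (the function name and fence multiplier `k` are conventional choices, not taken from the survey):

```python
import numpy as np

def iqr_outliers(x, k=1.5):
    """Flag outliers with Tukey's IQR fence: points below Q1 - k*IQR or
    above Q3 + k*IQR are marked as outliers. k = 1.5 is the usual default.
    """
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (x < lo) | (x > hi)

# A batch of sensor-like readings with one gross error.
x = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 11.0, 100.0])
mask = iqr_outliers(x)  # only the extreme reading is flagged
```

Principled detectors differ mainly in how the "expected" region is modelled (distributional, distance-based, density-based, or model-based), but all reduce to flagging points that fall outside it, as this fence does.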
- …