AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to
the problem of misuse detection and misuse localisation within telecommunications
environments. A broad survey of techniques is provided, covering inter alia
rule-based systems, model-based systems, case-based reasoning, pattern matching,
clustering and feature extraction, artificial neural networks, genetic algorithms,
artificial immune systems, agent-based systems, data mining and a variety of hybrid
approaches. The report then considers the central issue of event correlation, which
is at the heart of many misuse detection and localisation systems. The notion of
being able to infer misuse by the correlation of individual temporally distributed
events within a multiple-data-stream environment is explored, along with a range
of techniques covering model-based approaches, `programmed' AI and machine-learning
paradigms. It is found that, in general, correlation is best achieved via rule-based approaches,
but that these suffer from a number of drawbacks, such as the difficulty of
developing and maintaining an appropriate knowledge base, and the lack of ability
to generalise from known misuses to new unseen misuses. Two distinct approaches
are evident. One attempts to encode knowledge of known misuses, typically within
rules, and use this to screen events. This approach cannot generally detect misuses
for which it has not been programmed, i.e. it is prone to issuing false negatives.
The other attempts to `learn' the features of event patterns that constitute normal
behaviour, and, by observing patterns that do not match expected behaviour, detect
when a misuse has occurred. This approach is prone to issuing false positives,
i.e. inferring misuse from innocent patterns of behaviour that the system was not
trained to recognise. Contemporary approaches are seen to favour hybridisation,
often combining detection or localisation mechanisms for both abnormal and normal
behaviour, the former to capture known cases of misuse, the latter to capture
unknown cases. In some systems, these mechanisms even work together to update
each other to increase detection rates and lower false positive rates. It is concluded
that hybridisation offers the most promising future direction, but that a rule- or
state-based component is likely to remain, being the most natural approach to the correlation
of complex events. The challenge, then, is to mitigate the weaknesses of
canonical programmed systems such that learning, generalisation and adaptation
are more readily facilitated.
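As an illustrative sketch of the hybrid architecture described above (the rule set, event names and threshold below are hypothetical, not drawn from the report), a rule-based correlator can screen event streams for known misuse signatures while a learned model of normal behaviour flags unseen anomalies:

    # Toy stand-in for a rule knowledge base: each rule fires when its event-type
    # sequence occurs, in order, within the recent window (names are hypothetical).
    RULES = {
        "pbx_fraud": ["login_fail", "login_fail", "login_ok", "intl_call"],
        "sim_cloning": ["call_start", "call_start"],
    }

    def rule_correlate(events, window=10):
        """Return names of rules whose pattern is an ordered subsequence
        of the event types seen in the recent window."""
        types = [e["type"] for e in events[-window:]]
        hits = []
        for name, pattern in RULES.items():
            it = iter(types)
            if all(t in it for t in pattern):       # ordered subsequence match
                hits.append(name)
        return hits

    def anomaly_score(events, baseline_rate, window=10):
        """Crude learned-normality model: deviation of the recent event rate
        from a baseline estimated on misuse-free traffic."""
        rate = len(events[-window:]) / window
        return abs(rate - baseline_rate) / max(baseline_rate, 1e-9)

    def detect(events, baseline_rate, threshold=2.0):
        known = rule_correlate(events)              # catches programmed misuses only
        novel = anomaly_score(events, baseline_rate) > threshold  # may false-alarm
        return known, novel

In this toy design the two detectors complement each other exactly as the report describes: the rule component cannot fire on misuses it was not programmed for, while the normality model can, at the cost of occasional false positives.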
Driver Distraction Identification with an Ensemble of Convolutional Neural Networks
The World Health Organization (WHO) reported 1.25 million deaths yearly due
to road traffic accidents worldwide and the number has been continuously
increasing over the last few years. Nearly fifth of these accidents are caused
by distracted drivers. Existing work of distracted driver detection is
concerned with a small set of distractions (mostly, cell phone usage).
Unreliable ad-hoc methods are often used. In this paper, we present the first
publicly available dataset for driver distraction identification with more
distraction postures than existing alternatives. In addition, we propose a
reliable deep learning-based solution that achieves 90% accuracy. The system
consists of a genetically weighted ensemble of convolutional neural networks;
we show that weighting an ensemble of classifiers with a genetic algorithm
yields better classification confidence. We also study the effect of
different visual elements in distraction detection by means of face and hand
localizations, and skin segmentation. Finally, we present a thinned version of
our ensemble that could achieve 84.64% classification accuracy and operate in a
real-time environment.
Comment: arXiv admin note: substantial text overlap with arXiv:1706.0949
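A minimal sketch of the genetically weighted ensemble idea (population size, mutation rate and the toy data below are illustrative assumptions, not the paper's settings): a genetic algorithm searches for per-classifier weights that maximise the validation accuracy of a weighted soft vote.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(weights, probs, labels):
        """Accuracy of the weighted soft vote.
        probs has shape (n_models, n_samples, n_classes)."""
        w = weights / weights.sum()
        combined = np.tensordot(w, probs, axes=1)          # -> (n_samples, n_classes)
        return (combined.argmax(axis=1) == labels).mean()

    def evolve_weights(probs, labels, pop_size=30, gens=50, mut=0.1):
        n_models = probs.shape[0]
        pop = rng.random((pop_size, n_models)) + 1e-6      # random positive weights
        for _ in range(gens):
            scores = np.array([fitness(ind, probs, labels) for ind in pop])
            parents = pop[scores.argsort()[-pop_size // 2:]]   # keep the top half
            children = parents[rng.integers(len(parents), size=pop_size - len(parents))]
            children = children + rng.normal(0.0, mut, children.shape)  # mutate
            pop = np.vstack([parents, np.clip(children, 1e-6, None)])
        scores = np.array([fitness(ind, probs, labels) for ind in pop])
        best = pop[scores.argmax()]
        return best / best.sum()

    # Toy usage: 3 models, 100 validation samples, 10 distraction classes.
    probs = rng.random((3, 100, 10))
    probs = probs / probs.sum(axis=-1, keepdims=True)
    labels = rng.integers(0, 10, size=100)
    print(evolve_weights(probs, labels))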
A Novel Multiobjective Cell Switch-Off Framework for Cellular Networks
Cell Switch-Off (CSO) is recognized as a promising approach to reduce the
energy consumption in next-generation cellular networks. However, CSO poses
serious challenges not only from the resource allocation perspective but also
from the implementation point of view. Indeed, CSO represents a difficult
optimization problem due to its NP-complete nature. Moreover, there are a
number of important practical limitations in the implementation of CSO schemes,
such as the need for minimizing the real-time complexity and the number of
on-off/off-on transitions and CSO-induced handovers. This article introduces a
novel approach to CSO based on multiobjective optimization that makes use of
the statistical description of the service demand (known by operators). In
addition, downlink and uplink coverage criteria are included and a comparative
analysis between different models to characterize intercell interference is
also presented to shed light on their impact on CSO. The framework
distinguishes itself from other proposals in two ways: 1) The number of
on-off/off-on transitions as well as handovers are minimized, and 2) the
computationally-heavy part of the algorithm is executed offline, which makes
its implementation feasible. The results show that the proposed scheme achieves
substantial energy savings in small cell deployments where service demand is
not uniformly distributed, without compromising the Quality-of-Service (QoS) or
requiring heavy real-time processing.
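A minimal sketch of the multiobjective formulation (cell count, demand figures and objective definitions below are illustrative assumptions, not the article's model): offline, every on/off configuration of a tiny network is scored on energy use and on transitions from the current state, subject to a crude coverage-feasibility check, and the Pareto-optimal set is retained for cheap real-time lookup.

    from itertools import product

    N_CELLS = 6
    CURRENT = (1, 1, 1, 1, 1, 1)                   # all cells currently on
    DEMAND = [0.9, 0.1, 0.4, 0.05, 0.7, 0.2]       # hypothetical per-cell load stats

    def objectives(state):
        energy = sum(state)                                    # cells left on
        transitions = sum(a != b for a, b in zip(state, CURRENT))
        return energy, transitions

    def feasible(state, capacity=1.0):
        # Crude coverage proxy: demand of switched-off cells must fit
        # into the spare capacity of the cells that remain on.
        spare = sum(capacity - d for s, d in zip(state, DEMAND) if s)
        offloaded = sum(d for s, d in zip(state, DEMAND) if not s)
        return any(state) and offloaded <= spare

    def pareto_front(points):
        return [(s, obj) for s, obj in points
                if not any(all(b <= a for a, b in zip(obj, obj2)) and obj2 != obj
                           for _, obj2 in points)]

    candidates = [(s, objectives(s)) for s in product((0, 1), repeat=N_CELLS)
                  if feasible(s)]
    for state, (energy, flips) in pareto_front(candidates):
        print(state, "cells on:", energy, "transitions:", flips)

Enumerating 2^6 states is trivially cheap here; for realistic deployments the article's point is precisely that this search must run offline, with only the precomputed front consulted in real time.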
A Machine-Synesthetic Approach To DDoS Network Attack Detection
In the authors' opinion, anomaly detection systems, or ADS, seem to be the
most promising direction in the field of attack detection, because such
systems can detect, among others, unknown (zero-day) attacks. To detect
anomalies, the authors propose to use machine synesthesia. In this case,
machine synesthesia is understood as an interface that allows using image
classification algorithms in the problem of detecting network anomalies, making
it possible to use non-specialized image detection methods that have recently
been widely and actively developed. The proposed approach is that the network
traffic data is "projected" into the image. It can be seen from the
experimental results that the proposed method for detecting anomalies shows
high results in the detection of attacks. On a large sample, the value of the
complex efficiency indicator reaches 97%.
Comment: 12 pages, 2 figures, 5 tables. Accepted to the Intelligent Systems Conference (IntelliSys) 201
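A minimal sketch of the "projection into an image" step (the features, window size and min-max scaling below are assumptions for illustration, not the authors' exact encoding): per-flow statistics from one time window are normalised and reshaped into a greyscale image that an off-the-shelf image classifier can then label as normal or anomalous.

    import numpy as np

    def window_to_image(flows, side=8):
        """flows: (n_flows, n_features) per-flow statistics (e.g. packet counts,
        byte counts, durations) collected within one time window."""
        flat = np.asarray(flows, dtype=float).ravel()
        flat = flat[: side * side]                          # crop to image capacity
        flat = np.pad(flat, (0, side * side - flat.size))   # zero-pad short windows
        lo, hi = flat.min(), flat.max()
        pixels = (flat - lo) / (hi - lo + 1e-9) * 255.0     # min-max scale to 0..255
        return pixels.reshape(side, side).astype(np.uint8)

    # Toy usage: 10 flows x 5 features become one 8x8 greyscale "traffic image"
    # that a stock image classifier can label as normal vs. attack.
    rng = np.random.default_rng(1)
    image = window_to_image(rng.random((10, 5)) * 1000)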
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these network-generated data and to take decisions pertaining
to the proper functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. Such complexity increase is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
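As a hedged illustration of the kind of network-data analysis surveyed (the features, labels and model choice below are assumptions for the sketch, not a benchmark from the paper), a small supervised classifier can map per-lightpath parameters such as path length, symbol rate and modulation order to a predicted quality-of-transmission class.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical lightpath records: [path length (km), symbol rate (GBd),
    # modulation order (bits/symbol), launch power (dBm)].
    X = np.column_stack([
        rng.uniform(50, 3000, 500),      # path length
        rng.choice([32, 64], 500),       # symbol rate
        rng.choice([2, 4, 6], 500),      # modulation order
        rng.uniform(-2, 3, 500),         # launch power
    ])
    # Toy label: long, high-order lightpaths tend to miss the QoT threshold.
    y = (X[:, 0] * X[:, 2] < 6000).astype(int)   # 1 = QoT above threshold

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))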
A brief network analysis of Artificial Intelligence publication
In this paper, we present an illustration of the history of Artificial
Intelligence (AI) through a statistical analysis of publications since 1940. We
collected and mined the IEEE publication database to analyze the
geographical and chronological variation in the activity of AI research. The
connections between different institutes are shown. The results show that the
leading communities of AI research are mainly in the USA, China, Europe and
Japan. The key institutes, authors and research hotspots are revealed. It
is found that the research institutes in fields like Data Mining, Computer
Vision, Pattern Recognition and some other fields of Machine Learning are quite
consistent, implying a strong interaction between the communities of these fields.
It is also shown that research in Electronic Engineering and industrial or
commercial applications is very active in California, and that Japan also
publishes many papers in robotics. Due to the limitations of the data source,
the results might be overly influenced by the raw number of published articles;
we mitigate this as far as possible by applying network key-node analysis to
the research community instead of merely counting publications.
Comment: 18 pages, 7 figures
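The key-node analysis mentioned above can be sketched with a standard graph library (the institutes and edge weights below are fabricated for illustration, and networkx is an assumed tool, not one named by the paper): build a collaboration graph between institutes and rank nodes by a centrality measure.

    import networkx as nx

    # Hypothetical institute collaboration edges weighted by co-publication count.
    edges = [
        ("MIT", "Stanford", 12), ("Stanford", "Tsinghua", 7),
        ("Tsinghua", "CAS", 15), ("MIT", "Tokyo", 3),
        ("Tokyo", "Tsinghua", 5), ("Stanford", "CAS", 4),
    ]

    G = nx.Graph()
    G.add_weighted_edges_from(edges)

    # Rank institutes by betweenness centrality: high scores mark "key nodes"
    # that bridge otherwise separate research communities.
    centrality = nx.betweenness_centrality(G, weight="weight")
    for inst, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(f"{inst}: {score:.3f}")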