A graph oriented approach for network forensic analysis
Network forensic analysis is a process that analyzes intrusion evidence captured from a networked environment to identify suspicious entities and the stepwise actions in an attack scenario. Unfortunately, the overwhelming volume and low quality of output from security sensors make it difficult for analysts to obtain a succinct, high-level view of complex multi-stage intrusions.
This dissertation presents a novel graph-based network forensic analysis system. The evidence graph model provides an intuitive representation of collected evidence as well as the foundation for forensic analysis. Based on the evidence graph, we develop a set of analysis components in a hierarchical reasoning framework. Local reasoning utilizes fuzzy inference to infer the functional states of a host-level entity from its local observations. Global reasoning performs graph structure analysis to identify the set of highly correlated hosts that belong to a coordinated attack scenario. In global reasoning, we apply spectral clustering and PageRank methods for generic and targeted investigation, respectively. An interactive hypothesis testing procedure is developed to identify hidden attackers from non-explicitly-malicious evidence. Finally, we introduce the notion of the target-oriented effective event sequence (TOEES) to semantically reconstruct stealthy attack scenarios with less dependency on ad hoc expert knowledge. Well-established computational methods used in our approach provide the scalability needed to perform post-incident analysis in large networks. We evaluate the techniques on a number of intrusion detection datasets, and the experimental results show that our approach is effective in identifying complex multi-stage attacks.
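As an illustration of the targeted-investigation step, the sketch below runs personalized PageRank over a toy evidence graph, seeded at a known victim, with edges reversed so that score mass flows back toward likely attack sources. The four hosts, edge weights, and seed are invented for illustration; the dissertation's actual evidence-graph construction is far richer.

```python
# Personalized PageRank over a tiny, hypothetical evidence graph.
import numpy as np

hosts = ["attacker", "stepping-stone", "victim", "bystander"]
# W[i][j]: weight of evidence that host i interacted with host j
W = np.array([
    [0.0, 0.9, 0.1, 0.0],   # attacker -> stepping-stone, victim
    [0.0, 0.0, 0.8, 0.1],   # stepping-stone -> victim, bystander
    [0.0, 0.0, 0.0, 0.0],   # victim (no outgoing evidence)
    [0.0, 0.0, 0.0, 0.0],   # bystander (no outgoing evidence)
])

def personalized_pagerank(W, seed, damping=0.85, iters=100):
    """Power iteration with restarts concentrated on the seed vector."""
    n = W.shape[0]
    out = W.sum(axis=1, keepdims=True)
    # row-normalize; rows with no outgoing edges restart at the seed
    P = np.divide(W, out, out=np.zeros_like(W), where=out > 0)
    P[out[:, 0] == 0] = seed
    r = seed.copy()
    for _ in range(iters):
        r = damping * (P.T @ r) + (1 - damping) * seed
    return r / r.sum()

seed = np.array([0.0, 0.0, 1.0, 0.0])   # investigate the known victim
# reverse the edges so relevance flows from the victim back to sources
scores = personalized_pagerank(W.T, seed)
for h, s in sorted(zip(hosts, scores), key=lambda x: -x[1]):
    print(f"{h}: {s:.3f}")
```

Hosts on paths leading into the victim (the attacker and its stepping stone) score well above the uninvolved bystander, which is the intuition behind using PageRank for targeted investigation.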
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of inferring misuse by correlating individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is examined, covering model-based approaches, `programmed' AI, and machine-learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base and the inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to `learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together, updating each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation, and adaptation are more readily facilitated.
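The rule-based correlation the report favours can be sketched in miniature: a hand-written rule infers a misuse when its component events occur in order, within a time window, across one or more data streams. The event names and the rule below are invented for illustration; real systems maintain large knowledge bases of such rules, which is precisely the maintenance burden the survey highlights.

```python
# Toy rule-based correlation of temporally distributed events.
from dataclasses import dataclass

@dataclass
class Event:
    time: float      # seconds since start of capture
    stream: str      # originating data stream
    kind: str        # event type

# one correlation rule: an ordered event sequence plus a maximum window
RULE = {"name": "password-guessing",
        "sequence": ["login-fail", "login-fail", "login-ok"],
        "window": 60.0}

def correlate(events, rule):
    """Return True if the rule's sequence occurs in order within its window."""
    seq, window = rule["sequence"], rule["window"]
    idx, start = 0, None
    for e in sorted(events, key=lambda ev: ev.time):
        if start is not None and e.time - start > window:
            idx, start = 0, None          # window expired: restart matching
        if e.kind == seq[idx]:
            if idx == 0:
                start = e.time
            idx += 1
            if idx == len(seq):
                return True
    return False

stream = [Event(0.0, "auth", "login-fail"), Event(5.0, "auth", "login-fail"),
          Event(9.0, "auth", "login-ok"), Event(50.0, "web", "page-view")]
print(correlate(stream, RULE))   # two failures then a success within 60 s: True
```

The rule fires only on the programmed pattern, which also illustrates the false-negative weakness discussed above: a misuse with a different event sequence passes through silently.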
PAC: A Novel Self-Adaptive Neuro-Fuzzy Controller for Micro Aerial Vehicles
There is an increasing demand for a flexible and computationally efficient controller for micro aerial vehicles (MAVs) due to a high degree of environmental perturbation. In this work, an evolving neuro-fuzzy controller, namely the Parsimonious Controller (PAC), is proposed. It features fewer network parameters than conventional approaches due to the absence of rule premise parameters. PAC is built upon a recently developed evolving neuro-fuzzy system known as the parsimonious learning machine (PALM) and adopts new rule growing and pruning modules derived from an approximation of bias and variance. These rule adaptation methods have no reliance on user-defined thresholds, thereby increasing PAC's autonomy for real-time deployment. PAC adapts the consequent parameters with sliding mode control (SMC) theory in a single-pass fashion. The boundedness and convergence of the closed-loop control system's tracking error and the controller's consequent parameters are confirmed by utilizing the LaSalle-Yoshizawa theorem. Lastly, the controller's efficacy is evaluated by observing trajectory tracking performance on a bio-inspired flapping-wing micro aerial vehicle (BI-FWMAV) and a rotary-wing micro aerial vehicle called a hexacopter. Furthermore, it is compared to three distinct controllers. Our PAC outperforms the linear PID controller and a feed-forward neural network (FFNN) based nonlinear adaptive controller. Compared to its predecessor, the G-controller, the tracking accuracy is comparable, but PAC incurs significantly fewer parameters to attain similar or better performance.
Comment: This paper has been accepted for publication in Information Science Journal 201
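The flavour of single-pass, SMC-theory-based consequent adaptation the abstract refers to can be illustrated generically (this is not the actual PALM/PAC rule base): a first-order plant tracks a reference while controller parameters adapt online from the tracking error, which is also the sliding variable in the first-order case. The plant constants, gains, and two-term regressor are all invented for this sketch.

```python
# Generic single-pass adaptive tracking: parameters move against the
# sliding variable e = x - r; a Lyapunov argument of the kind cited in
# the abstract guarantees bounded, vanishing error for this toy setup.
import math

a, b = -1.0, 2.0          # plant (unknown to the controller): x' = a*x + b*u
dt, gamma = 0.001, 20.0   # integration step and adaptation gain
theta = [0.0, 0.0]        # adaptive (consequent-like) parameters
x = 0.0
errors = []
for k in range(20000):
    t = k * dt
    r = math.sin(t)                       # reference trajectory
    e = x - r                             # sliding variable
    phi = [r, math.cos(t)]                # regressor (reference + its rate)
    u = theta[0] * phi[0] + theta[1] * phi[1]
    # single-pass update, no stored batch: theta' = -gamma * e * phi
    theta[0] -= gamma * e * phi[0] * dt
    theta[1] -= gamma * e * phi[1] * dt
    x += (a * x + b * u) * dt             # Euler integration of the plant
    errors.append(abs(e))

early = sum(errors[:2000]) / 2000
late = sum(errors[-2000:]) / 2000
print(f"mean |e| early: {early:.4f}, late: {late:.4f}")
```

The mean tracking error over the last two seconds is far below the initial transient, mirroring (in a much simpler setting) the boundedness-and-convergence property the paper establishes via the LaSalle-Yoshizawa theorem.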
Utilizing Analytical Hierarchy Process for Pauper House Programme in Malaysia
In Malaysia, the selection and evaluation of candidates for the Pauper House Programme (PHP) are done manually. In this paper, a technique based on the Analytical Hierarchy Process (AHP) is designed and developed to evaluate and select PHP applications. The aim is to ensure the selection process is more precise and accurate and avoids any issue of bias. The technique is studied and designed based on the pauper assessment technique from one of the district offices in Malaysia. Hierarchical indexes are designed based on the criteria used in the official PHP application form. Twenty-three samples of data, which had been endorsed by the State Exco in Malaysia, are used to test the technique, and a comparison of the two methods is given in this paper. All calculations are performed in the software Expert Choice version 11.5. Comparing the manual and AHP results shows that three samples are not qualified. The developed technique is satisfactory in terms of accuracy and precision but needs further study due to some limitations, as explained in the recommendations of this paper.
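The AHP computation behind such an evaluation can be sketched directly: a pairwise comparison matrix over criteria yields a priority vector via its principal eigenvector, and a consistency ratio checks that the judgements cohere. The three criteria and the comparison values below are invented, not taken from the actual PHP application form.

```python
# AHP priority vector and consistency ratio for hypothetical criteria.
import numpy as np

criteria = ["income", "dependents", "housing condition"]
# Saaty-scale pairwise judgements: A[i][j] = importance of i relative to j
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)               # principal (Perron) eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                  # normalized criterion weights

# consistency check (RI = 0.58 is Saaty's random index for n = 3)
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.58

for c, w in zip(criteria, weights):
    print(f"{c}: {w:.3f}")
print(f"consistency ratio: {CR:.3f}")     # below 0.10 counts as consistent
```

Candidate scores are then weighted sums of their per-criterion ratings under these weights; tools such as Expert Choice automate exactly this eigenvector-and-consistency workflow.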
A Systematic Review of Learning based Notion Change Acceptance Strategies for Incremental Mining
The data generated by contemporary communication environments is dynamic in content, unlike earlier static data environments. High-speed streams carry huge volumes of digital data with rapid context changes, whereas in static environments the data is mostly stationary. Extracting, classifying, and exploring relevant information from such enormous, fast-varying streams raises issues that strategies designed for static data cannot handle. Learning strategies for static data rely on observable, established notion changes, whereas in high-speed data streams no fixed rules or drift strategies exist beforehand; the classification mechanisms must develop their own learning schemes for notion changes and Notion Change Acceptance, by changing the existing notion, substituting it, or creating new notions, evaluated during classification against the previous, existing, and newer incoming notions. Research in this field has devised numerous data stream mining strategies for determining, predicting, and establishing notion changes, and for accurately predicting the next notion change occurrences. In this context, this paper illustrates and classifies various contemporarily affirmed benchmark models in data stream mining for adapting to Notion Change.
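One simple member of the family of notion-change (concept-drift) acceptance strategies such reviews cover can be sketched as follows: monitor a stream classifier's error rate and, when the recent rate rises significantly above the historical baseline, accept the notion change by resetting the baseline (the "substitute the existing notion" case). The window size, threshold, and synthetic error stream are all illustrative only.

```python
# Error-rate-based drift monitor in the spirit of windowed drift detectors.
from collections import deque

class DriftMonitor:
    def __init__(self, window=50, threshold=0.15):
        self.recent = deque(maxlen=window)   # sliding window of 0/1 errors
        self.baseline_errors = 0
        self.baseline_total = 0
        self.threshold = threshold

    def observe(self, error):
        """Feed one prediction outcome (1 = error); True when drift signalled."""
        self.recent.append(error)
        self.baseline_errors += error
        self.baseline_total += 1
        if len(self.recent) < self.recent.maxlen:
            return False
        recent_rate = sum(self.recent) / len(self.recent)
        base_rate = self.baseline_errors / self.baseline_total
        if recent_rate - base_rate > self.threshold:
            # notion change accepted: the recent window becomes the baseline
            self.baseline_errors = sum(self.recent)
            self.baseline_total = len(self.recent)
            return True
        return False

# stable phase (~5% errors), then an abrupt change pushes errors to ~80%
monitor = DriftMonitor()
stream = [1 if i % 20 == 0 else 0 for i in range(200)] + \
         [1 if i % 5 != 0 else 0 for i in range(100)]
drifts = [i for i, e in enumerate(stream) if monitor.observe(e)]
print("drift signalled at positions:", drifts)
```

The monitor stays silent through the stationary phase and signals shortly after position 200, where the synthetic context change begins; a full system would retrain or grow a new notion at each signal.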
Improving intrusion detection systems using data mining techniques
Recent surveys and studies have shown that cyber-attacks have caused a lot of damage to organisations, governments, and individuals around the world. Although developments are constantly occurring in the computer security field, cyber-attacks still cause damage as they are developed and evolved by hackers. This research looked at some industrial challenges in the intrusion detection area and identified two main challenges: the first is that signature-based intrusion detection systems such as SNORT lack the capability of detecting attacks with new signatures without human intervention; the second relates to multi-stage attack detection, for which signature-based detection has been found inefficient. The novelty of this research lies in the methodologies developed to tackle these challenges. The first challenge was handled by developing a multi-layer classification methodology. The first layer is based on a decision tree, while the second layer is a hybrid module that uses two data mining techniques: neural networks and fuzzy logic. The second layer tries to detect new attacks in case the first one fails to do so. This system detects attacks with new signatures and then updates the SNORT signature holder automatically, without any human intervention. The results obtained show a high detection rate for attacks with new signatures; however, the false positive rate needs to be lowered. The second challenge was approached by evaluating IP information using fuzzy logic. This approach looks at the identity of participants in the traffic, rather than the sequence and contents of the traffic. The results show that this approach can help in predicting attacks at very early stages in some scenarios; however, combining it with an approach that looks at the sequence and contents of the traffic, such as event correlation, achieves better performance than either approach individually.
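The control flow of the two-layer methodology can be sketched structurally. Here layer 1 stands in for the decision tree with a simple signature lookup, and layer 2 stands in for the neural-network/fuzzy-logic hybrid with a toy fuzzy anomaly score; the signature IDs, feature names, thresholds, and membership functions are all invented. The point is the fallback: layer 2 runs only when layer 1 fails to classify.

```python
# Structural sketch of a signature layer with an anomaly-based fallback.
KNOWN_SIGNATURES = {"sid-1001", "sid-2003"}        # stand-in SNORT rules

def layer1_signature(conn):
    """Decision-tree stand-in: classify only traffic matching known rules."""
    return "attack" if conn["signature"] in KNOWN_SIGNATURES else None

def fuzzy_high(x, lo, hi):
    """Membership in the fuzzy set 'high', rising linearly from lo to hi."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def layer2_hybrid(conn):
    """Hybrid stand-in: fuzzy-combine anomaly indicators into a verdict."""
    score = max(fuzzy_high(conn["failed_logins"], 3, 10),
                fuzzy_high(conn["bytes_per_s"], 1e5, 1e6))
    return "attack" if score >= 0.5 else "normal"

def detect(conn):
    verdict = layer1_signature(conn)
    if verdict is None:                # new signature: fall through to layer 2
        verdict = layer2_hybrid(conn)
    return verdict

print(detect({"signature": "sid-1001", "failed_logins": 0, "bytes_per_s": 10}))
print(detect({"signature": "unknown", "failed_logins": 8, "bytes_per_s": 10}))
```

In the system described above, a layer-2 "attack" verdict would additionally feed a new rule back into the signature holder, closing the loop without human intervention.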