2,002 research outputs found
A distributed agent architecture for real-time knowledge-based systems: Real-time expert systems project, phase 1
We propose a distributed agent architecture (DAA) that can support a variety of paradigms based on both traditional real-time computing and artificial intelligence. DAA consists of distributed agents that are classified into two categories: reactive and cognitive. Reactive agents can be implemented directly in Ada to meet hard real-time requirements and can be deployed on on-board embedded processors. A traditional real-time computing methodology under consideration is rate monotonic theory, which can guarantee schedulability by analytical methods. AI techniques under consideration for reactive agents include approximate and anytime reasoning, which can be implemented using Bayesian belief networks as in Guardian. Cognitive agents are traditional expert systems that can be implemented in ART-Ada to meet soft real-time requirements. During the initial design of cognitive agents, it is critical to consider the migration path that would allow initial deployment on ground-based workstations with eventual deployment on on-board processors. ART-Ada technology enables this migration, while Lisp-based technologies make it difficult, if not impossible. In addition to reactive and cognitive agents, a meta-level agent would be needed to coordinate multiple agents and to provide meta-level control
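The analytical schedulability guarantee mentioned above can be illustrated with the classic Liu-Layland utilisation bound for rate monotonic scheduling: n periodic tasks are guaranteed schedulable if their total utilisation does not exceed n(2^(1/n) - 1). A minimal sketch, with an invented task set for illustration:

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) rate monotonic schedulability test.

    tasks: list of (worst_case_execution_time, period) pairs.
    Returns True if total utilisation is within the
    Liu-Layland bound n * (2**(1/n) - 1).
    """
    n = len(tasks)
    utilisation = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilisation <= bound

# Hypothetical task set: (execution time, period)
tasks = [(1, 4), (2, 6), (1, 8)]   # utilisation ~0.708, bound ~0.780
```

Note that this test is only sufficient: a task set whose utilisation exceeds the bound may still be schedulable, but confirming that requires exact response-time analysis rather than the closed-form bound.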
Formalising Engineering Judgement on Software Dependability via Belief Networks
Design and implementation for automated network troubleshooting using data mining
The efficient and effective monitoring of mobile networks is vital given the
number of users who rely on them. The purpose of this paper is to present a
monitoring scheme for mobile networks based on rules and decision-tree data
mining classifiers to improve fault detection and handling. The goal is to
derive optimisation rules that improve anomaly detection. In addition, a
monitoring scheme that relies on Bayesian classifiers was implemented for
fault isolation and localisation. The data mining techniques described in
this paper are intended to allow a system to be trained to learn network
fault rules. The results of the tests conducted support the conclusion that
the rules were highly effective in improving network troubleshooting.
Comment: 19 pages, 7 figures, International Journal of Data Mining & Knowledge
Management Process (IJDKP) Vol.5, No.3, May 201
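A Bayesian classifier for fault isolation of the kind described can be sketched as a minimal naive Bayes model over discrete symptom features. The fault classes, features and training samples below are invented for illustration; a real deployment would learn them from network monitoring logs:

```python
from collections import Counter, defaultdict

def train_nb(samples):
    """samples: list of (feature_dict, fault_label) pairs."""
    priors = Counter(label for _, label in samples)
    likes = defaultdict(Counter)   # (label, feature) -> value counts
    for feats, label in samples:
        for f, v in feats.items():
            likes[(label, f)][v] += 1
    return priors, likes

def classify(priors, likes, feats, alpha=1.0):
    """Pick the fault with the highest posterior (Laplace-smoothed,
    assuming two possible values per feature in the denominator)."""
    total = sum(priors.values())
    best, best_p = None, -1.0
    for label, n in priors.items():
        p = n / total
        for f, v in feats.items():
            counts = likes[(label, f)]
            p *= (counts[v] + alpha) / (sum(counts.values()) + 2 * alpha)
        if p > best_p:
            best, best_p = label, p
    return best

# Invented symptom observations labelled with the isolated fault.
samples = [
    ({"packet_loss": "high", "latency": "high"}, "link_down"),
    ({"packet_loss": "high", "latency": "low"},  "link_down"),
    ({"packet_loss": "high", "latency": "high"}, "congestion"),
    ({"packet_loss": "low",  "latency": "high"}, "congestion"),
]
priors, likes = train_nb(samples)
```

`classify(priors, likes, {"packet_loss": "high", "latency": "low"})` then localises the fault to the class whose learned symptom likelihoods best match the observation.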
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to
the problem of misuse detection and misuse localisation within telecommunications
environments. A broad survey of techniques is provided, covering inter alia
rule-based systems, model-based systems, case-based reasoning, pattern matching,
clustering and feature extraction, artificial neural networks, genetic algorithms,
artificial immune systems, agent-based systems, data mining and a variety of hybrid
approaches. The report then considers the central issue of event correlation, which
is at the heart of many misuse detection and localisation systems. The notion of
being able to infer misuse by the correlation of individual temporally distributed
events within a multiple data stream environment is explored, and a range of techniques
is examined, covering model-based approaches, `programmed' AI and machine learning
paradigms. It is found that, in general, correlation is best achieved via rule-based approaches,
but that these suffer from a number of drawbacks, such as the difficulty of
developing and maintaining an appropriate knowledge base, and the lack of ability
to generalise from known misuses to new, unseen misuses. Two distinct approaches
are evident. One attempts to encode knowledge of known misuses, typically within
rules, and use this to screen events. This approach cannot generally detect misuses
for which it has not been programmed, i.e. it is prone to issuing false negatives.
The other attempts to `learn' the features of event patterns that constitute normal
behaviour, and, by observing patterns that do not match expected behaviour, detect
when a misuse has occurred. This approach is prone to issuing false positives,
i.e. inferring misuse from innocent patterns of behaviour that the system was not
trained to recognise. Contemporary approaches are seen to favour hybridisation,
often combining detection or localisation mechanisms for both abnormal and normal
behaviour, the former to capture known cases of misuse, the latter to capture
unknown cases. In some systems, these mechanisms even work together to update
each other to increase detection rates and lower false positive rates. It is concluded
that hybridisation offers the most promising future direction, but that a rule or state
based component is likely to remain, being the most natural approach to the correlation
of complex events. The challenge, then, is to mitigate the weaknesses of
canonical programmed systems such that learning, generalisation and adaptation
are more readily facilitated
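The rule-based correlation of temporally distributed events that the report identifies as central can be sketched as follows: a rule names a set of event types that, observed together within a time window, imply a misuse. The rule, event types and window length below are all invented for illustration:

```python
# Hypothetical correlation rule: these event types seen together
# within one window suggest a misuse worth raising an alarm for.
RULES = {
    "possible_intrusion": {"failed_login", "port_scan", "config_change"},
}

def correlate(events, rules, window=60):
    """events: list of (timestamp, event_type) from multiple streams.

    Slide a window forward from each event; fire a rule when every
    required event type appears within the window. Returns the set
    of rule names that fired.
    """
    fired = set()
    for t0, _ in events:
        seen = {etype for t, etype in events if t0 <= t <= t0 + window}
        for name, required in rules.items():
            if required <= seen:
                fired.add(name)
    return fired

events = [(0, "failed_login"), (20, "port_scan"),
          (50, "config_change"), (300, "failed_login")]
```

The sketch also exposes the drawbacks the report notes: the knowledge engineering sits entirely in `RULES`, and an unseen misuse that matches no rule is silently missed (a false negative).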
A Three-Part Bayesian Network for Modeling Dwelling Fires and Their Impact upon People and Property.
In the United Kingdom, dwelling fires are responsible for the majority of all fire-related fatalities. The development of these incidents involves the interaction of a multitude of variables that combine in many different ways. Consequently, assessment of dwelling fire risk can be complex, which often results in ambiguity during fire safety planning and decision making. In this article, a three-part Bayesian network model is proposed to study dwelling fires from ignition through to extinguishment in order to improve confidence in dwelling fire safety assessment. The model incorporates both hard and soft data, delivering posterior probabilities for selected outcomes. Case studies demonstrate how the model functions and provide evidence of its use for planning and accident investigation
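The kind of posterior a Bayesian network delivers can be illustrated with a deliberately tiny two-edge chain in the spirit of the dwelling-fire model: whether a working alarm is present influences whether the fire spreads beyond the room of origin, and spread influences casualty. All conditional probabilities below are invented for illustration, not taken from the article:

```python
# Invented conditional probability tables.
P_SPREAD = {True: 0.10, False: 0.40}      # P(spread | alarm)
P_CASUALTY = {True: 0.20, False: 0.01}    # P(casualty | spread)

def p_casualty(alarm):
    """P(casualty | alarm) by enumeration: marginalise over spread."""
    return sum(
        (P_SPREAD[alarm] if spread else 1.0 - P_SPREAD[alarm])
        * P_CASUALTY[spread]
        for spread in (True, False)
    )
```

With these made-up numbers, `p_casualty(False)` gives 0.086 against `p_casualty(True)` at 0.029, the same shape of evidence-conditioned outcome probability the model delivers for fire safety planning, only at toy scale.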
Establishment of a novel predictive reliability assessment strategy for ship machinery
There is no doubt that in recent years the maritime industry has been moving toward novel and sophisticated inspection and maintenance practices. Nowadays maintenance is regarded as an operational method that can be employed both as a profit-generating process and as a cost-reduction budget centre through an enhanced Operation and Maintenance (O&M) strategy. In the first place, a flexible framework applicable at the complex-system level of machinery can be introduced for ship maintenance scheduling of systems, subsystems and components. This holistic inspection and maintenance notion should be implemented by integrating different strategies, methodologies, technologies and tools, suitably selected to fulfil the requirements of the selected ship systems. In this thesis, an innovative maintenance strategy for ship machinery is proposed, namely the Probabilistic Machinery Reliability Assessment (PMRA) strategy, focusing on the reliability and safety enhancement of main systems, subsystems and maintainable units and components. In this respect, the combination of a data mining method (k-means), the manufacturer safety aspects, dynamic state modelling (Markov Chains), probabilistic predictive reliability assessment (Bayesian Belief Networks) and qualitative decision making (Failure Modes and Effects Analysis) is employed, encompassing the benefits of qualitative and quantitative reliability assessment. PMRA has been clearly demonstrated in two case studies applied to offshore platform oil and gas and selected ship machinery. The results are used to identify the least reliable systems, subsystems and components, while advising suitable practical inspection and maintenance activities. The proposed PMRA strategy is also tested in a flexible sensitivity analysis scheme
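The dynamic state modelling step (Markov chains) can be sketched as a discrete-time chain over machinery condition states, with failure absorbing. The states and per-inspection-interval transition probabilities below are invented for illustration:

```python
STATES = ["good", "degraded", "failed"]
# Invented transition matrix; each row sums to 1, "failed" is absorbing.
P = [
    [0.90, 0.08, 0.02],  # from good
    [0.00, 0.85, 0.15],  # from degraded
    [0.00, 0.00, 1.00],  # from failed
]

def step(dist, P):
    """One transition: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def evolve(dist, P, k):
    """Distribution over states after k inspection intervals."""
    for _ in range(k):
        dist = step(dist, P)
    return dist

# Starting from a unit known to be in good condition.
after_10 = evolve([1.0, 0.0, 0.0], P, 10)
```

The growing probability mass in the `failed` state over successive intervals is the quantity a predictive reliability assessment would feed into maintenance scheduling, e.g. triggering inspection once it crosses a threshold.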